Compare commits

...

2 commits

Author SHA1 Message Date
5b56a8b6ed feat(spaces): add per-Space cascading toggle with server-wide default
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Has been skipped
Checks / Prek / Pre-commit & Formatting (pull_request) Failing after 4s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 5s
Update flake hashes / update-flake-hashes (pull_request) Failing after 14s
Add com.continuwuity.space.cascading state event for per-Space override
of the server-wide space_permission_cascading config. Add enable/disable/
status admin commands. Strip superfluous comments throughout.
2026-03-19 16:33:15 +01:00
53d4fb892c chore(spaces): fix formatting, add changelog, remove design docs
Run cargo +nightly fmt, add towncrier news fragment, remove plan
documents that served their purpose during development.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 16:10:53 +01:00
14 changed files with 384 additions and 2852 deletions

View file

@@ -0,0 +1 @@
Add Space permission cascading: power levels cascade from Spaces to child rooms, role-based room access with custom roles, continuous enforcement (auto-join/kick), and admin commands for role management. Controlled by `space_permission_cascading` config flag (off by default).

View file

@@ -470,9 +470,10 @@
 #
 #suspend_on_register = false
 
-# Enable space permission cascading (power levels and role-based access).
-# When enabled, power levels cascade from Spaces to child rooms and rooms
-# can require roles for access. Applies to all Spaces on this server.
+# Server-wide default for space permission cascading (power levels and
+# role-based access). Individual Spaces can override this via the
+# `com.continuwuity.space.cascading` state event or the admin command
+# `!admin space roles enable/disable <space>`.
 #
 #space_permission_cascading = false

View file

@@ -1,225 +0,0 @@
# Space Permission Cascading — Design Document
**Date:** 2026-03-17
**Status:** Implemented
## Overview
Server-side feature that allows user rights in a Space to cascade down to its
direct child rooms. Includes power level cascading and role-based room access
control. Enabled via a server-wide configuration flag, disabled by default.
## Requirements
1. Power levels defined in a Space cascade to all direct child rooms (Space
always wins over per-room overrides).
2. Admins can define custom roles in a Space and assign them to users.
3. Child rooms can require one or more roles for access.
4. Enforcement is continuous — role revocation auto-kicks users from rooms they
no longer qualify for.
5. Users are auto-joined to all qualifying child rooms when they join a Space or
receive a new role.
6. Cascading applies to direct parent Space only; no nested cascade through
sub-spaces.
7. Feature is toggled by a single server-wide config flag
(`space_permission_cascading`), off by default.
## Configuration
```toml
# conduwuit-example.toml
# Enable space permission cascading (power levels and role-based access).
# When enabled, power levels cascade from Spaces to child rooms and rooms
# can require roles for access. Applies to all Spaces on this server.
# Default: false
space_permission_cascading = false
```
## Custom State Events
All events live in the Space room.
### `com.continuwuity.space.roles` (state key: `""`)
Defines the available roles for the Space. Two default roles (`admin` and `mod`)
are created automatically when a Space is first encountered with the feature
enabled.
```json
{
"roles": {
"admin": {
"description": "Space administrator",
"power_level": 100
},
"mod": {
"description": "Space moderator",
"power_level": 50
},
"nsfw": {
"description": "Access to NSFW content"
},
"vip": {
"description": "VIP member"
}
}
}
```
- `description` (string, required): Human-readable description.
- `power_level` (integer, optional): If present, users with this role receive
this power level in all child rooms. When a user holds multiple roles with
power levels, the highest value wins.
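The "highest value wins" rule can be sketched as follows. This is an illustrative sketch, not the server's actual implementation; the `RoleDefinition` stand-in mirrors only the `power_level` field of the event schema above:

```rust
use std::collections::BTreeMap;

/// Minimal stand-in for a role definition (only the field relevant here).
struct RoleDefinition {
    power_level: Option<i64>,
}

/// Highest power level among the roles a user holds, or `None` if no held
/// role defines one (roles without `power_level` grant access only).
fn compute_user_power_level(
    roles: &BTreeMap<String, RoleDefinition>,
    user_roles: &[String],
) -> Option<i64> {
    user_roles
        .iter()
        .filter_map(|name| roles.get(name).and_then(|role| role.power_level))
        .max()
}
```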
### `com.continuwuity.space.role.member` (state key: user ID)
Assigns roles to a user within the Space.
```json
{
"roles": ["nsfw", "vip"]
}
```
### `com.continuwuity.space.role.room` (state key: room ID)
Declares which roles a child room requires. A user must hold **all** listed
roles to access the room.
```json
{
"required_roles": ["nsfw"]
}
```
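The "must hold **all** listed roles" semantics reduce to a subset check. A minimal sketch (illustrative; the real code operates on the cached index rather than bare string sets):

```rust
use std::collections::HashSet;

/// A user qualifies for a room only if their role set contains every
/// required role. A room with no requirements admits all Space members,
/// since the empty set is a subset of any set.
fn roles_satisfy_requirements(
    user_roles: &HashSet<String>,
    required_roles: &HashSet<String>,
) -> bool {
    required_roles.is_subset(user_roles)
}
```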
## Enforcement Rules
All enforcement is skipped when `space_permission_cascading = false`.
### 1. Join gating
When a user attempts to join a room that is a direct child of a Space:
- Look up the room's `com.continuwuity.space.role.room` event in the parent Space.
- If the room has `required_roles`, check the user's `com.continuwuity.space.role.member`.
- Reject the join if the user is missing any required role.
### 2. Power level override
For every user in a child room of a Space:
- Look up their roles via `com.continuwuity.space.role.member` in the parent Space.
- For each role that has a `power_level`, take the highest value.
- Override the user's power level in the child room's `m.room.power_levels`.
- Reject attempts to manually set per-room power levels that conflict with
Space-granted levels.
### 3. Role revocation
When a `com.continuwuity.space.role.member` event is updated and a role is removed:
- Identify all child rooms that require the removed role.
- Auto-kick the user from rooms they no longer qualify for.
- Recalculate and update the user's power level in all child rooms.
### 4. Room requirement change
When a `com.continuwuity.space.role.room` event is updated with new requirements:
- Check all current members of the room.
- Auto-kick members who do not hold all newly required roles.
### 5. Auto-join on role grant
When a `com.continuwuity.space.role.member` event is updated and a role is added:
- Find all child rooms where the user now meets all required roles.
- Auto-join the user to qualifying rooms they are not already in.
This also applies when a user first joins the Space — they are auto-joined to
all child rooms they qualify for. Rooms with no role requirements auto-join all
Space members.
### 6. New child room
When a new `m.space.child` event is added to a Space:
- Auto-join all qualifying Space members to the new child room.
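Rules 3–5 amount to partitioning child rooms into auto-kick and auto-join candidates whenever a user's role set changes. A sketch under simplifying assumptions (rooms keyed by plain strings, not the server's actual types or function names):

```rust
use std::collections::{HashMap, HashSet};

/// After a role grant or revocation, split a Space's child rooms into
/// (kick candidates, join candidates) based on each room's required roles.
fn partition_rooms<'a>(
    rooms: &'a HashMap<String, HashSet<String>>, // room -> required roles
    user_roles: &HashSet<String>,
    currently_joined: &HashSet<String>,
) -> (Vec<&'a str>, Vec<&'a str>) {
    let mut kick = Vec::new();
    let mut join = Vec::new();
    for (room, required) in rooms {
        let qualifies = required.is_subset(user_roles);
        if qualifies && !currently_joined.contains(room) {
            // Rule 5: auto-join rooms the user now qualifies for.
            join.push(room.as_str());
        } else if !qualifies && currently_joined.contains(room) {
            // Rules 3/4: auto-kick from rooms the user no longer qualifies for.
            kick.push(room.as_str());
        }
    }
    (kick, join)
}
```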
## Caching & Indexing
The source of truth is always the state events. The server maintains an
in-memory index for fast enforcement lookups, following the same patterns as the
existing `roomid_spacehierarchy_cache`.
### Index structures
| Index | Source event |
|------------------------------|------------------------|
| Space → roles defined | `com.continuwuity.space.roles` |
| Space → user → roles | `com.continuwuity.space.role.member` |
| Space → room → required roles| `com.continuwuity.space.role.room` |
| Room → parent Spaces | `m.space.child` (reverse lookup) |
| Space → child rooms | `m.space.child` (forward index) |
### Cache invalidation triggers
| Event changed | Action |
|----------------------------|-----------------------------------------------------|
| `com.continuwuity.space.roles` | Refresh role definitions, revalidate all members |
| `com.continuwuity.space.role.member` | Refresh user's roles, trigger auto-join/kick |
| `com.continuwuity.space.role.room` | Refresh room requirements, trigger auto-join/kick |
| `m.space.child` added | Index new child, auto-join qualifying members |
| `m.space.child` removed | Remove from index (no auto-kick) |
| Server startup | Full rebuild from state events |
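The index tables above can be pictured as a struct of maps. Field names here are illustrative (the actual service internals may differ), and plain strings stand in for ruma ID types:

```rust
use std::collections::{BTreeMap, HashMap, HashSet};

/// Sketch of the in-memory enforcement index described above.
#[derive(Default)]
struct RolesCache {
    /// Space -> role name -> optional power level (`com.continuwuity.space.roles`).
    space_roles: HashMap<String, BTreeMap<String, Option<i64>>>,
    /// Space -> user -> held roles (`com.continuwuity.space.role.member`).
    member_roles: HashMap<String, HashMap<String, HashSet<String>>>,
    /// Space -> room -> required roles (`com.continuwuity.space.role.room`).
    room_requirements: HashMap<String, HashMap<String, HashSet<String>>>,
    /// Room -> parent Spaces (reverse lookup of `m.space.child`).
    room_to_space: HashMap<String, HashSet<String>>,
    /// Space -> child rooms (forward index of `m.space.child`).
    children: HashMap<String, HashSet<String>>,
}

impl RolesCache {
    /// Full invalidation, as on server startup before a rebuild from state.
    fn clear(&mut self) { *self = Self::default(); }

    /// Index a new `m.space.child` edge in both directions.
    fn add_child(&mut self, space: &str, room: &str) {
        self.children.entry(space.to_owned()).or_default().insert(room.to_owned());
        self.room_to_space.entry(room.to_owned()).or_default().insert(space.to_owned());
    }
}
```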
## Admin Room Commands
Roles are managed via the existing admin room interface, which sends the
appropriate state events under the hood and triggers enforcement.
```
!admin space roles list <space>
!admin space roles add <space> <role_name> [description] [power_level]
!admin space roles remove <space> <role_name>
!admin space roles assign <space> <user_id> <role_name>
!admin space roles revoke <space> <user_id> <role_name>
!admin space roles require <space> <room_id> <role_name>
!admin space roles unrequire <space> <room_id> <role_name>
!admin space roles user <space> <user_id>
!admin space roles room <space> <room_id>
```
## Architecture
**Approach:** Hybrid — state events for definition, database cache for
enforcement.
- State events are the source of truth and federate normally.
- The server maintains an in-memory cache/index for fast enforcement.
- Cache is invalidated on relevant state event changes and fully rebuilt on
startup.
- All enforcement hooks (join gating, PL override, auto-join, auto-kick) check
the feature flag first and no-op when disabled.
- Existing clients can manage roles via Developer Tools (custom state events).
The admin room commands provide a user-friendly interface.
## Scope
### In scope
- Server-wide feature flag
- Custom state events for role definition, assignment, and room requirements
- Power level cascading (Space always wins)
- Continuous enforcement (auto-join, auto-kick)
- Admin room commands
- In-memory caching with invalidation
- Default `admin` (PL 100) and `mod` (PL 50) roles
### Out of scope
- Client-side UI for role management
- Nested cascade through sub-spaces
- Per-space opt-in/opt-out (it is server-wide)
- Federation-specific logic beyond normal state event replication

File diff suppressed because it is too large

View file

@@ -1,37 +1,36 @@
 use std::fmt::Write;
 
 use clap::Subcommand;
-use conduwuit::{Err, Event, Result};
+use conduwuit::{Err, Event, Result, matrix::pdu::PduBuilder};
 use conduwuit_core::matrix::space_roles::{
-	RoleDefinition, SpaceRoleMemberEventContent, SpaceRoleRoomEventContent,
-	SpaceRolesEventContent, SPACE_ROLES_EVENT_TYPE, SPACE_ROLE_MEMBER_EVENT_TYPE,
-	SPACE_ROLE_ROOM_EVENT_TYPE,
+	RoleDefinition, SPACE_CASCADING_EVENT_TYPE, SPACE_ROLE_MEMBER_EVENT_TYPE,
+	SPACE_ROLE_ROOM_EVENT_TYPE, SPACE_ROLES_EVENT_TYPE, SpaceCascadingEventContent,
+	SpaceRoleMemberEventContent, SpaceRoleRoomEventContent, SpaceRolesEventContent,
 };
+use futures::StreamExt;
 use ruma::{OwnedRoomId, OwnedRoomOrAliasId, OwnedUserId, events::StateEventType};
 use serde_json::value::to_raw_value;
-use conduwuit::matrix::pdu::PduBuilder;
-use futures::StreamExt;
 
 use crate::{admin_command, admin_command_dispatch};
 
-macro_rules! require_enabled {
-	($self:expr) => {
-		if !$self.services.rooms.roles.is_enabled() {
-			return $self
-				.write_str(
-					"Space permission cascading is disabled. \
-					 Enable it with `space_permission_cascading = true` in your config.",
-				)
-				.await;
-		}
-	};
-}
-
 macro_rules! resolve_space {
 	($self:expr, $space:expr) => {{
-		require_enabled!($self);
 		let space_id = $self.services.rooms.alias.resolve(&$space).await?;
+		if !$self
+			.services
+			.rooms
+			.roles
+			.is_enabled_for_space(&space_id)
+			.await
+		{
+			return $self
+				.write_str(
+					"Space permission cascading is disabled for this Space. Enable it \
+					 server-wide with `space_permission_cascading = true` in your config, or \
+					 per-Space with `!admin space roles enable <space>`.",
+				)
+				.await;
+		}
 		if !matches!(
 			$self
 				.services
@@ -51,10 +50,11 @@ macro_rules! custom_state_pdu {
 	($event_type:expr, $state_key:expr, $content:expr) => {
 		PduBuilder {
 			event_type: $event_type.to_owned().into(),
-			content: to_raw_value($content)
-				.map_err(|e| conduwuit::Error::Err(format!(
-					"Failed to serialize custom state event content: {e}"
-				).into()))?,
+			content: to_raw_value($content).map_err(|e| {
+				conduwuit::Error::Err(
+					format!("Failed to serialize custom state event content: {e}").into(),
+				)
+			})?,
 			state_key: Some($state_key.to_owned().into()),
 			..PduBuilder::default()
 		}
@@ -116,6 +116,21 @@ pub enum SpaceRolesCommand {
 		space: OwnedRoomOrAliasId,
 		room_id: OwnedRoomId,
 	},
+	/// Enable space permission cascading for a specific space (overrides
+	/// server config)
+	Enable {
+		space: OwnedRoomOrAliasId,
+	},
+	/// Disable space permission cascading for a specific space (overrides
+	/// server config)
+	Disable {
+		space: OwnedRoomOrAliasId,
+	},
+	/// Show whether cascading is enabled for a space and the source (server
+	/// default or per-space override)
+	Status {
+		space: OwnedRoomOrAliasId,
+	},
 }
 
 #[admin_command]
@@ -244,9 +259,7 @@ async fn remove(&self, space: OwnedRoomOrAliasId, role_name: String) -> Result {
 	for (state_key, event_id) in user_entries {
 		if let Ok(pdu) = self.services.rooms.timeline.get_pdu(&event_id).await {
-			if let Ok(mut member_content) =
-				pdu.get_content::<SpaceRoleMemberEventContent>()
-			{
+			if let Ok(mut member_content) = pdu.get_content::<SpaceRoleMemberEventContent>() {
 				if member_content.roles.contains(&role_name) {
 					member_content.roles.retain(|r| r != &role_name);
 					self.services
@@ -281,9 +294,7 @@ async fn remove(&self, space: OwnedRoomOrAliasId, role_name: String) -> Result {
 	for (state_key, event_id) in room_entries {
 		if let Ok(pdu) = self.services.rooms.timeline.get_pdu(&event_id).await {
-			if let Ok(mut room_content) =
-				pdu.get_content::<SpaceRoleRoomEventContent>()
-			{
+			if let Ok(mut room_content) = pdu.get_content::<SpaceRoleRoomEventContent>() {
 				if room_content.required_roles.contains(&role_name) {
 					room_content.required_roles.retain(|r| r != &role_name);
 					self.services
@@ -319,7 +330,6 @@ async fn assign(
 ) -> Result {
 	let space_id = resolve_space!(self, space);
 
-	// Read current role definitions to validate the role name
 	let roles_event_type = StateEventType::from(SPACE_ROLES_EVENT_TYPE.to_owned());
 	let role_defs: SpaceRolesEventContent = self
 		.services
@@ -363,9 +373,7 @@ async fn assign(
 		)
 		.await?;
 
-	self.write_str(&format!(
-		"Assigned role '{role_name}' to {user_id} in space {space_id}."
-	))
+	self.write_str(&format!("Assigned role '{role_name}' to {user_id} in space {space_id}."))
 		.await
 }
@@ -408,9 +416,7 @@ async fn revoke(
 		)
 		.await?;
 
-	self.write_str(&format!(
-		"Revoked role '{role_name}' from {user_id} in space {space_id}."
-	))
+	self.write_str(&format!("Revoked role '{role_name}' from {user_id} in space {space_id}."))
 		.await
 }
@@ -423,7 +429,6 @@ async fn require(
 ) -> Result {
 	let space_id = resolve_space!(self, space);
 
-	// Read current role definitions to validate the role name
 	let roles_event_type = StateEventType::from(SPACE_ROLES_EVENT_TYPE.to_owned());
 	let role_defs: SpaceRolesEventContent = self
 		.services
@@ -540,10 +545,9 @@ async fn user(&self, space: OwnedRoomOrAliasId, user_id: OwnedUserId) -> Result
 			))
 			.await
 		},
-		| _ => {
-			self.write_str(&format!("User {user_id} has no roles in space {space_id}."))
-				.await
-		},
+		| _ =>
+			self.write_str(&format!("User {user_id} has no roles in space {space_id}."))
+				.await,
 	}
 }
@@ -569,11 +573,123 @@ async fn room(&self, space: OwnedRoomOrAliasId, room_id: OwnedRoomId) -> Result
 			))
 			.await
 		},
-		| _ => {
-			self.write_str(&format!(
-				"Room {room_id} has no role requirements in space {space_id}."
-			))
-			.await
-		},
+		| _ =>
+			self.write_str(&format!(
+				"Room {room_id} has no role requirements in space {space_id}."
+			))
+			.await,
 	}
 }
+
+#[admin_command]
+async fn enable(&self, space: OwnedRoomOrAliasId) -> Result {
+	let space_id = self.services.rooms.alias.resolve(&space).await?;
+	if !matches!(
+		self.services
+			.rooms
+			.state_accessor
+			.get_room_type(&space_id)
+			.await,
+		Ok(ruma::room::RoomType::Space)
+	) {
+		return Err!("The specified room is not a Space.");
+	}
+
+	let content = SpaceCascadingEventContent { enabled: true };
+	let state_lock = self.services.rooms.state.mutex.lock(&space_id).await;
+	let server_user = &self.services.globals.server_user;
+	self.services
+		.rooms
+		.timeline
+		.build_and_append_pdu(
+			custom_state_pdu!(SPACE_CASCADING_EVENT_TYPE, "", &content),
+			server_user,
+			Some(&space_id),
+			&state_lock,
+		)
+		.await?;
+
+	self.services
+		.rooms
+		.roles
+		.ensure_default_roles(&space_id)
+		.await?;
+
+	self.write_str(&format!("Space permission cascading enabled for {space_id}."))
+		.await
+}
+
+#[admin_command]
+async fn disable(&self, space: OwnedRoomOrAliasId) -> Result {
+	let space_id = self.services.rooms.alias.resolve(&space).await?;
+	if !matches!(
+		self.services
+			.rooms
+			.state_accessor
+			.get_room_type(&space_id)
+			.await,
+		Ok(ruma::room::RoomType::Space)
+	) {
+		return Err!("The specified room is not a Space.");
+	}
+
+	let content = SpaceCascadingEventContent { enabled: false };
+	let state_lock = self.services.rooms.state.mutex.lock(&space_id).await;
+	let server_user = &self.services.globals.server_user;
+	self.services
+		.rooms
+		.timeline
+		.build_and_append_pdu(
+			custom_state_pdu!(SPACE_CASCADING_EVENT_TYPE, "", &content),
+			server_user,
+			Some(&space_id),
+			&state_lock,
+		)
+		.await?;
+
+	self.write_str(&format!("Space permission cascading disabled for {space_id}."))
+		.await
+}
+
+#[admin_command]
+async fn status(&self, space: OwnedRoomOrAliasId) -> Result {
+	let space_id = self.services.rooms.alias.resolve(&space).await?;
+	if !matches!(
+		self.services
+			.rooms
+			.state_accessor
+			.get_room_type(&space_id)
+			.await,
+		Ok(ruma::room::RoomType::Space)
+	) {
+		return Err!("The specified room is not a Space.");
+	}
+
+	let global_default = self.services.rooms.roles.is_enabled();
+	let cascading_event_type = StateEventType::from(SPACE_CASCADING_EVENT_TYPE.to_owned());
+	let per_space_override: Option<bool> = self
+		.services
+		.rooms
+		.state_accessor
+		.room_state_get_content::<SpaceCascadingEventContent>(
+			&space_id,
+			&cascading_event_type,
+			"",
+		)
+		.await
+		.ok()
+		.map(|c| c.enabled);
+
+	let effective = per_space_override.unwrap_or(global_default);
+	let source = match per_space_override {
+		| Some(v) => format!("per-Space override (enabled: {v})"),
+		| None => format!("server default (space_permission_cascading: {global_default})"),
+	};
+
+	self.write_str(&format!(
+		"Cascading status for {space_id}:\n- Effective: **{effective}**\n- Source: {source}"
+	))
+	.await
+}

View file

@@ -347,9 +347,7 @@ pub async fn join_room_by_id_helper(
 		}
 	}
 
-	// Space permission cascading: check if user has required roles
-	// User must qualify in at least one parent space (if any exist)
-	if services.rooms.roles.is_enabled() {
+	{
 		let parent_spaces = services.rooms.roles.get_parent_spaces(room_id).await;
 		if !parent_spaces.is_empty() {
 			let mut qualifies_in_any = false;

View file

@@ -603,9 +603,10 @@ pub struct Config {
 	#[serde(default)]
 	pub suspend_on_register: bool,
 
-	/// Enable space permission cascading (power levels and role-based access).
-	/// When enabled, power levels cascade from Spaces to child rooms and rooms
-	/// can require roles for access. Applies to all Spaces on this server.
+	/// Server-wide default for space permission cascading (power levels and
+	/// role-based access). Individual Spaces can override this via the
+	/// `com.continuwuity.space.cascading` state event or the admin command
+	/// `!admin space roles enable/disable <space>`.
 	///
 	/// default: false
 	#[serde(default)]

View file

@@ -1,56 +1,39 @@
-//! Custom state event content types for space permission cascading.
-//!
-//! These events live in Space rooms and define roles, user-role assignments,
-//! and room-role requirements.
-
 use std::collections::BTreeMap;
 
 use serde::{Deserialize, Serialize};
 
-/// Custom event type for space role definitions.
 pub const SPACE_ROLES_EVENT_TYPE: &str = "com.continuwuity.space.roles";
-/// Custom event type for per-user role assignments within a space.
 pub const SPACE_ROLE_MEMBER_EVENT_TYPE: &str = "com.continuwuity.space.role.member";
-/// Custom event type for per-room role requirements within a space.
 pub const SPACE_ROLE_ROOM_EVENT_TYPE: &str = "com.continuwuity.space.role.room";
+pub const SPACE_CASCADING_EVENT_TYPE: &str = "com.continuwuity.space.cascading";
 
-/// Content for `com.continuwuity.space.roles` (state key: "")
-///
-/// Defines available roles for a Space.
 #[derive(Clone, Debug, Default, Deserialize, Serialize, PartialEq, Eq)]
 pub struct SpaceRolesEventContent {
 	pub roles: BTreeMap<String, RoleDefinition>,
 }
 
-/// A single role definition within a Space.
 #[derive(Clone, Debug, Deserialize, Serialize, PartialEq, Eq)]
 pub struct RoleDefinition {
 	pub description: String,
-	/// If present, users with this role receive this power level in child
-	/// rooms.
 	#[serde(skip_serializing_if = "Option::is_none")]
 	pub power_level: Option<i64>,
 }
 
-/// Content for `com.continuwuity.space.role.member` (state key: user ID)
-///
-/// Assigns roles to a user within a Space.
 #[derive(Clone, Debug, Default, Deserialize, Serialize, PartialEq, Eq)]
 pub struct SpaceRoleMemberEventContent {
 	pub roles: Vec<String>,
 }
 
-/// Content for `com.continuwuity.space.role.room` (state key: room ID)
-///
-/// Declares which roles a child room requires for access.
 #[derive(Clone, Debug, Default, Deserialize, Serialize, PartialEq, Eq)]
 pub struct SpaceRoleRoomEventContent {
 	pub required_roles: Vec<String>,
 }
 
+#[derive(Clone, Debug, Deserialize, Serialize, PartialEq, Eq)]
+pub struct SpaceCascadingEventContent {
+	pub enabled: bool,
+}
+
 #[cfg(test)]
 mod tests {
 	use super::*;
@@ -58,20 +41,14 @@ mod tests {
 	#[test]
 	fn serialize_space_roles() {
 		let mut roles = BTreeMap::new();
-		roles.insert(
-			"admin".to_owned(),
-			RoleDefinition {
-				description: "Space administrator".to_owned(),
-				power_level: Some(100),
-			},
-		);
-		roles.insert(
-			"nsfw".to_owned(),
-			RoleDefinition {
-				description: "NSFW access".to_owned(),
-				power_level: None,
-			},
-		);
+		roles.insert("admin".to_owned(), RoleDefinition {
+			description: "Space administrator".to_owned(),
+			power_level: Some(100),
+		});
+		roles.insert("nsfw".to_owned(), RoleDefinition {
+			description: "NSFW access".to_owned(),
+			power_level: None,
+		});
 
 		let content = SpaceRolesEventContent { roles };
 		let json = serde_json::to_string(&content).unwrap();
 		let deserialized: SpaceRolesEventContent = serde_json::from_str(&json).unwrap();
@@ -92,9 +69,7 @@ mod tests {
 	#[test]
 	fn serialize_role_room() {
-		let content = SpaceRoleRoomEventContent {
-			required_roles: vec!["nsfw".to_owned()],
-		};
+		let content = SpaceRoleRoomEventContent { required_roles: vec!["nsfw".to_owned()] };
 		let json = serde_json::to_string(&content).unwrap();
 		let deserialized: SpaceRoleRoomEventContent = serde_json::from_str(&json).unwrap();
 		assert_eq!(deserialized.required_roles, vec!["nsfw"]);
@@ -142,9 +117,7 @@ mod tests {
 	#[test]
 	fn empty_room_requirements() {
-		let content = SpaceRoleRoomEventContent {
-			required_roles: vec![],
-		};
+		let content = SpaceRoleRoomEventContent { required_roles: vec![] };
 		let json = serde_json::to_string(&content).unwrap();
 		let deserialized: SpaceRoleRoomEventContent = serde_json::from_str(&json).unwrap();
 		assert!(deserialized.required_roles.is_empty());

View file

@@ -7,7 +7,7 @@
 use std::collections::{BTreeMap, HashMap, HashSet};
 
 use conduwuit_core::matrix::space_roles::RoleDefinition;
-use ruma::{room_id, user_id, OwnedRoomId, OwnedUserId};
+use ruma::{OwnedRoomId, OwnedUserId, room_id, user_id};
 
 use super::tests::{make_requirements, make_roles, make_user_roles};
@@ -75,10 +75,7 @@ impl MockCache {
 		room: &OwnedRoomId,
 		user: &OwnedUserId,
 	) -> bool {
-		let reqs = self
-			.room_requirements
-			.get(space)
-			.and_then(|r| r.get(room));
+		let reqs = self.room_requirements.get(space).and_then(|r| r.get(room));
 
 		match reqs {
 			| None => true,
@@ -117,10 +114,7 @@ fn cache_populate_and_lookup() {
 	let child = room_id!("!child:example.com").to_owned();
 	let alice = user_id!("@alice:example.com").to_owned();
 
-	cache.add_space(
-		space.clone(),
-		make_roles(&[("admin", Some(100)), ("nsfw", None)]),
-	);
+	cache.add_space(space.clone(), make_roles(&[("admin", Some(100)), ("nsfw", None)]));
 	cache.add_child(&space, child.clone());
 	cache.assign_role(&space, alice.clone(), "nsfw".to_owned());
 	cache.set_room_requirements(&space, child.clone(), make_requirements(&["nsfw"]));
@@ -154,21 +148,14 @@ fn cache_invalidation_on_requirement_change() {
 	let child = room_id!("!room:example.com").to_owned();
 	let alice = user_id!("@alice:example.com").to_owned();
 
-	cache.add_space(
-		space.clone(),
-		make_roles(&[("nsfw", None), ("vip", None)]),
-	);
+	cache.add_space(space.clone(), make_roles(&[("nsfw", None), ("vip", None)]));
 	cache.assign_role(&space, alice.clone(), "vip".to_owned());
 	cache.set_room_requirements(&space, child.clone(), make_requirements(&["vip"]));
 
 	assert!(cache.user_qualifies(&space, &child, &alice));
 
 	// Add nsfw requirement
-	cache.set_room_requirements(
-		&space,
-		child.clone(),
-		make_requirements(&["vip", "nsfw"]),
-	);
+	cache.set_room_requirements(&space, child.clone(), make_requirements(&["vip", "nsfw"]));
 
 	assert!(!cache.user_qualifies(&space, &child, &alice));
 }
@@ -177,11 +164,7 @@ fn cache_clear_empties_all() {
 	let mut cache = MockCache::new();
 	let space = room_id!("!space:example.com").to_owned();
 	cache.add_space(space.clone(), make_roles(&[("admin", Some(100))]));
-	cache.assign_role(
-		&space,
-		user_id!("@alice:example.com").to_owned(),
-		"admin".to_owned(),
-	);
+	cache.assign_role(&space, user_id!("@alice:example.com").to_owned(), "admin".to_owned());
 
 	cache.clear();
@@ -204,7 +187,10 @@ fn cache_reverse_lookup_consistency() {
 	assert!(cache.room_to_space.get(&child1).unwrap().contains(&space));
 	assert!(cache.room_to_space.get(&child2).unwrap().contains(&space));
 	assert!(
-		cache.room_to_space.get(room_id!("!unknown:example.com")).is_none()
+		cache
+			.room_to_space
+			.get(room_id!("!unknown:example.com"))
+			.is_none()
 	);
 }
@@ -214,10 +200,7 @@ fn cache_power_level_updates_on_role_change() {
 	let space = room_id!("!space:example.com").to_owned();
 	let alice = user_id!("@alice:example.com").to_owned();
 
-	cache.add_space(
-		space.clone(),
-		make_roles(&[("admin", Some(100)), ("mod", Some(50))]),
-	);
+	cache.add_space(space.clone(), make_roles(&[("admin", Some(100)), ("mod", Some(50))]));
 
 	// No roles -> no PL
 	assert_eq!(cache.get_power_level(&space, &alice), None);

View file

@@ -2,8 +2,10 @@ use std::collections::{HashMap, HashSet};
 
 use ruma::{room_id, user_id};
 
-use super::{compute_user_power_level, roles_satisfy_requirements};
-use super::tests::{make_requirements, make_roles, make_user_roles};
+use super::{
+	compute_user_power_level, roles_satisfy_requirements,
+	tests::{make_requirements, make_roles, make_user_roles},
+};
 
 #[test]
 fn scenario_user_gains_and_loses_access() {
@@ -53,11 +55,7 @@ fn scenario_multiple_rooms_different_requirements() {
 
 #[test]
 fn scenario_power_level_cascading_highest_wins() {
-	let roles = make_roles(&[
-		("admin", Some(100)),
-		("mod", Some(50)),
-		("helper", Some(25)),
-	]);
+	let roles = make_roles(&[("admin", Some(100)), ("mod", Some(50)), ("helper", Some(25))]);
 
 	let admin_mod = make_user_roles(&["admin", "mod"]);
 	assert_eq!(compute_user_power_level(&roles, &admin_mod), Some(100));
@@ -114,10 +112,7 @@ fn scenario_identify_kick_candidates_after_role_revocation() {
 	rooms.insert("general".to_owned(), HashSet::new());
 	rooms.insert("nsfw-chat".to_owned(), make_requirements(&["nsfw"]));
 	rooms.insert("vip-lounge".to_owned(), make_requirements(&["vip"]));
-	rooms.insert(
-		"nsfw-vip".to_owned(),
-		make_requirements(&["nsfw", "vip"]),
-	);
+	rooms.insert("nsfw-vip".to_owned(), make_requirements(&["nsfw", "vip"]));
 
 	let kick_from: Vec<_> = rooms
 		.iter()

View file

@@ -13,15 +13,13 @@ use std::{
 use async_trait::async_trait;
 use conduwuit::{
-	Event, Result, Server, debug, debug_warn, implement, info,
-	matrix::pdu::PduBuilder,
-	warn,
+	Event, Result, Server, debug, debug_warn, implement, info, matrix::pdu::PduBuilder, warn,
 };
 use conduwuit_core::{
 	matrix::space_roles::{
-		RoleDefinition, SpaceRoleMemberEventContent, SpaceRoleRoomEventContent,
-		SpaceRolesEventContent, SPACE_ROLES_EVENT_TYPE, SPACE_ROLE_MEMBER_EVENT_TYPE,
-		SPACE_ROLE_ROOM_EVENT_TYPE,
+		RoleDefinition, SPACE_CASCADING_EVENT_TYPE, SPACE_ROLE_MEMBER_EVENT_TYPE,
+		SPACE_ROLE_ROOM_EVENT_TYPE, SPACE_ROLES_EVENT_TYPE, SpaceCascadingEventContent,
+		SpaceRoleMemberEventContent, SpaceRoleRoomEventContent, SpaceRolesEventContent,
 	},
 	utils::{
 		future::TryExtExt,
@@ -30,7 +28,7 @@ use conduwuit_core::{
 };
 use futures::{StreamExt, TryFutureExt};
 use ruma::{
-	Int, OwnedEventId, OwnedRoomId, OwnedUserId, RoomId, UserId, room::RoomType,
+	Int, OwnedEventId, OwnedRoomId, OwnedUserId, RoomId, UserId,
 	events::{
 		StateEventType,
 		room::{
@@ -39,6 +37,7 @@ use ruma::{
 		},
 		space::child::SpaceChildEventContent,
 	},
+	room::RoomType,
 };
 use serde_json::value::to_raw_value;
 use tokio::sync::RwLock;
@@ -130,10 +129,6 @@ impl crate::Service for Service {
 	}
 	async fn worker(self: Arc<Self>) -> Result<()> {
-		if !self.is_enabled() {
-			return Ok(());
-		}
 		info!("Rebuilding space roles cache from all known rooms");
 		let mut space_count: usize = 0;
@@ -148,6 +143,11 @@ impl crate::Service for Service {
 		for room_id in &room_ids {
 			match self.services.state_accessor.get_room_type(room_id).await {
 				| Ok(RoomType::Space) => {
+					// Check per-Space override — skip spaces where cascading is
+					// disabled
+					if !self.is_enabled_for_space(room_id).await {
+						continue;
+					}
 					debug!(room_id = %room_id, "Populating space roles cache");
 					self.populate_space(room_id).await;
 					space_count = space_count.saturating_add(1);
@@ -163,22 +163,30 @@ impl crate::Service for Service {
 	fn name(&self) -> &str { crate::service::make_name(std::module_path!()) }
 }
-/// Check whether space permission cascading is enabled in the server config.
 #[implement(Service)]
 pub fn is_enabled(&self) -> bool { self.server.config.space_permission_cascading }
-/// Ensure a Space has the default admin/mod roles defined.
-///
-/// Checks whether a `com.continuwuity.space.roles` state event exists in the given space.
-/// If not, creates default roles (admin at PL 100, mod at PL 50) and sends
-/// the state event as the server user.
+#[implement(Service)]
+pub async fn is_enabled_for_space(&self, space_id: &RoomId) -> bool {
+	let cascading_event_type = StateEventType::from(SPACE_CASCADING_EVENT_TYPE.to_owned());
+	if let Ok(content) = self
+		.services
+		.state_accessor
+		.room_state_get_content::<SpaceCascadingEventContent>(space_id, &cascading_event_type, "")
+		.await
+	{
+		return content.enabled;
+	}
+	self.server.config.space_permission_cascading
+}
 #[implement(Service)]
 pub async fn ensure_default_roles(&self, space_id: &RoomId) -> Result {
-	if !self.is_enabled() {
+	if !self.is_enabled_for_space(space_id).await {
 		return Ok(());
 	}
-	// Check if com.continuwuity.space.roles already exists
 	let roles_event_type = StateEventType::from(SPACE_ROLES_EVENT_TYPE.to_owned());
 	if self
 		.services
@@ -190,22 +198,15 @@ pub async fn ensure_default_roles(&self, space_id: &RoomId) -> Result {
 		return Ok(());
 	}
-	// Create default roles
 	let mut roles = BTreeMap::new();
-	roles.insert(
-		"admin".to_owned(),
-		RoleDefinition {
-			description: "Space administrator".to_owned(),
-			power_level: Some(100),
-		},
-	);
-	roles.insert(
-		"mod".to_owned(),
-		RoleDefinition {
-			description: "Space moderator".to_owned(),
-			power_level: Some(50),
-		},
-	);
+	roles.insert("admin".to_owned(), RoleDefinition {
+		description: "Space administrator".to_owned(),
+		power_level: Some(100),
+	});
+	roles.insert("mod".to_owned(), RoleDefinition {
+		description: "Space moderator".to_owned(),
+		power_level: Some(50),
+	});
 	let content = SpaceRolesEventContent { roles };
@@ -214,8 +215,11 @@ pub async fn ensure_default_roles(&self, space_id: &RoomId) -> Result {
 	let pdu = PduBuilder {
 		event_type: ruma::events::TimelineEventType::from(SPACE_ROLES_EVENT_TYPE.to_owned()),
-		content: to_raw_value(&content)
-			.map_err(|e| conduwuit::Error::Err(format!("Failed to serialize SpaceRolesEventContent: {e}").into()))?,
+		content: to_raw_value(&content).map_err(|e| {
+			conduwuit::Error::Err(
+				format!("Failed to serialize SpaceRolesEventContent: {e}").into(),
+			)
+		})?,
 		state_key: Some(String::new().into()),
 		..PduBuilder::default()
 	};
@@ -230,18 +234,15 @@ pub async fn ensure_default_roles(&self, space_id: &RoomId) -> Result {
 	Ok(())
 }
-/// Populate the in-memory caches from state events for a single Space room.
-///
-/// Reads `com.continuwuity.space.roles`, `com.continuwuity.space.role.member`, `com.continuwuity.space.role.room`, and
-/// `m.space.child` state events and indexes them for fast lookup.
 #[implement(Service)]
 pub async fn populate_space(&self, space_id: &RoomId) {
-	if !self.is_enabled() {
+	if !self.is_enabled_for_space(space_id).await {
 		return;
 	}
-	// Check cache capacity — if over limit, clear and let spaces repopulate on demand
-	if self.roles.read().await.len() >= usize::try_from(self.server.config.space_roles_cache_capacity).unwrap_or(usize::MAX) {
+	if self.roles.read().await.len()
+		>= usize::try_from(self.server.config.space_roles_cache_capacity).unwrap_or(usize::MAX)
+	{
 		self.roles.write().await.clear();
 		self.user_roles.write().await.clear();
 		self.room_requirements.write().await.clear();
@@ -250,7 +251,6 @@ pub async fn populate_space(&self, space_id: &RoomId) {
 		debug_warn!("Space roles cache exceeded capacity, cleared");
 	}
-	// 1. Read com.continuwuity.space.roles (state key: "")
 	let roles_event_type = StateEventType::from(SPACE_ROLES_EVENT_TYPE.to_owned());
 	if let Ok(content) = self
 		.services
@@ -264,14 +264,8 @@ pub async fn populate_space(&self, space_id: &RoomId) {
 			.insert(space_id.to_owned(), content.roles);
 	}
-	// 2. Read all com.continuwuity.space.role.member state events (state key: user ID)
 	let member_event_type = StateEventType::from(SPACE_ROLE_MEMBER_EVENT_TYPE.to_owned());
-	let shortstatehash = match self
-		.services
-		.state
-		.get_room_shortstatehash(space_id)
-		.await
-	{
+	let shortstatehash = match self.services.state.get_room_shortstatehash(space_id).await {
 		| Ok(hash) => hash,
 		| Err(e) => {
 			debug_warn!(space_id = %space_id, error = ?e, "Failed to get shortstatehash, cache may be stale");
@@ -309,7 +303,6 @@ pub async fn populate_space(&self, space_id: &RoomId) {
 		.await
 		.insert(space_id.to_owned(), user_roles_map);
-	// 3. Read all com.continuwuity.space.role.room state events (state key: room ID)
 	let room_event_type = StateEventType::from(SPACE_ROLE_ROOM_EVENT_TYPE.to_owned());
 	let mut room_reqs_map: HashMap<OwnedRoomId, HashSet<String>> = HashMap::new();
@@ -341,7 +334,6 @@ pub async fn populate_space(&self, space_id: &RoomId) {
 		.await
 		.insert(space_id.to_owned(), room_reqs_map);
-	// 4. Read all m.space.child state events → build room_to_space reverse index
 	let mut child_rooms: Vec<OwnedRoomId> = Vec::new();
 	self.services
@@ -373,16 +365,12 @@ pub async fn populate_space(&self, space_id: &RoomId) {
 		})
 		.await;
-	// Lock ordering: room_to_space before space_to_rooms.
-	// This order must be consistent to avoid deadlocks.
 	{
 		let mut room_to_space = self.room_to_space.write().await;
-		// Remove this space from all existing entries
 		room_to_space.retain(|_, parents| {
 			parents.remove(space_id);
 			!parents.is_empty()
 		});
-		// Insert fresh children
 		for child_room_id in &child_rooms {
 			room_to_space
 				.entry(child_room_id.clone())
@@ -391,7 +379,6 @@ pub async fn populate_space(&self, space_id: &RoomId) {
 		}
 	}
-	// Update forward index (after room_to_space to maintain lock ordering)
 	{
 		let mut space_to_rooms = self.space_to_rooms.write().await;
 		space_to_rooms.insert(space_id.to_owned(), child_rooms.into_iter().collect())
@@ -399,7 +386,6 @@ pub async fn populate_space(&self, space_id: &RoomId) {
 	}
 }
-/// Compute the maximum power level from a user's assigned roles.
 #[must_use]
 pub fn compute_user_power_level<S: ::std::hash::BuildHasher>(
 	role_defs: &BTreeMap<String, RoleDefinition>,
@@ -411,7 +397,6 @@ pub fn compute_user_power_level<S: ::std::hash::BuildHasher>(
 		.max()
 }
-/// Check if a set of assigned roles satisfies all requirements.
 #[must_use]
 pub fn roles_satisfy_requirements<S: ::std::hash::BuildHasher>(
 	required: &HashSet<String, S>,
@@ -420,20 +405,20 @@ pub fn roles_satisfy_requirements<S: ::std::hash::BuildHasher>(
 	required.iter().all(|r| assigned.contains(r))
 }
-/// Get a user's effective power level from Space roles.
-/// Returns None if user has no roles with power levels.
 #[implement(Service)]
-pub async fn get_user_power_level(
-	&self,
-	space_id: &RoomId,
-	user_id: &UserId,
-) -> Option<i64> {
+pub async fn get_user_power_level(&self, space_id: &RoomId, user_id: &UserId) -> Option<i64> {
 	let role_defs = { self.roles.read().await.get(space_id).cloned()? };
-	let user_assigned = { self.user_roles.read().await.get(space_id)?.get(user_id).cloned()? };
+	let user_assigned = {
+		self.user_roles
+			.read()
+			.await
+			.get(space_id)?
+			.get(user_id)
+			.cloned()?
+	};
 	compute_user_power_level(&role_defs, &user_assigned)
 }
-/// Check if a user has all required roles for a room.
 #[implement(Service)]
 pub async fn user_qualifies_for_room(
 	&self,
@@ -467,25 +452,25 @@ pub async fn user_qualifies_for_room(
 	roles_satisfy_requirements(&required, &user_assigned)
 }
-/// Get the parent Spaces of a child room, if any.
-///
-/// Only direct parent spaces are returned. Nested sub-space cascading
-/// is not supported (see design doc requirement 6).
 #[implement(Service)]
 pub async fn get_parent_spaces(&self, room_id: &RoomId) -> Vec<OwnedRoomId> {
-	if !self.is_enabled() {
-		return Vec::new();
-	}
-	self.room_to_space
+	let all_parents: Vec<OwnedRoomId> = self
+		.room_to_space
 		.read()
 		.await
 		.get(room_id)
 		.map(|set| set.iter().cloned().collect())
-		.unwrap_or_default()
+		.unwrap_or_default();
+	let mut enabled_parents = Vec::new();
+	for parent in all_parents {
+		if self.is_enabled_for_space(&parent).await {
+			enabled_parents.push(parent);
+		}
+	}
+	enabled_parents
 }
-/// Get all child rooms of a Space from the forward index.
 #[implement(Service)]
 pub async fn get_child_rooms(&self, space_id: &RoomId) -> Vec<OwnedRoomId> {
 	self.space_to_rooms
@@ -496,15 +481,12 @@ pub async fn get_child_rooms(&self, space_id: &RoomId) -> Vec<OwnedRoomId> {
 		.unwrap_or_default()
 }
-/// Synchronize power levels in a child room based on Space roles.
-/// This overrides per-room power levels with Space-granted levels.
 #[implement(Service)]
 pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Result {
-	if !self.is_enabled() {
+	if !self.is_enabled_for_space(space_id).await {
 		return Ok(());
 	}
-	// Check if server user is joined to the room
 	let server_user = self.services.globals.server_user.as_ref();
 	if !self
 		.services
@@ -516,7 +498,6 @@ pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Re
 		return Ok(());
 	}
-	// 1. Get current power levels for the room
 	let mut power_levels_content: RoomPowerLevelsEventContent = self
 		.services
 		.state_accessor
@@ -524,7 +505,6 @@ pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Re
 		.await
 		.unwrap_or_default();
-	// 2. Get all members of the room
 	let members: Vec<OwnedUserId> = self
 		.services
 		.state_cache
@@ -533,7 +513,6 @@ pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Re
 		.collect()
 		.await;
-	// 3. For each member, check their space role power level
 	let mut changed = false;
 	for user_id in &members {
 		if user_id == server_user {
@@ -547,7 +526,6 @@ pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Re
 				.copied()
 				.unwrap_or(power_levels_content.users_default);
-			// 4. If the space PL differs from room PL, update it
 			if current_pl != space_pl_int {
 				power_levels_content
 					.users
@@ -555,7 +533,6 @@ pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Re
 				changed = true;
 			}
 		} else {
-			// Check if any other parent space manages this user's PL
 			let parents = self.get_parent_spaces(room_id).await;
 			let mut managed_by_other = false;
 			for parent in &parents {
@@ -575,7 +552,6 @@ pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Re
 		}
 	}
-	// 5. If changed, send updated power levels event
 	if changed {
 		let state_lock = self.services.state.mutex.lock(room_id).await;
@@ -593,32 +569,20 @@ pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Re
 	Ok(())
 }
-/// Auto-join a user to all qualifying child rooms of a Space.
-///
-/// Iterates over all child rooms in the `space_to_rooms` forward index,
-/// checks whether the user qualifies via their assigned roles, and
-/// force-joins them if they are not already a member.
 #[implement(Service)]
-pub async fn auto_join_qualifying_rooms(
-	&self,
-	space_id: &RoomId,
-	user_id: &UserId,
-) -> Result {
-	if !self.is_enabled() {
+pub async fn auto_join_qualifying_rooms(&self, space_id: &RoomId, user_id: &UserId) -> Result {
+	if !self.is_enabled_for_space(space_id).await {
 		return Ok(());
 	}
-	// Skip server user — it doesn't need role-based auto-join
 	let server_user = self.services.globals.server_user.as_ref();
 	if user_id == server_user {
 		return Ok(());
 	}
-	// Get all child rooms via the space_to_rooms forward index
 	let child_rooms = self.get_child_rooms(space_id).await;
 	for child_room_id in &child_rooms {
-		// Skip if already joined
 		if self
 			.services
 			.state_cache
@@ -628,7 +592,6 @@ pub async fn auto_join_qualifying_rooms(
 			continue;
 		}
-		// Check if user qualifies
 		if !self
 			.user_qualifies_for_room(space_id, child_room_id, user_id)
 			.await
@@ -636,7 +599,6 @@ pub async fn auto_join_qualifying_rooms(
 			continue;
 		}
-		// Check if server user is joined to the child room
 		if !self
 			.services
 			.state_cache
@@ -649,7 +611,6 @@ pub async fn auto_join_qualifying_rooms(
 		let state_lock = self.services.state.mutex.lock(child_room_id).await;
-		// First invite the user (server user as sender)
 		if let Err(e) = self
 			.services
 			.timeline
@@ -668,7 +629,6 @@ pub async fn auto_join_qualifying_rooms(
 			continue;
 		}
-		// Then join (user as sender)
 		if let Err(e) = self
 			.services
 			.timeline
@@ -690,12 +650,6 @@ pub async fn auto_join_qualifying_rooms(
 	Ok(())
 }
-/// Handle a state event change that may require enforcement.
-///
-/// Spawns a background task (gated by the enforcement semaphore) to
-/// repopulate the cache and trigger enforcement actions based on the
-/// event type. Deduplicated per-space to avoid redundant work during
-/// bulk operations.
 impl Service {
 	pub fn handle_state_event_change(
 		self: &Arc<Self>,
@@ -703,14 +657,13 @@ impl Service {
 		event_type: String,
 		state_key: String,
 	) {
-		if !self.is_enabled() {
-			return;
-		}
 		let this = Arc::clone(self);
 		self.server.runtime().spawn(async move {
-			// Deduplicate: if enforcement is already pending for this space, skip.
-			// The running task's populate_space will pick up the latest state.
+			if event_type != SPACE_CASCADING_EVENT_TYPE
+				&& !this.is_enabled_for_space(&space_id).await
+			{
+				return;
+			}
 			{
 				let mut pending = this.pending_enforcement.write().await;
 				if pending.contains(&space_id) {
@@ -723,21 +676,16 @@ impl Service {
 				return;
 			};
-			// Always repopulate cache first
 			this.populate_space(&space_id).await;
 			match event_type.as_str() {
 				| SPACE_ROLES_EVENT_TYPE => {
-					// Role definitions changed — sync PLs in all child rooms
 					let child_rooms = this.get_child_rooms(&space_id).await;
 					for child_room_id in &child_rooms {
-						if let Err(e) =
-							this.sync_power_levels(&space_id, child_room_id).await
-						{
+						if let Err(e) = this.sync_power_levels(&space_id, child_room_id).await {
 							debug_warn!(room_id = %child_room_id, error = ?e, "Failed to sync power levels");
 						}
 					}
-					// Revalidate all space members against all child rooms
 					let space_members: Vec<OwnedUserId> = this
 						.services
 						.state_cache
@@ -754,10 +702,8 @@ impl Service {
 					}
 				},
 				| SPACE_ROLE_MEMBER_EVENT_TYPE => {
-					// User's roles changed — auto-join/kick + PL sync
 					if let Ok(user_id) = UserId::parse(state_key.as_str()) {
-						if let Err(e) =
-							this.auto_join_qualifying_rooms(&space_id, user_id).await
+						if let Err(e) = this.auto_join_qualifying_rooms(&space_id, user_id).await
 						{
 							debug_warn!(user_id = %user_id, error = ?e, "Space role auto-join failed");
 						}
@@ -766,11 +712,9 @@ impl Service {
 						{
 							debug_warn!(user_id = %user_id, error = ?e, "Space role auto-kick failed");
 						}
-						// Sync power levels in all child rooms
 						let child_rooms = this.get_child_rooms(&space_id).await;
 						for child_room_id in &child_rooms {
-							if let Err(e) =
-								this.sync_power_levels(&space_id, child_room_id).await
+							if let Err(e) = this.sync_power_levels(&space_id, child_room_id).await
 							{
 								debug_warn!(room_id = %child_room_id, error = ?e, "Failed to sync power levels");
 							}
@@ -778,7 +722,6 @@ impl Service {
 					}
 				},
 				| SPACE_ROLE_ROOM_EVENT_TYPE => {
-					// Room requirements changed — kick unqualified members
 					if let Ok(target_room) = RoomId::parse(state_key.as_str()) {
 						let members: Vec<OwnedUserId> = this
 							.services
@@ -789,15 +732,11 @@ impl Service {
 							.await;
 						for member in &members {
 							if !this
-								.user_qualifies_for_room(
-									&space_id,
-									target_room,
-									member,
-								)
+								.user_qualifies_for_room(&space_id, target_room, member)
 								.await
 							{
-								if let Err(e) = Box::pin(this
-									.kick_unqualified_from_rooms(&space_id, member))
+								if let Err(e) =
+									Box::pin(this.kick_unqualified_from_rooms(&space_id, member))
 									.await
 								{
 									debug_warn!(user_id = %member, error = ?e, "Space role requirement kick failed");
@@ -809,33 +748,24 @@ impl Service {
 				| _ => {},
 			}
-			// Remove from pending set so future events can trigger enforcement
 			this.pending_enforcement.write().await.remove(&space_id);
 		});
 	}
-	/// Handle a new `m.space.child` event — update index and auto-join
-	/// qualifying members.
-	///
-	/// If the child event's `via` field is empty the child is removed from
-	/// both the forward and reverse indexes. Otherwise the child is added
-	/// and all qualifying space members are auto-joined.
 	pub fn handle_space_child_change(
 		self: &Arc<Self>,
 		space_id: OwnedRoomId,
 		child_room_id: OwnedRoomId,
 	) {
-		if !self.is_enabled() {
-			return;
-		}
 		let this = Arc::clone(self);
 		self.server.runtime().spawn(async move {
+			if !this.is_enabled_for_space(&space_id).await {
+				return;
+			}
 			let Ok(_permit) = this.enforcement_semaphore.acquire().await else {
 				return;
 			};
-			// Read the actual m.space.child state event to check via
 			let child_event_type = StateEventType::SpaceChild;
 			let is_removal = match this
 				.services
@@ -852,8 +782,6 @@ impl Service {
 			};
 			if is_removal {
-				// Lock ordering: room_to_space before space_to_rooms.
-				// This order must be consistent to avoid deadlocks.
 				let mut room_to_space = this.room_to_space.write().await;
 				if let Some(parents) = room_to_space.get_mut(&child_room_id) {
 					parents.remove(&space_id);
@@ -861,7 +789,6 @@ impl Service {
 						room_to_space.remove(&child_room_id);
 					}
 				}
-				// Remove child from space_to_rooms forward index
 				let mut space_to_rooms = this.space_to_rooms.write().await;
 				if let Some(children) = space_to_rooms.get_mut(&space_id) {
 					children.remove(&child_room_id);
@@ -869,7 +796,6 @@ impl Service {
 				return;
 			}
-			// Add child to reverse index
 			this.room_to_space
 				.write()
 				.await
@@ -877,7 +803,6 @@ impl Service {
 				.or_default()
 				.insert(space_id.clone());
-			// Add child to forward index
 			this.space_to_rooms
 				.write()
 				.await
@@ -885,7 +810,6 @@ impl Service {
 				.or_default()
 				.insert(child_room_id.clone());
-			// Check if server user is joined to the child room before enforcement
 			let server_user = this.services.globals.server_user.as_ref();
 			if !this
 				.services
@@ -897,7 +821,6 @@ impl Service {
 				return;
 			}
-			// Auto-join qualifying space members to this specific child room
 			let space_members: Vec<OwnedUserId> = this
 				.services
 				.state_cache
@@ -920,7 +843,6 @@ impl Service {
 				let state_lock =
 					this.services.state.mutex.lock(&child_room_id).await;
-				// Invite
 				if let Err(e) = this
 					.services
 					.timeline
@@ -941,7 +863,6 @@ impl Service {
 					continue;
 				}
-				// Join
 				if let Err(e) = this
 					.services
 					.timeline
@@ -966,28 +887,21 @@ impl Service {
 		});
 	}
-	/// Handle a user joining a Space — auto-join them to qualifying child
-	/// rooms.
-	///
-	/// Spawns a background task that auto-joins the user into every child
-	/// room they qualify for, then synchronizes their power levels across
-	/// all child rooms.
 	pub fn handle_space_member_join(
 		self: &Arc<Self>,
 		space_id: OwnedRoomId,
 		user_id: OwnedUserId,
 	) {
-		if !self.is_enabled() {
-			return;
-		}
-		// Skip if the user is the server user
 		if user_id == self.services.globals.server_user {
 			return;
 		}
 		let this = Arc::clone(self);
 		self.server.runtime().spawn(async move {
+			if !this.is_enabled_for_space(&space_id).await {
+				return;
+			}
 			let Ok(_permit) = this.enforcement_semaphore.acquire().await else {
 				return;
 			};
@@ -995,12 +909,9 @@ impl Service {
 			if let Err(e) = this.auto_join_qualifying_rooms(&space_id, &user_id).await {
 				debug_warn!(user_id = %user_id, error = ?e, "Auto-join on Space join failed");
 			}
-			// Also sync their power levels
 			let child_rooms = this.get_child_rooms(&space_id).await;
 			for child_room_id in &child_rooms {
-				if let Err(e) =
-					this.sync_power_levels(&space_id, child_room_id).await
-				{
+				if let Err(e) = this.sync_power_levels(&space_id, child_room_id).await {
 					debug_warn!(room_id = %child_room_id, error = ?e, "Failed to sync power levels on join");
 				}
 			}
@@ -1008,18 +919,9 @@ impl Service {
 		}
 	}
-/// Remove a user from all child rooms they no longer qualify for.
-///
-/// Iterates over child rooms that have role requirements for the given
-/// space, checks whether the user still qualifies, and kicks them with a
-/// reason if they do not.
 #[implement(Service)]
-pub async fn kick_unqualified_from_rooms(
-	&self,
-	space_id: &RoomId,
-	user_id: &UserId,
-) -> Result {
-	if !self.is_enabled() {
+pub async fn kick_unqualified_from_rooms(&self, space_id: &RoomId, user_id: &UserId) -> Result {
+	if !self.is_enabled_for_space(space_id).await {
 		return Ok(());
 	}
@ -1028,7 +930,6 @@ pub async fn kick_unqualified_from_rooms(
return Ok(()); return Ok(());
} }
// Get child rooms that have requirements
let child_rooms: Vec<OwnedRoomId> = self let child_rooms: Vec<OwnedRoomId> = self
.room_requirements .room_requirements
.read() .read()
@ -1038,7 +939,6 @@ pub async fn kick_unqualified_from_rooms(
.unwrap_or_default(); .unwrap_or_default();
for child_room_id in &child_rooms { for child_room_id in &child_rooms {
// Check if server user is joined to the child room
if !self if !self
.services .services
.state_cache .state_cache
@ -1048,7 +948,6 @@ pub async fn kick_unqualified_from_rooms(
debug_warn!(room_id = %child_room_id, "Server user is not joined, skipping kick enforcement"); debug_warn!(room_id = %child_room_id, "Server user is not joined, skipping kick enforcement");
continue; continue;
} }
// Skip if not joined
if !self if !self
.services .services
.state_cache .state_cache
@ -1058,7 +957,6 @@ pub async fn kick_unqualified_from_rooms(
continue; continue;
} }
// Check if user still qualifies
if self if self
.user_qualifies_for_room(space_id, child_room_id, user_id) .user_qualifies_for_room(space_id, child_room_id, user_id)
.await .await
@ -1066,7 +964,6 @@ pub async fn kick_unqualified_from_rooms(
continue; continue;
} }
// Get existing member event content for the kick
let Ok(member_content) = self let Ok(member_content) = self
.services .services
.state_accessor .state_accessor
@ -1079,22 +976,18 @@ pub async fn kick_unqualified_from_rooms(
let state_lock = self.services.state.mutex.lock(child_room_id).await; let state_lock = self.services.state.mutex.lock(child_room_id).await;
// Kick the user by setting membership to Leave with a reason
if let Err(e) = self if let Err(e) = self
.services .services
.timeline .timeline
.build_and_append_pdu( .build_and_append_pdu(
PduBuilder::state( PduBuilder::state(user_id.to_string(), &RoomMemberEventContent {
user_id.to_string(),
&RoomMemberEventContent {
membership: MembershipState::Leave, membership: MembershipState::Leave,
reason: Some("No longer has required Space roles".into()), reason: Some("No longer has required Space roles".into()),
is_direct: None, is_direct: None,
join_authorized_via_users_server: None, join_authorized_via_users_server: None,
third_party_invite: None, third_party_invite: None,
..member_content ..member_content
}, }),
),
server_user, server_user,
Some(child_room_id), Some(child_room_id),
&state_lock, &state_lock,
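The kick-enforcement loop in this file reduces to a per-room filter: act only where the server user can act, only where the target user is actually joined, and only when the user no longer qualifies. A minimal synchronous sketch — the real checks are async service calls, and the names and signatures below are assumptions for illustration, not the service's API:

```rust
use std::collections::HashSet;

// Minimal, synchronous sketch of the per-child-room filter in
// kick_unqualified_from_rooms. The real checks are async service calls;
// names and signatures here are assumptions for illustration.
fn rooms_needing_kick<'a>(
    child_rooms: &'a [String],
    server_joined: &HashSet<String>, // rooms the server user is joined to
    user_joined: &HashSet<String>,   // rooms the target user is joined to
    still_qualifies: impl Fn(&str) -> bool,
) -> Vec<&'a String> {
    child_rooms
        .iter()
        // Skip rooms the server user cannot act in.
        .filter(|room| server_joined.contains(room.as_str()))
        // Skip rooms the user is not joined to.
        .filter(|room| user_joined.contains(room.as_str()))
        // Skip rooms the user still qualifies for; the rest get a kick.
        .filter(|room| !still_qualifies(room))
        .collect()
}
```

Each surviving room then gets the `MembershipState::Leave` PDU with the "No longer has required Space roles" reason shown above.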

View file

@@ -1,7 +1,7 @@
use std::collections::{BTreeMap, HashMap, HashSet}; use std::collections::{BTreeMap, HashMap, HashSet};
use conduwuit_core::matrix::space_roles::RoleDefinition; use conduwuit_core::matrix::space_roles::RoleDefinition;
use ruma::{room_id, OwnedRoomId}; use ruma::{OwnedRoomId, room_id};
use super::{compute_user_power_level, roles_satisfy_requirements}; use super::{compute_user_power_level, roles_satisfy_requirements};
@@ -10,13 +10,10 @@ pub fn make_roles(entries: &[(&str, Option<i64>)]) -> BTreeMap<String, RoleDefin
entries entries
.iter() .iter()
.map(|(name, pl)| { .map(|(name, pl)| {
( ((*name).to_owned(), RoleDefinition {
(*name).to_owned(),
RoleDefinition {
description: format!("{name} role"), description: format!("{name} role"),
power_level: *pl, power_level: *pl,
}, })
)
}) })
.collect() .collect()
} }
@@ -38,11 +35,7 @@ fn power_level_single_role() {
#[test] #[test]
fn power_level_multiple_roles_takes_highest() { fn power_level_multiple_roles_takes_highest() {
let roles = make_roles(&[ let roles = make_roles(&[("admin", Some(100)), ("mod", Some(50)), ("helper", Some(25))]);
("admin", Some(100)),
("mod", Some(50)),
("helper", Some(25)),
]);
let user_assigned = make_user_roles(&["mod", "helper"]); let user_assigned = make_user_roles(&["mod", "helper"]);
assert_eq!(compute_user_power_level(&roles, &user_assigned), Some(50)); assert_eq!(compute_user_power_level(&roles, &user_assigned), Some(50));
} }
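The take-highest semantics exercised by this test can be sketched as a small free function. The struct and signature below are inferred from the tests' usage, not copied from `conduwuit_core::matrix::space_roles`:

```rust
use std::collections::{BTreeMap, HashSet};

// Stand-in for conduwuit_core::matrix::space_roles::RoleDefinition;
// the field is inferred from the tests, not copied from the crate.
struct RoleDefinition {
    power_level: Option<i64>,
}

// A user's effective power level is the highest level granted by any of
// their assigned roles; None if no assigned role grants one.
fn compute_user_power_level(
    roles: &BTreeMap<String, RoleDefinition>,
    user_assigned: &HashSet<String>,
) -> Option<i64> {
    user_assigned
        .iter()
        .filter_map(|name| roles.get(name)?.power_level)
        .max()
}
```

With the roles from `power_level_multiple_roles_takes_highest` (admin 100, mod 50, helper 25) and a user assigned mod and helper, this yields `Some(50)`.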
@@ -120,7 +113,11 @@ fn room_to_space_lookup() {
.or_default() .or_default()
.insert(space.clone()); .insert(space.clone());
assert!(room_to_space.get(&child).unwrap().contains(&space)); assert!(room_to_space.get(&child).unwrap().contains(&space));
assert!(room_to_space.get(room_id!("!unknown:example.com")).is_none()); assert!(
room_to_space
.get(room_id!("!unknown:example.com"))
.is_none()
);
} }
#[test] #[test]

View file

@@ -10,7 +10,8 @@ use conduwuit_core::{
event::Event, event::Event,
pdu::{PduCount, PduEvent, PduId, RawPduId}, pdu::{PduCount, PduEvent, PduId, RawPduId},
space_roles::{ space_roles::{
SPACE_ROLES_EVENT_TYPE, SPACE_ROLE_MEMBER_EVENT_TYPE, SPACE_ROLE_ROOM_EVENT_TYPE, SPACE_CASCADING_EVENT_TYPE, SPACE_ROLE_MEMBER_EVENT_TYPE, SPACE_ROLE_ROOM_EVENT_TYPE,
SPACE_ROLES_EVENT_TYPE,
}, },
}, },
utils::{self, ReadyExt}, utils::{self, ReadyExt},
@@ -362,14 +363,13 @@ where
| _ => {}, | _ => {},
} }
// Space permission cascading: react to role-related state events
if self.services.roles.is_enabled() {
if let Some(state_key) = pdu.state_key() { if let Some(state_key) = pdu.state_key() {
let event_type_str = pdu.event_type().to_string(); let event_type_str = pdu.event_type().to_string();
match event_type_str.as_str() { match event_type_str.as_str() {
| SPACE_ROLES_EVENT_TYPE | SPACE_ROLES_EVENT_TYPE
| SPACE_ROLE_MEMBER_EVENT_TYPE | SPACE_ROLE_MEMBER_EVENT_TYPE
| SPACE_ROLE_ROOM_EVENT_TYPE => { | SPACE_ROLE_ROOM_EVENT_TYPE
| SPACE_CASCADING_EVENT_TYPE => {
if matches!( if matches!(
self.services.state_accessor.get_room_type(room_id).await, self.services.state_accessor.get_room_type(room_id).await,
Ok(ruma::room::RoomType::Space) Ok(ruma::room::RoomType::Space)
@@ -386,20 +386,14 @@ where
| _ => {}, | _ => {},
} }
} }
// Handle m.space.child changes
if *pdu.kind() == TimelineEventType::SpaceChild { if *pdu.kind() == TimelineEventType::SpaceChild {
if let Some(state_key) = pdu.state_key() { if let Some(state_key) = pdu.state_key() {
if let Ok(child_room_id) = ruma::RoomId::parse(state_key) { if let Ok(child_room_id) = ruma::RoomId::parse(state_key) {
let roles: Arc<crate::rooms::roles::Service> = let roles: Arc<crate::rooms::roles::Service> = Arc::clone(&*self.services.roles);
Arc::clone(&*self.services.roles); roles.handle_space_child_change(room_id.to_owned(), child_room_id.to_owned());
roles.handle_space_child_change(
room_id.to_owned(),
child_room_id.to_owned(),
);
} }
} }
} }
// Handle m.room.member join in a Space — auto-join child rooms
if *pdu.kind() == TimelineEventType::RoomMember if *pdu.kind() == TimelineEventType::RoomMember
&& let Some(state_key) = pdu.state_key() && let Some(state_key) = pdu.state_key()
&& let Ok(content) = && let Ok(content) =
@@ -409,13 +403,10 @@ where
&& matches!( && matches!(
self.services.state_accessor.get_room_type(room_id).await, self.services.state_accessor.get_room_type(room_id).await,
Ok(ruma::room::RoomType::Space) Ok(ruma::room::RoomType::Space)
) ) {
{ let roles: Arc<crate::rooms::roles::Service> = Arc::clone(&*self.services.roles);
let roles: Arc<crate::rooms::roles::Service> =
Arc::clone(&*self.services.roles);
roles.handle_space_member_join(room_id.to_owned(), user_id.to_owned()); roles.handle_space_member_join(room_id.to_owned(), user_id.to_owned());
} }
}
// CONCERN: If we receive events with a relation out-of-order, we never write // CONCERN: If we receive events with a relation out-of-order, we never write
// their relation / thread. We need some kind of way to trigger when we receive // their relation / thread. We need some kind of way to trigger when we receive
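Taken together, these handler hunks route role-related state events in a Space to a cache refresh. A rough dispatch sketch — the real code also requires a state key and an async room-type lookup; only `com.continuwuity.space.cascading` is confirmed by the commit message, and the other type strings are placeholder assumptions:

```rust
// Event type confirmed by the commit message; the values behind
// SPACE_ROLES_EVENT_TYPE and the other role constants are not visible in
// this diff, so callers pass them in as `role_types` placeholders.
const SPACE_CASCADING_EVENT_TYPE: &str = "com.continuwuity.space.cascading";

// A state event triggers a role-cache refresh only when the room is a Space
// and the event type is one of the role-related types matched above.
fn triggers_role_refresh(event_type: &str, room_is_space: bool, role_types: &[&str]) -> bool {
    room_is_space
        && (event_type == SPACE_CASCADING_EVENT_TYPE || role_types.contains(&event_type))
}
```

`m.space.child` changes and `m.room.member` joins in a Space are handled separately, via `handle_space_child_change` and `handle_space_member_join`.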

View file

@@ -3,12 +3,10 @@ use std::{
iter::once, iter::once,
}; };
use conduwuit_core::matrix::space_roles::RoleDefinition;
use conduwuit::{debug_warn, trace}; use conduwuit::{debug_warn, trace};
use conduwuit_core::{ use conduwuit_core::{
Err, Result, implement, Err, Result, implement,
matrix::{event::Event, pdu::PduBuilder}, matrix::{event::Event, pdu::PduBuilder, space_roles::RoleDefinition},
utils::{IterStream, ReadyExt}, utils::{IterStream, ReadyExt},
}; };
use futures::{FutureExt, StreamExt}; use futures::{FutureExt, StreamExt};
@@ -104,12 +102,15 @@ pub async fn build_and_append_pdu(
} }
// Space permission cascading: reject power level changes that conflict // Space permission cascading: reject power level changes that conflict
// with Space-granted levels (exempt the server user so sync_power_levels works) // with Space-granted levels (exempt the server user so sync_power_levels works)
type SpaceEnforcementData = type SpaceEnforcementData = (
(ruma::OwnedRoomId, Vec<(OwnedUserId, HashSet<String>)>, BTreeMap<String, RoleDefinition>); ruma::OwnedRoomId,
Vec<(OwnedUserId, HashSet<String>)>,
BTreeMap<String, RoleDefinition>,
);
if self.services.roles.is_enabled() if *pdu.kind() == TimelineEventType::RoomPowerLevels
&& *pdu.kind() == TimelineEventType::RoomPowerLevels && pdu.sender()
&& pdu.sender() != <OwnedUserId as AsRef<UserId>>::as_ref(&self.services.globals.server_user) != <OwnedUserId as AsRef<UserId>>::as_ref(&self.services.globals.server_user)
{ {
use ruma::events::room::power_levels::RoomPowerLevelsEventContent; use ruma::events::room::power_levels::RoomPowerLevelsEventContent;
@@ -118,8 +119,11 @@ pub async fn build_and_append_pdu(
for parent_space in &parent_spaces { for parent_space in &parent_spaces {
// Check proposed users don't conflict with space-granted PLs // Check proposed users don't conflict with space-granted PLs
for (user_id, proposed_pl) in &proposed.users { for (user_id, proposed_pl) in &proposed.users {
if let Some(space_pl) = if let Some(space_pl) = self
self.services.roles.get_user_power_level(parent_space, user_id).await .services
.roles
.get_user_power_level(parent_space, user_id)
.await
{ {
if i64::from(*proposed_pl) != space_pl { if i64::from(*proposed_pl) != space_pl {
debug_warn!( debug_warn!(
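The comparison in this hunk, together with the omission check in the final hunk of this file, condenses to one rule: every Space-managed user must appear in a proposed `m.room.power_levels` users map with exactly the Space-granted level. A sketch with plain `String`s standing in for the ruma user ID types:

```rust
use std::collections::BTreeMap;

// Condensed sketch of the enforcement applied to a proposed
// m.room.power_levels users map: every Space-managed user must appear with
// exactly the Space-granted level. Plain Strings stand in for ruma user IDs.
fn check_proposed_power_levels(
    proposed_users: &BTreeMap<String, i64>,
    space_granted: &BTreeMap<String, i64>, // user -> Space-granted power level
) -> Result<(), String> {
    for (user, space_pl) in space_granted {
        match proposed_users.get(user) {
            // Omitting a managed user would silently drop their level.
            | None => return Err(format!(
                "cannot omit {user}: power level is managed by Space roles"
            )),
            // Changing a managed user's level conflicts with the Space.
            | Some(pl) if pl != space_pl => return Err(format!(
                "cannot set {user} to {pl}: Space roles grant {space_pl}"
            )),
            | _ => {},
        }
    }
    Ok(())
}
```

Unmanaged users are untouched, which is why the server user (whose `sync_power_levels` writes these events) is exempted from the check in the diff.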
@@ -142,15 +146,21 @@ pub async fn build_and_append_pdu(
let space_data: Vec<SpaceEnforcementData> = { let space_data: Vec<SpaceEnforcementData> = {
let user_roles_guard = self.services.roles.user_roles.read().await; let user_roles_guard = self.services.roles.user_roles.read().await;
let roles_guard = self.services.roles.roles.read().await; let roles_guard = self.services.roles.roles.read().await;
parent_spaces.iter().filter_map(|ps| { parent_spaces
.iter()
.filter_map(|ps| {
let space_users = user_roles_guard.get(ps)?; let space_users = user_roles_guard.get(ps)?;
let role_defs = roles_guard.get(ps)?; let role_defs = roles_guard.get(ps)?;
Some(( Some((
ps.clone(), ps.clone(),
space_users.iter().map(|(u, r)| (u.clone(), r.clone())).collect(), space_users
.iter()
.map(|(u, r)| (u.clone(), r.clone()))
.collect(),
role_defs.clone(), role_defs.clone(),
)) ))
}).collect() })
.collect()
}; };
// Guards dropped here // Guards dropped here
@@ -174,7 +184,8 @@ pub async fn build_and_append_pdu(
"Rejecting PL change: space-managed user omitted" "Rejecting PL change: space-managed user omitted"
); );
return Err!(Request(Forbidden( return Err!(Request(Forbidden(
"Cannot omit a user whose power level is managed by Space roles" "Cannot omit a user whose power level is managed by Space \
roles"
))); )));
}, },
| Some(pl) if i64::from(*pl) != space_pl => { | Some(pl) if i64::from(*pl) != space_pl => {