Compare commits

...

2 commits

Author SHA1 Message Date
5b56a8b6ed feat(spaces): add per-Space cascading toggle with server-wide default
Add com.continuwuity.space.cascading state event for per-Space override
of the server-wide space_permission_cascading config. Add enable/disable/
status admin commands. Strip superfluous comments throughout.
2026-03-19 16:33:15 +01:00
53d4fb892c chore(spaces): fix formatting, add changelog, remove design docs
Run cargo +nightly fmt, add towncrier news fragment, remove plan
documents that served their purpose during development.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 16:10:53 +01:00
14 changed files with 384 additions and 2852 deletions


@ -0,0 +1 @@
Add Space permission cascading: power levels cascade from Spaces to child rooms, role-based room access with custom roles, continuous enforcement (auto-join/kick), and admin commands for role management. Controlled by `space_permission_cascading` config flag (off by default).


@ -470,9 +470,10 @@
#
#suspend_on_register = false
# Enable space permission cascading (power levels and role-based access).
# When enabled, power levels cascade from Spaces to child rooms and rooms
# can require roles for access. Applies to all Spaces on this server.
# Server-wide default for space permission cascading (power levels and
# role-based access). Individual Spaces can override this via the
# `com.continuwuity.space.cascading` state event or the admin command
# `!admin space roles enable/disable <space>`.
#
#space_permission_cascading = false


@ -1,225 +0,0 @@
# Space Permission Cascading — Design Document
**Date:** 2026-03-17
**Status:** Implemented
## Overview
Server-side feature that allows user rights in a Space to cascade down to its
direct child rooms. Includes power level cascading and role-based room access
control. Enabled via a server-wide configuration flag, disabled by default.
## Requirements
1. Power levels defined in a Space cascade to all direct child rooms (Space
always wins over per-room overrides).
2. Admins can define custom roles in a Space and assign them to users.
3. Child rooms can require one or more roles for access.
4. Enforcement is continuous — role revocation auto-kicks users from rooms they
no longer qualify for.
5. Users are auto-joined to all qualifying child rooms when they join a Space or
receive a new role.
6. Cascading applies to direct parent Space only; no nested cascade through
sub-spaces.
7. Feature is toggled by a single server-wide config flag
(`space_permission_cascading`), off by default.
## Configuration
```toml
# conduwuit-example.toml
# Enable space permission cascading (power levels and role-based access).
# When enabled, power levels cascade from Spaces to child rooms and rooms
# can require roles for access. Applies to all Spaces on this server.
# Default: false
space_permission_cascading = false
```
## Custom State Events
All events live in the Space room.
### `com.continuwuity.space.roles` (state key: `""`)
Defines the available roles for the Space. Two default roles (`admin` and `mod`)
are created automatically when a Space is first encountered with the feature
enabled.
```json
{
"roles": {
"admin": {
"description": "Space administrator",
"power_level": 100
},
"mod": {
"description": "Space moderator",
"power_level": 50
},
"nsfw": {
"description": "Access to NSFW content"
},
"vip": {
"description": "VIP member"
}
}
}
```
- `description` (string, required): Human-readable description.
- `power_level` (integer, optional): If present, users with this role receive
this power level in all child rooms. When a user holds multiple roles with
power levels, the highest value wins.
### `com.continuwuity.space.role.member` (state key: user ID)
Assigns roles to a user within the Space.
```json
{
"roles": ["nsfw", "vip"]
}
```
### `com.continuwuity.space.role.room` (state key: room ID)
Declares which roles a child room requires. A user must hold **all** listed
roles to access the room.
```json
{
"required_roles": ["nsfw"]
}
```
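The all-roles requirement reduces to a subset check. As a minimal sketch (illustrative only, not the server's actual implementation):

```rust
use std::collections::HashSet;

/// A user may access a room only if they hold every required role.
/// Rooms with an empty requirement set admit everyone.
fn roles_satisfy_requirements(
    user_roles: &HashSet<String>,
    required: &HashSet<String>,
) -> bool {
    required.is_subset(user_roles)
}

fn main() {
    let user = HashSet::from(["nsfw".to_owned()]);
    let nsfw_only = HashSet::from(["nsfw".to_owned()]);
    let nsfw_and_vip = HashSet::from(["nsfw".to_owned(), "vip".to_owned()]);

    assert!(roles_satisfy_requirements(&user, &nsfw_only));
    // Missing `vip`, so access is denied.
    assert!(!roles_satisfy_requirements(&user, &nsfw_and_vip));
    // No requirements: everyone qualifies.
    assert!(roles_satisfy_requirements(&user, &HashSet::new()));
    println!("ok");
}
```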
## Enforcement Rules
All enforcement is skipped when `space_permission_cascading = false`.
### 1. Join gating
When a user attempts to join a room that is a direct child of a Space:
- Look up the room's `com.continuwuity.space.role.room` event in the parent Space.
- If the room has `required_roles`, check the user's `com.continuwuity.space.role.member`.
- Reject the join if the user is missing any required role.
### 2. Power level override
For every user in a child room of a Space:
- Look up their roles via `com.continuwuity.space.role.member` in the parent Space.
- For each role that has a `power_level`, take the highest value.
- Override the user's power level in the child room's `m.room.power_levels`.
- Reject attempts to manually set per-room power levels that conflict with
Space-granted levels.
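The highest-value-wins rule can be sketched as follows, assuming a simplified `RoleDefinition` mirroring the event schema above:

```rust
use std::collections::BTreeMap;

/// Simplified role definition: a role may optionally grant a power level.
struct RoleDefinition {
    power_level: Option<i64>,
}

/// The highest power level among the user's roles wins;
/// None if no held role grants one.
fn compute_user_power_level(
    roles: &BTreeMap<String, RoleDefinition>,
    user_roles: &[String],
) -> Option<i64> {
    user_roles
        .iter()
        .filter_map(|name| roles.get(name).and_then(|def| def.power_level))
        .max()
}

fn main() {
    let mut roles = BTreeMap::new();
    roles.insert("admin".to_owned(), RoleDefinition { power_level: Some(100) });
    roles.insert("mod".to_owned(), RoleDefinition { power_level: Some(50) });
    roles.insert("nsfw".to_owned(), RoleDefinition { power_level: None });

    // Holding admin and mod yields the higher of the two levels.
    let held = ["admin".to_owned(), "mod".to_owned()];
    assert_eq!(compute_user_power_level(&roles, &held), Some(100));
    // A role without a power level grants none.
    assert_eq!(compute_user_power_level(&roles, &["nsfw".to_owned()]), None);
    println!("ok");
}
```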
### 3. Role revocation
When a `com.continuwuity.space.role.member` event is updated and a role is removed:
- Identify all child rooms that require the removed role.
- Auto-kick the user from rooms they no longer qualify for.
- Recalculate and update the user's power level in all child rooms.
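Identifying kick candidates after a revocation amounts to re-running the subset check against the user's remaining roles. A hedged sketch with plain string IDs (the server uses proper ruma ID types):

```rust
use std::collections::{HashMap, HashSet};

/// After a role is revoked, the user must be kicked from every child room
/// whose requirements they no longer satisfy.
fn kick_candidates<'a>(
    room_requirements: &'a HashMap<String, HashSet<String>>, // room → required roles
    remaining_roles: &HashSet<String>,
) -> Vec<&'a str> {
    room_requirements
        .iter()
        .filter(|(_, required)| !required.is_subset(remaining_roles))
        .map(|(room, _)| room.as_str())
        .collect()
}

fn main() {
    let mut rooms = HashMap::new();
    rooms.insert("!general".to_owned(), HashSet::new()); // no requirements
    rooms.insert("!nsfw-chat".to_owned(), HashSet::from(["nsfw".to_owned()]));

    // The user just lost `nsfw` and keeps only `vip`.
    let remaining = HashSet::from(["vip".to_owned()]);
    let kicks = kick_candidates(&rooms, &remaining);
    assert_eq!(kicks, vec!["!nsfw-chat"]);
    println!("ok");
}
```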
### 4. Room requirement change
When a `com.continuwuity.space.role.room` event is updated with new requirements:
- Check all current members of the room.
- Auto-kick members who do not hold all newly required roles.
### 5. Auto-join on role grant
When a `com.continuwuity.space.role.member` event is updated and a role is added:
- Find all child rooms where the user now meets all required roles.
- Auto-join the user to qualifying rooms they are not already in.
This also applies when a user first joins the Space — they are auto-joined to
all child rooms they qualify for. Rooms with no role requirements auto-join all
Space members.
### 6. New child room
When a new `m.space.child` event is added to a Space:
- Auto-join all qualifying Space members to the new child room.
## Caching & Indexing
The source of truth is always the state events. The server maintains an
in-memory index for fast enforcement lookups, following the same patterns as the
existing `roomid_spacehierarchy_cache`.
### Index structures
| Index | Source event |
|------------------------------|------------------------|
| Space → roles defined | `com.continuwuity.space.roles` |
| Space → user → roles | `com.continuwuity.space.role.member` |
| Space → room → required roles| `com.continuwuity.space.role.room` |
| Room → parent Spaces | `m.space.child` (reverse lookup) |
| Space → child rooms | `m.space.child` (forward index) |
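One possible shape for these indexes, as a sketch (type aliases are illustrative; the server uses proper ruma ID types):

```rust
use std::collections::{HashMap, HashSet};

// Illustrative aliases standing in for ruma's owned ID types.
type SpaceId = String;
type RoomId = String;
type UserId = String;
type Role = String;

/// In-memory index mirroring the table above.
struct SpaceRolesIndex {
    /// Space → roles defined (role name → optional power level).
    roles: HashMap<SpaceId, HashMap<Role, Option<i64>>>,
    /// Space → user → roles held.
    user_roles: HashMap<SpaceId, HashMap<UserId, HashSet<Role>>>,
    /// Space → room → required roles.
    room_requirements: HashMap<SpaceId, HashMap<RoomId, HashSet<Role>>>,
    /// Room → parent Spaces (reverse lookup built from `m.space.child`).
    room_to_space: HashMap<RoomId, HashSet<SpaceId>>,
    /// Space → child rooms (forward index).
    space_to_rooms: HashMap<SpaceId, HashSet<RoomId>>,
}

fn main() {
    // Empty index, as rebuilt from state events on startup.
    let idx = SpaceRolesIndex {
        roles: HashMap::new(),
        user_roles: HashMap::new(),
        room_requirements: HashMap::new(),
        room_to_space: HashMap::new(),
        space_to_rooms: HashMap::new(),
    };
    assert!(idx.room_to_space.is_empty());
    println!("ok");
}
```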
### Cache invalidation triggers
| Event changed | Action |
|----------------------------|-----------------------------------------------------|
| `com.continuwuity.space.roles` | Refresh role definitions, revalidate all members |
| `com.continuwuity.space.role.member` | Refresh user's roles, trigger auto-join/kick |
| `com.continuwuity.space.role.room` | Refresh room requirements, trigger auto-join/kick |
| `m.space.child` added | Index new child, auto-join qualifying members |
| `m.space.child` removed | Remove from index (no auto-kick) |
| Server startup | Full rebuild from state events |
## Admin Room Commands
Roles are managed via the existing admin room interface, which sends the
appropriate state events under the hood and triggers enforcement.
```
!admin space roles list <space>
!admin space roles add <space> <role_name> [description] [power_level]
!admin space roles remove <space> <role_name>
!admin space roles assign <space> <user_id> <role_name>
!admin space roles revoke <space> <user_id> <role_name>
!admin space roles require <space> <room_id> <role_name>
!admin space roles unrequire <space> <room_id> <role_name>
!admin space roles user <space> <user_id>
!admin space roles room <space> <room_id>
```
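For example, creating an `nsfw` role, assigning it, and gating a room on it might look like this (the space, room, and user IDs here are hypothetical):

```
!admin space roles add #community:example.com nsfw "Access to NSFW content"
!admin space roles assign #community:example.com @alice:example.com nsfw
!admin space roles require #community:example.com !lounge:example.com nsfw
```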
## Architecture
**Approach:** Hybrid — state events for definition, an in-memory cache for
enforcement.
- State events are the source of truth and federate normally.
- The server maintains an in-memory cache/index for fast enforcement.
- Cache is invalidated on relevant state event changes and fully rebuilt on
startup.
- All enforcement hooks (join gating, PL override, auto-join, auto-kick) check
the feature flag first and no-op when disabled.
- Existing clients can manage roles via Developer Tools (custom state events).
The admin room commands provide a user-friendly interface.
## Scope
### In scope
- Server-wide feature flag
- Custom state events for role definition, assignment, and room requirements
- Power level cascading (Space always wins)
- Continuous enforcement (auto-join, auto-kick)
- Admin room commands
- In-memory caching with invalidation
- Default `admin` (PL 100) and `mod` (PL 50) roles
### Out of scope
- Client-side UI for role management
- Nested cascade through sub-spaces
- Per-space opt-in/opt-out (it is server-wide)
- Federation-specific logic beyond normal state event replication

File diff suppressed because it is too large.


@ -1,37 +1,36 @@
use std::fmt::Write;
use clap::Subcommand;
use conduwuit::{Err, Event, Result};
use conduwuit::{Err, Event, Result, matrix::pdu::PduBuilder};
use conduwuit_core::matrix::space_roles::{
RoleDefinition, SpaceRoleMemberEventContent, SpaceRoleRoomEventContent,
SpaceRolesEventContent, SPACE_ROLES_EVENT_TYPE, SPACE_ROLE_MEMBER_EVENT_TYPE,
SPACE_ROLE_ROOM_EVENT_TYPE,
RoleDefinition, SPACE_CASCADING_EVENT_TYPE, SPACE_ROLE_MEMBER_EVENT_TYPE,
SPACE_ROLE_ROOM_EVENT_TYPE, SPACE_ROLES_EVENT_TYPE, SpaceCascadingEventContent,
SpaceRoleMemberEventContent, SpaceRoleRoomEventContent, SpaceRolesEventContent,
};
use futures::StreamExt;
use ruma::{OwnedRoomId, OwnedRoomOrAliasId, OwnedUserId, events::StateEventType};
use serde_json::value::to_raw_value;
use conduwuit::matrix::pdu::PduBuilder;
use futures::StreamExt;
use crate::{admin_command, admin_command_dispatch};
macro_rules! require_enabled {
($self:expr) => {
if !$self.services.rooms.roles.is_enabled() {
return $self
.write_str(
"Space permission cascading is disabled. \
Enable it with `space_permission_cascading = true` in your config.",
)
.await;
}
};
}
macro_rules! resolve_space {
($self:expr, $space:expr) => {{
require_enabled!($self);
let space_id = $self.services.rooms.alias.resolve(&$space).await?;
if !$self
.services
.rooms
.roles
.is_enabled_for_space(&space_id)
.await
{
return $self
.write_str(
"Space permission cascading is disabled for this Space. Enable it \
server-wide with `space_permission_cascading = true` in your config, or \
per-Space with `!admin space roles enable <space>`.",
)
.await;
}
if !matches!(
$self
.services
@ -51,10 +50,11 @@ macro_rules! custom_state_pdu {
($event_type:expr, $state_key:expr, $content:expr) => {
PduBuilder {
event_type: $event_type.to_owned().into(),
content: to_raw_value($content)
.map_err(|e| conduwuit::Error::Err(format!(
"Failed to serialize custom state event content: {e}"
).into()))?,
content: to_raw_value($content).map_err(|e| {
conduwuit::Error::Err(
format!("Failed to serialize custom state event content: {e}").into(),
)
})?,
state_key: Some($state_key.to_owned().into()),
..PduBuilder::default()
}
@ -116,6 +116,21 @@ pub enum SpaceRolesCommand {
space: OwnedRoomOrAliasId,
room_id: OwnedRoomId,
},
/// Enable space permission cascading for a specific space (overrides
/// server config)
Enable {
space: OwnedRoomOrAliasId,
},
/// Disable space permission cascading for a specific space (overrides
/// server config)
Disable {
space: OwnedRoomOrAliasId,
},
/// Show whether cascading is enabled for a space and the source (server
/// default or per-space override)
Status {
space: OwnedRoomOrAliasId,
},
}
#[admin_command]
@ -244,9 +259,7 @@ async fn remove(&self, space: OwnedRoomOrAliasId, role_name: String) -> Result {
for (state_key, event_id) in user_entries {
if let Ok(pdu) = self.services.rooms.timeline.get_pdu(&event_id).await {
if let Ok(mut member_content) =
pdu.get_content::<SpaceRoleMemberEventContent>()
{
if let Ok(mut member_content) = pdu.get_content::<SpaceRoleMemberEventContent>() {
if member_content.roles.contains(&role_name) {
member_content.roles.retain(|r| r != &role_name);
self.services
@ -281,9 +294,7 @@ async fn remove(&self, space: OwnedRoomOrAliasId, role_name: String) -> Result {
for (state_key, event_id) in room_entries {
if let Ok(pdu) = self.services.rooms.timeline.get_pdu(&event_id).await {
if let Ok(mut room_content) =
pdu.get_content::<SpaceRoleRoomEventContent>()
{
if let Ok(mut room_content) = pdu.get_content::<SpaceRoleRoomEventContent>() {
if room_content.required_roles.contains(&role_name) {
room_content.required_roles.retain(|r| r != &role_name);
self.services
@ -319,7 +330,6 @@ async fn assign(
) -> Result {
let space_id = resolve_space!(self, space);
// Read current role definitions to validate the role name
let roles_event_type = StateEventType::from(SPACE_ROLES_EVENT_TYPE.to_owned());
let role_defs: SpaceRolesEventContent = self
.services
@ -363,10 +373,8 @@ async fn assign(
)
.await?;
self.write_str(&format!(
"Assigned role '{role_name}' to {user_id} in space {space_id}."
))
.await
self.write_str(&format!("Assigned role '{role_name}' to {user_id} in space {space_id}."))
.await
}
#[admin_command]
@ -408,10 +416,8 @@ async fn revoke(
)
.await?;
self.write_str(&format!(
"Revoked role '{role_name}' from {user_id} in space {space_id}."
))
.await
self.write_str(&format!("Revoked role '{role_name}' from {user_id} in space {space_id}."))
.await
}
#[admin_command]
@ -423,7 +429,6 @@ async fn require(
) -> Result {
let space_id = resolve_space!(self, space);
// Read current role definitions to validate the role name
let roles_event_type = StateEventType::from(SPACE_ROLES_EVENT_TYPE.to_owned());
let role_defs: SpaceRolesEventContent = self
.services
@ -540,10 +545,9 @@ async fn user(&self, space: OwnedRoomOrAliasId, user_id: OwnedUserId) -> Result
))
.await
},
| _ => {
| _ =>
self.write_str(&format!("User {user_id} has no roles in space {space_id}."))
.await
},
.await,
}
}
@ -569,11 +573,123 @@ async fn room(&self, space: OwnedRoomOrAliasId, room_id: OwnedRoomId) -> Result
))
.await
},
| _ => {
| _ =>
self.write_str(&format!(
"Room {room_id} has no role requirements in space {space_id}."
))
.await
},
.await,
}
}
#[admin_command]
async fn enable(&self, space: OwnedRoomOrAliasId) -> Result {
let space_id = self.services.rooms.alias.resolve(&space).await?;
if !matches!(
self.services
.rooms
.state_accessor
.get_room_type(&space_id)
.await,
Ok(ruma::room::RoomType::Space)
) {
return Err!("The specified room is not a Space.");
}
let content = SpaceCascadingEventContent { enabled: true };
let state_lock = self.services.rooms.state.mutex.lock(&space_id).await;
let server_user = &self.services.globals.server_user;
self.services
.rooms
.timeline
.build_and_append_pdu(
custom_state_pdu!(SPACE_CASCADING_EVENT_TYPE, "", &content),
server_user,
Some(&space_id),
&state_lock,
)
.await?;
self.services
.rooms
.roles
.ensure_default_roles(&space_id)
.await?;
self.write_str(&format!("Space permission cascading enabled for {space_id}."))
.await
}
#[admin_command]
async fn disable(&self, space: OwnedRoomOrAliasId) -> Result {
let space_id = self.services.rooms.alias.resolve(&space).await?;
if !matches!(
self.services
.rooms
.state_accessor
.get_room_type(&space_id)
.await,
Ok(ruma::room::RoomType::Space)
) {
return Err!("The specified room is not a Space.");
}
let content = SpaceCascadingEventContent { enabled: false };
let state_lock = self.services.rooms.state.mutex.lock(&space_id).await;
let server_user = &self.services.globals.server_user;
self.services
.rooms
.timeline
.build_and_append_pdu(
custom_state_pdu!(SPACE_CASCADING_EVENT_TYPE, "", &content),
server_user,
Some(&space_id),
&state_lock,
)
.await?;
self.write_str(&format!("Space permission cascading disabled for {space_id}."))
.await
}
#[admin_command]
async fn status(&self, space: OwnedRoomOrAliasId) -> Result {
let space_id = self.services.rooms.alias.resolve(&space).await?;
if !matches!(
self.services
.rooms
.state_accessor
.get_room_type(&space_id)
.await,
Ok(ruma::room::RoomType::Space)
) {
return Err!("The specified room is not a Space.");
}
let global_default = self.services.rooms.roles.is_enabled();
let cascading_event_type = StateEventType::from(SPACE_CASCADING_EVENT_TYPE.to_owned());
let per_space_override: Option<bool> = self
.services
.rooms
.state_accessor
.room_state_get_content::<SpaceCascadingEventContent>(
&space_id,
&cascading_event_type,
"",
)
.await
.ok()
.map(|c| c.enabled);
let effective = per_space_override.unwrap_or(global_default);
let source = match per_space_override {
| Some(v) => format!("per-Space override (enabled: {v})"),
| None => format!("server default (space_permission_cascading: {global_default})"),
};
self.write_str(&format!(
"Cascading status for {space_id}:\n- Effective: **{effective}**\n- Source: {source}"
))
.await
}


@ -347,9 +347,7 @@ pub async fn join_room_by_id_helper(
}
}
// Space permission cascading: check if user has required roles
// User must qualify in at least one parent space (if any exist)
if services.rooms.roles.is_enabled() {
{
let parent_spaces = services.rooms.roles.get_parent_spaces(room_id).await;
if !parent_spaces.is_empty() {
let mut qualifies_in_any = false;


@ -603,9 +603,10 @@ pub struct Config {
#[serde(default)]
pub suspend_on_register: bool,
/// Enable space permission cascading (power levels and role-based access).
/// When enabled, power levels cascade from Spaces to child rooms and rooms
/// can require roles for access. Applies to all Spaces on this server.
/// Server-wide default for space permission cascading (power levels and
/// role-based access). Individual Spaces can override this via the
/// `com.continuwuity.space.cascading` state event or the admin command
/// `!admin space roles enable/disable <space>`.
///
/// default: false
#[serde(default)]


@ -1,56 +1,39 @@
//! Custom state event content types for space permission cascading.
//!
//! These events live in Space rooms and define roles, user-role assignments,
//! and room-role requirements.
use std::collections::BTreeMap;
use serde::{Deserialize, Serialize};
/// Custom event type for space role definitions.
pub const SPACE_ROLES_EVENT_TYPE: &str = "com.continuwuity.space.roles";
/// Custom event type for per-user role assignments within a space.
pub const SPACE_ROLE_MEMBER_EVENT_TYPE: &str = "com.continuwuity.space.role.member";
/// Custom event type for per-room role requirements within a space.
pub const SPACE_ROLE_ROOM_EVENT_TYPE: &str = "com.continuwuity.space.role.room";
pub const SPACE_CASCADING_EVENT_TYPE: &str = "com.continuwuity.space.cascading";
/// Content for `com.continuwuity.space.roles` (state key: "")
///
/// Defines available roles for a Space.
#[derive(Clone, Debug, Default, Deserialize, Serialize, PartialEq, Eq)]
pub struct SpaceRolesEventContent {
pub roles: BTreeMap<String, RoleDefinition>,
}
/// A single role definition within a Space.
#[derive(Clone, Debug, Deserialize, Serialize, PartialEq, Eq)]
pub struct RoleDefinition {
pub description: String,
/// If present, users with this role receive this power level in child
/// rooms.
#[serde(skip_serializing_if = "Option::is_none")]
pub power_level: Option<i64>,
}
/// Content for `com.continuwuity.space.role.member` (state key: user ID)
///
/// Assigns roles to a user within a Space.
#[derive(Clone, Debug, Default, Deserialize, Serialize, PartialEq, Eq)]
pub struct SpaceRoleMemberEventContent {
pub roles: Vec<String>,
}
/// Content for `com.continuwuity.space.role.room` (state key: room ID)
///
/// Declares which roles a child room requires for access.
#[derive(Clone, Debug, Default, Deserialize, Serialize, PartialEq, Eq)]
pub struct SpaceRoleRoomEventContent {
pub required_roles: Vec<String>,
}
#[derive(Clone, Debug, Deserialize, Serialize, PartialEq, Eq)]
pub struct SpaceCascadingEventContent {
pub enabled: bool,
}
#[cfg(test)]
mod tests {
use super::*;
@ -58,20 +41,14 @@ mod tests {
#[test]
fn serialize_space_roles() {
let mut roles = BTreeMap::new();
roles.insert(
"admin".to_owned(),
RoleDefinition {
description: "Space administrator".to_owned(),
power_level: Some(100),
},
);
roles.insert(
"nsfw".to_owned(),
RoleDefinition {
description: "NSFW access".to_owned(),
power_level: None,
},
);
roles.insert("admin".to_owned(), RoleDefinition {
description: "Space administrator".to_owned(),
power_level: Some(100),
});
roles.insert("nsfw".to_owned(), RoleDefinition {
description: "NSFW access".to_owned(),
power_level: None,
});
let content = SpaceRolesEventContent { roles };
let json = serde_json::to_string(&content).unwrap();
let deserialized: SpaceRolesEventContent = serde_json::from_str(&json).unwrap();
@ -92,9 +69,7 @@ mod tests {
#[test]
fn serialize_role_room() {
let content = SpaceRoleRoomEventContent {
required_roles: vec!["nsfw".to_owned()],
};
let content = SpaceRoleRoomEventContent { required_roles: vec!["nsfw".to_owned()] };
let json = serde_json::to_string(&content).unwrap();
let deserialized: SpaceRoleRoomEventContent = serde_json::from_str(&json).unwrap();
assert_eq!(deserialized.required_roles, vec!["nsfw"]);
@ -142,9 +117,7 @@ mod tests {
#[test]
fn empty_room_requirements() {
let content = SpaceRoleRoomEventContent {
required_roles: vec![],
};
let content = SpaceRoleRoomEventContent { required_roles: vec![] };
let json = serde_json::to_string(&content).unwrap();
let deserialized: SpaceRoleRoomEventContent = serde_json::from_str(&json).unwrap();
assert!(deserialized.required_roles.is_empty());


@ -7,7 +7,7 @@
use std::collections::{BTreeMap, HashMap, HashSet};
use conduwuit_core::matrix::space_roles::RoleDefinition;
use ruma::{room_id, user_id, OwnedRoomId, OwnedUserId};
use ruma::{OwnedRoomId, OwnedUserId, room_id, user_id};
use super::tests::{make_requirements, make_roles, make_user_roles};
@ -75,10 +75,7 @@ impl MockCache {
room: &OwnedRoomId,
user: &OwnedUserId,
) -> bool {
let reqs = self
.room_requirements
.get(space)
.and_then(|r| r.get(room));
let reqs = self.room_requirements.get(space).and_then(|r| r.get(room));
match reqs {
| None => true,
@ -117,10 +114,7 @@ fn cache_populate_and_lookup() {
let child = room_id!("!child:example.com").to_owned();
let alice = user_id!("@alice:example.com").to_owned();
cache.add_space(
space.clone(),
make_roles(&[("admin", Some(100)), ("nsfw", None)]),
);
cache.add_space(space.clone(), make_roles(&[("admin", Some(100)), ("nsfw", None)]));
cache.add_child(&space, child.clone());
cache.assign_role(&space, alice.clone(), "nsfw".to_owned());
cache.set_room_requirements(&space, child.clone(), make_requirements(&["nsfw"]));
@ -154,21 +148,14 @@ fn cache_invalidation_on_requirement_change() {
let child = room_id!("!room:example.com").to_owned();
let alice = user_id!("@alice:example.com").to_owned();
cache.add_space(
space.clone(),
make_roles(&[("nsfw", None), ("vip", None)]),
);
cache.add_space(space.clone(), make_roles(&[("nsfw", None), ("vip", None)]));
cache.assign_role(&space, alice.clone(), "vip".to_owned());
cache.set_room_requirements(&space, child.clone(), make_requirements(&["vip"]));
assert!(cache.user_qualifies(&space, &child, &alice));
// Add nsfw requirement
cache.set_room_requirements(
&space,
child.clone(),
make_requirements(&["vip", "nsfw"]),
);
cache.set_room_requirements(&space, child.clone(), make_requirements(&["vip", "nsfw"]));
assert!(!cache.user_qualifies(&space, &child, &alice));
}
@ -177,11 +164,7 @@ fn cache_clear_empties_all() {
let mut cache = MockCache::new();
let space = room_id!("!space:example.com").to_owned();
cache.add_space(space.clone(), make_roles(&[("admin", Some(100))]));
cache.assign_role(
&space,
user_id!("@alice:example.com").to_owned(),
"admin".to_owned(),
);
cache.assign_role(&space, user_id!("@alice:example.com").to_owned(), "admin".to_owned());
cache.clear();
@ -204,7 +187,10 @@ fn cache_reverse_lookup_consistency() {
assert!(cache.room_to_space.get(&child1).unwrap().contains(&space));
assert!(cache.room_to_space.get(&child2).unwrap().contains(&space));
assert!(
cache.room_to_space.get(room_id!("!unknown:example.com")).is_none()
cache
.room_to_space
.get(room_id!("!unknown:example.com"))
.is_none()
);
}
@ -214,10 +200,7 @@ fn cache_power_level_updates_on_role_change() {
let space = room_id!("!space:example.com").to_owned();
let alice = user_id!("@alice:example.com").to_owned();
cache.add_space(
space.clone(),
make_roles(&[("admin", Some(100)), ("mod", Some(50))]),
);
cache.add_space(space.clone(), make_roles(&[("admin", Some(100)), ("mod", Some(50))]));
// No roles -> no PL
assert_eq!(cache.get_power_level(&space, &alice), None);


@ -2,8 +2,10 @@ use std::collections::{HashMap, HashSet};
use ruma::{room_id, user_id};
use super::{compute_user_power_level, roles_satisfy_requirements};
use super::tests::{make_requirements, make_roles, make_user_roles};
use super::{
compute_user_power_level, roles_satisfy_requirements,
tests::{make_requirements, make_roles, make_user_roles},
};
#[test]
fn scenario_user_gains_and_loses_access() {
@ -53,11 +55,7 @@ fn scenario_multiple_rooms_different_requirements() {
#[test]
fn scenario_power_level_cascading_highest_wins() {
let roles = make_roles(&[
("admin", Some(100)),
("mod", Some(50)),
("helper", Some(25)),
]);
let roles = make_roles(&[("admin", Some(100)), ("mod", Some(50)), ("helper", Some(25))]);
let admin_mod = make_user_roles(&["admin", "mod"]);
assert_eq!(compute_user_power_level(&roles, &admin_mod), Some(100));
@ -114,10 +112,7 @@ fn scenario_identify_kick_candidates_after_role_revocation() {
rooms.insert("general".to_owned(), HashSet::new());
rooms.insert("nsfw-chat".to_owned(), make_requirements(&["nsfw"]));
rooms.insert("vip-lounge".to_owned(), make_requirements(&["vip"]));
rooms.insert(
"nsfw-vip".to_owned(),
make_requirements(&["nsfw", "vip"]),
);
rooms.insert("nsfw-vip".to_owned(), make_requirements(&["nsfw", "vip"]));
let kick_from: Vec<_> = rooms
.iter()


@ -13,15 +13,13 @@ use std::{
use async_trait::async_trait;
use conduwuit::{
Event, Result, Server, debug, debug_warn, implement, info,
matrix::pdu::PduBuilder,
warn,
Event, Result, Server, debug, debug_warn, implement, info, matrix::pdu::PduBuilder, warn,
};
use conduwuit_core::{
matrix::space_roles::{
RoleDefinition, SpaceRoleMemberEventContent, SpaceRoleRoomEventContent,
SpaceRolesEventContent, SPACE_ROLES_EVENT_TYPE, SPACE_ROLE_MEMBER_EVENT_TYPE,
SPACE_ROLE_ROOM_EVENT_TYPE,
RoleDefinition, SPACE_CASCADING_EVENT_TYPE, SPACE_ROLE_MEMBER_EVENT_TYPE,
SPACE_ROLE_ROOM_EVENT_TYPE, SPACE_ROLES_EVENT_TYPE, SpaceCascadingEventContent,
SpaceRoleMemberEventContent, SpaceRoleRoomEventContent, SpaceRolesEventContent,
},
utils::{
future::TryExtExt,
@ -30,7 +28,7 @@ use conduwuit_core::{
};
use futures::{StreamExt, TryFutureExt};
use ruma::{
Int, OwnedEventId, OwnedRoomId, OwnedUserId, RoomId, UserId, room::RoomType,
Int, OwnedEventId, OwnedRoomId, OwnedUserId, RoomId, UserId,
events::{
StateEventType,
room::{
@ -39,6 +37,7 @@ use ruma::{
},
space::child::SpaceChildEventContent,
},
room::RoomType,
};
use serde_json::value::to_raw_value;
use tokio::sync::RwLock;
@ -130,10 +129,6 @@ impl crate::Service for Service {
}
async fn worker(self: Arc<Self>) -> Result<()> {
if !self.is_enabled() {
return Ok(());
}
info!("Rebuilding space roles cache from all known rooms");
let mut space_count: usize = 0;
@ -148,6 +143,11 @@ impl crate::Service for Service {
for room_id in &room_ids {
match self.services.state_accessor.get_room_type(room_id).await {
| Ok(RoomType::Space) => {
// Check per-Space override — skip spaces where cascading is
// disabled
if !self.is_enabled_for_space(room_id).await {
continue;
}
debug!(room_id = %room_id, "Populating space roles cache");
self.populate_space(room_id).await;
space_count = space_count.saturating_add(1);
@ -163,22 +163,30 @@ impl crate::Service for Service {
fn name(&self) -> &str { crate::service::make_name(std::module_path!()) }
}
/// Check whether space permission cascading is enabled in the server config.
#[implement(Service)]
pub fn is_enabled(&self) -> bool { self.server.config.space_permission_cascading }
/// Ensure a Space has the default admin/mod roles defined.
///
/// Checks whether a `com.continuwuity.space.roles` state event exists in the given space.
/// If not, creates default roles (admin at PL 100, mod at PL 50) and sends
/// the state event as the server user.
#[implement(Service)]
pub async fn is_enabled_for_space(&self, space_id: &RoomId) -> bool {
let cascading_event_type = StateEventType::from(SPACE_CASCADING_EVENT_TYPE.to_owned());
if let Ok(content) = self
.services
.state_accessor
.room_state_get_content::<SpaceCascadingEventContent>(space_id, &cascading_event_type, "")
.await
{
return content.enabled;
}
self.server.config.space_permission_cascading
}
#[implement(Service)]
pub async fn ensure_default_roles(&self, space_id: &RoomId) -> Result {
if !self.is_enabled() {
if !self.is_enabled_for_space(space_id).await {
return Ok(());
}
// Check if com.continuwuity.space.roles already exists
let roles_event_type = StateEventType::from(SPACE_ROLES_EVENT_TYPE.to_owned());
if self
.services
@ -190,22 +198,15 @@ pub async fn ensure_default_roles(&self, space_id: &RoomId) -> Result {
return Ok(());
}
// Create default roles
let mut roles = BTreeMap::new();
roles.insert(
"admin".to_owned(),
RoleDefinition {
description: "Space administrator".to_owned(),
power_level: Some(100),
},
);
roles.insert(
"mod".to_owned(),
RoleDefinition {
description: "Space moderator".to_owned(),
power_level: Some(50),
},
);
roles.insert("admin".to_owned(), RoleDefinition {
description: "Space administrator".to_owned(),
power_level: Some(100),
});
roles.insert("mod".to_owned(), RoleDefinition {
description: "Space moderator".to_owned(),
power_level: Some(50),
});
let content = SpaceRolesEventContent { roles };
@ -214,8 +215,11 @@ pub async fn ensure_default_roles(&self, space_id: &RoomId) -> Result {
let pdu = PduBuilder {
event_type: ruma::events::TimelineEventType::from(SPACE_ROLES_EVENT_TYPE.to_owned()),
content: to_raw_value(&content)
.map_err(|e| conduwuit::Error::Err(format!("Failed to serialize SpaceRolesEventContent: {e}").into()))?,
content: to_raw_value(&content).map_err(|e| {
conduwuit::Error::Err(
format!("Failed to serialize SpaceRolesEventContent: {e}").into(),
)
})?,
state_key: Some(String::new().into()),
..PduBuilder::default()
};
@ -230,18 +234,15 @@ pub async fn ensure_default_roles(&self, space_id: &RoomId) -> Result {
Ok(())
}
/// Populate the in-memory caches from state events for a single Space room.
///
/// Reads `com.continuwuity.space.roles`, `com.continuwuity.space.role.member`, `com.continuwuity.space.role.room`, and
/// `m.space.child` state events and indexes them for fast lookup.
#[implement(Service)]
pub async fn populate_space(&self, space_id: &RoomId) {
if !self.is_enabled() {
if !self.is_enabled_for_space(space_id).await {
return;
}
// Check cache capacity — if over limit, clear and let spaces repopulate on demand
if self.roles.read().await.len() >= usize::try_from(self.server.config.space_roles_cache_capacity).unwrap_or(usize::MAX) {
if self.roles.read().await.len()
>= usize::try_from(self.server.config.space_roles_cache_capacity).unwrap_or(usize::MAX)
{
self.roles.write().await.clear();
self.user_roles.write().await.clear();
self.room_requirements.write().await.clear();
@ -250,7 +251,6 @@ pub async fn populate_space(&self, space_id: &RoomId) {
debug_warn!("Space roles cache exceeded capacity, cleared");
}
// 1. Read com.continuwuity.space.roles (state key: "")
let roles_event_type = StateEventType::from(SPACE_ROLES_EVENT_TYPE.to_owned());
if let Ok(content) = self
.services
@ -264,14 +264,8 @@ pub async fn populate_space(&self, space_id: &RoomId) {
.insert(space_id.to_owned(), content.roles);
}
// 2. Read all com.continuwuity.space.role.member state events (state key: user ID)
let member_event_type = StateEventType::from(SPACE_ROLE_MEMBER_EVENT_TYPE.to_owned());
let shortstatehash = match self
.services
.state
.get_room_shortstatehash(space_id)
.await
{
let shortstatehash = match self.services.state.get_room_shortstatehash(space_id).await {
| Ok(hash) => hash,
| Err(e) => {
debug_warn!(space_id = %space_id, error = ?e, "Failed to get shortstatehash, cache may be stale");
@@ -309,7 +303,6 @@ pub async fn populate_space(&self, space_id: &RoomId) {
.await
.insert(space_id.to_owned(), user_roles_map);
// 3. Read all com.continuwuity.space.role.room state events (state key: room ID)
let room_event_type = StateEventType::from(SPACE_ROLE_ROOM_EVENT_TYPE.to_owned());
let mut room_reqs_map: HashMap<OwnedRoomId, HashSet<String>> = HashMap::new();
@@ -341,7 +334,6 @@ pub async fn populate_space(&self, space_id: &RoomId) {
.await
.insert(space_id.to_owned(), room_reqs_map);
// 4. Read all m.space.child state events → build room_to_space reverse index
let mut child_rooms: Vec<OwnedRoomId> = Vec::new();
self.services
@@ -373,16 +365,12 @@ pub async fn populate_space(&self, space_id: &RoomId) {
})
.await;
// Lock ordering: room_to_space before space_to_rooms.
// This order must be consistent to avoid deadlocks.
{
let mut room_to_space = self.room_to_space.write().await;
// Remove this space from all existing entries
room_to_space.retain(|_, parents| {
parents.remove(space_id);
!parents.is_empty()
});
// Insert fresh children
for child_room_id in &child_rooms {
room_to_space
.entry(child_room_id.clone())
@@ -391,7 +379,6 @@ pub async fn populate_space(&self, space_id: &RoomId) {
}
}
// Update forward index (after room_to_space to maintain lock ordering)
{
let mut space_to_rooms = self.space_to_rooms.write().await;
space_to_rooms.insert(space_id.to_owned(), child_rooms.into_iter().collect())
@@ -399,7 +386,6 @@ pub async fn populate_space(&self, space_id: &RoomId) {
}
}
/// Compute the maximum power level from a user's assigned roles.
#[must_use]
pub fn compute_user_power_level<S: ::std::hash::BuildHasher>(
role_defs: &BTreeMap<String, RoleDefinition>,
@@ -411,7 +397,6 @@ pub fn compute_user_power_level<S: ::std::hash::BuildHasher>(
.max()
}
/// Check if a set of assigned roles satisfies all requirements.
#[must_use]
pub fn roles_satisfy_requirements<S: ::std::hash::BuildHasher>(
required: &HashSet<String, S>,
@@ -420,20 +405,20 @@ pub fn roles_satisfy_requirements<S: ::std::hash::BuildHasher>(
required.iter().all(|r| assigned.contains(r))
}
/// Get a user's effective power level from Space roles.
/// Returns None if user has no roles with power levels.
#[implement(Service)]
pub async fn get_user_power_level(
&self,
space_id: &RoomId,
user_id: &UserId,
) -> Option<i64> {
pub async fn get_user_power_level(&self, space_id: &RoomId, user_id: &UserId) -> Option<i64> {
let role_defs = { self.roles.read().await.get(space_id).cloned()? };
let user_assigned = { self.user_roles.read().await.get(space_id)?.get(user_id).cloned()? };
let user_assigned = {
self.user_roles
.read()
.await
.get(space_id)?
.get(user_id)
.cloned()?
};
compute_user_power_level(&role_defs, &user_assigned)
}
/// Check if a user has all required roles for a room.
#[implement(Service)]
pub async fn user_qualifies_for_room(
&self,
@@ -467,25 +452,25 @@ pub async fn user_qualifies_for_room(
roles_satisfy_requirements(&required, &user_assigned)
}
/// Get the parent Spaces of a child room, if any.
///
/// Only direct parent spaces are returned. Nested sub-space cascading
/// is not supported (see design doc requirement 6).
#[implement(Service)]
pub async fn get_parent_spaces(&self, room_id: &RoomId) -> Vec<OwnedRoomId> {
if !self.is_enabled() {
return Vec::new();
}
self.room_to_space
let all_parents: Vec<OwnedRoomId> = self
.room_to_space
.read()
.await
.get(room_id)
.map(|set| set.iter().cloned().collect())
.unwrap_or_default()
.unwrap_or_default();
let mut enabled_parents = Vec::new();
for parent in all_parents {
if self.is_enabled_for_space(&parent).await {
enabled_parents.push(parent);
}
}
enabled_parents
}
/// Get all child rooms of a Space from the forward index.
#[implement(Service)]
pub async fn get_child_rooms(&self, space_id: &RoomId) -> Vec<OwnedRoomId> {
self.space_to_rooms
@@ -496,15 +481,12 @@ pub async fn get_child_rooms(&self, space_id: &RoomId) -> Vec<OwnedRoomId> {
.unwrap_or_default()
}
/// Synchronize power levels in a child room based on Space roles.
/// This overrides per-room power levels with Space-granted levels.
#[implement(Service)]
pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Result {
if !self.is_enabled() {
if !self.is_enabled_for_space(space_id).await {
return Ok(());
}
// Check if server user is joined to the room
let server_user = self.services.globals.server_user.as_ref();
if !self
.services
@@ -516,7 +498,6 @@ pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Re
return Ok(());
}
// 1. Get current power levels for the room
let mut power_levels_content: RoomPowerLevelsEventContent = self
.services
.state_accessor
@@ -524,7 +505,6 @@ pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Re
.await
.unwrap_or_default();
// 2. Get all members of the room
let members: Vec<OwnedUserId> = self
.services
.state_cache
@@ -533,7 +513,6 @@ pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Re
.collect()
.await;
// 3. For each member, check their space role power level
let mut changed = false;
for user_id in &members {
if user_id == server_user {
@@ -547,7 +526,6 @@ pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Re
.copied()
.unwrap_or(power_levels_content.users_default);
// 4. If the space PL differs from room PL, update it
if current_pl != space_pl_int {
power_levels_content
.users
@ -555,7 +533,6 @@ pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Re
changed = true;
}
} else {
// Check if any other parent space manages this user's PL
let parents = self.get_parent_spaces(room_id).await;
let mut managed_by_other = false;
for parent in &parents {
@@ -575,7 +552,6 @@ pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Re
}
}
// 5. If changed, send updated power levels event
if changed {
let state_lock = self.services.state.mutex.lock(room_id).await;
@@ -593,32 +569,20 @@ pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Re
Ok(())
}
/// Auto-join a user to all qualifying child rooms of a Space.
///
/// Iterates over all child rooms in the `space_to_rooms` forward index,
/// checks whether the user qualifies via their assigned roles, and
/// force-joins them if they are not already a member.
#[implement(Service)]
pub async fn auto_join_qualifying_rooms(
&self,
space_id: &RoomId,
user_id: &UserId,
) -> Result {
if !self.is_enabled() {
pub async fn auto_join_qualifying_rooms(&self, space_id: &RoomId, user_id: &UserId) -> Result {
if !self.is_enabled_for_space(space_id).await {
return Ok(());
}
// Skip server user — it doesn't need role-based auto-join
let server_user = self.services.globals.server_user.as_ref();
if user_id == server_user {
return Ok(());
}
// Get all child rooms via the space_to_rooms forward index
let child_rooms = self.get_child_rooms(space_id).await;
for child_room_id in &child_rooms {
// Skip if already joined
if self
.services
.state_cache
@@ -628,7 +592,6 @@ pub async fn auto_join_qualifying_rooms(
continue;
}
// Check if user qualifies
if !self
.user_qualifies_for_room(space_id, child_room_id, user_id)
.await
@@ -636,7 +599,6 @@ pub async fn auto_join_qualifying_rooms(
continue;
}
// Check if server user is joined to the child room
if !self
.services
.state_cache
@@ -649,7 +611,6 @@ pub async fn auto_join_qualifying_rooms(
let state_lock = self.services.state.mutex.lock(child_room_id).await;
// First invite the user (server user as sender)
if let Err(e) = self
.services
.timeline
@@ -668,7 +629,6 @@ pub async fn auto_join_qualifying_rooms(
continue;
}
// Then join (user as sender)
if let Err(e) = self
.services
.timeline
@@ -690,12 +650,6 @@ pub async fn auto_join_qualifying_rooms(
Ok(())
}
/// Handle a state event change that may require enforcement.
///
/// Spawns a background task (gated by the enforcement semaphore) to
/// repopulate the cache and trigger enforcement actions based on the
/// event type. Deduplicated per-space to avoid redundant work during
/// bulk operations.
impl Service {
pub fn handle_state_event_change(
self: &Arc<Self>,
@@ -703,14 +657,13 @@ impl Service {
event_type: String,
state_key: String,
) {
if !self.is_enabled() {
return;
}
let this = Arc::clone(self);
self.server.runtime().spawn(async move {
// Deduplicate: if enforcement is already pending for this space, skip.
// The running task's populate_space will pick up the latest state.
if event_type != SPACE_CASCADING_EVENT_TYPE
&& !this.is_enabled_for_space(&space_id).await
{
return;
}
{
let mut pending = this.pending_enforcement.write().await;
if pending.contains(&space_id) {
@@ -723,21 +676,16 @@ impl Service {
return;
};
// Always repopulate cache first
this.populate_space(&space_id).await;
match event_type.as_str() {
| SPACE_ROLES_EVENT_TYPE => {
// Role definitions changed — sync PLs in all child rooms
let child_rooms = this.get_child_rooms(&space_id).await;
for child_room_id in &child_rooms {
if let Err(e) =
this.sync_power_levels(&space_id, child_room_id).await
{
if let Err(e) = this.sync_power_levels(&space_id, child_room_id).await {
debug_warn!(room_id = %child_room_id, error = ?e, "Failed to sync power levels");
}
}
// Revalidate all space members against all child rooms
let space_members: Vec<OwnedUserId> = this
.services
.state_cache
@@ -754,10 +702,8 @@ impl Service {
}
},
| SPACE_ROLE_MEMBER_EVENT_TYPE => {
// User's roles changed — auto-join/kick + PL sync
if let Ok(user_id) = UserId::parse(state_key.as_str()) {
if let Err(e) =
this.auto_join_qualifying_rooms(&space_id, user_id).await
if let Err(e) = this.auto_join_qualifying_rooms(&space_id, user_id).await
{
debug_warn!(user_id = %user_id, error = ?e, "Space role auto-join failed");
}
@@ -766,11 +712,9 @@ impl Service {
{
debug_warn!(user_id = %user_id, error = ?e, "Space role auto-kick failed");
}
// Sync power levels in all child rooms
let child_rooms = this.get_child_rooms(&space_id).await;
for child_room_id in &child_rooms {
if let Err(e) =
this.sync_power_levels(&space_id, child_room_id).await
if let Err(e) = this.sync_power_levels(&space_id, child_room_id).await
{
debug_warn!(room_id = %child_room_id, error = ?e, "Failed to sync power levels");
}
@@ -778,7 +722,6 @@ impl Service {
}
},
| SPACE_ROLE_ROOM_EVENT_TYPE => {
// Room requirements changed — kick unqualified members
if let Ok(target_room) = RoomId::parse(state_key.as_str()) {
let members: Vec<OwnedUserId> = this
.services
@@ -789,16 +732,12 @@ impl Service {
.await;
for member in &members {
if !this
.user_qualifies_for_room(
&space_id,
target_room,
member,
)
.user_qualifies_for_room(&space_id, target_room, member)
.await
{
if let Err(e) = Box::pin(this
.kick_unqualified_from_rooms(&space_id, member))
.await
if let Err(e) =
Box::pin(this.kick_unqualified_from_rooms(&space_id, member))
.await
{
debug_warn!(user_id = %member, error = ?e, "Space role requirement kick failed");
}
@@ -809,33 +748,24 @@ impl Service {
| _ => {},
}
// Remove from pending set so future events can trigger enforcement
this.pending_enforcement.write().await.remove(&space_id);
});
}
/// Handle a new `m.space.child` event — update index and auto-join
/// qualifying members.
///
/// If the child event's `via` field is empty the child is removed from
/// both the forward and reverse indexes. Otherwise the child is added
/// and all qualifying space members are auto-joined.
pub fn handle_space_child_change(
self: &Arc<Self>,
space_id: OwnedRoomId,
child_room_id: OwnedRoomId,
) {
if !self.is_enabled() {
return;
}
let this = Arc::clone(self);
self.server.runtime().spawn(async move {
if !this.is_enabled_for_space(&space_id).await {
return;
}
let Ok(_permit) = this.enforcement_semaphore.acquire().await else {
return;
};
// Read the actual m.space.child state event to check via
let child_event_type = StateEventType::SpaceChild;
let is_removal = match this
.services
@@ -852,8 +782,6 @@ impl Service {
};
if is_removal {
// Lock ordering: room_to_space before space_to_rooms.
// This order must be consistent to avoid deadlocks.
let mut room_to_space = this.room_to_space.write().await;
if let Some(parents) = room_to_space.get_mut(&child_room_id) {
parents.remove(&space_id);
@@ -861,7 +789,6 @@ impl Service {
room_to_space.remove(&child_room_id);
}
}
// Remove child from space_to_rooms forward index
let mut space_to_rooms = this.space_to_rooms.write().await;
if let Some(children) = space_to_rooms.get_mut(&space_id) {
children.remove(&child_room_id);
@@ -869,7 +796,6 @@ impl Service {
return;
}
// Add child to reverse index
this.room_to_space
.write()
.await
@@ -877,7 +803,6 @@ impl Service {
.or_default()
.insert(space_id.clone());
// Add child to forward index
this.space_to_rooms
.write()
.await
@@ -885,7 +810,6 @@ impl Service {
.or_default()
.insert(child_room_id.clone());
// Check if server user is joined to the child room before enforcement
let server_user = this.services.globals.server_user.as_ref();
if !this
.services
@@ -897,7 +821,6 @@ impl Service {
return;
}
// Auto-join qualifying space members to this specific child room
let space_members: Vec<OwnedUserId> = this
.services
.state_cache
@@ -920,7 +843,6 @@ impl Service {
let state_lock =
this.services.state.mutex.lock(&child_room_id).await;
// Invite
if let Err(e) = this
.services
.timeline
@@ -941,7 +863,6 @@ impl Service {
continue;
}
// Join
if let Err(e) = this
.services
.timeline
@@ -966,28 +887,21 @@ impl Service {
});
}
/// Handle a user joining a Space — auto-join them to qualifying child
/// rooms.
///
/// Spawns a background task that auto-joins the user into every child
/// room they qualify for, then synchronizes their power levels across
/// all child rooms.
pub fn handle_space_member_join(
self: &Arc<Self>,
space_id: OwnedRoomId,
user_id: OwnedUserId,
) {
if !self.is_enabled() {
return;
}
// Skip if the user is the server user
if user_id == self.services.globals.server_user {
return;
}
let this = Arc::clone(self);
self.server.runtime().spawn(async move {
if !this.is_enabled_for_space(&space_id).await {
return;
}
let Ok(_permit) = this.enforcement_semaphore.acquire().await else {
return;
};
@@ -995,12 +909,9 @@ impl Service {
if let Err(e) = this.auto_join_qualifying_rooms(&space_id, &user_id).await {
debug_warn!(user_id = %user_id, error = ?e, "Auto-join on Space join failed");
}
// Also sync their power levels
let child_rooms = this.get_child_rooms(&space_id).await;
for child_room_id in &child_rooms {
if let Err(e) =
this.sync_power_levels(&space_id, child_room_id).await
{
if let Err(e) = this.sync_power_levels(&space_id, child_room_id).await {
debug_warn!(room_id = %child_room_id, error = ?e, "Failed to sync power levels on join");
}
}
@@ -1008,18 +919,9 @@ impl Service {
}
}
/// Remove a user from all child rooms they no longer qualify for.
///
/// Iterates over child rooms that have role requirements for the given
/// space, checks whether the user still qualifies, and kicks them with a
/// reason if they do not.
#[implement(Service)]
pub async fn kick_unqualified_from_rooms(
&self,
space_id: &RoomId,
user_id: &UserId,
) -> Result {
if !self.is_enabled() {
pub async fn kick_unqualified_from_rooms(&self, space_id: &RoomId, user_id: &UserId) -> Result {
if !self.is_enabled_for_space(space_id).await {
return Ok(());
}
@@ -1028,7 +930,6 @@ pub async fn kick_unqualified_from_rooms(
return Ok(());
}
// Get child rooms that have requirements
let child_rooms: Vec<OwnedRoomId> = self
.room_requirements
.read()
@@ -1038,7 +939,6 @@ pub async fn kick_unqualified_from_rooms(
.unwrap_or_default();
for child_room_id in &child_rooms {
// Check if server user is joined to the child room
if !self
.services
.state_cache
@@ -1048,7 +948,6 @@ pub async fn kick_unqualified_from_rooms(
debug_warn!(room_id = %child_room_id, "Server user is not joined, skipping kick enforcement");
continue;
}
// Skip if not joined
if !self
.services
.state_cache
@@ -1058,7 +957,6 @@ pub async fn kick_unqualified_from_rooms(
continue;
}
// Check if user still qualifies
if self
.user_qualifies_for_room(space_id, child_room_id, user_id)
.await
@@ -1066,7 +964,6 @@ pub async fn kick_unqualified_from_rooms(
continue;
}
// Get existing member event content for the kick
let Ok(member_content) = self
.services
.state_accessor
@@ -1079,22 +976,18 @@ pub async fn kick_unqualified_from_rooms(
let state_lock = self.services.state.mutex.lock(child_room_id).await;
// Kick the user by setting membership to Leave with a reason
if let Err(e) = self
.services
.timeline
.build_and_append_pdu(
PduBuilder::state(
user_id.to_string(),
&RoomMemberEventContent {
membership: MembershipState::Leave,
reason: Some("No longer has required Space roles".into()),
is_direct: None,
join_authorized_via_users_server: None,
third_party_invite: None,
..member_content
},
),
PduBuilder::state(user_id.to_string(), &RoomMemberEventContent {
membership: MembershipState::Leave,
reason: Some("No longer has required Space roles".into()),
is_direct: None,
join_authorized_via_users_server: None,
third_party_invite: None,
..member_content
}),
server_user,
Some(child_room_id),
&state_lock,


@@ -1,7 +1,7 @@
use std::collections::{BTreeMap, HashMap, HashSet};
use conduwuit_core::matrix::space_roles::RoleDefinition;
use ruma::{room_id, OwnedRoomId};
use ruma::{OwnedRoomId, room_id};
use super::{compute_user_power_level, roles_satisfy_requirements};
@@ -10,13 +10,10 @@ pub fn make_roles(entries: &[(&str, Option<i64>)]) -> BTreeMap<String, RoleDefin
entries
.iter()
.map(|(name, pl)| {
(
(*name).to_owned(),
RoleDefinition {
description: format!("{name} role"),
power_level: *pl,
},
)
((*name).to_owned(), RoleDefinition {
description: format!("{name} role"),
power_level: *pl,
})
})
.collect()
}
@@ -38,11 +35,7 @@ fn power_level_single_role()
#[test]
fn power_level_multiple_roles_takes_highest() {
let roles = make_roles(&[
("admin", Some(100)),
("mod", Some(50)),
("helper", Some(25)),
]);
let roles = make_roles(&[("admin", Some(100)), ("mod", Some(50)), ("helper", Some(25))]);
let user_assigned = make_user_roles(&["mod", "helper"]);
assert_eq!(compute_user_power_level(&roles, &user_assigned), Some(50));
}
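The tests above pin down the two pure helpers: a user's effective power level is the maximum over their assigned roles, and a room's role requirements are an all-of (subset) check. A self-contained sketch of that semantics, with simplified stand-ins rather than the module's real `RoleDefinition` type and generic signatures:

```rust
use std::collections::{BTreeMap, HashSet};

// Stand-in for the real RoleDefinition; only the field used here.
struct RoleDefinition {
    power_level: Option<i64>,
}

/// Highest power level among the user's assigned roles, if any role grants one.
fn compute_user_power_level(
    defs: &BTreeMap<String, RoleDefinition>,
    assigned: &HashSet<String>,
) -> Option<i64> {
    assigned
        .iter()
        .filter_map(|r| defs.get(r)?.power_level)
        .max()
}

/// All required roles must be assigned; an empty requirement set always passes.
fn roles_satisfy_requirements(required: &HashSet<String>, assigned: &HashSet<String>) -> bool {
    required.is_subset(assigned)
}

fn main() {
    let mut defs = BTreeMap::new();
    defs.insert("admin".to_owned(), RoleDefinition { power_level: Some(100) });
    defs.insert("mod".to_owned(), RoleDefinition { power_level: Some(50) });
    defs.insert("helper".to_owned(), RoleDefinition { power_level: Some(25) });

    let assigned: HashSet<String> = ["mod", "helper"].iter().map(|s| (*s).to_owned()).collect();
    // Holding mod (50) and helper (25): the maximum, 50, wins.
    assert_eq!(compute_user_power_level(&defs, &assigned), Some(50));

    // Requirements are an all-of check, so an admin-only room rejects this user.
    let required: HashSet<String> = ["admin".to_owned()].into_iter().collect();
    assert!(!roles_satisfy_requirements(&required, &assigned));
}
```

Roles without a `power_level` contribute nothing to the maximum, which is why a user holding only such roles gets `None` and falls back to the room's `users_default`.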
@@ -120,7 +113,11 @@ fn room_to_space_lookup() {
.or_default()
.insert(space.clone());
assert!(room_to_space.get(&child).unwrap().contains(&space));
assert!(room_to_space.get(room_id!("!unknown:example.com")).is_none());
assert!(
room_to_space
.get(room_id!("!unknown:example.com"))
.is_none()
);
}
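The reverse index exercised by this test pairs with a forward index in the service, and both are updated together under a fixed lock order (`room_to_space` before `space_to_rooms`). A sketch of that dual-index bookkeeping, with plain maps standing in for the service's `RwLock`-wrapped fields and a plain `String` for `OwnedRoomId`:

```rust
use std::collections::{HashMap, HashSet};

type RoomId = String; // stand-in for ruma's OwnedRoomId

/// Forward and reverse child-room indexes, always updated together.
/// The real service takes the room_to_space lock before space_to_rooms;
/// keeping that order identical on every path is what prevents deadlocks.
struct Indexes {
    room_to_space: HashMap<RoomId, HashSet<RoomId>>, // child -> parent spaces
    space_to_rooms: HashMap<RoomId, HashSet<RoomId>>, // space -> children
}

impl Indexes {
    fn add_child(&mut self, space: &str, child: &str) {
        self.room_to_space
            .entry(child.to_owned())
            .or_default()
            .insert(space.to_owned());
        self.space_to_rooms
            .entry(space.to_owned())
            .or_default()
            .insert(child.to_owned());
    }

    fn remove_child(&mut self, space: &str, child: &str) {
        if let Some(parents) = self.room_to_space.get_mut(child) {
            parents.remove(space);
            // Drop the entry entirely once the last parent is gone.
            if parents.is_empty() {
                self.room_to_space.remove(child);
            }
        }
        if let Some(children) = self.space_to_rooms.get_mut(space) {
            children.remove(child);
        }
    }
}

fn main() {
    let mut idx = Indexes { room_to_space: HashMap::new(), space_to_rooms: HashMap::new() };
    idx.add_child("!space:example.com", "!room:example.com");
    assert!(idx.room_to_space["!room:example.com"].contains("!space:example.com"));
    idx.remove_child("!space:example.com", "!room:example.com");
    assert!(idx.room_to_space.get("!room:example.com").is_none());
}
```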
#[test]


@@ -10,7 +10,8 @@ use conduwuit_core::{
event::Event,
pdu::{PduCount, PduEvent, PduId, RawPduId},
space_roles::{
SPACE_ROLES_EVENT_TYPE, SPACE_ROLE_MEMBER_EVENT_TYPE, SPACE_ROLE_ROOM_EVENT_TYPE,
SPACE_CASCADING_EVENT_TYPE, SPACE_ROLE_MEMBER_EVENT_TYPE, SPACE_ROLE_ROOM_EVENT_TYPE,
SPACE_ROLES_EVENT_TYPE,
},
},
utils::{self, ReadyExt},
@@ -362,59 +363,49 @@
| _ => {},
}
// Space permission cascading: react to role-related state events
if self.services.roles.is_enabled() {
if let Some(state_key) = pdu.state_key() {
let event_type_str = pdu.event_type().to_string();
match event_type_str.as_str() {
| SPACE_ROLES_EVENT_TYPE
| SPACE_ROLE_MEMBER_EVENT_TYPE
| SPACE_ROLE_ROOM_EVENT_TYPE => {
if matches!(
self.services.state_accessor.get_room_type(room_id).await,
Ok(ruma::room::RoomType::Space)
) {
let roles: Arc<crate::rooms::roles::Service> =
Arc::clone(&*self.services.roles);
roles.handle_state_event_change(
room_id.to_owned(),
event_type_str,
state_key.to_owned(),
);
}
},
| _ => {},
}
}
// Handle m.space.child changes
if *pdu.kind() == TimelineEventType::SpaceChild {
if let Some(state_key) = pdu.state_key() {
if let Ok(child_room_id) = ruma::RoomId::parse(state_key) {
if let Some(state_key) = pdu.state_key() {
let event_type_str = pdu.event_type().to_string();
match event_type_str.as_str() {
| SPACE_ROLES_EVENT_TYPE
| SPACE_ROLE_MEMBER_EVENT_TYPE
| SPACE_ROLE_ROOM_EVENT_TYPE
| SPACE_CASCADING_EVENT_TYPE => {
if matches!(
self.services.state_accessor.get_room_type(room_id).await,
Ok(ruma::room::RoomType::Space)
) {
let roles: Arc<crate::rooms::roles::Service> =
Arc::clone(&*self.services.roles);
roles.handle_space_child_change(
roles.handle_state_event_change(
room_id.to_owned(),
child_room_id.to_owned(),
event_type_str,
state_key.to_owned(),
);
}
},
| _ => {},
}
}
if *pdu.kind() == TimelineEventType::SpaceChild {
if let Some(state_key) = pdu.state_key() {
if let Ok(child_room_id) = ruma::RoomId::parse(state_key) {
let roles: Arc<crate::rooms::roles::Service> = Arc::clone(&*self.services.roles);
roles.handle_space_child_change(room_id.to_owned(), child_room_id.to_owned());
}
}
// Handle m.room.member join in a Space — auto-join child rooms
if *pdu.kind() == TimelineEventType::RoomMember
&& let Some(state_key) = pdu.state_key()
&& let Ok(content) =
pdu.get_content::<ruma::events::room::member::RoomMemberEventContent>()
&& content.membership == ruma::events::room::member::MembershipState::Join
&& let Ok(user_id) = UserId::parse(state_key)
&& matches!(
self.services.state_accessor.get_room_type(room_id).await,
Ok(ruma::room::RoomType::Space)
)
{
let roles: Arc<crate::rooms::roles::Service> =
Arc::clone(&*self.services.roles);
roles.handle_space_member_join(room_id.to_owned(), user_id.to_owned());
}
}
if *pdu.kind() == TimelineEventType::RoomMember
&& let Some(state_key) = pdu.state_key()
&& let Ok(content) =
pdu.get_content::<ruma::events::room::member::RoomMemberEventContent>()
&& content.membership == ruma::events::room::member::MembershipState::Join
&& let Ok(user_id) = UserId::parse(state_key)
&& matches!(
self.services.state_accessor.get_room_type(room_id).await,
Ok(ruma::room::RoomType::Space)
) {
let roles: Arc<crate::rooms::roles::Service> = Arc::clone(&*self.services.roles);
roles.handle_space_member_join(room_id.to_owned(), user_id.to_owned());
}
// CONCERN: If we receive events with a relation out-of-order, we never write


@@ -3,12 +3,10 @@ use std::{
iter::once,
};
use conduwuit_core::matrix::space_roles::RoleDefinition;
use conduwuit::{debug_warn, trace};
use conduwuit_core::{
Err, Result, implement,
matrix::{event::Event, pdu::PduBuilder},
matrix::{event::Event, pdu::PduBuilder, space_roles::RoleDefinition},
utils::{IterStream, ReadyExt},
};
use futures::{FutureExt, StreamExt};
@@ -104,12 +102,15 @@ pub async fn build_and_append_pdu(
}
// Space permission cascading: reject power level changes that conflict
// with Space-granted levels (exempt the server user so sync_power_levels works)
type SpaceEnforcementData =
(ruma::OwnedRoomId, Vec<(OwnedUserId, HashSet<String>)>, BTreeMap<String, RoleDefinition>);
type SpaceEnforcementData = (
ruma::OwnedRoomId,
Vec<(OwnedUserId, HashSet<String>)>,
BTreeMap<String, RoleDefinition>,
);
if self.services.roles.is_enabled()
&& *pdu.kind() == TimelineEventType::RoomPowerLevels
&& pdu.sender() != <OwnedUserId as AsRef<UserId>>::as_ref(&self.services.globals.server_user)
if *pdu.kind() == TimelineEventType::RoomPowerLevels
&& pdu.sender()
!= <OwnedUserId as AsRef<UserId>>::as_ref(&self.services.globals.server_user)
{
use ruma::events::room::power_levels::RoomPowerLevelsEventContent;
@@ -118,8 +119,11 @@ pub async fn build_and_append_pdu(
for parent_space in &parent_spaces {
// Check proposed users don't conflict with space-granted PLs
for (user_id, proposed_pl) in &proposed.users {
if let Some(space_pl) =
self.services.roles.get_user_power_level(parent_space, user_id).await
if let Some(space_pl) = self
.services
.roles
.get_user_power_level(parent_space, user_id)
.await
{
if i64::from(*proposed_pl) != space_pl {
debug_warn!(
@@ -142,15 +146,21 @@ pub async fn build_and_append_pdu(
let space_data: Vec<SpaceEnforcementData> = {
let user_roles_guard = self.services.roles.user_roles.read().await;
let roles_guard = self.services.roles.roles.read().await;
parent_spaces.iter().filter_map(|ps| {
let space_users = user_roles_guard.get(ps)?;
let role_defs = roles_guard.get(ps)?;
Some((
ps.clone(),
space_users.iter().map(|(u, r)| (u.clone(), r.clone())).collect(),
role_defs.clone(),
))
}).collect()
parent_spaces
.iter()
.filter_map(|ps| {
let space_users = user_roles_guard.get(ps)?;
let role_defs = roles_guard.get(ps)?;
Some((
ps.clone(),
space_users
.iter()
.map(|(u, r)| (u.clone(), r.clone()))
.collect(),
role_defs.clone(),
))
})
.collect()
};
// Guards dropped here
@@ -174,7 +184,8 @@ pub async fn build_and_append_pdu(
"Rejecting PL change: space-managed user omitted"
);
return Err!(Request(Forbidden(
"Cannot omit a user whose power level is managed by Space roles"
"Cannot omit a user whose power level is managed by Space \
roles"
)));
},
| Some(pl) if i64::from(*pl) != space_pl => {
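The two rejection branches in this last hunk enforce a single invariant: every user whose power level is Space-managed must appear in a proposed `m.room.power_levels` content at exactly the Space-granted value, so a managed user may be neither omitted nor set to a different level. A condensed standalone sketch of that validation (simplified types; the real check runs once per parent Space against the cached role data):

```rust
use std::collections::{BTreeMap, HashMap};

/// Reject a proposed power-levels users map if it conflicts with
/// Space-granted levels: a managed user must be present and must be
/// listed at exactly the granted level.
fn validate_power_levels(
    proposed_users: &BTreeMap<String, i64>,
    space_granted: &HashMap<String, i64>, // user -> PL derived from Space roles
) -> Result<(), String> {
    for (user, space_pl) in space_granted {
        match proposed_users.get(user) {
            None => {
                return Err(format!(
                    "{user}: power level managed by Space roles, cannot be omitted"
                ));
            },
            Some(pl) if *pl != *space_pl => {
                return Err(format!("{user}: must be {space_pl}, proposed {pl}"));
            },
            _ => {},
        }
    }
    Ok(())
}

fn main() {
    let granted: HashMap<String, i64> =
        [("@mod:example.com".to_owned(), 50)].into_iter().collect();
    let ok: BTreeMap<String, i64> =
        [("@mod:example.com".to_owned(), 50)].into_iter().collect();
    assert!(validate_power_levels(&ok, &granted).is_ok());
    // Omitting a managed user (or changing their level) is rejected.
    assert!(validate_power_levels(&BTreeMap::new(), &granted).is_err());
}
```

The server user is exempt from this check in the diff above precisely so that `sync_power_levels` itself can write the Space-granted values without tripping the guard.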