chore(spaces): fix formatting, add changelog, remove design docs

Run cargo +nightly fmt, add towncrier news fragment, remove plan
documents that served their purpose during development.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
ember33 2026-03-19 16:10:53 +01:00
parent 78d7c56e6f
commit 53d4fb892c
11 changed files with 157 additions and 2644 deletions


@ -0,0 +1 @@
Add Space permission cascading: power levels cascade from Spaces to child rooms, role-based room access with custom roles, continuous enforcement (auto-join/kick), and admin commands for role management. Controlled by `space_permission_cascading` config flag (off by default).


@ -1,225 +0,0 @@
# Space Permission Cascading — Design Document
**Date:** 2026-03-17
**Status:** Implemented
## Overview
Server-side feature that allows user rights in a Space to cascade down to its
direct child rooms. Includes power level cascading and role-based room access
control. Enabled via a server-wide configuration flag, disabled by default.
## Requirements
1. Power levels defined in a Space cascade to all direct child rooms (Space
always wins over per-room overrides).
2. Admins can define custom roles in a Space and assign them to users.
3. Child rooms can require one or more roles for access.
4. Enforcement is continuous — role revocation auto-kicks users from rooms they
no longer qualify for.
5. Users are auto-joined to all qualifying child rooms when they join a Space or
receive a new role.
6. Cascading applies to direct parent Space only; no nested cascade through
sub-spaces.
7. Feature is toggled by a single server-wide config flag
(`space_permission_cascading`), off by default.
## Configuration
```toml
# conduwuit-example.toml
# Enable space permission cascading (power levels and role-based access).
# When enabled, power levels cascade from Spaces to child rooms and rooms
# can require roles for access. Applies to all Spaces on this server.
# Default: false
space_permission_cascading = false
```
## Custom State Events
All events live in the Space room.
### `com.continuwuity.space.roles` (state key: `""`)
Defines the available roles for the Space. Two default roles (`admin` and `mod`)
are created automatically when a Space is first encountered with the feature
enabled.
```json
{
"roles": {
"admin": {
"description": "Space administrator",
"power_level": 100
},
"mod": {
"description": "Space moderator",
"power_level": 50
},
"nsfw": {
"description": "Access to NSFW content"
},
"vip": {
"description": "VIP member"
}
}
}
```
- `description` (string, required): Human-readable description.
- `power_level` (integer, optional): If present, users with this role receive
this power level in all child rooms. When a user holds multiple roles with
power levels, the highest value wins.
### `com.continuwuity.space.role.member` (state key: user ID)
Assigns roles to a user within the Space.
```json
{
"roles": ["nsfw", "vip"]
}
```
### `com.continuwuity.space.role.room` (state key: room ID)
Declares which roles a child room requires. A user must hold **all** listed
roles to access the room.
```json
{
"required_roles": ["nsfw"]
}
```
## Enforcement Rules
All enforcement is skipped when `space_permission_cascading = false`.
### 1. Join gating
When a user attempts to join a room that is a direct child of a Space:
- Look up the room's `com.continuwuity.space.role.room` event in the parent Space.
- If the room has `required_roles`, check the user's `com.continuwuity.space.role.member`.
- Reject the join if the user is missing any required role.
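The qualification check itself is a subset test. A minimal sketch of `roles_satisfy_requirements` (the service's version is generic over the hash builder; signatures are simplified here):

```rust
use std::collections::HashSet;

/// A user qualifies for a room when they hold every required role.
/// Mirrors `roles_satisfy_requirements`; an empty requirement set always passes.
fn roles_satisfy_requirements(user_roles: &HashSet<String>, required: &HashSet<String>) -> bool {
    required.iter().all(|role| user_roles.contains(role))
}
```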
### 2. Power level override
For every user in a child room of a Space:
- Look up their roles via `com.continuwuity.space.role.member` in the parent Space.
- For each role that has a `power_level`, take the highest value.
- Override the user's power level in the child room's `m.room.power_levels`.
- Reject attempts to manually set per-room power levels that conflict with
Space-granted levels.
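The "highest value wins" rule reduces to a max over the user's assigned roles. A sketch of `compute_user_power_level` with simplified types (the real `RoleDefinition` lives in `conduwuit_core::matrix::space_roles`):

```rust
use std::collections::{BTreeMap, HashSet};

/// Simplified stand-in for the service's `RoleDefinition`.
#[allow(dead_code)]
struct RoleDefinition {
    description: String,
    power_level: Option<i64>,
}

/// Highest power level among the user's assigned roles, or None if no
/// assigned role defines one. Mirrors `compute_user_power_level`.
fn compute_user_power_level(
    roles: &BTreeMap<String, RoleDefinition>,
    assigned: &HashSet<String>,
) -> Option<i64> {
    assigned
        .iter()
        .filter_map(|name| roles.get(name)?.power_level)
        .max()
}
```

A `None` result means the per-room power levels are left untouched for that user.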
### 3. Role revocation
When a `com.continuwuity.space.role.member` event is updated and a role is removed:
- Identify all child rooms that require the removed role.
- Auto-kick the user from rooms they no longer qualify for.
- Recalculate and update the user's power level in all child rooms.
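Identifying which rooms to kick from is a filter over the room-requirements index against the user's remaining roles. A hypothetical helper (the name and signature are illustrative, not from the codebase):

```rust
use std::collections::{HashMap, HashSet};

/// Rooms whose requirements the user's remaining roles no longer satisfy.
/// Rooms with no requirements never appear in the result.
fn kick_candidates<'a>(
    room_requirements: &'a HashMap<String, HashSet<String>>,
    remaining_roles: &HashSet<String>,
) -> Vec<&'a str> {
    room_requirements
        .iter()
        .filter(|(_, required)| !required.iter().all(|r| remaining_roles.contains(r)))
        .map(|(room, _)| room.as_str())
        .collect()
}
```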
### 4. Room requirement change
When a `com.continuwuity.space.role.room` event is updated with new requirements:
- Check all current members of the room.
- Auto-kick members who do not hold all newly required roles.
### 5. Auto-join on role grant
When a `com.continuwuity.space.role.member` event is updated and a role is added:
- Find all child rooms where the user now meets all required roles.
- Auto-join the user to qualifying rooms they are not already in.
This also applies when a user first joins the Space — they are auto-joined to
all child rooms they qualify for. Rooms with no role requirements auto-join all
Space members.
### 6. New child room
When a new `m.space.child` event is added to a Space:
- Auto-join all qualifying Space members to the new child room.
## Caching & Indexing
The source of truth is always the state events. The server maintains an
in-memory index for fast enforcement lookups, following the same patterns as the
existing `roomid_spacehierarchy_cache`.
### Index structures
| Index | Source event |
|------------------------------|------------------------|
| Space → roles defined | `com.continuwuity.space.roles` |
| Space → user → roles | `com.continuwuity.space.role.member` |
| Space → room → required roles| `com.continuwuity.space.role.room` |
| Room → parent Spaces | `m.space.child` (reverse lookup) |
| Space → child rooms | `m.space.child` (forward index) |
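Shaped roughly like this — a sketch using plain `String` IDs in place of the ruma owned ID types, and without the `RwLock` wrappers the service puts around each map:

```rust
use std::collections::{BTreeMap, HashMap, HashSet};

#[derive(Default)]
#[allow(dead_code)]
struct RoleIndex {
    /// Space -> role name -> optional power level (description omitted).
    roles: HashMap<String, BTreeMap<String, Option<i64>>>,
    /// Space -> user -> assigned role names.
    user_roles: HashMap<String, HashMap<String, HashSet<String>>>,
    /// Space -> child room -> required role names.
    room_requirements: HashMap<String, HashMap<String, HashSet<String>>>,
    /// Child room -> parent spaces (reverse lookup).
    room_to_space: HashMap<String, HashSet<String>>,
    /// Space -> child rooms (forward index).
    space_children: HashMap<String, HashSet<String>>,
}

impl RoleIndex {
    /// True when the user holds every role the room requires, or the room
    /// has no requirements on record.
    fn user_qualifies(&self, space: &str, room: &str, user: &str) -> bool {
        let Some(required) = self.room_requirements.get(space).and_then(|r| r.get(room)) else {
            return true;
        };
        let empty = HashSet::new();
        let held = self
            .user_roles
            .get(space)
            .and_then(|users| users.get(user))
            .unwrap_or(&empty);
        required.iter().all(|r| held.contains(r))
    }
}
```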
### Cache invalidation triggers
| Event changed | Action |
|----------------------------|-----------------------------------------------------|
| `com.continuwuity.space.roles` | Refresh role definitions, revalidate all members |
| `com.continuwuity.space.role.member` | Refresh user's roles, trigger auto-join/kick |
| `com.continuwuity.space.role.room` | Refresh room requirements, trigger auto-join/kick |
| `m.space.child` added | Index new child, auto-join qualifying members |
| `m.space.child` removed | Remove from index (no auto-kick) |
| Server startup | Full rebuild from state events |
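The trigger table maps onto a dispatch over the changed state event type. A sketch (the event-type strings match the events defined above; the action names are illustrative, not the service's actual API):

```rust
const SPACE_ROLES: &str = "com.continuwuity.space.roles";
const SPACE_ROLE_MEMBER: &str = "com.continuwuity.space.role.member";
const SPACE_ROLE_ROOM: &str = "com.continuwuity.space.role.room";
const SPACE_CHILD: &str = "m.space.child";

#[derive(Debug, PartialEq)]
enum CacheAction {
    /// Refresh role definitions, revalidate all members.
    RevalidateMembers,
    /// Refresh the affected index entry, run auto-join/kick.
    AutoJoinKick,
    /// Re-index children; auto-join on add, no auto-kick on removal.
    ReindexChildren,
    Ignore,
}

fn invalidation_action(event_type: &str) -> CacheAction {
    match event_type {
        SPACE_ROLES => CacheAction::RevalidateMembers,
        SPACE_ROLE_MEMBER | SPACE_ROLE_ROOM => CacheAction::AutoJoinKick,
        SPACE_CHILD => CacheAction::ReindexChildren,
        _ => CacheAction::Ignore,
    }
}
```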
## Admin Room Commands
Roles are managed via the existing admin room interface, which sends the
appropriate state events under the hood and triggers enforcement.
```
!admin space roles list <space>
!admin space roles add <space> <role_name> [description] [power_level]
!admin space roles remove <space> <role_name>
!admin space roles assign <space> <user_id> <role_name>
!admin space roles revoke <space> <user_id> <role_name>
!admin space roles require <space> <room_id> <role_name>
!admin space roles unrequire <space> <room_id> <role_name>
!admin space roles user <space> <user_id>
!admin space roles room <space> <room_id>
```
## Architecture
**Approach:** Hybrid — state events for definition, an in-memory cache for
enforcement.
- State events are the source of truth and federate normally.
- The server maintains an in-memory cache/index for fast enforcement.
- Cache is invalidated on relevant state event changes and fully rebuilt on
startup.
- All enforcement hooks (join gating, PL override, auto-join, auto-kick) check
the feature flag first and no-op when disabled.
- Existing clients can manage roles via Developer Tools (custom state events).
The admin room commands provide a user-friendly interface.
## Scope
### In scope
- Server-wide feature flag
- Custom state events for role definition, assignment, and room requirements
- Power level cascading (Space always wins)
- Continuous enforcement (auto-join, auto-kick)
- Admin room commands
- In-memory caching with invalidation
- Default `admin` (PL 100) and `mod` (PL 50) roles
### Out of scope
- Client-side UI for role management
- Nested cascade through sub-spaces
- Per-space opt-in/opt-out (it is server-wide)
- Federation-specific logic beyond normal state event replication

File diff suppressed because it is too large


@ -1,18 +1,16 @@
use std::fmt::Write;
use clap::Subcommand;
use conduwuit::{Err, Event, Result};
use conduwuit::{Err, Event, Result, matrix::pdu::PduBuilder};
use conduwuit_core::matrix::space_roles::{
RoleDefinition, SpaceRoleMemberEventContent, SpaceRoleRoomEventContent,
SpaceRolesEventContent, SPACE_ROLES_EVENT_TYPE, SPACE_ROLE_MEMBER_EVENT_TYPE,
SPACE_ROLE_ROOM_EVENT_TYPE,
RoleDefinition, SPACE_ROLE_MEMBER_EVENT_TYPE, SPACE_ROLE_ROOM_EVENT_TYPE,
SPACE_ROLES_EVENT_TYPE, SpaceRoleMemberEventContent, SpaceRoleRoomEventContent,
SpaceRolesEventContent,
};
use futures::StreamExt;
use ruma::{OwnedRoomId, OwnedRoomOrAliasId, OwnedUserId, events::StateEventType};
use serde_json::value::to_raw_value;
use conduwuit::matrix::pdu::PduBuilder;
use futures::StreamExt;
use crate::{admin_command, admin_command_dispatch};
macro_rules! require_enabled {
@ -20,8 +18,8 @@ macro_rules! require_enabled {
if !$self.services.rooms.roles.is_enabled() {
return $self
.write_str(
"Space permission cascading is disabled. \
Enable it with `space_permission_cascading = true` in your config.",
"Space permission cascading is disabled. Enable it with \
`space_permission_cascading = true` in your config.",
)
.await;
}
@ -51,10 +49,11 @@ macro_rules! custom_state_pdu {
($event_type:expr, $state_key:expr, $content:expr) => {
PduBuilder {
event_type: $event_type.to_owned().into(),
content: to_raw_value($content)
.map_err(|e| conduwuit::Error::Err(format!(
"Failed to serialize custom state event content: {e}"
).into()))?,
content: to_raw_value($content).map_err(|e| {
conduwuit::Error::Err(
format!("Failed to serialize custom state event content: {e}").into(),
)
})?,
state_key: Some($state_key.to_owned().into()),
..PduBuilder::default()
}
@ -244,9 +243,7 @@ async fn remove(&self, space: OwnedRoomOrAliasId, role_name: String) -> Result {
for (state_key, event_id) in user_entries {
if let Ok(pdu) = self.services.rooms.timeline.get_pdu(&event_id).await {
if let Ok(mut member_content) =
pdu.get_content::<SpaceRoleMemberEventContent>()
{
if let Ok(mut member_content) = pdu.get_content::<SpaceRoleMemberEventContent>() {
if member_content.roles.contains(&role_name) {
member_content.roles.retain(|r| r != &role_name);
self.services
@ -281,9 +278,7 @@ async fn remove(&self, space: OwnedRoomOrAliasId, role_name: String) -> Result {
for (state_key, event_id) in room_entries {
if let Ok(pdu) = self.services.rooms.timeline.get_pdu(&event_id).await {
if let Ok(mut room_content) =
pdu.get_content::<SpaceRoleRoomEventContent>()
{
if let Ok(mut room_content) = pdu.get_content::<SpaceRoleRoomEventContent>() {
if room_content.required_roles.contains(&role_name) {
room_content.required_roles.retain(|r| r != &role_name);
self.services
@ -363,10 +358,8 @@ async fn assign(
)
.await?;
self.write_str(&format!(
"Assigned role '{role_name}' to {user_id} in space {space_id}."
))
.await
self.write_str(&format!("Assigned role '{role_name}' to {user_id} in space {space_id}."))
.await
}
#[admin_command]
@ -408,10 +401,8 @@ async fn revoke(
)
.await?;
self.write_str(&format!(
"Revoked role '{role_name}' from {user_id} in space {space_id}."
))
.await
self.write_str(&format!("Revoked role '{role_name}' from {user_id} in space {space_id}."))
.await
}
#[admin_command]
@ -540,10 +531,9 @@ async fn user(&self, space: OwnedRoomOrAliasId, user_id: OwnedUserId) -> Result
))
.await
},
| _ => {
| _ =>
self.write_str(&format!("User {user_id} has no roles in space {space_id}."))
.await
},
.await,
}
}
@ -569,11 +559,10 @@ async fn room(&self, space: OwnedRoomOrAliasId, room_id: OwnedRoomId) -> Result
))
.await
},
| _ => {
| _ =>
self.write_str(&format!(
"Room {room_id} has no role requirements in space {space_id}."
))
.await
},
.await,
}
}


@ -58,20 +58,14 @@ mod tests {
#[test]
fn serialize_space_roles() {
let mut roles = BTreeMap::new();
roles.insert(
"admin".to_owned(),
RoleDefinition {
description: "Space administrator".to_owned(),
power_level: Some(100),
},
);
roles.insert(
"nsfw".to_owned(),
RoleDefinition {
description: "NSFW access".to_owned(),
power_level: None,
},
);
roles.insert("admin".to_owned(), RoleDefinition {
description: "Space administrator".to_owned(),
power_level: Some(100),
});
roles.insert("nsfw".to_owned(), RoleDefinition {
description: "NSFW access".to_owned(),
power_level: None,
});
let content = SpaceRolesEventContent { roles };
let json = serde_json::to_string(&content).unwrap();
let deserialized: SpaceRolesEventContent = serde_json::from_str(&json).unwrap();
@ -92,9 +86,7 @@ mod tests {
#[test]
fn serialize_role_room() {
let content = SpaceRoleRoomEventContent {
required_roles: vec!["nsfw".to_owned()],
};
let content = SpaceRoleRoomEventContent { required_roles: vec!["nsfw".to_owned()] };
let json = serde_json::to_string(&content).unwrap();
let deserialized: SpaceRoleRoomEventContent = serde_json::from_str(&json).unwrap();
assert_eq!(deserialized.required_roles, vec!["nsfw"]);
@ -142,9 +134,7 @@ mod tests {
#[test]
fn empty_room_requirements() {
let content = SpaceRoleRoomEventContent {
required_roles: vec![],
};
let content = SpaceRoleRoomEventContent { required_roles: vec![] };
let json = serde_json::to_string(&content).unwrap();
let deserialized: SpaceRoleRoomEventContent = serde_json::from_str(&json).unwrap();
assert!(deserialized.required_roles.is_empty());


@ -7,7 +7,7 @@
use std::collections::{BTreeMap, HashMap, HashSet};
use conduwuit_core::matrix::space_roles::RoleDefinition;
use ruma::{room_id, user_id, OwnedRoomId, OwnedUserId};
use ruma::{OwnedRoomId, OwnedUserId, room_id, user_id};
use super::tests::{make_requirements, make_roles, make_user_roles};
@ -75,10 +75,7 @@ impl MockCache {
room: &OwnedRoomId,
user: &OwnedUserId,
) -> bool {
let reqs = self
.room_requirements
.get(space)
.and_then(|r| r.get(room));
let reqs = self.room_requirements.get(space).and_then(|r| r.get(room));
match reqs {
| None => true,
@ -117,10 +114,7 @@ fn cache_populate_and_lookup() {
let child = room_id!("!child:example.com").to_owned();
let alice = user_id!("@alice:example.com").to_owned();
cache.add_space(
space.clone(),
make_roles(&[("admin", Some(100)), ("nsfw", None)]),
);
cache.add_space(space.clone(), make_roles(&[("admin", Some(100)), ("nsfw", None)]));
cache.add_child(&space, child.clone());
cache.assign_role(&space, alice.clone(), "nsfw".to_owned());
cache.set_room_requirements(&space, child.clone(), make_requirements(&["nsfw"]));
@ -154,21 +148,14 @@ fn cache_invalidation_on_requirement_change() {
let child = room_id!("!room:example.com").to_owned();
let alice = user_id!("@alice:example.com").to_owned();
cache.add_space(
space.clone(),
make_roles(&[("nsfw", None), ("vip", None)]),
);
cache.add_space(space.clone(), make_roles(&[("nsfw", None), ("vip", None)]));
cache.assign_role(&space, alice.clone(), "vip".to_owned());
cache.set_room_requirements(&space, child.clone(), make_requirements(&["vip"]));
assert!(cache.user_qualifies(&space, &child, &alice));
// Add nsfw requirement
cache.set_room_requirements(
&space,
child.clone(),
make_requirements(&["vip", "nsfw"]),
);
cache.set_room_requirements(&space, child.clone(), make_requirements(&["vip", "nsfw"]));
assert!(!cache.user_qualifies(&space, &child, &alice));
}
@ -177,11 +164,7 @@ fn cache_clear_empties_all() {
let mut cache = MockCache::new();
let space = room_id!("!space:example.com").to_owned();
cache.add_space(space.clone(), make_roles(&[("admin", Some(100))]));
cache.assign_role(
&space,
user_id!("@alice:example.com").to_owned(),
"admin".to_owned(),
);
cache.assign_role(&space, user_id!("@alice:example.com").to_owned(), "admin".to_owned());
cache.clear();
@ -204,7 +187,10 @@ fn cache_reverse_lookup_consistency() {
assert!(cache.room_to_space.get(&child1).unwrap().contains(&space));
assert!(cache.room_to_space.get(&child2).unwrap().contains(&space));
assert!(
cache.room_to_space.get(room_id!("!unknown:example.com")).is_none()
cache
.room_to_space
.get(room_id!("!unknown:example.com"))
.is_none()
);
}
@ -214,10 +200,7 @@ fn cache_power_level_updates_on_role_change() {
let space = room_id!("!space:example.com").to_owned();
let alice = user_id!("@alice:example.com").to_owned();
cache.add_space(
space.clone(),
make_roles(&[("admin", Some(100)), ("mod", Some(50))]),
);
cache.add_space(space.clone(), make_roles(&[("admin", Some(100)), ("mod", Some(50))]));
// No roles -> no PL
assert_eq!(cache.get_power_level(&space, &alice), None);


@ -2,8 +2,10 @@ use std::collections::{HashMap, HashSet};
use ruma::{room_id, user_id};
use super::{compute_user_power_level, roles_satisfy_requirements};
use super::tests::{make_requirements, make_roles, make_user_roles};
use super::{
compute_user_power_level, roles_satisfy_requirements,
tests::{make_requirements, make_roles, make_user_roles},
};
#[test]
fn scenario_user_gains_and_loses_access() {
@ -53,11 +55,7 @@ fn scenario_multiple_rooms_different_requirements() {
#[test]
fn scenario_power_level_cascading_highest_wins() {
let roles = make_roles(&[
("admin", Some(100)),
("mod", Some(50)),
("helper", Some(25)),
]);
let roles = make_roles(&[("admin", Some(100)), ("mod", Some(50)), ("helper", Some(25))]);
let admin_mod = make_user_roles(&["admin", "mod"]);
assert_eq!(compute_user_power_level(&roles, &admin_mod), Some(100));
@ -114,10 +112,7 @@ fn scenario_identify_kick_candidates_after_role_revocation() {
rooms.insert("general".to_owned(), HashSet::new());
rooms.insert("nsfw-chat".to_owned(), make_requirements(&["nsfw"]));
rooms.insert("vip-lounge".to_owned(), make_requirements(&["vip"]));
rooms.insert(
"nsfw-vip".to_owned(),
make_requirements(&["nsfw", "vip"]),
);
rooms.insert("nsfw-vip".to_owned(), make_requirements(&["nsfw", "vip"]));
let kick_from: Vec<_> = rooms
.iter()


@ -13,15 +13,13 @@ use std::{
use async_trait::async_trait;
use conduwuit::{
Event, Result, Server, debug, debug_warn, implement, info,
matrix::pdu::PduBuilder,
warn,
Event, Result, Server, debug, debug_warn, implement, info, matrix::pdu::PduBuilder, warn,
};
use conduwuit_core::{
matrix::space_roles::{
RoleDefinition, SpaceRoleMemberEventContent, SpaceRoleRoomEventContent,
SpaceRolesEventContent, SPACE_ROLES_EVENT_TYPE, SPACE_ROLE_MEMBER_EVENT_TYPE,
SPACE_ROLE_ROOM_EVENT_TYPE,
RoleDefinition, SPACE_ROLE_MEMBER_EVENT_TYPE, SPACE_ROLE_ROOM_EVENT_TYPE,
SPACE_ROLES_EVENT_TYPE, SpaceRoleMemberEventContent, SpaceRoleRoomEventContent,
SpaceRolesEventContent,
},
utils::{
future::TryExtExt,
@ -30,7 +28,7 @@ use conduwuit_core::{
};
use futures::{StreamExt, TryFutureExt};
use ruma::{
Int, OwnedEventId, OwnedRoomId, OwnedUserId, RoomId, UserId, room::RoomType,
Int, OwnedEventId, OwnedRoomId, OwnedUserId, RoomId, UserId,
events::{
StateEventType,
room::{
@ -39,6 +37,7 @@ use ruma::{
},
space::child::SpaceChildEventContent,
},
room::RoomType,
};
use serde_json::value::to_raw_value;
use tokio::sync::RwLock;
@ -169,9 +168,9 @@ pub fn is_enabled(&self) -> bool { self.server.config.space_permission_cascading
/// Ensure a Space has the default admin/mod roles defined.
///
/// Checks whether a `com.continuwuity.space.roles` state event exists in the given space.
/// If not, creates default roles (admin at PL 100, mod at PL 50) and sends
/// the state event as the server user.
/// Checks whether a `com.continuwuity.space.roles` state event exists in the
/// given space. If not, creates default roles (admin at PL 100, mod at PL 50)
/// and sends the state event as the server user.
#[implement(Service)]
pub async fn ensure_default_roles(&self, space_id: &RoomId) -> Result {
if !self.is_enabled() {
@ -192,20 +191,14 @@ pub async fn ensure_default_roles(&self, space_id: &RoomId) -> Result {
// Create default roles
let mut roles = BTreeMap::new();
roles.insert(
"admin".to_owned(),
RoleDefinition {
description: "Space administrator".to_owned(),
power_level: Some(100),
},
);
roles.insert(
"mod".to_owned(),
RoleDefinition {
description: "Space moderator".to_owned(),
power_level: Some(50),
},
);
roles.insert("admin".to_owned(), RoleDefinition {
description: "Space administrator".to_owned(),
power_level: Some(100),
});
roles.insert("mod".to_owned(), RoleDefinition {
description: "Space moderator".to_owned(),
power_level: Some(50),
});
let content = SpaceRolesEventContent { roles };
@ -214,8 +207,11 @@ pub async fn ensure_default_roles(&self, space_id: &RoomId) -> Result {
let pdu = PduBuilder {
event_type: ruma::events::TimelineEventType::from(SPACE_ROLES_EVENT_TYPE.to_owned()),
content: to_raw_value(&content)
.map_err(|e| conduwuit::Error::Err(format!("Failed to serialize SpaceRolesEventContent: {e}").into()))?,
content: to_raw_value(&content).map_err(|e| {
conduwuit::Error::Err(
format!("Failed to serialize SpaceRolesEventContent: {e}").into(),
)
})?,
state_key: Some(String::new().into()),
..PduBuilder::default()
};
@ -232,16 +228,20 @@ pub async fn ensure_default_roles(&self, space_id: &RoomId) -> Result {
/// Populate the in-memory caches from state events for a single Space room.
///
/// Reads `com.continuwuity.space.roles`, `com.continuwuity.space.role.member`, `com.continuwuity.space.role.room`, and
/// `m.space.child` state events and indexes them for fast lookup.
/// Reads `com.continuwuity.space.roles`, `com.continuwuity.space.role.member`,
/// `com.continuwuity.space.role.room`, and `m.space.child` state events and
/// indexes them for fast lookup.
#[implement(Service)]
pub async fn populate_space(&self, space_id: &RoomId) {
if !self.is_enabled() {
return;
}
// Check cache capacity — if over limit, clear and let spaces repopulate on demand
if self.roles.read().await.len() >= usize::try_from(self.server.config.space_roles_cache_capacity).unwrap_or(usize::MAX) {
// Check cache capacity — if over limit, clear and let spaces repopulate on
// demand
if self.roles.read().await.len()
>= usize::try_from(self.server.config.space_roles_cache_capacity).unwrap_or(usize::MAX)
{
self.roles.write().await.clear();
self.user_roles.write().await.clear();
self.room_requirements.write().await.clear();
@ -264,14 +264,10 @@ pub async fn populate_space(&self, space_id: &RoomId) {
.insert(space_id.to_owned(), content.roles);
}
// 2. Read all com.continuwuity.space.role.member state events (state key: user ID)
// 2. Read all com.continuwuity.space.role.member state events (state key: user
// ID)
let member_event_type = StateEventType::from(SPACE_ROLE_MEMBER_EVENT_TYPE.to_owned());
let shortstatehash = match self
.services
.state
.get_room_shortstatehash(space_id)
.await
{
let shortstatehash = match self.services.state.get_room_shortstatehash(space_id).await {
| Ok(hash) => hash,
| Err(e) => {
debug_warn!(space_id = %space_id, error = ?e, "Failed to get shortstatehash, cache may be stale");
@ -309,7 +305,8 @@ pub async fn populate_space(&self, space_id: &RoomId) {
.await
.insert(space_id.to_owned(), user_roles_map);
// 3. Read all com.continuwuity.space.role.room state events (state key: room ID)
// 3. Read all com.continuwuity.space.role.room state events (state key: room
// ID)
let room_event_type = StateEventType::from(SPACE_ROLE_ROOM_EVENT_TYPE.to_owned());
let mut room_reqs_map: HashMap<OwnedRoomId, HashSet<String>> = HashMap::new();
@ -423,13 +420,16 @@ pub fn roles_satisfy_requirements<S: ::std::hash::BuildHasher>(
/// Get a user's effective power level from Space roles.
/// Returns None if user has no roles with power levels.
#[implement(Service)]
pub async fn get_user_power_level(
&self,
space_id: &RoomId,
user_id: &UserId,
) -> Option<i64> {
pub async fn get_user_power_level(&self, space_id: &RoomId, user_id: &UserId) -> Option<i64> {
let role_defs = { self.roles.read().await.get(space_id).cloned()? };
let user_assigned = { self.user_roles.read().await.get(space_id)?.get(user_id).cloned()? };
let user_assigned = {
self.user_roles
.read()
.await
.get(space_id)?
.get(user_id)
.cloned()?
};
compute_user_power_level(&role_defs, &user_assigned)
}
@ -599,11 +599,7 @@ pub async fn sync_power_levels(&self, space_id: &RoomId, room_id: &RoomId) -> Re
/// checks whether the user qualifies via their assigned roles, and
/// force-joins them if they are not already a member.
#[implement(Service)]
pub async fn auto_join_qualifying_rooms(
&self,
space_id: &RoomId,
user_id: &UserId,
) -> Result {
pub async fn auto_join_qualifying_rooms(&self, space_id: &RoomId, user_id: &UserId) -> Result {
if !self.is_enabled() {
return Ok(());
}
@ -731,9 +727,7 @@ impl Service {
// Role definitions changed — sync PLs in all child rooms
let child_rooms = this.get_child_rooms(&space_id).await;
for child_room_id in &child_rooms {
if let Err(e) =
this.sync_power_levels(&space_id, child_room_id).await
{
if let Err(e) = this.sync_power_levels(&space_id, child_room_id).await {
debug_warn!(room_id = %child_room_id, error = ?e, "Failed to sync power levels");
}
}
@ -756,8 +750,7 @@ impl Service {
| SPACE_ROLE_MEMBER_EVENT_TYPE => {
// User's roles changed — auto-join/kick + PL sync
if let Ok(user_id) = UserId::parse(state_key.as_str()) {
if let Err(e) =
this.auto_join_qualifying_rooms(&space_id, user_id).await
if let Err(e) = this.auto_join_qualifying_rooms(&space_id, user_id).await
{
debug_warn!(user_id = %user_id, error = ?e, "Space role auto-join failed");
}
@ -769,8 +762,7 @@ impl Service {
// Sync power levels in all child rooms
let child_rooms = this.get_child_rooms(&space_id).await;
for child_room_id in &child_rooms {
if let Err(e) =
this.sync_power_levels(&space_id, child_room_id).await
if let Err(e) = this.sync_power_levels(&space_id, child_room_id).await
{
debug_warn!(room_id = %child_room_id, error = ?e, "Failed to sync power levels");
}
@ -789,16 +781,12 @@ impl Service {
.await;
for member in &members {
if !this
.user_qualifies_for_room(
&space_id,
target_room,
member,
)
.user_qualifies_for_room(&space_id, target_room, member)
.await
{
if let Err(e) = Box::pin(this
.kick_unqualified_from_rooms(&space_id, member))
.await
if let Err(e) =
Box::pin(this.kick_unqualified_from_rooms(&space_id, member))
.await
{
debug_warn!(user_id = %member, error = ?e, "Space role requirement kick failed");
}
@ -998,9 +986,7 @@ impl Service {
// Also sync their power levels
let child_rooms = this.get_child_rooms(&space_id).await;
for child_room_id in &child_rooms {
if let Err(e) =
this.sync_power_levels(&space_id, child_room_id).await
{
if let Err(e) = this.sync_power_levels(&space_id, child_room_id).await {
debug_warn!(room_id = %child_room_id, error = ?e, "Failed to sync power levels on join");
}
}
@ -1014,11 +1000,7 @@ impl Service {
/// space, checks whether the user still qualifies, and kicks them with a
/// reason if they do not.
#[implement(Service)]
pub async fn kick_unqualified_from_rooms(
&self,
space_id: &RoomId,
user_id: &UserId,
) -> Result {
pub async fn kick_unqualified_from_rooms(&self, space_id: &RoomId, user_id: &UserId) -> Result {
if !self.is_enabled() {
return Ok(());
}
@ -1084,17 +1066,14 @@ pub async fn kick_unqualified_from_rooms(
.services
.timeline
.build_and_append_pdu(
PduBuilder::state(
user_id.to_string(),
&RoomMemberEventContent {
membership: MembershipState::Leave,
reason: Some("No longer has required Space roles".into()),
is_direct: None,
join_authorized_via_users_server: None,
third_party_invite: None,
..member_content
},
),
PduBuilder::state(user_id.to_string(), &RoomMemberEventContent {
membership: MembershipState::Leave,
reason: Some("No longer has required Space roles".into()),
is_direct: None,
join_authorized_via_users_server: None,
third_party_invite: None,
..member_content
}),
server_user,
Some(child_room_id),
&state_lock,


@ -1,7 +1,7 @@
use std::collections::{BTreeMap, HashMap, HashSet};
use conduwuit_core::matrix::space_roles::RoleDefinition;
use ruma::{room_id, OwnedRoomId};
use ruma::{OwnedRoomId, room_id};
use super::{compute_user_power_level, roles_satisfy_requirements};
@ -10,13 +10,10 @@ pub fn make_roles(entries: &[(&str, Option<i64>)]) -> BTreeMap<String, RoleDefin
entries
.iter()
.map(|(name, pl)| {
(
(*name).to_owned(),
RoleDefinition {
description: format!("{name} role"),
power_level: *pl,
},
)
((*name).to_owned(), RoleDefinition {
description: format!("{name} role"),
power_level: *pl,
})
})
.collect()
}
@ -38,11 +35,7 @@ fn power_level_single_role() {
#[test]
fn power_level_multiple_roles_takes_highest() {
let roles = make_roles(&[
("admin", Some(100)),
("mod", Some(50)),
("helper", Some(25)),
]);
let roles = make_roles(&[("admin", Some(100)), ("mod", Some(50)), ("helper", Some(25))]);
let user_assigned = make_user_roles(&["mod", "helper"]);
assert_eq!(compute_user_power_level(&roles, &user_assigned), Some(50));
}
@ -120,7 +113,11 @@ fn room_to_space_lookup() {
.or_default()
.insert(space.clone());
assert!(room_to_space.get(&child).unwrap().contains(&space));
assert!(room_to_space.get(room_id!("!unknown:example.com")).is_none());
assert!(
room_to_space
.get(room_id!("!unknown:example.com"))
.is_none()
);
}
#[test]


@ -10,7 +10,7 @@ use conduwuit_core::{
event::Event,
pdu::{PduCount, PduEvent, PduId, RawPduId},
space_roles::{
SPACE_ROLES_EVENT_TYPE, SPACE_ROLE_MEMBER_EVENT_TYPE, SPACE_ROLE_ROOM_EVENT_TYPE,
SPACE_ROLE_MEMBER_EVENT_TYPE, SPACE_ROLE_ROOM_EVENT_TYPE, SPACE_ROLES_EVENT_TYPE,
},
},
utils::{self, ReadyExt},
@ -392,10 +392,7 @@ where
if let Ok(child_room_id) = ruma::RoomId::parse(state_key) {
let roles: Arc<crate::rooms::roles::Service> =
Arc::clone(&*self.services.roles);
roles.handle_space_child_change(
room_id.to_owned(),
child_room_id.to_owned(),
);
roles.handle_space_child_change(room_id.to_owned(), child_room_id.to_owned());
}
}
}
@ -409,10 +406,8 @@ where
&& matches!(
self.services.state_accessor.get_room_type(room_id).await,
Ok(ruma::room::RoomType::Space)
)
{
let roles: Arc<crate::rooms::roles::Service> =
Arc::clone(&*self.services.roles);
) {
let roles: Arc<crate::rooms::roles::Service> = Arc::clone(&*self.services.roles);
roles.handle_space_member_join(room_id.to_owned(), user_id.to_owned());
}
}


@ -3,12 +3,10 @@ use std::{
iter::once,
};
use conduwuit_core::matrix::space_roles::RoleDefinition;
use conduwuit::{debug_warn, trace};
use conduwuit_core::{
Err, Result, implement,
matrix::{event::Event, pdu::PduBuilder},
matrix::{event::Event, pdu::PduBuilder, space_roles::RoleDefinition},
utils::{IterStream, ReadyExt},
};
use futures::{FutureExt, StreamExt};
@ -104,12 +102,16 @@ pub async fn build_and_append_pdu(
}
// Space permission cascading: reject power level changes that conflict
// with Space-granted levels (exempt the server user so sync_power_levels works)
type SpaceEnforcementData =
(ruma::OwnedRoomId, Vec<(OwnedUserId, HashSet<String>)>, BTreeMap<String, RoleDefinition>);
type SpaceEnforcementData = (
ruma::OwnedRoomId,
Vec<(OwnedUserId, HashSet<String>)>,
BTreeMap<String, RoleDefinition>,
);
if self.services.roles.is_enabled()
&& *pdu.kind() == TimelineEventType::RoomPowerLevels
&& pdu.sender() != <OwnedUserId as AsRef<UserId>>::as_ref(&self.services.globals.server_user)
&& pdu.sender()
!= <OwnedUserId as AsRef<UserId>>::as_ref(&self.services.globals.server_user)
{
use ruma::events::room::power_levels::RoomPowerLevelsEventContent;
@ -118,8 +120,11 @@ pub async fn build_and_append_pdu(
for parent_space in &parent_spaces {
// Check proposed users don't conflict with space-granted PLs
for (user_id, proposed_pl) in &proposed.users {
if let Some(space_pl) =
self.services.roles.get_user_power_level(parent_space, user_id).await
if let Some(space_pl) = self
.services
.roles
.get_user_power_level(parent_space, user_id)
.await
{
if i64::from(*proposed_pl) != space_pl {
debug_warn!(
@ -142,15 +147,21 @@ pub async fn build_and_append_pdu(
let space_data: Vec<SpaceEnforcementData> = {
let user_roles_guard = self.services.roles.user_roles.read().await;
let roles_guard = self.services.roles.roles.read().await;
parent_spaces.iter().filter_map(|ps| {
let space_users = user_roles_guard.get(ps)?;
let role_defs = roles_guard.get(ps)?;
Some((
ps.clone(),
space_users.iter().map(|(u, r)| (u.clone(), r.clone())).collect(),
role_defs.clone(),
))
}).collect()
parent_spaces
.iter()
.filter_map(|ps| {
let space_users = user_roles_guard.get(ps)?;
let role_defs = roles_guard.get(ps)?;
Some((
ps.clone(),
space_users
.iter()
.map(|(u, r)| (u.clone(), r.clone()))
.collect(),
role_defs.clone(),
))
})
.collect()
};
// Guards dropped here
@ -174,7 +185,8 @@ pub async fn build_and_append_pdu(
"Rejecting PL change: space-managed user omitted"
);
return Err!(Request(Forbidden(
"Cannot omit a user whose power level is managed by Space roles"
"Cannot omit a user whose power level is managed by Space \
roles"
)));
},
| Some(pl) if i64::from(*pl) != space_pl => {