Compare commits

..

16 commits

Author SHA1 Message Date
Jacob Taylor
eb29c9eeb0 fix: Clear All Clippy Lints 2026-03-18 16:08:37 -07:00
Jacob Taylor
955db1e634 fix: Pre-Commit Lint Compliance Maneuver 2026-03-18 16:08:37 -07:00
timedout
245cb0e064 fix: Write-lock individual rooms when building sync for them 2026-03-18 16:08:37 -07:00
Jacob Taylor
7480cf842c fix warn 2026-03-18 16:08:37 -07:00
Joe Citrine
f47b562905 fix(resolver): port parser does not handle leading ':'
conditionally trim the leading colon
2026-03-18 16:08:37 -07:00
Jacob Taylor
07d7e15485 upgrade some logs to info 2026-03-18 16:08:37 -07:00
Jade Ellis
214d07214d fix: Use to_str methods on room IDs 2026-03-18 16:08:37 -07:00
Jade Ellis
e286bcf733 chore: cleanup 2026-03-18 16:08:37 -07:00
Jade Ellis
71ed8eadad chore: Fix more complicated clippy warnings 2026-03-18 16:08:37 -07:00
Jade Ellis
c948e58905 feat: Add command to purge sync tokens for empty rooms 2026-03-18 16:08:37 -07:00
Jade Ellis
3f178d7d85 feat: Add admin command to delete sync tokens from a room 2026-03-18 16:08:37 -07:00
Jacob Taylor
f0ffe860d6 exponential backoff is now just bees. did you want bees? no? well you have them now. congrats 2026-03-18 16:08:37 -07:00
Jacob Taylor
b6e88150b2 sender_workers scaling. this time, with feeling! 2026-03-18 16:08:37 -07:00
Jacob Taylor
2d9ad1bab0 log when there are zero extremities 2026-03-18 16:08:37 -07:00
Jacob Taylor
23584050e8 enable converged 6g at the edge in continuwuity 2026-03-18 16:08:37 -07:00
Jacob Taylor
5129803262 bump the number of allowed immutable memtables by 1, to allow for greater flood protection
this should probably not be applied if you have rocksdb_atomic_flush = false (the default)
2026-03-18 16:08:37 -07:00
32 changed files with 341 additions and 3734 deletions

.github/FUNDING.yml vendored
View file

@@ -1,4 +1,4 @@
github: [JadedBlueEyes, nexy7574, gingershaped]
custom:
- https://timedout.uk/donate.html
- https://jade.ellis.link/sponsors
- https://ko-fi.com/nexy7574
- https://ko-fi.com/JadedBlueEyes

View file

@@ -1 +0,0 @@
Add Space permission cascading: power levels cascade from Spaces to child rooms, role-based room access with custom roles, continuous enforcement (auto-join/kick), and admin commands for role management. Server-wide default controlled by `space_permission_cascading` config flag (off by default), with per-Space overrides via `!admin space roles enable/disable <space>`.

View file

@@ -474,18 +474,6 @@
#
#suspend_on_register = false
# Server-wide default for space permission cascading (power levels and
# role-based access). Individual Spaces can override this via the
# `com.continuwuity.space.cascading` state event or the admin command
# `!admin space roles enable/disable <space>`.
#
#space_permission_cascading = false
# Maximum number of spaces to cache role data for. When exceeded the
# cache is cleared and repopulated on demand.
#
#space_roles_cache_flush_threshold = 1000
# Enabling this setting opens registration to anyone without restrictions.
# This makes your server vulnerable to abuse
#
@@ -1795,11 +1783,9 @@
#stream_amplification = 1024
# Number of sender task workers; determines sender parallelism. Default is
# '0' which means the value is determined internally, likely matching the
# number of tokio worker-threads or number of cores, etc. Override by
# setting a non-zero value.
# core count. Override by setting a different value.
#
#sender_workers = 0
#sender_workers = core count
# Enables listener sockets; can be set to false to disable listening. This
# option is intended for developer/diagnostic purposes only.

View file

@@ -1,226 +0,0 @@
# Space Permission Cascading — Design Document
**Date:** 2026-03-17
**Status:** Approved
## Overview
Server-side feature that allows user rights in a Space to cascade down to its
direct child rooms. Includes power level cascading and role-based room access
control. Enabled via a server-wide configuration flag, disabled by default.
## Requirements
1. Power levels defined in a Space cascade to all direct child rooms (Space
always wins over per-room overrides).
2. Admins can define custom roles in a Space and assign them to users.
3. Child rooms can require one or more roles for access.
4. Enforcement is continuous — role revocation auto-kicks users from rooms they
no longer qualify for.
5. Users are auto-joined to all qualifying child rooms when they join a Space or
receive a new role.
6. Cascading applies to direct parent Space only; no nested cascade through
sub-spaces.
7. Feature is toggled by a single server-wide config flag
(`space_permission_cascading`), off by default.
## Configuration
```toml
# conduwuit-example.toml
# Enable space permission cascading (power levels and role-based access).
# When enabled, power levels cascade from Spaces to child rooms and rooms
# can require roles for access. Applies to all Spaces on this server.
# Default: false
space_permission_cascading = false
```
## Custom State Events
All events live in the Space room.
### `m.space.roles` (state key: `""`)
Defines the available roles for the Space. Two default roles (`admin` and `mod`)
are created automatically when a Space is first encountered with the feature
enabled.
```json
{
  "roles": {
    "admin": {
      "description": "Space administrator",
      "power_level": 100
    },
    "mod": {
      "description": "Space moderator",
      "power_level": 50
    },
    "nsfw": {
      "description": "Access to NSFW content"
    },
    "vip": {
      "description": "VIP member"
    }
  }
}
```
- `description` (string, required): Human-readable description.
- `power_level` (integer, optional): If present, users with this role receive
this power level in all child rooms. When a user holds multiple roles with
power levels, the highest value wins.
### `m.space.role.member` (state key: user ID)
Assigns roles to a user within the Space.
```json
{
  "roles": ["nsfw", "vip"]
}
```
### `m.space.role.room` (state key: room ID)
Declares which roles a child room requires. A user must hold **all** listed
roles to access the room.
```json
{
  "required_roles": ["nsfw"]
}
```
## Enforcement Rules
All enforcement is skipped when `space_permission_cascading = false`.
### 1. Join gating
When a user attempts to join a room that is a direct child of a Space:
- Look up the room's `m.space.role.room` event in the parent Space.
- If the room has `required_roles`, check the user's `m.space.role.member`.
- Reject the join if the user is missing any required role.
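The three steps above reduce to a subset check: the join passes only when the user's roles cover every role the room requires. A minimal sketch of that check (illustrative names and plain string slices, not the server's actual typed API or state-event lookups):

```rust
/// Rule 1 as a pure function: a join is allowed only if the user holds
/// every role listed in the room's `required_roles`.
/// (Illustrative sketch; the real server resolves roles from the parent
/// Space's `m.space.role.member` / `m.space.role.room` state events.)
fn join_allowed(required_roles: &[&str], user_roles: &[&str]) -> bool {
    // A room with no requirements admits any Space member.
    required_roles.iter().all(|req| user_roles.contains(req))
}
```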
### 2. Power level override
For every user in a child room of a Space:
- Look up their roles via `m.space.role.member` in the parent Space.
- For each role that has a `power_level`, take the highest value.
- Override the user's power level in the child room's `m.room.power_levels`.
- Reject attempts to manually set per-room power levels that conflict with
Space-granted levels.
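Because a user may hold several roles, the override step needs a deterministic merge; per the rule above, the highest `power_level` among the user's roles wins, and a user whose roles carry no power level gets no override at all. A sketch of that merge (hypothetical helper, simplified to string keys):

```rust
use std::collections::BTreeMap;

/// Optional power level attached to a role definition, as in the
/// `m.space.roles` example above.
struct RoleDef {
    power_level: Option<i64>,
}

/// Rule 2: the user's effective power level in a child room is the
/// highest `power_level` among the roles they hold. `None` means the
/// Space grants no override and the room's own levels stand.
fn cascaded_power_level(defs: &BTreeMap<&str, RoleDef>, user_roles: &[&str]) -> Option<i64> {
    user_roles
        .iter()
        .filter_map(|name| defs.get(name).and_then(|d| d.power_level))
        .max()
}
```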
### 3. Role revocation
When an `m.space.role.member` event is updated and a role is removed:
- Identify all child rooms that require the removed role.
- Auto-kick the user from rooms they no longer qualify for.
- Recalculate and update the user's power level in all child rooms.
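Selecting which rooms to kick from is the same subset check run against the user's *remaining* roles. An illustrative sketch (rooms as plain strings; the server would consult its role index here):

```rust
/// Rule 3: given each child room's requirements and the user's roles
/// after the revocation, pick the rooms the user no longer qualifies
/// for and must be kicked from. (Illustrative, not the server code.)
fn rooms_to_kick<'a>(
    room_requirements: &'a [(&'a str, Vec<&'a str>)],
    remaining_roles: &[&str],
) -> Vec<&'a str> {
    room_requirements
        .iter()
        // A room is lost when at least one required role is missing.
        .filter(|(_, req)| !req.iter().all(|r| remaining_roles.contains(r)))
        .map(|(room, _)| *room)
        .collect()
}
```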
### 4. Room requirement change
When an `m.space.role.room` event is updated with new requirements:
- Check all current members of the room.
- Auto-kick members who do not hold all newly required roles.
### 5. Auto-join on role grant
When an `m.space.role.member` event is updated and a role is added:
- Find all child rooms where the user now meets all required roles.
- Auto-join the user to qualifying rooms they are not already in.
This also applies when a user first joins the Space — they are auto-joined to
all child rooms they qualify for. Rooms with no role requirements auto-join all
Space members.
### 6. New child room
When a new `m.space.child` event is added to a Space:
- Auto-join all qualifying Space members to the new child room.
## Caching & Indexing
The source of truth is always the state events. The server maintains an
in-memory index for fast enforcement lookups, following the same patterns as the
existing `roomid_spacehierarchy_cache`.
### Index structures
| Index | Source event |
|------------------------------|------------------------|
| Space → roles defined | `m.space.roles` |
| Space → user → roles | `m.space.role.member` |
| Space → room → required roles| `m.space.role.room` |
| Room → parent Space | `m.space.child` (reverse lookup) |
The Space → child rooms mapping already exists.
### Cache invalidation triggers
| Event changed | Action |
|----------------------------|-----------------------------------------------------|
| `m.space.roles` | Refresh role definitions, revalidate all members |
| `m.space.role.member` | Refresh user's roles, trigger auto-join/kick |
| `m.space.role.room` | Refresh room requirements, trigger auto-join/kick |
| `m.space.child` added | Index new child, auto-join qualifying members |
| `m.space.child` removed | Remove from index (no auto-kick) |
| Server startup | Full rebuild from state events |
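The four indexes in the table map naturally onto nested maps keyed by Space, user, and room. A hypothetical shape under the invalidation rules above — the real cache would key on typed IDs such as `OwnedRoomId`/`OwnedUserId` rather than `String`, and these field names are illustrative:

```rust
use std::collections::{BTreeMap, BTreeSet};

/// In-memory enforcement index rebuilt from state events; the events
/// themselves remain the source of truth.
#[derive(Default)]
struct SpaceRoleIndex {
    /// Space -> roles defined (`m.space.roles`)
    roles: BTreeMap<String, BTreeSet<String>>,
    /// Space -> user -> roles held (`m.space.role.member`)
    member_roles: BTreeMap<String, BTreeMap<String, BTreeSet<String>>>,
    /// Space -> room -> required roles (`m.space.role.room`)
    room_requirements: BTreeMap<String, BTreeMap<String, BTreeSet<String>>>,
    /// Room -> parent Space (reverse lookup of `m.space.child`)
    parent_space: BTreeMap<String, String>,
}

impl SpaceRoleIndex {
    /// Coarse invalidation: drop everything for one Space and let the
    /// next lookup repopulate from state events.
    fn invalidate_space(&mut self, space: &str) {
        self.roles.remove(space);
        self.member_roles.remove(space);
        self.room_requirements.remove(space);
        self.parent_space.retain(|_, parent| parent.as_str() != space);
    }
}
```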
## Admin Room Commands
Roles are managed via the existing admin room interface, which sends the
appropriate state events under the hood and triggers enforcement.
```
!admin space roles list <space>
!admin space roles add <space> <role_name> [description] [power_level]
!admin space roles remove <space> <role_name>
!admin space roles assign <space> <user_id> <role_name>
!admin space roles revoke <space> <user_id> <role_name>
!admin space roles require <space> <room_id> <role_name>
!admin space roles unrequire <space> <room_id> <role_name>
!admin space roles user <space> <user_id>
!admin space roles room <space> <room_id>
```
## Architecture
**Approach:** Hybrid — state events for definition, database cache for
enforcement.
- State events are the source of truth and federate normally.
- The server maintains an in-memory cache/index for fast enforcement.
- Cache is invalidated on relevant state event changes and fully rebuilt on
startup.
- All enforcement hooks (join gating, PL override, auto-join, auto-kick) check
the feature flag first and no-op when disabled.
- Existing clients can manage roles via Developer Tools (custom state events).
The admin room commands provide a user-friendly interface.
## Scope
### In scope
- Server-wide feature flag
- Custom state events for role definition, assignment, and room requirements
- Power level cascading (Space always wins)
- Continuous enforcement (auto-join, auto-kick)
- Admin room commands
- In-memory caching with invalidation
- Default `admin` (PL 100) and `mod` (PL 50) roles
### Out of scope
- Client-side UI for role management
- Nested cascade through sub-spaces
- Per-space opt-in/opt-out (it is server-wide)
- Federation-specific logic beyond normal state event replication

File diff suppressed because it is too large

View file

@@ -11,7 +11,6 @@ use crate::{
query::{self, QueryCommand},
room::{self, RoomCommand},
server::{self, ServerCommand},
space::{self, SpaceCommand},
token::{self, TokenCommand},
user::{self, UserCommand},
};
@@ -35,10 +34,6 @@ pub enum AdminCommand {
/// Commands for managing rooms
Rooms(RoomCommand),
#[command(subcommand)]
/// Commands for managing space permissions
Spaces(SpaceCommand),
#[command(subcommand)]
/// Commands for managing federation
Federation(FederationCommand),
@@ -86,10 +81,6 @@ pub(super) async fn process(command: AdminCommand, context: &Context<'_>) -> Res
token::process(command, context).await
},
| Rooms(command) => room::process(command, context).await,
| Spaces(command) => {
context.bail_restricted()?;
space::process(command, context).await
},
| Federation(command) => federation::process(command, context).await,
| Server(command) => server::process(command, context).await,
| Debug(command) => debug::process(command, context).await,

View file

@@ -17,7 +17,6 @@ pub(crate) mod media;
pub(crate) mod query;
pub(crate) mod room;
pub(crate) mod server;
pub(crate) mod space;
pub(crate) mod token;
pub(crate) mod user;

View file

@@ -1,6 +1,6 @@
use conduwuit::{Err, Result};
use futures::StreamExt;
use ruma::OwnedRoomId;
use ruma::{OwnedRoomId, OwnedRoomOrAliasId};
use crate::{PAGE_SIZE, admin_command, get_room_info};
@@ -82,3 +82,185 @@ pub(super) async fn exists(&self, room_id: OwnedRoomId) -> Result {
self.write_str(&format!("{result}")).await
}
#[admin_command]
pub(super) async fn purge_sync_tokens(&self, room: OwnedRoomOrAliasId) -> Result {
// Resolve the room ID from the room or alias ID
let room_id = self.services.rooms.alias.resolve(&room).await?;
// Delete all tokens for this room using the service method
let Ok(deleted_count) = self.services.rooms.user.delete_room_tokens(&room_id).await else {
return Err!("Failed to delete sync tokens for room {}", room_id.as_str());
};
self.write_str(&format!(
"Successfully deleted {deleted_count} sync tokens for room {}",
room_id.as_str()
))
.await
}
/// Target options for room purging
#[derive(Default, Debug, clap::ValueEnum, Clone)]
pub enum RoomTargetOption {
#[default]
/// Target all rooms
All,
/// Target only disabled rooms
DisabledOnly,
/// Target only banned rooms
BannedOnly,
}
#[admin_command]
pub(super) async fn purge_all_sync_tokens(
&self,
target_option: Option<RoomTargetOption>,
execute: bool,
) -> Result {
use conduwuit::{debug, info};
let mode = if !execute { "Simulating" } else { "Starting" };
// strictly, we should check if these reach the max value after the loop and
// warn the user that the count is too large
let mut total_rooms_checked: usize = 0;
let mut total_tokens_deleted: usize = 0;
let mut error_count: u32 = 0;
let mut skipped_rooms: usize = 0;
info!("{} purge of sync tokens", mode);
// Get all rooms in the server
let all_rooms = self
.services
.rooms
.metadata
.iter_ids()
.collect::<Vec<_>>()
.await;
info!("Found {} rooms total on the server", all_rooms.len());
// Filter rooms based on options
let mut rooms = Vec::new();
for room_id in all_rooms {
if let Some(target) = &target_option {
match target {
| RoomTargetOption::DisabledOnly => {
if !self.services.rooms.metadata.is_disabled(room_id).await {
debug!("Skipping room {} as it's not disabled", room_id.as_str());
skipped_rooms = skipped_rooms.saturating_add(1);
continue;
}
},
| RoomTargetOption::BannedOnly => {
if !self.services.rooms.metadata.is_banned(room_id).await {
debug!("Skipping room {} as it's not banned", room_id.as_str());
skipped_rooms = skipped_rooms.saturating_add(1);
continue;
}
},
| RoomTargetOption::All => {},
}
}
rooms.push(room_id);
}
// Total number of rooms we'll be checking
let total_rooms = rooms.len();
info!(
"Processing {} rooms after filtering (skipped {} rooms)",
total_rooms, skipped_rooms
);
// Process each room
for room_id in rooms {
total_rooms_checked = total_rooms_checked.saturating_add(1);
// Log progress periodically
if total_rooms_checked.is_multiple_of(100) || total_rooms_checked == total_rooms {
info!(
"Progress: {}/{} rooms checked, {} tokens {}",
total_rooms_checked,
total_rooms,
total_tokens_deleted,
if !execute { "would be deleted" } else { "deleted" }
);
}
// In dry run mode, just count what would be deleted, don't actually delete
debug!(
"Room {}: {}",
room_id.as_str(),
if !execute {
"would purge sync tokens"
} else {
"purging sync tokens"
}
);
if !execute {
// For dry run mode, count tokens without deleting
match self.services.rooms.user.count_room_tokens(room_id).await {
| Ok(count) =>
if count > 0 {
debug!(
"Would delete {} sync tokens for room {}",
count,
room_id.as_str()
);
total_tokens_deleted = total_tokens_deleted.saturating_add(count);
} else {
debug!("No sync tokens found for room {}", room_id.as_str());
},
| Err(e) => {
debug!("Error counting sync tokens for room {}: {:?}", room_id.as_str(), e);
error_count = error_count.saturating_add(1);
},
}
} else {
// Real deletion mode
match self.services.rooms.user.delete_room_tokens(room_id).await {
| Ok(count) =>
if count > 0 {
debug!("Deleted {} sync tokens for room {}", count, room_id.as_str());
total_tokens_deleted = total_tokens_deleted.saturating_add(count);
} else {
debug!("No sync tokens found for room {}", room_id.as_str());
},
| Err(e) => {
debug!("Error purging sync tokens for room {}: {:?}", room_id.as_str(), e);
error_count = error_count.saturating_add(1);
},
}
}
}
let action = if !execute { "would be deleted" } else { "deleted" };
info!(
"Finished {}: checked {} rooms out of {} total, {} tokens {}, errors: {}",
if !execute {
"purge simulation"
} else {
"purging sync tokens"
},
total_rooms_checked,
total_rooms,
total_tokens_deleted,
action,
error_count
);
self.write_str(&format!(
"Finished {}: checked {} rooms out of {} total, {} tokens {}, errors: {}",
if !execute { "simulation" } else { "purging sync tokens" },
total_rooms_checked,
total_rooms,
total_tokens_deleted,
action,
error_count
))
.await
}

View file

@@ -5,8 +5,9 @@ mod info;
mod moderation;
use clap::Subcommand;
use commands::RoomTargetOption;
use conduwuit::Result;
use ruma::OwnedRoomId;
use ruma::{OwnedRoomId, OwnedRoomOrAliasId};
use self::{
alias::RoomAliasCommand, directory::RoomDirectoryCommand, info::RoomInfoCommand,
@@ -60,4 +61,25 @@ pub enum RoomCommand {
Exists {
room_id: OwnedRoomId,
},
/// - Delete all sync tokens for a room
PurgeSyncTokens {
/// Room ID or alias to purge sync tokens for
#[arg(value_parser)]
room: OwnedRoomOrAliasId,
},
/// - Delete sync tokens for all rooms that have no local users
///
/// By default, processes all empty rooms.
PurgeAllSyncTokens {
/// Target specific room types
#[arg(long, value_enum)]
target_option: Option<RoomTargetOption>,
/// Execute token deletions. Otherwise, performs a dry run
/// without actually deleting any tokens
#[arg(long)]
execute: bool,
},
}

View file

@@ -1,15 +0,0 @@
pub(super) mod roles;
use clap::Subcommand;
use conduwuit::Result;
use self::roles::SpaceRolesCommand;
use crate::admin_command_dispatch;
#[admin_command_dispatch]
#[derive(Debug, Subcommand)]
pub enum SpaceCommand {
#[command(subcommand)]
/// Manage space roles and permissions
Roles(SpaceRolesCommand),
}

View file

@@ -1,632 +0,0 @@
use std::fmt::Write;
use clap::Subcommand;
use conduwuit::{Err, Event, Result, matrix::pdu::PduBuilder};
use conduwuit_core::matrix::space_roles::{
RoleDefinition, SPACE_CASCADING_EVENT_TYPE, SPACE_ROLE_MEMBER_EVENT_TYPE,
SPACE_ROLE_ROOM_EVENT_TYPE, SPACE_ROLES_EVENT_TYPE, SpaceCascadingEventContent,
SpaceRoleMemberEventContent, SpaceRoleRoomEventContent, SpaceRolesEventContent,
};
use futures::StreamExt;
use ruma::{OwnedRoomId, OwnedRoomOrAliasId, OwnedUserId, events::StateEventType};
use serde_json::value::to_raw_value;
use crate::{admin_command, admin_command_dispatch};
fn roles_event_type() -> StateEventType {
StateEventType::from(SPACE_ROLES_EVENT_TYPE.to_owned())
}
fn member_event_type() -> StateEventType {
StateEventType::from(SPACE_ROLE_MEMBER_EVENT_TYPE.to_owned())
}
fn room_event_type() -> StateEventType {
StateEventType::from(SPACE_ROLE_ROOM_EVENT_TYPE.to_owned())
}
fn cascading_event_type() -> StateEventType {
StateEventType::from(SPACE_CASCADING_EVENT_TYPE.to_owned())
}
macro_rules! resolve_room_as_space {
($self:expr, $space:expr) => {{
let space_id = $self.services.rooms.alias.resolve(&$space).await?;
if !matches!(
$self
.services
.rooms
.state_accessor
.get_room_type(&space_id)
.await,
Ok(ruma::room::RoomType::Space)
) {
return Err!("The specified room is not a Space.");
}
space_id
}};
}
macro_rules! resolve_space {
($self:expr, $space:expr) => {{
let space_id = resolve_room_as_space!($self, $space);
if !$self
.services
.rooms
.roles
.is_enabled_for_space(&space_id)
.await
{
return $self
.write_str(
"Space permission cascading is disabled for this Space. Enable it \
server-wide with `space_permission_cascading = true` in your config, or \
per-Space with `!admin space roles enable <space>`.",
)
.await;
}
space_id
}};
}
macro_rules! custom_state_pdu {
($event_type:expr, $state_key:expr, $content:expr) => {
PduBuilder {
event_type: $event_type.to_owned().into(),
content: to_raw_value($content)
.map_err(|e| conduwuit::err!("Failed to serialize state event content: {e}"))?,
state_key: Some($state_key.to_owned().into()),
..PduBuilder::default()
}
};
}
/// Cascade-remove a role name from all state events of a given type. For each
/// event that contains the role, the `$field` is filtered and the updated
/// content is sent back as a new state event.
macro_rules! cascade_remove_role {
(
$self:expr,
$shortstatehash:expr,
$event_type_fn:expr,
$event_type_const:expr,
$content_ty:ty,
$field:ident,
$role_name:expr,
$space_id:expr,
$state_lock:expr,
$server_user:expr
) => {{
let ev_type = $event_type_fn;
let entries: Vec<(_, ruma::OwnedEventId)> = $self
.services
.rooms
.state_accessor
.state_keys_with_ids($shortstatehash, &ev_type)
.collect()
.await;
for (state_key, event_id) in entries {
if let Ok(pdu) = $self.services.rooms.timeline.get_pdu(&event_id).await {
if let Ok(mut content) = pdu.get_content::<$content_ty>() {
if content.$field.contains($role_name) {
content.$field.retain(|r| r != $role_name);
$self
.services
.rooms
.timeline
.build_and_append_pdu(
custom_state_pdu!($event_type_const, &state_key, &content),
$server_user,
Some(&$space_id),
&$state_lock,
)
.await?;
}
}
}
}
}};
}
macro_rules! send_space_state {
($self:expr, $space_id:expr, $event_type:expr, $state_key:expr, $content:expr) => {{
let state_lock = $self.services.rooms.state.mutex.lock(&$space_id).await;
let server_user = &$self.services.globals.server_user;
$self
.services
.rooms
.timeline
.build_and_append_pdu(
custom_state_pdu!($event_type, $state_key, $content),
server_user,
Some(&$space_id),
&state_lock,
)
.await?
}};
}
#[admin_command_dispatch]
#[derive(Debug, Subcommand)]
pub enum SpaceRolesCommand {
/// List all roles defined in a space
List {
space: OwnedRoomOrAliasId,
},
/// Add a new role to a space
Add {
space: OwnedRoomOrAliasId,
role_name: String,
#[arg(long)]
description: Option<String>,
#[arg(long)]
power_level: Option<i64>,
},
/// Remove a role from a space
Remove {
space: OwnedRoomOrAliasId,
role_name: String,
},
/// Assign a role to a user
Assign {
space: OwnedRoomOrAliasId,
user_id: OwnedUserId,
role_name: String,
},
/// Revoke a role from a user
Revoke {
space: OwnedRoomOrAliasId,
user_id: OwnedUserId,
role_name: String,
},
/// Require a role for a room
Require {
space: OwnedRoomOrAliasId,
room_id: OwnedRoomId,
role_name: String,
},
/// Remove a role requirement from a room
Unrequire {
space: OwnedRoomOrAliasId,
room_id: OwnedRoomId,
role_name: String,
},
/// Show a user's roles in a space
User {
space: OwnedRoomOrAliasId,
user_id: OwnedUserId,
},
/// Show a room's role requirements in a space
Room {
space: OwnedRoomOrAliasId,
room_id: OwnedRoomId,
},
/// Enable space permission cascading for a specific space (overrides
/// server config)
Enable {
space: OwnedRoomOrAliasId,
},
/// Disable space permission cascading for a specific space (overrides
/// server config)
Disable {
space: OwnedRoomOrAliasId,
},
/// Show whether cascading is enabled for a space and the source (server
/// default or per-space override)
Status {
space: OwnedRoomOrAliasId,
},
}
#[admin_command]
async fn list(&self, space: OwnedRoomOrAliasId) -> Result {
let space_id = resolve_space!(self, space);
let roles_event_type = roles_event_type();
let content: SpaceRolesEventContent = self
.services
.rooms
.state_accessor
.room_state_get_content(&space_id, &roles_event_type, "")
.await
.unwrap_or_default();
if content.roles.is_empty() {
return self.write_str("No roles defined in this space.").await;
}
let mut msg = format!("Roles in {space_id}:\n```\n");
for (name, def) in &content.roles {
let pl = def
.power_level
.map(|p| format!(" (power_level: {p})"))
.unwrap_or_default();
let _ = writeln!(msg, "- {name}: {}{pl}", def.description);
}
msg.push_str("```");
self.write_str(&msg).await
}
#[admin_command]
async fn add(
&self,
space: OwnedRoomOrAliasId,
role_name: String,
description: Option<String>,
power_level: Option<i64>,
) -> Result {
let space_id = resolve_space!(self, space);
if let Some(pl) = power_level {
if pl > i64::from(ruma::Int::MAX) || pl < i64::from(ruma::Int::MIN) {
return Err!(
"Power level must be between {} and {}.",
ruma::Int::MIN,
ruma::Int::MAX
);
}
}
let roles_event_type = roles_event_type();
let mut content: SpaceRolesEventContent = self
.services
.rooms
.state_accessor
.room_state_get_content(&space_id, &roles_event_type, "")
.await
.unwrap_or_default();
if content.roles.contains_key(&role_name) {
return Err!("Role '{role_name}' already exists in this space.");
}
content.roles.insert(role_name.clone(), RoleDefinition {
description: description.unwrap_or_else(|| role_name.clone()),
power_level,
});
send_space_state!(self, space_id, SPACE_ROLES_EVENT_TYPE, "", &content);
self.write_str(&format!("Added role '{role_name}' to space {space_id}."))
.await
}
#[admin_command]
async fn remove(&self, space: OwnedRoomOrAliasId, role_name: String) -> Result {
let space_id = resolve_space!(self, space);
let roles_event_type = roles_event_type();
let mut content: SpaceRolesEventContent = self
.services
.rooms
.state_accessor
.room_state_get_content(&space_id, &roles_event_type, "")
.await
.unwrap_or_default();
if content.roles.remove(&role_name).is_none() {
return Err!("Role '{role_name}' does not exist in this space.");
}
send_space_state!(self, space_id, SPACE_ROLES_EVENT_TYPE, "", &content);
// Cascade: remove the deleted role from all member and room events
let server_user = &self.services.globals.server_user;
if let Ok(shortstatehash) = self
.services
.rooms
.state
.get_room_shortstatehash(&space_id)
.await
{
let state_lock = self.services.rooms.state.mutex.lock(&space_id).await;
cascade_remove_role!(
self,
shortstatehash,
member_event_type(),
SPACE_ROLE_MEMBER_EVENT_TYPE,
SpaceRoleMemberEventContent,
roles,
&role_name,
space_id,
state_lock,
server_user
);
cascade_remove_role!(
self,
shortstatehash,
room_event_type(),
SPACE_ROLE_ROOM_EVENT_TYPE,
SpaceRoleRoomEventContent,
required_roles,
&role_name,
space_id,
state_lock,
server_user
);
}
self.write_str(&format!("Removed role '{role_name}' from space {space_id}."))
.await
}
#[admin_command]
async fn assign(
&self,
space: OwnedRoomOrAliasId,
user_id: OwnedUserId,
role_name: String,
) -> Result {
let space_id = resolve_space!(self, space);
let roles_event_type = roles_event_type();
let role_defs: SpaceRolesEventContent = self
.services
.rooms
.state_accessor
.room_state_get_content(&space_id, &roles_event_type, "")
.await
.unwrap_or_default();
if !role_defs.roles.contains_key(&role_name) {
return Err!("Role '{role_name}' does not exist in this space.");
}
let member_event_type = member_event_type();
let mut content: SpaceRoleMemberEventContent = self
.services
.rooms
.state_accessor
.room_state_get_content(&space_id, &member_event_type, user_id.as_str())
.await
.unwrap_or_default();
if content.roles.contains(&role_name) {
return Err!("User {user_id} already has role '{role_name}' in this space.");
}
content.roles.push(role_name.clone());
send_space_state!(self, space_id, SPACE_ROLE_MEMBER_EVENT_TYPE, user_id.as_str(), &content);
self.write_str(&format!("Assigned role '{role_name}' to {user_id} in space {space_id}."))
.await
}
#[admin_command]
async fn revoke(
&self,
space: OwnedRoomOrAliasId,
user_id: OwnedUserId,
role_name: String,
) -> Result {
let space_id = resolve_space!(self, space);
let member_event_type = member_event_type();
let mut content: SpaceRoleMemberEventContent = self
.services
.rooms
.state_accessor
.room_state_get_content(&space_id, &member_event_type, user_id.as_str())
.await
.unwrap_or_default();
let original_len = content.roles.len();
content.roles.retain(|r| r != &role_name);
if content.roles.len() == original_len {
return Err!("User {user_id} does not have role '{role_name}' in this space.");
}
send_space_state!(self, space_id, SPACE_ROLE_MEMBER_EVENT_TYPE, user_id.as_str(), &content);
self.write_str(&format!("Revoked role '{role_name}' from {user_id} in space {space_id}."))
.await
}
#[admin_command]
async fn require(
&self,
space: OwnedRoomOrAliasId,
room_id: OwnedRoomId,
role_name: String,
) -> Result {
let space_id = resolve_space!(self, space);
let child_rooms = self.services.rooms.roles.get_child_rooms(&space_id).await;
if !child_rooms.contains(&room_id) {
return Err!("Room {room_id} is not a child of space {space_id}.");
}
let roles_event_type = roles_event_type();
let role_defs: SpaceRolesEventContent = self
.services
.rooms
.state_accessor
.room_state_get_content(&space_id, &roles_event_type, "")
.await
.unwrap_or_default();
if !role_defs.roles.contains_key(&role_name) {
return Err!("Role '{role_name}' does not exist in this space.");
}
let room_event_type = room_event_type();
let mut content: SpaceRoleRoomEventContent = self
.services
.rooms
.state_accessor
.room_state_get_content(&space_id, &room_event_type, room_id.as_str())
.await
.unwrap_or_default();
if content.required_roles.contains(&role_name) {
return Err!("Room {room_id} already requires role '{role_name}' in this space.");
}
content.required_roles.push(role_name.clone());
send_space_state!(self, space_id, SPACE_ROLE_ROOM_EVENT_TYPE, room_id.as_str(), &content);
self.write_str(&format!(
"Room {room_id} now requires role '{role_name}' in space {space_id}."
))
.await
}
#[admin_command]
async fn unrequire(
&self,
space: OwnedRoomOrAliasId,
room_id: OwnedRoomId,
role_name: String,
) -> Result {
let space_id = resolve_space!(self, space);
let room_event_type = room_event_type();
let mut content: SpaceRoleRoomEventContent = self
.services
.rooms
.state_accessor
.room_state_get_content(&space_id, &room_event_type, room_id.as_str())
.await
.unwrap_or_default();
let original_len = content.required_roles.len();
content.required_roles.retain(|r| r != &role_name);
if content.required_roles.len() == original_len {
return Err!("Room {room_id} does not require role '{role_name}' in this space.");
}
send_space_state!(self, space_id, SPACE_ROLE_ROOM_EVENT_TYPE, room_id.as_str(), &content);
self.write_str(&format!(
"Removed role requirement '{role_name}' from room {room_id} in space {space_id}."
))
.await
}
#[admin_command]
async fn user(&self, space: OwnedRoomOrAliasId, user_id: OwnedUserId) -> Result {
let space_id = resolve_space!(self, space);
let roles = self
.services
.rooms
.roles
.get_user_roles_in_space(&space_id, &user_id)
.await;
match roles {
| Some(roles) if !roles.is_empty() => {
let list: String = roles
.iter()
.map(|r| format!("- {r}"))
.collect::<Vec<_>>()
.join("\n");
self.write_str(&format!("Roles for {user_id} in space {space_id}:\n```\n{list}\n```"))
.await
},
| _ =>
self.write_str(&format!("User {user_id} has no roles in space {space_id}."))
.await,
}
}
#[admin_command]
async fn room(&self, space: OwnedRoomOrAliasId, room_id: OwnedRoomId) -> Result {
let space_id = resolve_space!(self, space);
let reqs = self
.services
.rooms
.roles
.get_room_requirements_in_space(&space_id, &room_id)
.await;
match reqs {
| Some(reqs) if !reqs.is_empty() => {
let list: String = reqs
.iter()
.map(|r| format!("- {r}"))
.collect::<Vec<_>>()
.join("\n");
self.write_str(&format!(
"Required roles for room {room_id} in space {space_id}:\n```\n{list}\n```"
))
.await
},
| _ =>
self.write_str(&format!(
"Room {room_id} has no role requirements in space {space_id}."
))
.await,
}
}
#[admin_command]
async fn enable(&self, space: OwnedRoomOrAliasId) -> Result {
let space_id = resolve_room_as_space!(self, space);
self.services
.rooms
.roles
.ensure_default_roles(&space_id)
.await?;
let content = SpaceCascadingEventContent { enabled: true };
send_space_state!(self, space_id, SPACE_CASCADING_EVENT_TYPE, "", &content);
self.write_str(&format!("Space permission cascading enabled for {space_id}."))
.await
}
#[admin_command]
async fn disable(&self, space: OwnedRoomOrAliasId) -> Result {
let space_id = resolve_room_as_space!(self, space);
let content = SpaceCascadingEventContent { enabled: false };
send_space_state!(self, space_id, SPACE_CASCADING_EVENT_TYPE, "", &content);
self.write_str(&format!("Space permission cascading disabled for {space_id}."))
.await
}
#[admin_command]
async fn status(&self, space: OwnedRoomOrAliasId) -> Result {
let space_id = resolve_room_as_space!(self, space);
let global_default = self.services.rooms.roles.is_enabled();
let cascading_event_type = cascading_event_type();
let per_space_override: Option<bool> = self
.services
.rooms
.state_accessor
.room_state_get_content::<SpaceCascadingEventContent>(
&space_id,
&cascading_event_type,
"",
)
.await
.ok()
.map(|c| c.enabled);
let effective = per_space_override.unwrap_or(global_default);
let source = match per_space_override {
| Some(v) => format!("per-Space override (enabled: {v})"),
| None => format!("server default (space_permission_cascading: {global_default})"),
};
self.write_str(&format!(
"Cascading status for {space_id}:\n- Effective: **{effective}**\n- Source: {source}"
))
.await
}
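The `status` command resolves the effective cascading flag by letting a per-space override win over the server-wide default. A minimal std-only sketch of that resolution rule (function name illustrative, not the real service API):

```rust
// Per-space override, when present, takes precedence over the server
// default (`space_permission_cascading`). Hypothetical helper name.
fn effective_cascading(per_space_override: Option<bool>, global_default: bool) -> bool {
    per_space_override.unwrap_or(global_default)
}

fn main() {
    // No override: fall back to the server-wide default.
    assert!(!effective_cascading(None, false));
    // An explicit per-space override always wins.
    assert!(effective_cascading(Some(true), false));
    assert!(!effective_cascading(Some(false), true));
}
```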

View file

@@ -347,12 +347,6 @@ pub async fn join_room_by_id_helper(
}
}
services
.rooms
.roles
.check_join_allowed(room_id, sender_user)
.await?;
if server_in_room {
join_room_by_id_helper_local(services, sender_user, room_id, reason, servers, state_lock)
.boxed()

View file

@@ -65,6 +65,7 @@ pub(super) async fn load_joined_room(
and `join*` functions are used to perform steps in parallel which do not depend on each other.
*/
let insert_lock = services.rooms.timeline.mutex_insert.lock(room_id).await;
let (
account_data,
ephemeral,
@@ -82,6 +83,7 @@ pub(super) async fn load_joined_room(
)
.boxed()
.await?;
drop(insert_lock);
if !timeline.is_empty() || !state_events.is_empty() {
trace!(
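The change above takes the room's insert lock before assembling the sync response and releases it explicitly afterwards. A small sketch of that guard-scoping pattern using `std::sync::Mutex` as a stand-in for the tokio room mutex (names illustrative):

```rust
use std::sync::Mutex;

// Take the room's insert lock before building the response so
// concurrent timeline inserts cannot interleave, then drop the guard
// explicitly so later post-processing runs without holding it.
fn build_sync_response(room_lock: &Mutex<()>) -> Vec<&'static str> {
    let insert_lock = room_lock.lock().expect("lock poisoned");
    // ... assemble timeline/state while inserts are blocked ...
    let response = vec!["timeline", "state"];
    drop(insert_lock); // release before any remaining work
    response
}

fn main() {
    let room_lock = Mutex::new(());
    assert_eq!(build_sync_response(&room_lock).len(), 2);
}
```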

View file

@@ -607,22 +607,6 @@ pub struct Config {
#[serde(default)]
pub suspend_on_register: bool,
/// Server-wide default for space permission cascading (power levels and
/// role-based access). Individual Spaces can override this via the
/// `com.continuwuity.space.cascading` state event or the admin command
/// `!admin space roles enable/disable <space>`.
///
/// default: false
#[serde(default)]
pub space_permission_cascading: bool,
/// Maximum number of spaces to cache role data for. When exceeded the
/// cache is cleared and repopulated on demand.
///
/// default: 1000
#[serde(default = "default_space_roles_cache_flush_threshold")]
pub space_roles_cache_flush_threshold: u32,
/// Enabling this setting opens registration to anyone without restrictions.
/// This makes your server vulnerable to abuse
#[serde(default)]
@@ -2067,12 +2051,10 @@ pub struct Config {
pub stream_amplification: usize,
/// Number of sender task workers; determines sender parallelism. Default is
/// '0' which means the value is determined internally, likely matching the
/// number of tokio worker-threads or number of cores, etc. Override by
/// setting a non-zero value.
/// core count. Override by setting a different value.
///
/// default: 0
#[serde(default)]
/// default: core count
#[serde(default = "default_sender_workers")]
pub sender_workers: usize,
/// Enables listener sockets; can be set to false to disable listening. This
@@ -2554,45 +2536,47 @@ fn default_database_backups_to_keep() -> i16 { 1 }
fn default_db_write_buffer_capacity_mb() -> f64 { 48.0 + parallelism_scaled_f64(4.0) }
fn default_db_cache_capacity_mb() -> f64 { 128.0 + parallelism_scaled_f64(64.0) }
fn default_db_cache_capacity_mb() -> f64 { 512.0 + parallelism_scaled_f64(512.0) }
fn default_pdu_cache_capacity() -> u32 { parallelism_scaled_u32(10_000).saturating_add(100_000) }
fn default_pdu_cache_capacity() -> u32 { parallelism_scaled_u32(50_000).saturating_add(100_000) }
fn default_cache_capacity_modifier() -> f64 { 1.0 }
fn default_auth_chain_cache_capacity() -> u32 {
parallelism_scaled_u32(10_000).saturating_add(100_000)
}
fn default_shorteventid_cache_capacity() -> u32 {
parallelism_scaled_u32(50_000).saturating_add(100_000)
}
fn default_shorteventid_cache_capacity() -> u32 {
parallelism_scaled_u32(100_000).saturating_add(100_000)
}
fn default_eventidshort_cache_capacity() -> u32 {
parallelism_scaled_u32(25_000).saturating_add(100_000)
parallelism_scaled_u32(50_000).saturating_add(100_000)
}
fn default_eventid_pdu_cache_capacity() -> u32 {
parallelism_scaled_u32(25_000).saturating_add(100_000)
parallelism_scaled_u32(50_000).saturating_add(100_000)
}
fn default_shortstatekey_cache_capacity() -> u32 {
parallelism_scaled_u32(10_000).saturating_add(100_000)
parallelism_scaled_u32(50_000).saturating_add(100_000)
}
fn default_statekeyshort_cache_capacity() -> u32 {
parallelism_scaled_u32(10_000).saturating_add(100_000)
parallelism_scaled_u32(50_000).saturating_add(100_000)
}
fn default_servernameevent_data_cache_capacity() -> u32 {
parallelism_scaled_u32(100_000).saturating_add(500_000)
parallelism_scaled_u32(100_000).saturating_add(100_000)
}
fn default_stateinfo_cache_capacity() -> u32 { parallelism_scaled_u32(100) }
fn default_stateinfo_cache_capacity() -> u32 { parallelism_scaled_u32(500).clamp(100, 12000) }
fn default_roomid_spacehierarchy_cache_capacity() -> u32 { parallelism_scaled_u32(1000) }
fn default_roomid_spacehierarchy_cache_capacity() -> u32 {
parallelism_scaled_u32(500).clamp(100, 12000)
}
fn default_dns_cache_entries() -> u32 { 32768 }
fn default_dns_cache_entries() -> u32 { 327_680 }
fn default_dns_min_ttl() -> u64 { 60 * 180 }
@@ -2800,15 +2784,26 @@ fn default_admin_log_capture() -> String {
fn default_admin_room_tag() -> String { "m.server_notice".to_owned() }
#[must_use]
#[allow(clippy::as_conversions, clippy::cast_precision_loss)]
fn parallelism_scaled_f64(val: f64) -> f64 { val * (sys::available_parallelism() as f64) }
pub fn parallelism_scaled_f64(val: f64) -> f64 { val * (sys::available_parallelism() as f64) }
fn parallelism_scaled_u32(val: u32) -> u32 {
let val = val.try_into().expect("failed to cast u32 to usize");
parallelism_scaled(val).try_into().unwrap_or(u32::MAX)
#[must_use]
#[allow(clippy::as_conversions, clippy::cast_possible_truncation)]
pub fn parallelism_scaled_u32(val: u32) -> u32 {
val.saturating_mul(sys::available_parallelism() as u32)
}
fn parallelism_scaled(val: usize) -> usize { val.saturating_mul(sys::available_parallelism()) }
#[must_use]
#[allow(clippy::as_conversions, clippy::cast_possible_truncation, clippy::cast_possible_wrap)]
pub fn parallelism_scaled_i32(val: i32) -> i32 {
val.saturating_mul(sys::available_parallelism() as i32)
}
#[must_use]
pub fn parallelism_scaled(val: usize) -> usize {
val.saturating_mul(sys::available_parallelism())
}
fn default_trusted_server_batch_size() -> usize { 256 }
@@ -2828,6 +2823,8 @@ fn default_stream_width_scale() -> f32 { 1.0 }
fn default_stream_amplification() -> usize { 1024 }
fn default_sender_workers() -> usize { parallelism_scaled(1) }
fn default_client_receive_timeout() -> u64 { 75 }
fn default_client_request_timeout() -> u64 { 180 }
@@ -2853,5 +2850,3 @@ fn default_ldap_search_filter() -> String { "(objectClass=*)".to_owned() }
fn default_ldap_uid_attribute() -> String { String::from("uid") }
fn default_ldap_name_attribute() -> String { String::from("givenName") }
fn default_space_roles_cache_flush_threshold() -> u32 { 1000 }
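The reworked defaults above scale a base capacity by the detected core count, saturating rather than overflowing. A self-contained sketch of that pattern, with `std::thread::available_parallelism` standing in for the crate's `sys::available_parallelism()`:

```rust
use std::thread;

// Cache-capacity defaults multiply a base value by the core count,
// saturating on overflow, then add a fixed floor.
fn parallelism_scaled_u32(val: u32) -> u32 {
    let cores = thread::available_parallelism().map_or(1, |n| n.get()) as u32;
    val.saturating_mul(cores)
}

fn default_pdu_cache_capacity() -> u32 {
    parallelism_scaled_u32(50_000).saturating_add(100_000)
}

fn main() {
    // On an 8-core machine this yields 8 * 50_000 + 100_000 = 500_000.
    let cap = default_pdu_cache_capacity();
    assert!(cap >= 150_000); // at least one core's worth plus the floor
    println!("pdu cache capacity: {cap}");
}
```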

View file

@@ -2,7 +2,6 @@
pub mod event;
pub mod pdu;
pub mod space_roles;
pub mod state_key;
pub mod state_res;

View file

@@ -1,81 +0,0 @@
use std::collections::BTreeMap;
use serde::{Deserialize, Serialize};
pub const SPACE_ROLES_EVENT_TYPE: &str = "com.continuwuity.space.roles";
pub const SPACE_ROLE_MEMBER_EVENT_TYPE: &str = "com.continuwuity.space.role.member";
pub const SPACE_ROLE_ROOM_EVENT_TYPE: &str = "com.continuwuity.space.role.room";
pub const SPACE_CASCADING_EVENT_TYPE: &str = "com.continuwuity.space.cascading";
#[derive(Clone, Debug, Default, Deserialize, Serialize, PartialEq, Eq)]
pub struct SpaceRolesEventContent {
pub roles: BTreeMap<String, RoleDefinition>,
}
#[derive(Clone, Debug, Deserialize, Serialize, PartialEq, Eq)]
pub struct RoleDefinition {
pub description: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub power_level: Option<i64>,
}
#[derive(Clone, Debug, Default, Deserialize, Serialize, PartialEq, Eq)]
pub struct SpaceRoleMemberEventContent {
pub roles: Vec<String>,
}
#[derive(Clone, Debug, Default, Deserialize, Serialize, PartialEq, Eq)]
pub struct SpaceRoleRoomEventContent {
pub required_roles: Vec<String>,
}
#[derive(Clone, Debug, Deserialize, Serialize, PartialEq, Eq)]
pub struct SpaceCascadingEventContent {
pub enabled: bool,
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn space_roles_roundtrip() {
let mut roles = BTreeMap::new();
roles.insert("admin".to_owned(), RoleDefinition {
description: "Space administrator".to_owned(),
power_level: Some(100),
});
roles.insert("nsfw".to_owned(), RoleDefinition {
description: "NSFW access".to_owned(),
power_level: None,
});
let content = SpaceRolesEventContent { roles };
let json = serde_json::to_string(&content).unwrap();
let deserialized: SpaceRolesEventContent = serde_json::from_str(&json).unwrap();
assert_eq!(deserialized.roles["admin"].power_level, Some(100));
assert!(deserialized.roles["nsfw"].power_level.is_none());
}
#[test]
fn power_level_omitted_in_serialization_when_none() {
let role = RoleDefinition {
description: "Test".to_owned(),
power_level: None,
};
let json = serde_json::to_string(&role).unwrap();
assert!(!json.contains("power_level"));
}
#[test]
fn negative_power_level() {
let json = r#"{"description":"Restricted","power_level":-10}"#;
let role: RoleDefinition = serde_json::from_str(json).unwrap();
assert_eq!(role.power_level, Some(-10));
}
#[test]
fn missing_description_fails() {
let json = r#"{"power_level":100}"#;
serde_json::from_str::<RoleDefinition>(json).unwrap_err();
}
}

View file

@@ -29,7 +29,7 @@ fn descriptor_cf_options(
set_table_options(&mut opts, &desc, cache)?;
opts.set_min_write_buffer_number(1);
opts.set_max_write_buffer_number(2);
opts.set_max_write_buffer_number(3);
opts.set_write_buffer_size(desc.write_size);
opts.set_target_file_size_base(desc.file_size);

View file

@@ -8,7 +8,7 @@ use axum::{
extract::State,
response::{IntoResponse, Response},
};
use conduwuit::{Result, debug, debug_error, debug_warn, err, error, trace};
use conduwuit::{Result, debug_warn, err, error, info, trace};
use conduwuit_service::Services;
use futures::FutureExt;
use http::{Method, StatusCode, Uri};
@@ -102,11 +102,11 @@ fn handle_result(method: &Method, uri: &Uri, result: Response) -> Result<Respons
let reason = status.canonical_reason().unwrap_or("Unknown Reason");
if status.is_server_error() {
error!(%method, %uri, "{code} {reason}");
info!(%method, %uri, "{code} {reason}");
} else if status.is_client_error() {
debug_error!(%method, %uri, "{code} {reason}");
info!(%method, %uri, "{code} {reason}");
} else if status.is_redirection() {
debug!(%method, %uri, "{code} {reason}");
trace!(%method, %uri, "{code} {reason}");
} else {
trace!(%method, %uri, "{code} {reason}");
}

View file

@@ -100,7 +100,7 @@ impl Service {
/// Pings the presence of the given user in the given room, setting the
/// specified state.
pub async fn ping_presence(&self, user_id: &UserId, new_state: &PresenceState) -> Result<()> {
const REFRESH_TIMEOUT: u64 = 60 * 1000;
const REFRESH_TIMEOUT: u64 = 60 * 1000 * 4;
let last_presence = self.db.get_presence(user_id).await;
let state_changed = match last_presence {
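The hunk above quadruples `REFRESH_TIMEOUT` from one minute to four. A hedged model of the refresh decision it gates, assuming a ping rewrites stored presence only when the state changed or the previous entry has aged past the timeout (helper name hypothetical):

```rust
use std::time::Duration;

// The change raises the refresh window from 1 to 4 minutes.
const REFRESH_TIMEOUT_MS: u64 = 60 * 1000 * 4;

// Illustrative model: refresh on state change, or when the last stored
// presence entry is older than the refresh window.
fn needs_refresh(since_last_ping: Duration, state_changed: bool) -> bool {
    state_changed || since_last_ping.as_millis() as u64 >= REFRESH_TIMEOUT_MS
}

fn main() {
    assert!(!needs_refresh(Duration::from_secs(60), false)); // inside window
    assert!(needs_refresh(Duration::from_secs(5 * 60), false)); // window elapsed
    assert!(needs_refresh(Duration::from_secs(1), true)); // state change always refreshes
}
```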

View file

@@ -119,7 +119,11 @@ impl super::Service {
async fn actual_dest_2(&self, dest: &ServerName, cache: bool, pos: usize) -> Result<FedDest> {
debug!("2: Hostname with included port");
let (host, port) = dest.as_str().split_at(pos);
self.conditional_query_and_cache(host, port.parse::<u16>().unwrap_or(8448), cache)
self.conditional_query_and_cache(
host,
port.trim_start_matches(':').parse::<u16>().unwrap_or(8448),
cache,
)
.await?;
Ok(FedDest::Named(
@@ -165,7 +169,11 @@
) -> Result<FedDest> {
debug!("3.2: Hostname with port in .well-known file");
let (host, port) = delegated.split_at(pos);
self.conditional_query_and_cache(host, port.parse::<u16>().unwrap_or(8448), cache)
self.conditional_query_and_cache(
host,
port.trim_start_matches(':').parse::<u16>().unwrap_or(8448),
cache,
)
.await?;
Ok(FedDest::Named(
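The resolver fix above exists because `split_at` leaves the `':'` attached to the port half; without trimming it, `parse::<u16>()` fails and the 8448 fallback silently kicks in. A small demonstration (helper name illustrative):

```rust
// Split a "host:port" destination at the colon position; the port half
// still carries the leading ':' and must be trimmed before parsing.
fn parse_port(dest: &str, colon_pos: usize) -> (&str, u16) {
    let (host, port) = dest.split_at(colon_pos);
    (host, port.trim_start_matches(':').parse::<u16>().unwrap_or(8448))
}

fn main() {
    let dest = "matrix.example.org:443";
    let pos = dest.rfind(':').expect("port delimiter");
    assert_eq!(parse_port(dest, pos), ("matrix.example.org", 443));
    // Without the trim, ":443" fails to parse and falls back to 8448.
    assert_eq!(":443".parse::<u16>().unwrap_or(8448), 8448);
}
```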

View file

@@ -53,9 +53,9 @@ impl Resolver {
opts.cache_size = config.dns_cache_entries as usize;
opts.preserve_intermediates = true;
opts.negative_min_ttl = Some(Duration::from_secs(config.dns_min_ttl_nxdomain));
opts.negative_max_ttl = Some(Duration::from_secs(60 * 60 * 24 * 30));
opts.negative_max_ttl = Some(Duration::from_secs(60 * 60 * 24));
opts.positive_min_ttl = Some(Duration::from_secs(config.dns_min_ttl));
opts.positive_max_ttl = Some(Duration::from_secs(60 * 60 * 24 * 7));
opts.positive_max_ttl = Some(Duration::from_secs(60 * 60 * 24));
opts.timeout = Duration::from_secs(config.dns_timeout);
opts.attempts = config.dns_attempts as usize;
opts.try_tcp_on_error = config.dns_tcp_fallback;

View file

@@ -80,7 +80,7 @@ where
{
// Exponential backoff
const MIN_DURATION: u64 = 60 * 2;
const MAX_DURATION: u64 = 60 * 60 * 8;
const MAX_DURATION: u64 = 60 * 60;
if continue_exponential_backoff_secs(
MIN_DURATION,
MAX_DURATION,

View file

@@ -46,7 +46,7 @@ where
{
// Exponential backoff
const MIN_DURATION: u64 = 5 * 60;
const MAX_DURATION: u64 = 60 * 60 * 24;
const MAX_DURATION: u64 = 60 * 60;
if continue_exponential_backoff_secs(MIN_DURATION, MAX_DURATION, time.elapsed(), *tries) {
debug!(
?tries,
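Both hunks above cap `MAX_DURATION` at one hour. A hypothetical model of how `continue_exponential_backoff_secs` might gate retries, shown only to illustrate the effect of the lower cap, not the crate's exact code:

```rust
use std::time::Duration;

// Hypothetical model: keep backing off while elapsed time is inside
// the current window; the window doubles per try, clamped to [min, max].
fn continue_backoff(min_secs: u64, max_secs: u64, elapsed: Duration, tries: u32) -> bool {
    let window = min_secs
        .saturating_mul(2u64.saturating_pow(tries))
        .clamp(min_secs, max_secs);
    elapsed.as_secs() < window
}

fn main() {
    const MIN: u64 = 5 * 60;  // 5 minutes
    const MAX: u64 = 60 * 60; // capped at 1 hour after this change
    // Shortly after a failure, stay backed off.
    assert!(continue_backoff(MIN, MAX, Duration::from_secs(30), 1));
    // Past the 1-hour cap, retries resume regardless of the try count.
    assert!(!continue_backoff(MIN, MAX, Duration::from_secs(2 * 60 * 60), 10));
}
```

With the previous 24-hour cap, a destination with many failed tries could stay unreachable for a full day; clamping to one hour bounds that worst case.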

View file

@@ -197,6 +197,15 @@ where
.await;
extremities.push(incoming_pdu.event_id().to_owned());
if extremities.is_empty() {
info!(
"Retained zero extremities when upgrading outlier PDU to timeline PDU with {} \
previous events, event id: {}",
incoming_pdu.prev_events.len(),
incoming_pdu.event_id
);
}
debug!(
"Retained {} extremities checked against {} prev_events",
extremities.len(),

View file

@@ -7,7 +7,6 @@ pub mod metadata;
pub mod outlier;
pub mod pdu_metadata;
pub mod read_receipt;
pub mod roles;
pub mod search;
pub mod short;
pub mod spaces;
@@ -32,7 +31,6 @@ pub struct Service {
pub outlier: Arc<outlier::Service>,
pub pdu_metadata: Arc<pdu_metadata::Service>,
pub read_receipt: Arc<read_receipt::Service>,
pub roles: Arc<roles::Service>,
pub search: Arc<search::Service>,
pub short: Arc<short::Service>,
pub spaces: Arc<spaces::Service>,

File diff suppressed because it is too large

View file

@@ -1,204 +0,0 @@
use std::collections::{BTreeMap, HashSet};
use conduwuit_core::matrix::space_roles::RoleDefinition;
use super::{compute_user_power_level, roles_satisfy_requirements};
pub(super) fn make_roles(entries: &[(&str, Option<i64>)]) -> BTreeMap<String, RoleDefinition> {
entries
.iter()
.map(|(name, pl)| {
((*name).to_owned(), RoleDefinition {
description: format!("{name} role"),
power_level: *pl,
})
})
.collect()
}
pub(super) fn make_set(items: &[&str]) -> HashSet<String> {
items.iter().map(|s| (*s).to_owned()).collect()
}
#[test]
fn power_level_single_role() {
let roles = make_roles(&[("admin", Some(100)), ("mod", Some(50))]);
assert_eq!(compute_user_power_level(&roles, &make_set(&["admin"])), Some(100));
}
#[test]
fn power_level_multiple_roles_takes_highest() {
let roles = make_roles(&[("admin", Some(100)), ("mod", Some(50)), ("helper", Some(25))]);
assert_eq!(compute_user_power_level(&roles, &make_set(&["mod", "helper"])), Some(50));
}
#[test]
fn power_level_no_power_roles() {
let roles = make_roles(&[("nsfw", None), ("vip", None)]);
assert_eq!(compute_user_power_level(&roles, &make_set(&["nsfw", "vip"])), None);
}
#[test]
fn power_level_mixed_roles() {
let roles = make_roles(&[("mod", Some(50)), ("nsfw", None)]);
assert_eq!(compute_user_power_level(&roles, &make_set(&["mod", "nsfw"])), Some(50));
}
#[test]
fn power_level_no_roles_assigned() {
let roles = make_roles(&[("admin", Some(100))]);
assert_eq!(compute_user_power_level(&roles, &HashSet::new()), None);
}
#[test]
fn power_level_unknown_role_ignored() {
let roles = make_roles(&[("admin", Some(100))]);
assert_eq!(compute_user_power_level(&roles, &make_set(&["nonexistent"])), None);
}
#[test]
fn qualifies_with_all_required_roles() {
assert!(roles_satisfy_requirements(
&make_set(&["nsfw", "vip"]),
&make_set(&["nsfw", "vip", "extra"]),
));
}
#[test]
fn does_not_qualify_missing_one_role() {
assert!(!roles_satisfy_requirements(&make_set(&["nsfw", "vip"]), &make_set(&["nsfw"]),));
}
#[test]
fn qualifies_with_no_requirements() {
assert!(roles_satisfy_requirements(&HashSet::new(), &make_set(&["nsfw"])));
}
#[test]
fn does_not_qualify_with_no_roles() {
assert!(!roles_satisfy_requirements(&make_set(&["nsfw"]), &HashSet::new()));
}
// Multi-space scenarios
#[test]
fn multi_space_highest_pl_wins() {
let space_a_roles = make_roles(&[("mod", Some(50))]);
let space_b_roles = make_roles(&[("admin", Some(100))]);
let user_roles_a = make_set(&["mod"]);
let user_roles_b = make_set(&["admin"]);
let pl_a = compute_user_power_level(&space_a_roles, &user_roles_a);
let pl_b = compute_user_power_level(&space_b_roles, &user_roles_b);
let effective = [pl_a, pl_b].into_iter().flatten().max();
assert_eq!(effective, Some(100));
}
#[test]
fn multi_space_one_space_has_no_pl() {
let space_a_roles = make_roles(&[("nsfw", None)]);
let space_b_roles = make_roles(&[("mod", Some(50))]);
let user_roles_a = make_set(&["nsfw"]);
let user_roles_b = make_set(&["mod"]);
let pl_a = compute_user_power_level(&space_a_roles, &user_roles_a);
let pl_b = compute_user_power_level(&space_b_roles, &user_roles_b);
let effective = [pl_a, pl_b].into_iter().flatten().max();
assert_eq!(effective, Some(50));
}
#[test]
fn multi_space_neither_has_pl() {
let space_a_roles = make_roles(&[("nsfw", None)]);
let space_b_roles = make_roles(&[("vip", None)]);
let user_roles_a = make_set(&["nsfw"]);
let user_roles_b = make_set(&["vip"]);
let pl_a = compute_user_power_level(&space_a_roles, &user_roles_a);
let pl_b = compute_user_power_level(&space_b_roles, &user_roles_b);
let effective = [pl_a, pl_b].into_iter().flatten().max();
assert_eq!(effective, None);
}
#[test]
fn multi_space_user_only_in_one_space() {
let space_a_roles = make_roles(&[("admin", Some(100))]);
let space_b_roles = make_roles(&[("mod", Some(50))]);
let user_roles_a = make_set(&["admin"]);
let user_roles_b: HashSet<String> = HashSet::new();
let pl_a = compute_user_power_level(&space_a_roles, &user_roles_a);
let pl_b = compute_user_power_level(&space_b_roles, &user_roles_b);
let effective = [pl_a, pl_b].into_iter().flatten().max();
assert_eq!(effective, Some(100));
}
#[test]
fn multi_space_qualifies_in_one_not_other() {
let space_a_reqs = make_set(&["staff"]);
let space_b_reqs = make_set(&["nsfw"]);
let user_roles = make_set(&["nsfw"]);
assert!(!roles_satisfy_requirements(&space_a_reqs, &user_roles));
assert!(roles_satisfy_requirements(&space_b_reqs, &user_roles));
}
#[test]
fn multi_space_qualifies_after_role_revoke_via_other_space() {
let space_a_reqs = make_set(&["nsfw"]);
let space_b_reqs = make_set(&["vip"]);
let user_roles_after_revoke = make_set(&["vip"]);
assert!(!roles_satisfy_requirements(&space_a_reqs, &user_roles_after_revoke));
assert!(roles_satisfy_requirements(&space_b_reqs, &user_roles_after_revoke));
}
#[test]
fn multi_space_room_has_reqs_in_one_space_only() {
let space_a_reqs = make_set(&["admin"]);
let space_b_reqs: HashSet<String> = HashSet::new();
let user_roles = make_set(&["nsfw"]);
assert!(!roles_satisfy_requirements(&space_a_reqs, &user_roles));
assert!(roles_satisfy_requirements(&space_b_reqs, &user_roles));
}
#[test]
fn multi_space_no_qualification_anywhere() {
let space_a_reqs = make_set(&["staff"]);
let space_b_reqs = make_set(&["admin"]);
let user_roles = make_set(&["nsfw"]);
let qualifies_a = roles_satisfy_requirements(&space_a_reqs, &user_roles);
let qualifies_b = roles_satisfy_requirements(&space_b_reqs, &user_roles);
assert!(!qualifies_a);
assert!(!qualifies_b);
assert!(!(qualifies_a || qualifies_b));
}
#[test]
fn multi_space_same_role_different_pl() {
let space_a_roles = make_roles(&[("mod", Some(50))]);
let space_b_roles = make_roles(&[("mod", Some(75))]);
let user_roles = make_set(&["mod"]);
let pl_a = compute_user_power_level(&space_a_roles, &user_roles);
let pl_b = compute_user_power_level(&space_b_roles, &user_roles);
let effective = [pl_a, pl_b].into_iter().flatten().max();
assert_eq!(effective, Some(75));
}

View file

@@ -327,7 +327,7 @@ where
}
},
| TimelineEventType::SpaceChild =>
if pdu.state_key().is_some() {
if let Some(_state_key) = pdu.state_key() {
self.services
.spaces
.roomid_spacehierarchy_cache
@@ -359,8 +359,6 @@
| _ => {},
}
self.services.roles.on_pdu_appended(room_id, &pdu);
// CONCERN: If we receive events with a relation out-of-order, we never write
// their relation / thread. We need some kind of way to trigger when we receive
// this event, and potentially a way to rebuild the table entirely.

View file

@@ -97,17 +97,6 @@ pub async fn build_and_append_pdu(
)));
}
}
if *pdu.kind() == TimelineEventType::RoomPowerLevels {
if let Ok(proposed) =
pdu.get_content::<ruma::events::room::power_levels::RoomPowerLevelsEventContent>()
{
self.services
.roles
.validate_pl_change(&room_id, pdu.sender(), &proposed)
.await?;
}
}
if *pdu.kind() == TimelineEventType::RoomCreate {
trace!("Creating shortroomid for {room_id}");
self.services

View file

@@ -80,7 +80,6 @@ struct Services {
threads: Dep<rooms::threads::Service>,
search: Dep<rooms::search::Service>,
spaces: Dep<rooms::spaces::Service>,
roles: Dep<rooms::roles::Service>,
event_handler: Dep<rooms::event_handler::Service>,
}
@@ -113,7 +112,6 @@ impl crate::Service for Service {
threads: args.depend::<rooms::threads::Service>("rooms::threads"),
search: args.depend::<rooms::search::Service>("rooms::search"),
spaces: args.depend::<rooms::spaces::Service>("rooms::spaces"),
roles: args.depend::<rooms::roles::Service>("rooms::roles"),
event_handler: args
.depend::<rooms::event_handler::Service>("rooms::event_handler"),
},

View file

@@ -127,3 +127,63 @@ pub async fn get_token_shortstatehash(
.await
.deserialized()
}
/// Count how many sync tokens exist for a room without deleting them
///
/// This is useful for dry runs to see how many tokens would be deleted
#[implement(Service)]
pub async fn count_room_tokens(&self, room_id: &RoomId) -> Result<usize> {
use futures::TryStreamExt;
let shortroomid = self.services.short.get_shortroomid(room_id).await?;
// Create a prefix to search by - all entries for this room will start with its
// short ID
let prefix = &[shortroomid];
// Collect all keys into a Vec and count them
let keys = self
.db
.roomsynctoken_shortstatehash
.keys_prefix_raw(prefix)
.map_ok(|_| ()) // We only need to count, not store the keys
.try_collect::<Vec<_>>()
.await?;
Ok(keys.len())
}
/// Delete all sync tokens associated with a room
///
/// This helps clean up the database as these tokens are never otherwise removed
#[implement(Service)]
pub async fn delete_room_tokens(&self, room_id: &RoomId) -> Result<usize> {
use futures::TryStreamExt;
let shortroomid = self.services.short.get_shortroomid(room_id).await?;
// Create a prefix to search by - all entries for this room will start with its
// short ID
let prefix = &[shortroomid];
// Collect all keys into a Vec first, then delete them
let keys = self
.db
.roomsynctoken_shortstatehash
.keys_prefix_raw(prefix)
.map_ok(|key| {
// Clone the key since we can't store references in the Vec
Vec::from(key)
})
.try_collect::<Vec<_>>()
.await?;
// Delete each key individually
for key in &keys {
self.db.roomsynctoken_shortstatehash.del(key);
}
let count = keys.len();
Ok(count)
}
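Both purge commands rely on sync-token keys sharing the room's short-ID prefix, so counting and deleting reduce to a prefix scan. An illustrative model using a `BTreeMap` as a stand-in for the `roomsynctoken_shortstatehash` column (the real code streams keys instead of iterating a map):

```rust
use std::collections::BTreeMap;

// Collect every key starting with the room's short-ID prefix, then
// delete them individually, returning how many were removed.
fn delete_prefix(db: &mut BTreeMap<Vec<u8>, u64>, prefix: &[u8]) -> usize {
    let keys: Vec<Vec<u8>> = db
        .keys()
        .filter(|k| k.starts_with(prefix))
        .cloned()
        .collect();
    for key in &keys {
        db.remove(key);
    }
    keys.len()
}

fn main() {
    let mut db = BTreeMap::new();
    db.insert(b"\x01token-a".to_vec(), 10u64);
    db.insert(b"\x01token-b".to_vec(), 11);
    db.insert(b"\x02token-c".to_vec(), 12); // different room, untouched
    assert_eq!(delete_prefix(&mut db, b"\x01"), 2);
    assert_eq!(db.len(), 1);
}
```

A dry run (`count_room_tokens`) is the same scan without the delete loop.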

View file

@@ -96,7 +96,6 @@ impl Services {
outlier: build!(rooms::outlier::Service),
pdu_metadata: build!(rooms::pdu_metadata::Service),
read_receipt: build!(rooms::read_receipt::Service),
roles: build!(rooms::roles::Service),
search: build!(rooms::search::Service),
short: build!(rooms::short::Service),
spaces: build!(rooms::spaces::Service),