Mirror of https://github.com/element-hq/synapse.git

Compare commits: erikj/wait...erikj/dock (18 commits)
Commits (SHA1):

- cfa5eca168
- 56e6abfecc
- b71d277438
- a547b49773
- 6a9a641fb8
- b5facbac0f
- b250ca5df2
- e0d420fbd1
- 9956f35c6a
- d464ee3602
- 439a095edc
- 5d040f2066
- f33266232e
- d43042864a
- f4ce030608
- 8b43cc89fa
- 52af16c561
- 38f03a09ff
CHANGES.md (46 lines changed)
@@ -1,3 +1,49 @@
# Synapse 1.108.0rc1 (2024-05-21)

### Features

- Add a feature that allows clients to query the configured federation whitelist. Disabled by default. ([\#16848](https://github.com/element-hq/synapse/issues/16848), [\#17199](https://github.com/element-hq/synapse/issues/17199))
- Add the ability to allow numeric user IDs with a specific prefix when in the CAS flow. Contributed by Aurélien Grimpard. ([\#17098](https://github.com/element-hq/synapse/issues/17098))

### Bugfixes

- Fix bug where push rules would be empty in `/sync` for some accounts. Introduced in v1.93.0. ([\#17142](https://github.com/element-hq/synapse/issues/17142))
- Add support for optional whitespace around the Federation API's `Authorization` header's parameter commas. ([\#17145](https://github.com/element-hq/synapse/issues/17145))
- Fix bug where disabling room publication prevented public rooms being created on workers. ([\#17177](https://github.com/element-hq/synapse/issues/17177), [\#17184](https://github.com/element-hq/synapse/issues/17184))

### Improved Documentation

- Document [`/v1/make_knock`](https://spec.matrix.org/v1.10/server-server-api/#get_matrixfederationv1make_knockroomiduserid) and [`/v1/send_knock/`](https://spec.matrix.org/v1.10/server-server-api/#put_matrixfederationv1send_knockroomideventid) federation endpoints as worker-compatible. ([\#17058](https://github.com/element-hq/synapse/issues/17058))
- Update User Admin API with note about prefixing OIDC external_id providers. ([\#17139](https://github.com/element-hq/synapse/issues/17139))
- Clarify the state of the created room when using the `autocreate_auto_join_room_preset` config option. ([\#17150](https://github.com/element-hq/synapse/issues/17150))
- Update the Admin FAQ with the current libjemalloc version for latest Debian stable. Additionally update the name of the "push_rules" stream in the Workers documentation. ([\#17171](https://github.com/element-hq/synapse/issues/17171))

### Internal Changes

- Add note to reflect that [MSC3886](https://github.com/matrix-org/matrix-spec-proposals/pull/3886) is closed but will remain supported for some time. ([\#17151](https://github.com/element-hq/synapse/issues/17151))
- Update dependency PyO3 to 0.21. ([\#17162](https://github.com/element-hq/synapse/issues/17162))
- Fixes linter errors found in PR #17147. ([\#17166](https://github.com/element-hq/synapse/issues/17166))
- Bump black from 24.2.0 to 24.4.2. ([\#17170](https://github.com/element-hq/synapse/issues/17170))
- Cache literal sync filter validation for performance. ([\#17186](https://github.com/element-hq/synapse/issues/17186))
- Improve performance by fixing a reactor pause. ([\#17192](https://github.com/element-hq/synapse/issues/17192))
- Route `/make_knock` and `/send_knock` federation APIs to the federation reader worker in Complement test runs. ([\#17195](https://github.com/element-hq/synapse/issues/17195))
- Prepare sync handler to be able to return different sync responses (`SyncVersion`). ([\#17200](https://github.com/element-hq/synapse/issues/17200))
- Organize the sync cache key parameter outside of the sync config (separate concerns). ([\#17201](https://github.com/element-hq/synapse/issues/17201))
- Refactor `SyncResultBuilder` assembly to its own function. ([\#17202](https://github.com/element-hq/synapse/issues/17202))
- Rename to be obvious: `joined_rooms` -> `joined_room_ids`. ([\#17203](https://github.com/element-hq/synapse/issues/17203), [\#17208](https://github.com/element-hq/synapse/issues/17208))
- Add a short pause when rate-limiting a request. ([\#17210](https://github.com/element-hq/synapse/issues/17210))

### Updates to locked dependencies

* Bump cryptography from 42.0.5 to 42.0.7. ([\#17180](https://github.com/element-hq/synapse/issues/17180))
* Bump gitpython from 3.1.41 to 3.1.43. ([\#17181](https://github.com/element-hq/synapse/issues/17181))
* Bump immutabledict from 4.1.0 to 4.2.0. ([\#17179](https://github.com/element-hq/synapse/issues/17179))
* Bump sentry-sdk from 1.40.3 to 2.1.1. ([\#17178](https://github.com/element-hq/synapse/issues/17178))
* Bump serde from 1.0.200 to 1.0.201. ([\#17183](https://github.com/element-hq/synapse/issues/17183))
* Bump serde_json from 1.0.116 to 1.0.117. ([\#17182](https://github.com/element-hq/synapse/issues/17182))

Synapse 1.107.0 (2024-05-14)
============================
Cargo.lock (generated, 12 lines changed)
@@ -13,9 +13,9 @@ dependencies = [

[[package]]
name = "anyhow"
version = "1.0.83"
version = "1.0.86"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "25bdb32cbbdce2b519a9cd7df3a678443100e265d5e25ca763b7572a5104f5f3"
checksum = "b3d1d046238990b9cf5bcde22a3fb3584ee5cf65fb2765f454ed428c7a0063da"

[[package]]
name = "arc-swap"

@@ -485,18 +485,18 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"

[[package]]
name = "serde"
version = "1.0.201"
version = "1.0.202"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "780f1cebed1629e4753a1a38a3c72d30b97ec044f0aef68cb26650a3c5cf363c"
checksum = "226b61a0d411b2ba5ff6d7f73a476ac4f8bb900373459cd00fab8512828ba395"
dependencies = [
"serde_derive",
]

[[package]]
name = "serde_derive"
version = "1.0.201"
version = "1.0.202"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c5e405930b9796f1c00bee880d03fc7e0bb4b9a11afc776885ffe84320da2865"
checksum = "6048858004bcff69094cd972ed40a32500f153bd3be9f716b2eed2e8217c4838"
dependencies = [
"proc-macro2",
"quote",
@@ -1 +0,0 @@
Add a feature that allows clients to query the configured federation whitelist. Disabled by default.

@@ -1 +0,0 @@
Add the ability to allow numeric user IDs with a specific prefix when in the CAS flow. Contributed by Aurélien Grimpard.

@@ -1 +0,0 @@
Update User Admin API with note about prefixing OIDC external_id providers.

@@ -1 +0,0 @@
Fix bug where push rules would be empty in `/sync` for some accounts. Introduced in v1.93.0.

@@ -1 +0,0 @@
Add support for optional whitespace around the Federation API's `Authorization` header's parameter commas.

changelog.d/17147.feature (new file, 1 line)
@@ -0,0 +1 @@
Add the ability to auto-accept invites on the behalf of users. See the [`auto_accept_invites`](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#auto-accept-invites) config option for details.

@@ -1 +0,0 @@
Clarify the state of the created room when using the `autocreate_auto_join_room_preset` config option.

@@ -1 +0,0 @@
Add note to reflect that [MSC3886](https://github.com/matrix-org/matrix-spec-proposals/pull/3886) is closed but will support will remain for some time.

@@ -1 +0,0 @@
Update dependency PyO3 to 0.21.

@@ -1 +0,0 @@
Fixes linter errors found in PR #17147.

@@ -1 +0,0 @@
Bump black from 24.2.0 to 24.4.2.

@@ -1 +0,0 @@
Update the Admin FAQ with the current libjemalloc version for latest Debian stable. Additionally update the name of the "push_rules" stream in the Workers documentation.

@@ -1 +0,0 @@
Fix bug where disabling room publication prevented public rooms being created on workers.

@@ -1 +0,0 @@
Fix bug where disabling room publication prevented public rooms being created on workers.

@@ -1 +0,0 @@
Cache literal sync filter validation for performance.

@@ -1 +0,0 @@
Improve performance by fixing a reactor pause.

@@ -1 +0,0 @@
Route `/make_knock` and `/send_knock` federation APIs to the federation reader worker in Complement test runs.

@@ -1 +0,0 @@
Add a feature that allows clients to query the configured federation whitelist. Disabled by default.

@@ -1 +0,0 @@
Prepare sync handler to be able to return different sync responses (`SyncVersion`).

@@ -1 +0,0 @@
Organize the sync cache key parameter outside of the sync config (separate concerns).

@@ -1 +0,0 @@
Refactor `SyncResultBuilder` assembly to its own function.

@@ -1 +0,0 @@
Rename to be obvious: `joined_rooms` -> `joined_room_ids`.

changelog.d/17204.doc (new file, 1 line)
@@ -0,0 +1 @@
Update OIDC documentation: by default Matrix doesn't query userinfo endpoint, then claims should be put on id_token.

@@ -1 +0,0 @@
Rename to be obvious: `joined_rooms` -> `joined_room_ids`.

changelog.d/17211.misc (new file, 1 line)
@@ -0,0 +1 @@
Reduce work of calculating outbound device lists updates.

@@ -1 +0,0 @@
Fix bug where duplicate events could be sent down sync when using workers that are overloaded.

changelog.d/17216.misc (new file, 1 line)
@@ -0,0 +1 @@
Improve performance of calculating device lists changes in `/sync`.

changelog.d/17228.docker (new file, 1 line)
@@ -0,0 +1 @@
Support loading pluggable modules from Docker images.
debian/changelog (vendored, 6 lines changed)

@@ -1,3 +1,9 @@
matrix-synapse-py3 (1.108.0~rc1) stable; urgency=medium

* New Synapse release 1.108.0rc1.

-- Synapse Packaging team <packages@matrix.org> Tue, 21 May 2024 10:54:13 +0100

matrix-synapse-py3 (1.107.0) stable; urgency=medium

* New Synapse release 1.107.0.
@@ -243,3 +243,26 @@ healthcheck:
Jemalloc is embedded in the image and will be used instead of the default allocator.
You can read about jemalloc by reading the Synapse
[Admin FAQ](https://element-hq.github.io/synapse/latest/usage/administration/admin_faq.html#help-synapse-is-slow-and-eats-all-my-ramcpu).

## Modules

Synapse supports loading additional modules, using the
[`modules`](https://element-hq.github.io/synapse/latest/modules/index.html)
config. Synapse will look for these modules in `/modules`.

To install a package, simply run:

```
pip install --target <module_directory> <package>
```

Where `<module_directory>` is the directory mounted to `/modules`, and
`<package>` is either the package name or a path to the package. See
`pip install` for more details.

**Note**: Packages already installed as part of Synapse cannot be overridden by
different versions of the package in `/modules`, e.g. if the Synapse version
uses Twisted 24.3.0 then installing Twisted 23.10.0 in `/modules` won't have any
effect. This can cause issues if the required version of a package is different
between Synapse and the module being installed.
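The precedence note above follows from the startup-script change in the next hunk, which appends `/modules` to the *end* of `PYTHONPATH`. A minimal, purely illustrative sketch of the resulting import behaviour, assuming Twisted is both bundled with the Synapse image and installed into `/modules`:

```python
# Illustrative only: earlier sys.path entries win, so the copy bundled with the
# Synapse image is found before anything installed into /modules.
import importlib.util
import sys

sys.path.append("/modules")  # mirrors PYTHONPATH="<existing path>:/modules"

spec = importlib.util.find_spec("twisted")
print(spec.origin)  # points at the bundled copy, not /modules/twisted/...
```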
@@ -269,6 +269,15 @@ running with 'migrate_config'. See the README for more details.

args += ["--config-path", config_path]

# Add the `/modules` directly to python search path, which allows users to
# add custom modules.
#
# We want to add the directory *last* so that nothing can overwrite the
# existing package versions. Therefore we load the current path and append
# `/modules` to that
path = ":".join(sys.path)
environ["PYTHONPATH"] = f"{path}:/modules"

log("Starting synapse with args " + " ".join(args))

args = [sys.executable] + args
@@ -525,6 +525,8 @@ oidc_providers:
(`Options > Security > ID Token signature algorithm` and `Options > Security >
Access Token signature algorithm`)
- Scopes: OpenID, Email and Profile
- Force claims into `id_token`
(`Options > Advanced > Force claims to be returned in ID Token`)
- Allowed redirection addresses for login (`Options > Basic > Allowed
redirection addresses for login`):
`[synapse public baseurl]/_synapse/client/oidc/callback`
@@ -4595,3 +4595,32 @@ background_updates:
min_batch_size: 10
default_batch_size: 50
```
---
## Auto Accept Invites
Configuration settings related to automatically accepting invites.

---
### `auto_accept_invites`

Automatically accepting invites controls whether users are presented with an invite request or if they
are instead automatically joined to a room when receiving an invite. Set the `enabled` sub-option to true to
enable auto-accepting invites. Defaults to false.
This setting has the following sub-options:
* `enabled`: Whether to run the auto-accept invites logic. Defaults to false.
* `only_for_direct_messages`: Whether invites should be automatically accepted for all room types, or only
  for direct messages. Defaults to false.
* `only_from_local_users`: Whether to only automatically accept invites from users on this homeserver. Defaults to false.
* `worker_to_run_on`: Which worker to run this module on. This must match the "worker_name".

NOTE: Care should be taken not to enable this setting if the `synapse_auto_accept_invite` module is enabled and installed.
The two modules will compete to perform the same task and may result in undesired behaviour. For example, multiple join
events could be generated from a single invite.

Example configuration:
```yaml
auto_accept_invites:
    enabled: true
    only_for_direct_messages: true
    only_from_local_users: true
    worker_to_run_on: "worker_1"
```
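The sub-options documented above combine as follows; a minimal hedged sketch (not from the Synapse codebase — the `invite` dict and `local_server` parameter are hypothetical stand-ins for the invite event and homeserver name):

```python
# Hedged sketch of the auto-accept decision implied by the config options above.
def should_auto_accept(cfg: dict, invite: dict, local_server: str) -> bool:
    if not cfg.get("enabled", False):
        return False
    if cfg.get("only_for_direct_messages", False) and not invite.get("is_direct", False):
        return False
    if cfg.get("only_from_local_users", False) and not invite["sender"].endswith(":" + local_server):
        return False
    return True


cfg = {"enabled": True, "only_for_direct_messages": True, "only_from_local_users": True}
print(should_auto_accept(cfg, {"sender": "@alice:example.org", "is_direct": True}, "example.org"))  # True
```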
@@ -211,6 +211,8 @@ information.
^/_matrix/federation/v1/make_leave/
^/_matrix/federation/(v1|v2)/send_join/
^/_matrix/federation/(v1|v2)/send_leave/
^/_matrix/federation/v1/make_knock/
^/_matrix/federation/v1/send_knock/
^/_matrix/federation/(v1|v2)/invite/
^/_matrix/federation/v1/event_auth/
^/_matrix/federation/v1/timestamp_to_event/
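The two knock patterns added above can be sanity-checked against request paths; a small, purely illustrative check (the room, user, and event IDs are made up):

```python
# Illustrative check that the newly listed knock endpoints match the worker routing patterns.
import re

patterns = [
    r"^/_matrix/federation/v1/make_knock/",
    r"^/_matrix/federation/v1/send_knock/",
]
sample_paths = [  # hypothetical IDs
    "/_matrix/federation/v1/make_knock/!room:example.org/@knocker:example.org",
    "/_matrix/federation/v1/send_knock/!room:example.org/$event_id",
]
for path in sample_paths:
    assert any(re.match(p, path) for p in patterns), path
```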
poetry.lock (generated, 76 lines changed)
@@ -67,38 +67,38 @@ visualize = ["Twisted (>=16.1.1)", "graphviz (>0.5.1)"]
|
||||
|
||||
[[package]]
|
||||
name = "bcrypt"
|
||||
version = "4.1.2"
|
||||
version = "4.1.3"
|
||||
description = "Modern password hashing for your software and your servers"
|
||||
optional = false
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "bcrypt-4.1.2-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:ac621c093edb28200728a9cca214d7e838529e557027ef0581685909acd28b5e"},
|
||||
{file = "bcrypt-4.1.2-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea505c97a5c465ab8c3ba75c0805a102ce526695cd6818c6de3b1a38f6f60da1"},
|
||||
{file = "bcrypt-4.1.2-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:57fa9442758da926ed33a91644649d3e340a71e2d0a5a8de064fb621fd5a3326"},
|
||||
{file = "bcrypt-4.1.2-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:eb3bd3321517916696233b5e0c67fd7d6281f0ef48e66812db35fc963a422a1c"},
|
||||
{file = "bcrypt-4.1.2-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:6cad43d8c63f34b26aef462b6f5e44fdcf9860b723d2453b5d391258c4c8e966"},
|
||||
{file = "bcrypt-4.1.2-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:44290ccc827d3a24604f2c8bcd00d0da349e336e6503656cb8192133e27335e2"},
|
||||
{file = "bcrypt-4.1.2-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:732b3920a08eacf12f93e6b04ea276c489f1c8fb49344f564cca2adb663b3e4c"},
|
||||
{file = "bcrypt-4.1.2-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:1c28973decf4e0e69cee78c68e30a523be441972c826703bb93099868a8ff5b5"},
|
||||
{file = "bcrypt-4.1.2-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:b8df79979c5bae07f1db22dcc49cc5bccf08a0380ca5c6f391cbb5790355c0b0"},
|
||||
{file = "bcrypt-4.1.2-cp37-abi3-win32.whl", hash = "sha256:fbe188b878313d01b7718390f31528be4010fed1faa798c5a1d0469c9c48c369"},
|
||||
{file = "bcrypt-4.1.2-cp37-abi3-win_amd64.whl", hash = "sha256:9800ae5bd5077b13725e2e3934aa3c9c37e49d3ea3d06318010aa40f54c63551"},
|
||||
{file = "bcrypt-4.1.2-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:71b8be82bc46cedd61a9f4ccb6c1a493211d031415a34adde3669ee1b0afbb63"},
|
||||
{file = "bcrypt-4.1.2-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68e3c6642077b0c8092580c819c1684161262b2e30c4f45deb000c38947bf483"},
|
||||
{file = "bcrypt-4.1.2-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:387e7e1af9a4dd636b9505a465032f2f5cb8e61ba1120e79a0e1cd0b512f3dfc"},
|
||||
{file = "bcrypt-4.1.2-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:f70d9c61f9c4ca7d57f3bfe88a5ccf62546ffbadf3681bb1e268d9d2e41c91a7"},
|
||||
{file = "bcrypt-4.1.2-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:2a298db2a8ab20056120b45e86c00a0a5eb50ec4075b6142db35f593b97cb3fb"},
|
||||
{file = "bcrypt-4.1.2-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:ba55e40de38a24e2d78d34c2d36d6e864f93e0d79d0b6ce915e4335aa81d01b1"},
|
||||
{file = "bcrypt-4.1.2-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:3566a88234e8de2ccae31968127b0ecccbb4cddb629da744165db72b58d88ca4"},
|
||||
{file = "bcrypt-4.1.2-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:b90e216dc36864ae7132cb151ffe95155a37a14e0de3a8f64b49655dd959ff9c"},
|
||||
{file = "bcrypt-4.1.2-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:69057b9fc5093ea1ab00dd24ede891f3e5e65bee040395fb1e66ee196f9c9b4a"},
|
||||
{file = "bcrypt-4.1.2-cp39-abi3-win32.whl", hash = "sha256:02d9ef8915f72dd6daaef40e0baeef8a017ce624369f09754baf32bb32dba25f"},
|
||||
{file = "bcrypt-4.1.2-cp39-abi3-win_amd64.whl", hash = "sha256:be3ab1071662f6065899fe08428e45c16aa36e28bc42921c4901a191fda6ee42"},
|
||||
{file = "bcrypt-4.1.2-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:d75fc8cd0ba23f97bae88a6ec04e9e5351ff3c6ad06f38fe32ba50cbd0d11946"},
|
||||
{file = "bcrypt-4.1.2-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:a97e07e83e3262599434816f631cc4c7ca2aa8e9c072c1b1a7fec2ae809a1d2d"},
|
||||
{file = "bcrypt-4.1.2-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:e51c42750b7585cee7892c2614be0d14107fad9581d1738d954a262556dd1aab"},
|
||||
{file = "bcrypt-4.1.2-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:ba4e4cc26610581a6329b3937e02d319f5ad4b85b074846bf4fef8a8cf51e7bb"},
|
||||
{file = "bcrypt-4.1.2.tar.gz", hash = "sha256:33313a1200a3ae90b75587ceac502b048b840fc69e7f7a0905b5f87fac7a1258"},
|
||||
{file = "bcrypt-4.1.3-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:48429c83292b57bf4af6ab75809f8f4daf52aa5d480632e53707805cc1ce9b74"},
|
||||
{file = "bcrypt-4.1.3-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4a8bea4c152b91fd8319fef4c6a790da5c07840421c2b785084989bf8bbb7455"},
|
||||
{file = "bcrypt-4.1.3-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d3b317050a9a711a5c7214bf04e28333cf528e0ed0ec9a4e55ba628d0f07c1a"},
|
||||
{file = "bcrypt-4.1.3-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:094fd31e08c2b102a14880ee5b3d09913ecf334cd604af27e1013c76831f7b05"},
|
||||
{file = "bcrypt-4.1.3-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:4fb253d65da30d9269e0a6f4b0de32bd657a0208a6f4e43d3e645774fb5457f3"},
|
||||
{file = "bcrypt-4.1.3-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:193bb49eeeb9c1e2db9ba65d09dc6384edd5608d9d672b4125e9320af9153a15"},
|
||||
{file = "bcrypt-4.1.3-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:8cbb119267068c2581ae38790e0d1fbae65d0725247a930fc9900c285d95725d"},
|
||||
{file = "bcrypt-4.1.3-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:6cac78a8d42f9d120b3987f82252bdbeb7e6e900a5e1ba37f6be6fe4e3848286"},
|
||||
{file = "bcrypt-4.1.3-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:01746eb2c4299dd0ae1670234bf77704f581dd72cc180f444bfe74eb80495b64"},
|
||||
{file = "bcrypt-4.1.3-cp37-abi3-win32.whl", hash = "sha256:037c5bf7c196a63dcce75545c8874610c600809d5d82c305dd327cd4969995bf"},
|
||||
{file = "bcrypt-4.1.3-cp37-abi3-win_amd64.whl", hash = "sha256:8a893d192dfb7c8e883c4576813bf18bb9d59e2cfd88b68b725990f033f1b978"},
|
||||
{file = "bcrypt-4.1.3-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:0d4cf6ef1525f79255ef048b3489602868c47aea61f375377f0d00514fe4a78c"},
|
||||
{file = "bcrypt-4.1.3-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f5698ce5292a4e4b9e5861f7e53b1d89242ad39d54c3da451a93cac17b61921a"},
|
||||
{file = "bcrypt-4.1.3-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ec3c2e1ca3e5c4b9edb94290b356d082b721f3f50758bce7cce11d8a7c89ce84"},
|
||||
{file = "bcrypt-4.1.3-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:3a5be252fef513363fe281bafc596c31b552cf81d04c5085bc5dac29670faa08"},
|
||||
{file = "bcrypt-4.1.3-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:5f7cd3399fbc4ec290378b541b0cf3d4398e4737a65d0f938c7c0f9d5e686611"},
|
||||
{file = "bcrypt-4.1.3-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:c4c8d9b3e97209dd7111bf726e79f638ad9224b4691d1c7cfefa571a09b1b2d6"},
|
||||
{file = "bcrypt-4.1.3-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:31adb9cbb8737a581a843e13df22ffb7c84638342de3708a98d5c986770f2834"},
|
||||
{file = "bcrypt-4.1.3-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:551b320396e1d05e49cc18dd77d970accd52b322441628aca04801bbd1d52a73"},
|
||||
{file = "bcrypt-4.1.3-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:6717543d2c110a155e6821ce5670c1f512f602eabb77dba95717ca76af79867d"},
|
||||
{file = "bcrypt-4.1.3-cp39-abi3-win32.whl", hash = "sha256:6004f5229b50f8493c49232b8e75726b568535fd300e5039e255d919fc3a07f2"},
|
||||
{file = "bcrypt-4.1.3-cp39-abi3-win_amd64.whl", hash = "sha256:2505b54afb074627111b5a8dc9b6ae69d0f01fea65c2fcaea403448c503d3991"},
|
||||
{file = "bcrypt-4.1.3-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:cb9c707c10bddaf9e5ba7cdb769f3e889e60b7d4fea22834b261f51ca2b89fed"},
|
||||
{file = "bcrypt-4.1.3-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:9f8ea645eb94fb6e7bea0cf4ba121c07a3a182ac52876493870033141aa687bc"},
|
||||
{file = "bcrypt-4.1.3-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:f44a97780677e7ac0ca393bd7982b19dbbd8d7228c1afe10b128fd9550eef5f1"},
|
||||
{file = "bcrypt-4.1.3-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:d84702adb8f2798d813b17d8187d27076cca3cd52fe3686bb07a9083930ce650"},
|
||||
{file = "bcrypt-4.1.3.tar.gz", hash = "sha256:2ee15dd749f5952fe3f0430d0ff6b74082e159c50332a1413d51b5689cf06623"},
|
||||
]
|
||||
|
||||
[package.extras]
|
||||
@@ -1736,13 +1736,13 @@ psycopg2 = "*"
|
||||
|
||||
[[package]]
|
||||
name = "pyasn1"
|
||||
version = "0.5.1"
|
||||
version = "0.6.0"
|
||||
description = "Pure-Python implementation of ASN.1 types and DER/BER/CER codecs (X.208)"
|
||||
optional = false
|
||||
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
|
||||
python-versions = ">=3.8"
|
||||
files = [
|
||||
{file = "pyasn1-0.5.1-py2.py3-none-any.whl", hash = "sha256:4439847c58d40b1d0a573d07e3856e95333f1976294494c325775aeca506eb58"},
|
||||
{file = "pyasn1-0.5.1.tar.gz", hash = "sha256:6d391a96e59b23130a5cfa74d6fd7f388dbbe26cc8f1edf39fdddf08d9d6676c"},
|
||||
{file = "pyasn1-0.6.0-py2.py3-none-any.whl", hash = "sha256:cca4bb0f2df5504f02f6f8a775b6e416ff9b0b3b16f7ee80b5a3153d9b804473"},
|
||||
{file = "pyasn1-0.6.0.tar.gz", hash = "sha256:3a35ab2c4b5ef98e17dfdec8ab074046fbda76e281c5a706ccd82328cfc8f64c"},
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@@ -2673,13 +2673,13 @@ docs = ["sphinx (<7.0.0)"]
|
||||
|
||||
[[package]]
|
||||
name = "twine"
|
||||
version = "5.0.0"
|
||||
version = "5.1.0"
|
||||
description = "Collection of utilities for publishing packages on PyPI"
|
||||
optional = false
|
||||
python-versions = ">=3.8"
|
||||
files = [
|
||||
{file = "twine-5.0.0-py3-none-any.whl", hash = "sha256:a262933de0b484c53408f9edae2e7821c1c45a3314ff2df9bdd343aa7ab8edc0"},
|
||||
{file = "twine-5.0.0.tar.gz", hash = "sha256:89b0cc7d370a4b66421cc6102f269aa910fe0f1861c124f573cf2ddedbc10cf4"},
|
||||
{file = "twine-5.1.0-py3-none-any.whl", hash = "sha256:fe1d814395bfe50cfbe27783cb74efe93abeac3f66deaeb6c8390e4e92bacb43"},
|
||||
{file = "twine-5.1.0.tar.gz", hash = "sha256:4d74770c88c4fcaf8134d2a6a9d863e40f08255ff7d8e2acb3cbbd57d25f6e9d"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
@@ -2853,13 +2853,13 @@ files = [
|
||||
|
||||
[[package]]
|
||||
name = "types-psycopg2"
|
||||
version = "2.9.21.20240311"
|
||||
version = "2.9.21.20240417"
|
||||
description = "Typing stubs for psycopg2"
|
||||
optional = false
|
||||
python-versions = ">=3.8"
|
||||
files = [
|
||||
{file = "types-psycopg2-2.9.21.20240311.tar.gz", hash = "sha256:722945dffa6a729bebc660f14137f37edfcead5a2c15eb234212a7d017ee8072"},
|
||||
{file = "types_psycopg2-2.9.21.20240311-py3-none-any.whl", hash = "sha256:2e137ae2b516ee0dbaab6f555086b6cfb723ba4389d67f551b0336adf4efcf1b"},
|
||||
{file = "types-psycopg2-2.9.21.20240417.tar.gz", hash = "sha256:05db256f4a459fb21a426b8e7fca0656c3539105ff0208eaf6bdaf406a387087"},
|
||||
{file = "types_psycopg2-2.9.21.20240417-py3-none-any.whl", hash = "sha256:644d6644d64ebbe37203229b00771012fb3b3bddd507a129a2e136485990e4f8"},
|
||||
]
|
||||
|
||||
[[package]]
|
||||
|
||||
@@ -96,7 +96,7 @@ module-name = "synapse.synapse_rust"

[tool.poetry]
name = "matrix-synapse"
version = "1.107.0"
version = "1.108.0rc1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"

@@ -200,8 +200,10 @@ netaddr = ">=0.7.18"
# add a lower bound to the Jinja2 dependency.
Jinja2 = ">=3.0"
bleach = ">=1.4.3"
# We use `Self`, which were added in `typing-extensions` 4.0.
typing-extensions = ">=4.0"
# We use `ParamSpec` and `Concatenate`, which were added in `typing-extensions` 3.10.0.0.
# Additionally we need https://github.com/python/typing/pull/817 to allow types to be
# generic over ParamSpecs.
typing-extensions = ">=3.10.0.1"
# We enforce that we have a `cryptography` version that bundles an `openssl`
# with the latest security patches.
cryptography = ">=3.4.7"
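The tightened `typing-extensions` bound comes from the use of `Self`; a minimal illustrative example of that pattern, assuming `typing-extensions` is installed (the class names are made up):

```python
# `Self` lets a base-class method declare that it returns an instance of the
# calling subclass; it landed in typing-extensions 4.0, hence the new lower bound.
from typing_extensions import Self


class Token:
    def copy(self) -> Self:
        return type(self)()


class RoomToken(Token):
    pass


copied = RoomToken().copy()  # type checkers infer RoomToken, not Token
```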
@@ -316,6 +316,10 @@ class Ratelimiter:
)

if not allowed:
# We pause for a bit here to stop clients from "tight-looping" on
# retrying their request.
await self.clock.sleep(0.5)

raise LimitExceededError(
limiter_name=self._limiter_name,
retry_after_ms=int(1000 * (time_allowed - time_now_s)),
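The half-second sleep added above is server-side back-pressure; a hedged sketch of the client behaviour it guards against (the `send_request` callable is hypothetical):

```python
# Without the server-side sleep, a client that naively retries on 429 can spin as
# fast as the network allows; the 0.5 s pause bounds that loop even when the client
# ignores the retry_after_ms hint.
import asyncio


async def naive_retry_loop(send_request):
    while True:
        status, retry_after_ms = await send_request()
        if status != 429:
            return status
        # A well-behaved client would `await asyncio.sleep(retry_after_ms / 1000)` here.
```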
@@ -68,6 +68,7 @@ from synapse.config._base import format_config_error
|
||||
from synapse.config.homeserver import HomeServerConfig
|
||||
from synapse.config.server import ListenerConfig, ManholeConfig, TCPListenerConfig
|
||||
from synapse.crypto import context_factory
|
||||
from synapse.events.auto_accept_invites import InviteAutoAccepter
|
||||
from synapse.events.presence_router import load_legacy_presence_router
|
||||
from synapse.handlers.auth import load_legacy_password_auth_providers
|
||||
from synapse.http.site import SynapseSite
|
||||
@@ -582,6 +583,11 @@ async def start(hs: "HomeServer") -> None:
|
||||
m = module(config, module_api)
|
||||
logger.info("Loaded module %s", m)
|
||||
|
||||
if hs.config.auto_accept_invites.enabled:
|
||||
# Start the local auto_accept_invites module.
|
||||
m = InviteAutoAccepter(hs.config.auto_accept_invites, module_api)
|
||||
logger.info("Loaded local module %s", m)
|
||||
|
||||
load_legacy_spam_checkers(hs)
|
||||
load_legacy_third_party_event_rules(hs)
|
||||
load_legacy_presence_router(hs)
|
||||
|
||||
@@ -23,6 +23,7 @@ from synapse.config import ( # noqa: F401
|
||||
api,
|
||||
appservice,
|
||||
auth,
|
||||
auto_accept_invites,
|
||||
background_updates,
|
||||
cache,
|
||||
captcha,
|
||||
@@ -120,6 +121,7 @@ class RootConfig:
|
||||
federation: federation.FederationConfig
|
||||
retention: retention.RetentionConfig
|
||||
background_updates: background_updates.BackgroundUpdateConfig
|
||||
auto_accept_invites: auto_accept_invites.AutoAcceptInvitesConfig
|
||||
|
||||
config_classes: List[Type["Config"]] = ...
|
||||
config_files: List[str]
|
||||
|
||||
synapse/config/auto_accept_invites.py (new file, 43 lines)
@@ -0,0 +1,43 @@
|
||||
#
|
||||
# This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
#
|
||||
# Copyright (C) 2024 New Vector, Ltd
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as
|
||||
# published by the Free Software Foundation, either version 3 of the
|
||||
# License, or (at your option) any later version.
|
||||
#
|
||||
# See the GNU Affero General Public License for more details:
|
||||
# <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
#
|
||||
# Originally licensed under the Apache License, Version 2.0:
|
||||
# <http://www.apache.org/licenses/LICENSE-2.0>.
|
||||
#
|
||||
# [This file includes modifications made by New Vector Limited]
|
||||
#
|
||||
#
|
||||
from typing import Any
|
||||
|
||||
from synapse.types import JsonDict
|
||||
|
||||
from ._base import Config
|
||||
|
||||
|
||||
class AutoAcceptInvitesConfig(Config):
|
||||
section = "auto_accept_invites"
|
||||
|
||||
def read_config(self, config: JsonDict, **kwargs: Any) -> None:
|
||||
auto_accept_invites_config = config.get("auto_accept_invites") or {}
|
||||
|
||||
self.enabled = auto_accept_invites_config.get("enabled", False)
|
||||
|
||||
self.accept_invites_only_for_direct_messages = auto_accept_invites_config.get(
|
||||
"only_for_direct_messages", False
|
||||
)
|
||||
|
||||
self.accept_invites_only_from_local_users = auto_accept_invites_config.get(
|
||||
"only_from_local_users", False
|
||||
)
|
||||
|
||||
self.worker_to_run_on = auto_accept_invites_config.get("worker_to_run_on")
|
||||
@@ -23,6 +23,7 @@ from .account_validity import AccountValidityConfig
|
||||
from .api import ApiConfig
|
||||
from .appservice import AppServiceConfig
|
||||
from .auth import AuthConfig
|
||||
from .auto_accept_invites import AutoAcceptInvitesConfig
|
||||
from .background_updates import BackgroundUpdateConfig
|
||||
from .cache import CacheConfig
|
||||
from .captcha import CaptchaConfig
|
||||
@@ -105,4 +106,5 @@ class HomeServerConfig(RootConfig):
|
||||
RedisConfig,
|
||||
ExperimentalConfig,
|
||||
BackgroundUpdateConfig,
|
||||
AutoAcceptInvitesConfig,
|
||||
]
|
||||
|
||||
synapse/events/auto_accept_invites.py (new file, 196 lines)
@@ -0,0 +1,196 @@
|
||||
#
|
||||
# This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
#
|
||||
# Copyright 2021 The Matrix.org Foundation C.I.C
|
||||
# Copyright (C) 2024 New Vector, Ltd
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as
|
||||
# published by the Free Software Foundation, either version 3 of the
|
||||
# License, or (at your option) any later version.
|
||||
#
|
||||
# See the GNU Affero General Public License for more details:
|
||||
# <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
#
|
||||
# Originally licensed under the Apache License, Version 2.0:
|
||||
# <http://www.apache.org/licenses/LICENSE-2.0>.
|
||||
#
|
||||
# [This file includes modifications made by New Vector Limited]
|
||||
#
|
||||
#
|
||||
import logging
|
||||
from http import HTTPStatus
|
||||
from typing import Any, Dict, Tuple
|
||||
|
||||
from synapse.api.constants import AccountDataTypes, EventTypes, Membership
|
||||
from synapse.api.errors import SynapseError
|
||||
from synapse.config.auto_accept_invites import AutoAcceptInvitesConfig
|
||||
from synapse.module_api import EventBase, ModuleApi, run_as_background_process
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class InviteAutoAccepter:
|
||||
def __init__(self, config: AutoAcceptInvitesConfig, api: ModuleApi):
|
||||
# Keep a reference to the Module API.
|
||||
self._api = api
|
||||
self._config = config
|
||||
|
||||
if not self._config.enabled:
|
||||
return
|
||||
|
||||
should_run_on_this_worker = config.worker_to_run_on == self._api.worker_name
|
||||
|
||||
if not should_run_on_this_worker:
|
||||
logger.info(
|
||||
"Not accepting invites on this worker (configured: %r, here: %r)",
|
||||
config.worker_to_run_on,
|
||||
self._api.worker_name,
|
||||
)
|
||||
return
|
||||
|
||||
logger.info(
|
||||
"Accepting invites on this worker (here: %r)", self._api.worker_name
|
||||
)
|
||||
|
||||
# Register the callback.
|
||||
self._api.register_third_party_rules_callbacks(
|
||||
on_new_event=self.on_new_event,
|
||||
)
|
||||
|
||||
async def on_new_event(self, event: EventBase, *args: Any) -> None:
|
||||
"""Listens for new events, and if the event is an invite for a local user then
|
||||
automatically accepts it.
|
||||
|
||||
Args:
|
||||
event: The incoming event.
|
||||
"""
|
||||
# Check if the event is an invite for a local user.
|
||||
is_invite_for_local_user = (
|
||||
event.type == EventTypes.Member
|
||||
and event.is_state()
|
||||
and event.membership == Membership.INVITE
|
||||
and self._api.is_mine(event.state_key)
|
||||
)
|
||||
|
||||
# Only accept invites for direct messages if the configuration mandates it.
|
||||
is_direct_message = event.content.get("is_direct", False)
|
||||
is_allowed_by_direct_message_rules = (
|
||||
not self._config.accept_invites_only_for_direct_messages
|
||||
or is_direct_message is True
|
||||
)
|
||||
|
||||
# Only accept invites from remote users if the configuration mandates it.
|
||||
is_from_local_user = self._api.is_mine(event.sender)
|
||||
is_allowed_by_local_user_rules = (
|
||||
not self._config.accept_invites_only_from_local_users
|
||||
or is_from_local_user is True
|
||||
)
|
||||
|
||||
if (
|
||||
is_invite_for_local_user
|
||||
and is_allowed_by_direct_message_rules
|
||||
and is_allowed_by_local_user_rules
|
||||
):
|
||||
# Make the user join the room. We run this as a background process to circumvent a race condition
|
||||
# that occurs when responding to invites over federation (see https://github.com/matrix-org/synapse-auto-accept-invite/issues/12)
|
||||
run_as_background_process(
|
||||
"retry_make_join",
|
||||
self._retry_make_join,
|
||||
event.state_key,
|
||||
event.state_key,
|
||||
event.room_id,
|
||||
"join",
|
||||
bg_start_span=False,
|
||||
)
|
||||
|
||||
if is_direct_message:
|
||||
# Mark this room as a direct message!
|
||||
await self._mark_room_as_direct_message(
|
||||
event.state_key, event.sender, event.room_id
|
||||
)
|
||||
|
||||
async def _mark_room_as_direct_message(
|
||||
self, user_id: str, dm_user_id: str, room_id: str
|
||||
) -> None:
|
||||
"""
|
||||
Marks a room (`room_id`) as a direct message with the counterparty `dm_user_id`
|
||||
from the perspective of the user `user_id`.
|
||||
|
||||
Args:
|
||||
user_id: the user for whom the membership is changing
|
||||
dm_user_id: the user performing the membership change
|
||||
room_id: room id of the room the user is invited to
|
||||
"""
|
||||
|
||||
# This is a dict of User IDs to tuples of Room IDs
|
||||
# (get_global will return a frozendict of tuples as it freezes the data,
|
||||
# but we should accept either frozen or unfrozen variants.)
|
||||
# Be careful: we convert the outer frozendict into a dict here,
|
||||
# but the contents of the dict are still frozen (tuples in lieu of lists,
|
||||
# etc.)
|
||||
dm_map: Dict[str, Tuple[str, ...]] = dict(
|
||||
await self._api.account_data_manager.get_global(
|
||||
user_id, AccountDataTypes.DIRECT
|
||||
)
|
||||
or {}
|
||||
)
|
||||
|
||||
if dm_user_id not in dm_map:
|
||||
dm_map[dm_user_id] = (room_id,)
|
||||
else:
|
||||
dm_rooms_for_user = dm_map[dm_user_id]
|
||||
assert isinstance(dm_rooms_for_user, (tuple, list))
|
||||
|
||||
dm_map[dm_user_id] = tuple(dm_rooms_for_user) + (room_id,)
|
||||
|
||||
await self._api.account_data_manager.put_global(
|
||||
user_id, AccountDataTypes.DIRECT, dm_map
|
||||
)
|
||||
|
||||
async def _retry_make_join(
|
||||
self, sender: str, target: str, room_id: str, new_membership: str
|
||||
) -> None:
|
||||
"""
|
||||
A function to retry sending the `make_join` request with an increasing backoff. This is
|
||||
implemented to work around a race condition when receiving invites over federation.
|
||||
|
||||
Args:
|
||||
sender: the user performing the membership change
|
||||
target: the user for whom the membership is changing
|
||||
room_id: room id of the room to join to
|
||||
new_membership: the type of membership event (in this case will be "join")
|
||||
"""
|
||||
|
||||
sleep = 0
|
||||
retries = 0
|
||||
join_event = None
|
||||
|
||||
while retries < 5:
|
||||
try:
|
||||
await self._api.sleep(sleep)
|
||||
join_event = await self._api.update_room_membership(
|
||||
sender=sender,
|
||||
target=target,
|
||||
room_id=room_id,
|
||||
new_membership=new_membership,
|
||||
)
|
||||
except SynapseError as e:
|
||||
if e.code == HTTPStatus.FORBIDDEN:
|
||||
logger.debug(
|
||||
f"Update_room_membership was forbidden. This can sometimes be expected for remote invites. Exception: {e}"
|
||||
)
|
||||
else:
|
||||
logger.warn(
|
||||
f"Update_room_membership raised the following unexpected (SynapseError) exception: {e}"
|
||||
)
|
||||
except Exception as e:
|
||||
logger.warn(
|
||||
f"Update_room_membership raised the following unexpected exception: {e}"
|
||||
)
|
||||
|
||||
sleep = 2**retries
|
||||
retries += 1
|
||||
|
||||
if join_event is not None:
|
||||
break
|
||||
@@ -159,20 +159,32 @@ class DeviceWorkerHandler:
|
||||
|
||||
@cancellable
|
||||
async def get_device_changes_in_shared_rooms(
|
||||
self, user_id: str, room_ids: StrCollection, from_token: StreamToken
|
||||
self,
|
||||
user_id: str,
|
||||
room_ids: StrCollection,
|
||||
from_token: StreamToken,
|
||||
now_token: Optional[StreamToken] = None,
|
||||
) -> Set[str]:
|
||||
"""Get the set of users whose devices have changed who share a room with
|
||||
the given user.
|
||||
"""
|
||||
now_device_lists_key = self.store.get_device_stream_token()
|
||||
if now_token:
|
||||
now_device_lists_key = now_token.device_list_key
|
||||
|
||||
changed_users = await self.store.get_device_list_changes_in_rooms(
|
||||
room_ids, from_token.device_list_key
|
||||
room_ids,
|
||||
from_token.device_list_key,
|
||||
now_device_lists_key,
|
||||
)
|
||||
|
||||
if changed_users is not None:
|
||||
# We also check if the given user has changed their device. If
|
||||
# they're in no rooms then the above query won't include them.
|
||||
changed = await self.store.get_users_whose_devices_changed(
|
||||
from_token.device_list_key, [user_id]
|
||||
from_token.device_list_key,
|
||||
[user_id],
|
||||
to_key=now_device_lists_key,
|
||||
)
|
||||
changed_users.update(changed)
|
||||
return changed_users
|
||||
@@ -190,7 +202,9 @@ class DeviceWorkerHandler:
|
||||
tracked_users.add(user_id)
|
||||
|
||||
changed = await self.store.get_users_whose_devices_changed(
|
||||
from_token.device_list_key, tracked_users
|
||||
from_token.device_list_key,
|
||||
tracked_users,
|
||||
to_key=now_device_lists_key,
|
||||
)
|
||||
|
||||
return changed
|
||||
@@ -892,6 +906,13 @@ class DeviceHandler(DeviceWorkerHandler):
|
||||
context=opentracing_context,
|
||||
)
|
||||
|
||||
await self.store.mark_redundant_device_lists_pokes(
|
||||
user_id=user_id,
|
||||
device_id=device_id,
|
||||
room_id=room_id,
|
||||
converted_upto_stream_id=stream_id,
|
||||
)
|
||||
|
||||
# Notify replication that we've updated the device list stream.
|
||||
self.notifier.notify_replication()
|
||||
|
||||
|
||||
@@ -817,7 +817,7 @@ class SsoHandler:
|
||||
server_name = profile["avatar_url"].split("/")[-2]
|
||||
media_id = profile["avatar_url"].split("/")[-1]
|
||||
if self._is_mine_server_name(server_name):
|
||||
media = await self._media_repo.store.get_local_media(media_id)
|
||||
media = await self._media_repo.store.get_local_media(media_id) # type: ignore[has-type]
|
||||
if media is not None and upload_name == media.upload_name:
|
||||
logger.info("skipping saving the user avatar")
|
||||
return True
|
||||
|
||||
@@ -279,23 +279,6 @@ class SyncResult:
|
||||
or self.device_lists
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def empty(next_batch: StreamToken) -> "SyncResult":
|
||||
"Return a new empty result"
|
||||
return SyncResult(
|
||||
next_batch=next_batch,
|
||||
presence=[],
|
||||
account_data=[],
|
||||
joined=[],
|
||||
invited=[],
|
||||
knocked=[],
|
||||
archived=[],
|
||||
to_device=[],
|
||||
device_lists=DeviceListUpdates(),
|
||||
device_one_time_keys_count={},
|
||||
device_unused_fallback_key_types=[],
|
||||
)
|
||||
|
||||
|
||||
class SyncHandler:
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
@@ -418,24 +401,6 @@ class SyncHandler:
|
||||
if context:
|
||||
context.tag = sync_label
|
||||
|
||||
if since_token is not None:
|
||||
# We need to make sure this worker has caught up with the token. If
|
||||
# this returns false it means we timed out waiting, and we should
|
||||
# just return an empty response.
|
||||
start = self.clock.time_msec()
|
||||
if not await self.notifier.wait_for_stream_token(since_token):
|
||||
logger.warning(
|
||||
"Timed out waiting for worker to catch up. Returning empty response"
|
||||
)
|
||||
return SyncResult.empty(since_token)
|
||||
|
||||
# If we've spent significant time waiting to catch up, take it off
|
||||
# the timeout.
|
||||
now = self.clock.time_msec()
|
||||
if now - start > 1_000:
|
||||
timeout -= now - start
|
||||
timeout = max(timeout, 0)
|
||||
|
||||
# if we have a since token, delete any to-device messages before that token
|
||||
# (since we now know that the device has received them)
|
||||
if since_token is not None:
|
||||
@@ -1921,38 +1886,14 @@ class SyncHandler:
|
||||
|
||||
# Step 1a, check for changes in devices of users we share a room
|
||||
# with
|
||||
#
|
||||
# We do this in two different ways depending on what we have cached.
|
||||
# If we already have a list of all the user that have changed since
|
||||
# the last sync then it's likely more efficient to compare the rooms
|
||||
# they're in with the rooms the syncing user is in.
|
||||
#
|
||||
# If we don't have that info cached then we get all the users that
|
||||
# share a room with our user and check if those users have changed.
|
||||
cache_result = self.store.get_cached_device_list_changes(
|
||||
since_token.device_list_key
|
||||
)
|
||||
if cache_result.hit:
|
||||
changed_users = cache_result.entities
|
||||
|
||||
result = await self.store.get_rooms_for_users(changed_users)
|
||||
|
||||
for changed_user_id, entries in result.items():
|
||||
# Check if the changed user shares any rooms with the user,
|
||||
# or if the changed user is the syncing user (as we always
|
||||
# want to include device list updates of their own devices).
|
||||
if user_id == changed_user_id or any(
|
||||
rid in joined_room_ids for rid in entries
|
||||
):
|
||||
users_that_have_changed.add(changed_user_id)
|
||||
else:
|
||||
users_that_have_changed = (
|
||||
await self._device_handler.get_device_changes_in_shared_rooms(
|
||||
user_id,
|
||||
sync_result_builder.joined_room_ids,
|
||||
from_token=since_token,
|
||||
)
|
||||
users_that_have_changed = (
|
||||
await self._device_handler.get_device_changes_in_shared_rooms(
|
||||
user_id,
|
||||
sync_result_builder.joined_room_ids,
|
||||
from_token=since_token,
|
||||
now_token=sync_result_builder.now_token,
|
||||
)
|
||||
)
|
||||
|
||||
# Step 1b, check for newly joined rooms
|
||||
for room_id in newly_joined_rooms:
|
||||
|
||||
@@ -763,29 +763,6 @@ class Notifier:
|
||||
|
||||
return result
|
||||
|
||||
async def wait_for_stream_token(self, stream_token: StreamToken) -> bool:
|
||||
"""Wait for this worker to catch up with the given stream token."""
|
||||
|
||||
start = self.clock.time_msec()
|
||||
while True:
|
||||
current_token = self.event_sources.get_current_token()
|
||||
if stream_token.is_before_or_eq(current_token):
|
||||
return True
|
||||
|
||||
now = self.clock.time_msec()
|
||||
|
||||
if now - start > 10_000:
|
||||
return False
|
||||
|
||||
logger.info(
|
||||
"Waiting for current token to reach %s; currently at %s",
|
||||
stream_token,
|
||||
current_token,
|
||||
)
|
||||
|
||||
# TODO: be better
|
||||
await self.clock.sleep(0.5)
|
||||
|
||||
async def _get_room_ids(
|
||||
self, user: UserID, explicit_room_id: Optional[str]
|
||||
) -> Tuple[StrCollection, bool]:
|
||||
|
||||
@@ -112,6 +112,15 @@ class ReplicationDataHandler:
|
||||
token: stream token for this batch of rows
|
||||
rows: a list of Stream.ROW_TYPE objects as returned by Stream.parse_row.
|
||||
"""
|
||||
all_room_ids: Set[str] = set()
|
||||
if stream_name == DeviceListsStream.NAME:
|
||||
if any(row.entity.startswith("@") and not row.is_signature for row in rows):
|
||||
prev_token = self.store.get_device_stream_token()
|
||||
all_room_ids = await self.store.get_all_device_list_changes(
|
||||
prev_token, token
|
||||
)
|
||||
self.store.device_lists_in_rooms_have_changed(all_room_ids, token)
|
||||
|
||||
self.store.process_replication_rows(stream_name, instance_name, token, rows)
|
||||
# NOTE: this must be called after process_replication_rows to ensure any
|
||||
# cache invalidations are first handled before any stream ID advances.
|
||||
@@ -146,12 +155,6 @@ class ReplicationDataHandler:
|
||||
StreamKeyType.TO_DEVICE, token, users=entities
|
||||
)
|
||||
elif stream_name == DeviceListsStream.NAME:
|
||||
all_room_ids: Set[str] = set()
|
||||
for row in rows:
|
||||
if row.entity.startswith("@") and not row.is_signature:
|
||||
room_ids = await self.store.get_rooms_for_user(row.entity)
|
||||
all_room_ids.update(room_ids)
|
||||
|
||||
# `all_room_ids` can be large, so let's wake up those streams in batches
|
||||
for batched_room_ids in batch_iter(all_room_ids, 100):
|
||||
self.notifier.on_new_event(
|
||||
|
||||
@@ -70,10 +70,7 @@ from synapse.types import (
|
||||
from synapse.util import json_decoder, json_encoder
|
||||
from synapse.util.caches.descriptors import cached, cachedList
|
||||
from synapse.util.caches.lrucache import LruCache
|
||||
from synapse.util.caches.stream_change_cache import (
|
||||
AllEntitiesChangedResult,
|
||||
StreamChangeCache,
|
||||
)
|
||||
from synapse.util.caches.stream_change_cache import StreamChangeCache
|
||||
from synapse.util.cancellation import cancellable
|
||||
from synapse.util.iterutils import batch_iter
|
||||
from synapse.util.stringutils import shortstr
|
||||
@@ -132,6 +129,20 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
|
||||
prefilled_cache=device_list_prefill,
|
||||
)
|
||||
|
||||
device_list_room_prefill, min_device_list_room_id = self.db_pool.get_cache_dict(
|
||||
db_conn,
|
||||
"device_lists_changes_in_room",
|
||||
entity_column="room_id",
|
||||
stream_column="stream_id",
|
||||
max_value=device_list_max,
|
||||
limit=10000,
|
||||
)
|
||||
self._device_list_room_stream_cache = StreamChangeCache(
|
||||
"DeviceListRoomStreamChangeCache",
|
||||
min_device_list_room_id,
|
||||
prefilled_cache=device_list_room_prefill,
|
||||
)
|
||||
|
||||
(
|
||||
user_signature_stream_prefill,
|
||||
user_signature_stream_list_id,
|
||||
@@ -209,6 +220,13 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
|
||||
row.entity, token
|
||||
)
|
||||
|
||||
def device_lists_in_rooms_have_changed(
|
||||
self, room_ids: StrCollection, token: int
|
||||
) -> None:
|
||||
"Record that device lists have changed in rooms"
|
||||
for room_id in room_ids:
|
||||
self._device_list_room_stream_cache.entity_has_changed(room_id, token)
|
||||
|
||||
def get_device_stream_token(self) -> int:
|
||||
return self._device_list_id_gen.get_current_token()
|
||||
|
||||
@@ -832,16 +850,6 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
|
||||
)
|
||||
return {device[0]: db_to_json(device[1]) for device in devices}
|
||||
|
||||
def get_cached_device_list_changes(
|
||||
self,
|
||||
from_key: int,
|
||||
) -> AllEntitiesChangedResult:
|
||||
"""Get set of users whose devices have changed since `from_key`, or None
|
||||
if that information is not in our cache.
|
||||
"""
|
||||
|
||||
return self._device_list_stream_cache.get_all_entities_changed(from_key)
|
||||
|
||||
@cancellable
|
||||
async def get_all_devices_changed(
|
||||
self,
|
||||
@@ -1457,7 +1465,7 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
|
||||
|
||||
@cancellable
|
||||
async def get_device_list_changes_in_rooms(
|
||||
self, room_ids: Collection[str], from_id: int
|
||||
self, room_ids: Collection[str], from_id: int, to_id: int
|
||||
) -> Optional[Set[str]]:
|
||||
"""Return the set of users whose devices have changed in the given rooms
|
||||
since the given stream ID.
|
||||
@@ -1473,9 +1481,15 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
|
||||
if min_stream_id > from_id:
|
||||
return None
|
||||
|
||||
changed_room_ids = self._device_list_room_stream_cache.get_entities_changed(
|
||||
room_ids, from_id
|
||||
)
|
||||
if not changed_room_ids:
|
||||
return set()
|
||||
|
||||
sql = """
|
||||
SELECT DISTINCT user_id FROM device_lists_changes_in_room
|
||||
WHERE {clause} AND stream_id >= ?
|
||||
WHERE {clause} AND stream_id > ? AND stream_id <= ?
|
||||
"""
|
||||
|
||||
def _get_device_list_changes_in_rooms_txn(
|
||||
@@ -1487,11 +1501,12 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
|
||||
return {user_id for user_id, in txn}
|
||||
|
||||
changes = set()
|
||||
for chunk in batch_iter(room_ids, 1000):
|
||||
for chunk in batch_iter(changed_room_ids, 1000):
|
||||
clause, args = make_in_list_sql_clause(
|
||||
self.database_engine, "room_id", chunk
|
||||
)
|
||||
args.append(from_id)
|
||||
args.append(to_id)
|
||||
|
||||
changes |= await self.db_pool.runInteraction(
|
||||
"get_device_list_changes_in_rooms",
|
||||
@@ -1502,6 +1517,34 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
|
||||
|
||||
return changes
|
||||
|
||||
async def get_all_device_list_changes(self, from_id: int, to_id: int) -> Set[str]:
|
||||
"""Return the set of rooms where devices have changed since the given
|
||||
stream ID.
|
||||
|
||||
Will raise an exception if the given stream ID is too old.
|
||||
"""
|
||||
|
||||
min_stream_id = await self._get_min_device_lists_changes_in_room()
|
||||
|
||||
if min_stream_id > from_id:
|
||||
raise Exception("stream ID is too old")
|
||||
|
||||
sql = """
|
||||
SELECT DISTINCT room_id FROM device_lists_changes_in_room
|
||||
WHERE stream_id > ? AND stream_id <= ?
|
||||
"""
|
||||
|
||||
def _get_all_device_list_changes_txn(
|
||||
txn: LoggingTransaction,
|
||||
) -> Set[str]:
|
||||
txn.execute(sql, (from_id, to_id))
|
||||
return {room_id for room_id, in txn}
|
||||
|
||||
return await self.db_pool.runInteraction(
|
||||
"get_all_device_list_changes",
|
||||
_get_all_device_list_changes_txn,
|
||||
)
|
||||
|
||||
async def get_device_list_changes_in_room(
|
||||
self, room_id: str, min_stream_id: int
|
||||
) -> Collection[Tuple[str, str]]:
|
||||
@@ -1962,8 +2005,8 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
|
||||
async def add_device_change_to_streams(
|
||||
self,
|
||||
user_id: str,
|
||||
device_ids: Collection[str],
|
||||
room_ids: Collection[str],
|
||||
device_ids: StrCollection,
|
||||
room_ids: StrCollection,
|
||||
) -> Optional[int]:
|
||||
"""Persist that a user's devices have been updated, and which hosts
|
||||
(if any) should be poked.
|
||||
@@ -2118,12 +2161,36 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
|
||||
},
|
||||
)
|
||||
|
||||
async def mark_redundant_device_lists_pokes(
|
||||
self,
|
||||
user_id: str,
|
||||
device_id: str,
|
||||
room_id: str,
|
||||
converted_upto_stream_id: int,
|
||||
) -> None:
|
||||
"""If we've calculated the outbound pokes for a given room/device list
|
||||
update, mark any subsequent changes as already converted"""
|
||||
|
||||
sql = """
|
||||
UPDATE device_lists_changes_in_room
|
||||
SET converted_to_destinations = true
|
||||
WHERE stream_id > ? AND user_id = ? AND device_id = ?
|
||||
AND room_id = ? AND NOT converted_to_destinations
|
||||
"""
|
||||
|
||||
def mark_redundant_device_lists_pokes_txn(txn: LoggingTransaction) -> None:
|
||||
txn.execute(sql, (converted_upto_stream_id, user_id, device_id, room_id))
|
||||
|
||||
return await self.db_pool.runInteraction(
|
||||
"mark_redundant_device_lists_pokes", mark_redundant_device_lists_pokes_txn
|
||||
)
|
||||
|
||||
def _add_device_outbound_room_poke_txn(
|
||||
self,
|
||||
txn: LoggingTransaction,
|
||||
user_id: str,
|
||||
device_ids: Iterable[str],
|
||||
room_ids: Collection[str],
|
||||
device_ids: StrCollection,
|
||||
room_ids: StrCollection,
|
||||
stream_ids: List[int],
|
||||
context: Dict[str, str],
|
||||
) -> None:
|
||||
@@ -2161,6 +2228,10 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
|
||||
],
|
||||
)
|
||||
|
||||
txn.call_after(
|
||||
self.device_lists_in_rooms_have_changed, room_ids, max(stream_ids)
|
||||
)
|
||||
|
||||
async def get_uncoverted_outbound_room_pokes(
|
||||
self, start_stream_id: int, start_room_id: str, limit: int = 10
|
||||
) -> List[Tuple[str, str, str, int, Optional[Dict[str, str]]]]:
|
||||
|
||||
@@ -95,10 +95,6 @@ class DeltaState:
|
||||
to_insert: StateMap[str]
|
||||
no_longer_in_room: bool = False
|
||||
|
||||
def is_noop(self) -> bool:
|
||||
"""Whether this state delta is actually empty"""
|
||||
return not self.to_delete and not self.to_insert and not self.no_longer_in_room
|
||||
|
||||
|
||||
class PersistEventsStore:
|
||||
"""Contains all the functions for writing events to the database.
|
||||
@@ -1021,9 +1017,6 @@ class PersistEventsStore:
|
||||
) -> None:
|
||||
"""Update the current state stored in the datatabase for the given room"""
|
||||
|
||||
if state_delta.is_noop():
|
||||
return
|
||||
|
||||
async with self._stream_id_gen.get_next() as stream_ordering:
|
||||
await self.db_pool.runInteraction(
|
||||
"update_current_state",
|
||||
|
||||
@@ -204,11 +204,7 @@ class EventsWorkerStore(SQLBaseStore):
            notifier=hs.get_replication_notifier(),
            stream_name="events",
            instance_name=hs.get_instance_name(),
            tables=[
                ("events", "instance_name", "stream_ordering"),
                ("current_state_delta_stream", "instance_name", "stream_id"),
                ("ex_outlier_stream", "instance_name", "event_stream_ordering"),
            ],
            tables=[("events", "instance_name", "stream_ordering")],
            sequence_name="events_stream_seq",
            writers=hs.config.worker.writers.events,
        )
@@ -218,10 +214,7 @@ class EventsWorkerStore(SQLBaseStore):
            notifier=hs.get_replication_notifier(),
            stream_name="backfill",
            instance_name=hs.get_instance_name(),
            tables=[
                ("events", "instance_name", "stream_ordering"),
                ("ex_outlier_stream", "instance_name", "event_stream_ordering"),
            ],
            tables=[("events", "instance_name", "stream_ordering")],
            sequence_name="events_backfill_stream_seq",
            positive=False,
            writers=hs.config.worker.writers.events,
@@ -237,10 +230,6 @@ class EventsWorkerStore(SQLBaseStore):
                "events",
                "stream_ordering",
                is_writer=hs.get_instance_name() in hs.config.worker.writers.events,
                extra_tables=[
                    ("current_state_delta_stream", "stream_id"),
                    ("ex_outlier_stream", "event_stream_ordering"),
                ],
            )
            self._backfill_id_gen = StreamIdGenerator(
                db_conn,

@@ -48,7 +48,7 @@ import attr
from immutabledict import immutabledict
from signedjson.key import decode_verify_key_bytes
from signedjson.types import VerifyKey
from typing_extensions import Self, TypedDict
from typing_extensions import TypedDict
from unpaddedbase64 import decode_base64
from zope.interface import Interface

@@ -515,27 +515,6 @@ class AbstractMultiWriterStreamToken(metaclass=abc.ABCMeta):
        # at `self.stream`.
        return self.instance_map.get(instance_name, self.stream)

    def is_before_or_eq(self, other_token: Self) -> bool:
        """Wether this token is before the other token, i.e. every constituent
        part is before the other.

        Essentially it is `self <= other`.

        Note: if `self.is_before_or_eq(other_token) is False` then that does not
        imply that the reverse is True.
        """
        if self.stream > other_token.stream:
            return False

        instances = self.instance_map.keys() | other_token.instance_map.keys()
        for instance in instances:
            if self.instance_map.get(
                instance, self.stream
            ) > other_token.instance_map.get(instance, other_token.stream):
                return False

        return True


@attr.s(frozen=True, slots=True, order=False)
class RoomStreamToken(AbstractMultiWriterStreamToken):
@@ -1029,41 +1008,6 @@ class StreamToken:
        """Returns the stream ID for the given key."""
        return getattr(self, key.value)

    def is_before_or_eq(self, other_token: "StreamToken") -> bool:
        """Wether this token is before the other token, i.e. every constituent
        part is before the other.

        Essentially it is `self <= other`.

        Note: if `self.is_before_or_eq(other_token) is False` then that does not
        imply that the reverse is True.
        """

        for _, key in StreamKeyType.__members__.items():
            if key == StreamKeyType.TYPING:
                # Typing stream is allowed to "reset", and so comparisons don't
                # really make sense as is.
                # TODO: Figure out a better way of tracking resets.
                continue

            self_value = self.get_field(key)
            other_value = other_token.get_field(key)

            if isinstance(self_value, RoomStreamToken):
                assert isinstance(other_value, RoomStreamToken)
                if not self_value.is_before_or_eq(other_value):
                    return False
            elif isinstance(self_value, MultiWriterStreamToken):
                assert isinstance(other_value, MultiWriterStreamToken)
                if not self_value.is_before_or_eq(other_value):
                    return False
            else:
                assert isinstance(other_value, int)
                if self_value > other_value:
                    return False

        return True


StreamToken.START = StreamToken(
    RoomStreamToken(stream=0), 0, 0, MultiWriterStreamToken(stream=0), 0, 0, 0, 0, 0, 0

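Both `is_before_or_eq` docstrings above stress that the comparison is only a partial order: `a.is_before_or_eq(b)` returning False does not imply that `b.is_before_or_eq(a)` is True, because different writer instances can each be ahead on their own stream. A toy illustration of that property, using plain ints and dicts in place of the token classes; it mirrors the per-instance comparison in the method shown above but is illustrative only.

# Two "tokens" where each writer instance has its own position. Neither token
# dominates the other, so the comparison is False in both directions.
from typing import Mapping


def is_before_or_eq(
    stream: int,
    instance_map: Mapping[str, int],
    other_stream: int,
    other_instance_map: Mapping[str, int],
) -> bool:
    # Every constituent part must be <= the other token's corresponding part.
    if stream > other_stream:
        return False
    for instance in instance_map.keys() | other_instance_map.keys():
        if instance_map.get(instance, stream) > other_instance_map.get(
            instance, other_stream
        ):
            return False
    return True


a = (5, {"worker1": 10, "worker2": 3})
b = (5, {"worker1": 7, "worker2": 9})

print(is_before_or_eq(*a, *b))  # False: worker1 is further ahead in `a`
print(is_before_or_eq(*b, *a))  # False: worker2 is further ahead in `b`
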
@@ -116,8 +116,9 @@ class TestRatelimiter(unittest.HomeserverTestCase):
        # Should raise
        with self.assertRaises(LimitExceededError) as context:
            self.get_success_or_raise(
                limiter.ratelimit(None, key="test_id", _time_now_s=5)
                limiter.ratelimit(None, key="test_id", _time_now_s=5), by=0.5
            )

        self.assertEqual(context.exception.retry_after_ms, 5000)

        # Shouldn't raise
@@ -192,7 +193,7 @@ class TestRatelimiter(unittest.HomeserverTestCase):
        # Second attempt, 1s later, will fail
        with self.assertRaises(LimitExceededError) as context:
            self.get_success_or_raise(
                limiter.ratelimit(None, key=("test_id",), _time_now_s=1)
                limiter.ratelimit(None, key=("test_id",), _time_now_s=1), by=0.5
            )
        self.assertEqual(context.exception.retry_after_ms, 9000)

tests/events/test_auto_accept_invites.py (new file, 657 lines)
@@ -0,0 +1,657 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2021 The Matrix.org Foundation C.I.C
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#
import asyncio
from asyncio import Future
from http import HTTPStatus
from typing import Any, Awaitable, Dict, List, Optional, Tuple, TypeVar, cast
from unittest.mock import Mock

import attr
from parameterized import parameterized

from twisted.test.proto_helpers import MemoryReactor

from synapse.api.constants import EventTypes
from synapse.api.errors import SynapseError
from synapse.config.auto_accept_invites import AutoAcceptInvitesConfig
from synapse.events.auto_accept_invites import InviteAutoAccepter
from synapse.federation.federation_base import event_from_pdu_json
from synapse.handlers.sync import JoinedSyncResult, SyncRequestKey, SyncVersion
from synapse.module_api import ModuleApi
from synapse.rest import admin
from synapse.rest.client import login, room
from synapse.server import HomeServer
from synapse.types import StreamToken, create_requester
from synapse.util import Clock

from tests.handlers.test_sync import generate_sync_config
from tests.unittest import (
    FederatingHomeserverTestCase,
    HomeserverTestCase,
    TestCase,
    override_config,
)


class AutoAcceptInvitesTestCase(FederatingHomeserverTestCase):
    """
    Integration test cases for auto-accepting invites.
    """

    servlets = [
        admin.register_servlets,
        login.register_servlets,
        room.register_servlets,
    ]

    def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
        hs = self.setup_test_homeserver()
        self.handler = hs.get_federation_handler()
        self.store = hs.get_datastores().main
        return hs

    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
        self.sync_handler = self.hs.get_sync_handler()
        self.module_api = hs.get_module_api()

    @parameterized.expand(
        [
            [False],
            [True],
        ]
    )
    @override_config(
        {
            "auto_accept_invites": {
                "enabled": True,
            },
        }
    )
    def test_auto_accept_invites(self, direct_room: bool) -> None:
        """Test that a user automatically joins a room when invited, if the
        module is enabled.
        """
        # A local user who sends an invite
        inviting_user_id = self.register_user("inviter", "pass")
        inviting_user_tok = self.login("inviter", "pass")

        # A local user who receives an invite
        invited_user_id = self.register_user("invitee", "pass")
        self.login("invitee", "pass")

        # Create a room and send an invite to the other user
        room_id = self.helper.create_room_as(
            inviting_user_id,
            is_public=False,
            tok=inviting_user_tok,
        )

        self.helper.invite(
            room_id,
            inviting_user_id,
            invited_user_id,
            tok=inviting_user_tok,
            extra_data={"is_direct": direct_room},
        )

        # Check that the invite receiving user has automatically joined the room when syncing
        join_updates, _ = sync_join(self, invited_user_id)
        self.assertEqual(len(join_updates), 1)

        join_update: JoinedSyncResult = join_updates[0]
        self.assertEqual(join_update.room_id, room_id)

    @override_config(
        {
            "auto_accept_invites": {
                "enabled": False,
            },
        }
    )
    def test_module_not_enabled(self) -> None:
        """Test that a user does not automatically join a room when invited,
        if the module is not enabled.
        """
        # A local user who sends an invite
        inviting_user_id = self.register_user("inviter", "pass")
        inviting_user_tok = self.login("inviter", "pass")

        # A local user who receives an invite
        invited_user_id = self.register_user("invitee", "pass")
        self.login("invitee", "pass")

        # Create a room and send an invite to the other user
        room_id = self.helper.create_room_as(
            inviting_user_id, is_public=False, tok=inviting_user_tok
        )

        self.helper.invite(
            room_id,
            inviting_user_id,
            invited_user_id,
            tok=inviting_user_tok,
        )

        # Check that the invite receiving user has not automatically joined the room when syncing
        join_updates, _ = sync_join(self, invited_user_id)
        self.assertEqual(len(join_updates), 0)

    @override_config(
        {
            "auto_accept_invites": {
                "enabled": True,
            },
        }
    )
    def test_invite_from_remote_user(self) -> None:
        """Test that an invite from a remote user results in the invited user
        automatically joining the room.
        """
        # A remote user who sends the invite
        remote_server = "otherserver"
        remote_user = "@otheruser:" + remote_server

        # A local user who creates the room
        creator_user_id = self.register_user("creator", "pass")
        creator_user_tok = self.login("creator", "pass")

        # A local user who receives an invite
        invited_user_id = self.register_user("invitee", "pass")
        self.login("invitee", "pass")

        room_id = self.helper.create_room_as(
            room_creator=creator_user_id, tok=creator_user_tok
        )
        room_version = self.get_success(self.store.get_room_version(room_id))

        invite_event = event_from_pdu_json(
            {
                "type": EventTypes.Member,
                "content": {"membership": "invite"},
                "room_id": room_id,
                "sender": remote_user,
                "state_key": invited_user_id,
                "depth": 32,
                "prev_events": [],
                "auth_events": [],
                "origin_server_ts": self.clock.time_msec(),
            },
            room_version,
        )
        self.get_success(
            self.handler.on_invite_request(
                remote_server,
                invite_event,
                invite_event.room_version,
            )
        )

        # Check that the invite receiving user has automatically joined the room when syncing
        join_updates, _ = sync_join(self, invited_user_id)
        self.assertEqual(len(join_updates), 1)

        join_update: JoinedSyncResult = join_updates[0]
        self.assertEqual(join_update.room_id, room_id)

    @parameterized.expand(
        [
            [False, False],
            [True, True],
        ]
    )
    @override_config(
        {
            "auto_accept_invites": {
                "enabled": True,
                "only_for_direct_messages": True,
            },
        }
    )
    def test_accept_invite_direct_message(
        self,
        direct_room: bool,
        expect_auto_join: bool,
    ) -> None:
        """Tests that, if the module is configured to only accept DM invites, invites to DM rooms are still
        automatically accepted. Otherwise they are rejected.
        """
        # A local user who sends an invite
        inviting_user_id = self.register_user("inviter", "pass")
        inviting_user_tok = self.login("inviter", "pass")

        # A local user who receives an invite
        invited_user_id = self.register_user("invitee", "pass")
        self.login("invitee", "pass")

        # Create a room and send an invite to the other user
        room_id = self.helper.create_room_as(
            inviting_user_id,
            is_public=False,
            tok=inviting_user_tok,
        )

        self.helper.invite(
            room_id,
            inviting_user_id,
            invited_user_id,
            tok=inviting_user_tok,
            extra_data={"is_direct": direct_room},
        )

        if expect_auto_join:
            # Check that the invite receiving user has automatically joined the room when syncing
            join_updates, _ = sync_join(self, invited_user_id)
            self.assertEqual(len(join_updates), 1)

            join_update: JoinedSyncResult = join_updates[0]
            self.assertEqual(join_update.room_id, room_id)
        else:
            # Check that the invite receiving user has not automatically joined the room when syncing
            join_updates, _ = sync_join(self, invited_user_id)
            self.assertEqual(len(join_updates), 0)

    @parameterized.expand(
        [
            [False, True],
            [True, False],
        ]
    )
    @override_config(
        {
            "auto_accept_invites": {
                "enabled": True,
                "only_from_local_users": True,
            },
        }
    )
    def test_accept_invite_local_user(
        self, remote_inviter: bool, expect_auto_join: bool
    ) -> None:
        """Tests that, if the module is configured to only accept invites from local users, invites
        from local users are still automatically accepted. Otherwise they are rejected.
        """
        # A local user who sends an invite
        creator_user_id = self.register_user("inviter", "pass")
        creator_user_tok = self.login("inviter", "pass")

        # A local user who receives an invite
        invited_user_id = self.register_user("invitee", "pass")
        self.login("invitee", "pass")

        # Create a room and send an invite to the other user
        room_id = self.helper.create_room_as(
            creator_user_id, is_public=False, tok=creator_user_tok
        )

        if remote_inviter:
            room_version = self.get_success(self.store.get_room_version(room_id))

            # A remote user who sends the invite
            remote_server = "otherserver"
            remote_user = "@otheruser:" + remote_server

            invite_event = event_from_pdu_json(
                {
                    "type": EventTypes.Member,
                    "content": {"membership": "invite"},
                    "room_id": room_id,
                    "sender": remote_user,
                    "state_key": invited_user_id,
                    "depth": 32,
                    "prev_events": [],
                    "auth_events": [],
                    "origin_server_ts": self.clock.time_msec(),
                },
                room_version,
            )
            self.get_success(
                self.handler.on_invite_request(
                    remote_server,
                    invite_event,
                    invite_event.room_version,
                )
            )
        else:
            self.helper.invite(
                room_id,
                creator_user_id,
                invited_user_id,
                tok=creator_user_tok,
            )

        if expect_auto_join:
            # Check that the invite receiving user has automatically joined the room when syncing
            join_updates, _ = sync_join(self, invited_user_id)
            self.assertEqual(len(join_updates), 1)

            join_update: JoinedSyncResult = join_updates[0]
            self.assertEqual(join_update.room_id, room_id)
        else:
            # Check that the invite receiving user has not automatically joined the room when syncing
            join_updates, _ = sync_join(self, invited_user_id)
            self.assertEqual(len(join_updates), 0)


_request_key = 0


def generate_request_key() -> SyncRequestKey:
    global _request_key
    _request_key += 1
    return ("request_key", _request_key)


def sync_join(
    testcase: HomeserverTestCase,
    user_id: str,
    since_token: Optional[StreamToken] = None,
) -> Tuple[List[JoinedSyncResult], StreamToken]:
    """Perform a sync request for the given user and return the user join updates
    they've received, as well as the next_batch token.

    This method assumes testcase.sync_handler points to the homeserver's sync handler.

    Args:
        testcase: The testcase that is currently being run.
        user_id: The ID of the user to generate a sync response for.
        since_token: An optional token to indicate from at what point to sync from.

    Returns:
        A tuple containing a list of join updates, and the sync response's
        next_batch token.
    """
    requester = create_requester(user_id)
    sync_config = generate_sync_config(requester.user.to_string())
    sync_result = testcase.get_success(
        testcase.hs.get_sync_handler().wait_for_sync_for_user(
            requester,
            sync_config,
            SyncVersion.SYNC_V2,
            generate_request_key(),
            since_token,
        )
    )

    return sync_result.joined, sync_result.next_batch


class InviteAutoAccepterInternalTestCase(TestCase):
    """
    Test cases which exercise the internals of the InviteAutoAccepter.
    """

    def setUp(self) -> None:
        self.module = create_module()
        self.user_id = "@peter:test"
        self.invitee = "@lesley:test"
        self.remote_invitee = "@thomas:remote"

        # We know our module API is a mock, but mypy doesn't.
        self.mocked_update_membership: Mock = self.module._api.update_room_membership  # type: ignore[assignment]

    async def test_accept_invite_with_failures(self) -> None:
        """Tests that receiving an invite for a local user makes the module attempt to
        make the invitee join the room. This test verifies that it works if the call to
        update membership returns exceptions before successfully completing and returning an event.
        """
        invite = MockEvent(
            sender="@inviter:test",
            state_key="@invitee:test",
            type="m.room.member",
            content={"membership": "invite"},
        )

        join_event = MockEvent(
            sender="someone",
            state_key="someone",
            type="m.room.member",
            content={"membership": "join"},
        )
        # the first two calls raise an exception while the third call is successful
        self.mocked_update_membership.side_effect = [
            SynapseError(HTTPStatus.FORBIDDEN, "Forbidden"),
            SynapseError(HTTPStatus.FORBIDDEN, "Forbidden"),
            make_awaitable(join_event),
        ]

        # Stop mypy from complaining that we give on_new_event a MockEvent rather than an
        # EventBase.
        await self.module.on_new_event(event=invite)  # type: ignore[arg-type]

        await self.retry_assertions(
            self.mocked_update_membership,
            3,
            sender=invite.state_key,
            target=invite.state_key,
            room_id=invite.room_id,
            new_membership="join",
        )

    async def test_accept_invite_failures(self) -> None:
        """Tests that receiving an invite for a local user makes the module attempt to
        make the invitee join the room. This test verifies that if the update_membership call
        fails consistently, _retry_make_join will break the loop after the set number of retries and
        execution will continue.
        """
        invite = MockEvent(
            sender=self.user_id,
            state_key=self.invitee,
            type="m.room.member",
            content={"membership": "invite"},
        )
        self.mocked_update_membership.side_effect = SynapseError(
            HTTPStatus.FORBIDDEN, "Forbidden"
        )

        # Stop mypy from complaining that we give on_new_event a MockEvent rather than an
        # EventBase.
        await self.module.on_new_event(event=invite)  # type: ignore[arg-type]

        await self.retry_assertions(
            self.mocked_update_membership,
            5,
            sender=invite.state_key,
            target=invite.state_key,
            room_id=invite.room_id,
            new_membership="join",
        )

    async def test_not_state(self) -> None:
        """Tests that receiving an invite that's not a state event does nothing."""
        invite = MockEvent(
            sender=self.user_id, type="m.room.member", content={"membership": "invite"}
        )

        # Stop mypy from complaining that we give on_new_event a MockEvent rather than an
        # EventBase.
        await self.module.on_new_event(event=invite)  # type: ignore[arg-type]

        self.mocked_update_membership.assert_not_called()

    async def test_not_invite(self) -> None:
        """Tests that receiving a membership update that's not an invite does nothing."""
        invite = MockEvent(
            sender=self.user_id,
            state_key=self.user_id,
            type="m.room.member",
            content={"membership": "join"},
        )

        # Stop mypy from complaining that we give on_new_event a MockEvent rather than an
        # EventBase.
        await self.module.on_new_event(event=invite)  # type: ignore[arg-type]

        self.mocked_update_membership.assert_not_called()

    async def test_not_membership(self) -> None:
        """Tests that receiving a state event that's not a membership update does
        nothing.
        """
        invite = MockEvent(
            sender=self.user_id,
            state_key=self.user_id,
            type="org.matrix.test",
            content={"foo": "bar"},
        )

        # Stop mypy from complaining that we give on_new_event a MockEvent rather than an
        # EventBase.
        await self.module.on_new_event(event=invite)  # type: ignore[arg-type]

        self.mocked_update_membership.assert_not_called()

    def test_config_parse(self) -> None:
        """Tests that a correct configuration parses."""
        config = {
            "auto_accept_invites": {
                "enabled": True,
                "only_for_direct_messages": True,
                "only_from_local_users": True,
            }
        }
        parsed_config = AutoAcceptInvitesConfig()
        parsed_config.read_config(config)

        self.assertTrue(parsed_config.enabled)
        self.assertTrue(parsed_config.accept_invites_only_for_direct_messages)
        self.assertTrue(parsed_config.accept_invites_only_from_local_users)

    def test_runs_on_only_one_worker(self) -> None:
        """
        Tests that the module only runs on the specified worker.
        """
        # By default, we run on the main process...
        main_module = create_module(
            config_override={"auto_accept_invites": {"enabled": True}}, worker_name=None
        )
        cast(
            Mock, main_module._api.register_third_party_rules_callbacks
        ).assert_called_once()

        # ...and not on other workers (like synchrotrons)...
        sync_module = create_module(worker_name="synchrotron42")
        cast(
            Mock, sync_module._api.register_third_party_rules_callbacks
        ).assert_not_called()

        # ...unless we configured them to be the designated worker.
        specified_module = create_module(
            config_override={
                "auto_accept_invites": {
                    "enabled": True,
                    "worker_to_run_on": "account_data1",
                }
            },
            worker_name="account_data1",
        )
        cast(
            Mock, specified_module._api.register_third_party_rules_callbacks
        ).assert_called_once()

    async def retry_assertions(
        self, mock: Mock, call_count: int, **kwargs: Any
    ) -> None:
        """
        This is a hacky way to ensure that the assertions are not called before the other coroutine
        has a chance to call `update_room_membership`. It catches the exception caused by a failure,
        and sleeps the thread before retrying, up until 5 tries.

        Args:
            call_count: the number of times the mock should have been called
            mock: the mocked function we want to assert on
            kwargs: keyword arguments to assert that the mock was called with
        """

        i = 0
        while i < 5:
            try:
                # Check that the mocked method is called the expected amount of times and with the right
                # arguments to attempt to make the user join the room.
                mock.assert_called_with(**kwargs)
                self.assertEqual(call_count, mock.call_count)
                break
            except AssertionError as e:
                i += 1
                if i == 5:
                    # we've used up the tries, force the test to fail as we've already caught the exception
                    self.fail(e)
                await asyncio.sleep(1)


@attr.s(auto_attribs=True)
class MockEvent:
    """Mocks an event. Only exposes properties the module uses."""

    sender: str
    type: str
    content: Dict[str, Any]
    room_id: str = "!someroom"
    state_key: Optional[str] = None

    def is_state(self) -> bool:
        """Checks if the event is a state event by checking if it has a state key."""
        return self.state_key is not None

    @property
    def membership(self) -> str:
        """Extracts the membership from the event. Should only be called on an event
        that's a membership event, and will raise a KeyError otherwise.
        """
        membership: str = self.content["membership"]
        return membership


T = TypeVar("T")
TV = TypeVar("TV")


async def make_awaitable(value: T) -> T:
    return value


def make_multiple_awaitable(result: TV) -> Awaitable[TV]:
    """
    Makes an awaitable, suitable for mocking an `async` function.
    This uses Futures as they can be awaited multiple times so can be returned
    to multiple callers.
    """
    future: Future[TV] = Future()
    future.set_result(result)
    return future


def create_module(
    config_override: Optional[Dict[str, Any]] = None, worker_name: Optional[str] = None
) -> InviteAutoAccepter:
    # Create a mock based on the ModuleApi spec, but override some mocked functions
    # because some capabilities are needed for running the tests.
    module_api = Mock(spec=ModuleApi)
    module_api.is_mine.side_effect = lambda a: a.split(":")[1] == "test"
    module_api.worker_name = worker_name
    module_api.sleep.return_value = make_multiple_awaitable(None)

    if config_override is None:
        config_override = {}

    config = AutoAcceptInvitesConfig()
    config.read_config(config_override)

    return InviteAutoAccepter(config, module_api)

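The two failure tests above describe the module's retry behaviour: `update_room_membership` is retried until it succeeds, and the loop gives up after five attempts (the tests assert three calls when the third attempt succeeds, and five when every attempt fails). The sketch below shows that retry pattern in isolation; it is not the real `InviteAutoAccepter` implementation (imported above from `synapse.events.auto_accept_invites`), just the shape the tests exercise, with hypothetical parameter names and defaults.

# Illustrative retry loop, assuming a budget of five attempts with a sleep
# between tries. Failures are swallowed so that event handling continues,
# matching the "execution will continue" behaviour described in the test.
import asyncio
from typing import Any, Awaitable, Callable


async def retry_make_join(
    update_membership: Callable[..., Awaitable[Any]],
    *,
    sender: str,
    target: str,
    room_id: str,
    max_retries: int = 5,
    sleep_s: float = 1.0,
) -> None:
    for attempt in range(max_retries):
        try:
            await update_membership(
                sender=sender, target=target, room_id=room_id, new_membership="join"
            )
            return
        except Exception:
            # Give up quietly once the retry budget is spent.
            if attempt == max_retries - 1:
                return
            await asyncio.sleep(sleep_s)
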
@@ -483,6 +483,7 @@ class FederationTestCase(unittest.FederatingHomeserverTestCase):
                    event.room_version,
                ),
                exc=LimitExceededError,
                by=0.5,
            )

    def _build_and_send_join_event(

@@ -70,6 +70,7 @@ class TestJoinsLimitedByPerRoomRateLimiter(FederatingHomeserverTestCase):
                action=Membership.JOIN,
            ),
            LimitExceededError,
            by=0.5,
        )

    @override_config({"rc_joins_per_room": {"per_second": 0, "burst_count": 2}})
@@ -206,6 +207,7 @@ class TestJoinsLimitedByPerRoomRateLimiter(FederatingHomeserverTestCase):
                remote_room_hosts=[self.OTHER_SERVER_NAME],
            ),
            LimitExceededError,
            by=0.5,
        )

    # TODO: test that remote joins to a room are rate limited.
@@ -273,6 +275,7 @@ class TestReplicatedJoinsLimitedByPerRoomRateLimiter(BaseMultiWorkerStreamTestCa
                action=Membership.JOIN,
            ),
            LimitExceededError,
            by=0.5,
        )

        # Try to join as Chris on the original worker. Should get denied because Alice
@@ -285,6 +288,7 @@ class TestReplicatedJoinsLimitedByPerRoomRateLimiter(BaseMultiWorkerStreamTestCa
                action=Membership.JOIN,
            ),
            LimitExceededError,
            by=0.5,
        )

@@ -170,6 +170,7 @@ class RestHelper:
        targ: Optional[str] = None,
        expect_code: int = HTTPStatus.OK,
        tok: Optional[str] = None,
        extra_data: Optional[dict] = None,
    ) -> JsonDict:
        return self.change_membership(
            room=room,
@@ -178,6 +179,7 @@ class RestHelper:
            tok=tok,
            membership=Membership.INVITE,
            expect_code=expect_code,
            extra_data=extra_data,
        )

    def join(

@@ -85,6 +85,7 @@ from twisted.web.server import Request, Site

from synapse.config.database import DatabaseConnectionConfig
from synapse.config.homeserver import HomeServerConfig
from synapse.events.auto_accept_invites import InviteAutoAccepter
from synapse.events.presence_router import load_legacy_presence_router
from synapse.handlers.auth import load_legacy_password_auth_providers
from synapse.http.site import SynapseRequest
@@ -1156,6 +1157,11 @@ def setup_test_homeserver(
    for module, module_config in hs.config.modules.loaded_modules:
        module(config=module_config, api=module_api)

    if hs.config.auto_accept_invites.enabled:
        # Start the local auto_accept_invites module.
        m = InviteAutoAccepter(hs.config.auto_accept_invites, module_api)
        logger.info("Loaded local module %s", m)

    load_legacy_spam_checkers(hs)
    load_legacy_third_party_event_rules(hs)
    load_legacy_presence_router(hs)

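The hunk above starts the auto-accept module in the test homeserver whenever `auto_accept_invites.enabled` is set. For reference, the configuration shape these tests feed to `AutoAcceptInvitesConfig.read_config` looks like the sketch below; only the keys exercised in this diff are shown, and in a real deployment they would presumably live under an `auto_accept_invites` section of the homeserver config rather than a Python dict.

# Configuration shape exercised by the new tests (all keys taken from the
# test file above; values here are arbitrary examples).
config = {
    "auto_accept_invites": {
        "enabled": True,
        "only_for_direct_messages": False,
        "only_from_local_users": False,
        # Optional: pin the module to a single worker, as in
        # test_runs_on_only_one_worker above.
        "worker_to_run_on": "account_data1",
    }
}
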
@@ -637,13 +637,13 @@ class HomeserverTestCase(TestCase):
        return self.successResultOf(deferred)

    def get_failure(
        self, d: Awaitable[Any], exc: Type[_ExcType]
        self, d: Awaitable[Any], exc: Type[_ExcType], by: float = 0.0
    ) -> _TypedFailure[_ExcType]:
        """
        Run a Deferred and get a Failure from it. The failure must be of the type `exc`.
        """
        deferred: Deferred[Any] = ensureDeferred(d)  # type: ignore[arg-type]
        self.pump()
        self.pump(by)
        return self.failureResultOf(deferred, exc)

    def get_success_or_raise(self, d: Awaitable[TV], by: float = 0.0) -> TV:
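
The final hunk gives `get_failure` the same optional `by` argument that `get_success_or_raise` already takes, so a test can advance the fake reactor before collecting the failure. A hypothetical usage sketch follows; the test class, coroutine, and delay are invented for illustration, and only the `get_failure(..., by=...)` call shape comes from the diff.

# Hypothetical test: the coroutine only fails after 0.3s of (fake) reactor
# time, so the failure is only observable once the reactor has been pumped
# far enough via the new `by=` argument.
from tests.unittest import HomeserverTestCase


class GetFailurePumpExampleTestCase(HomeserverTestCase):
    def test_delayed_failure(self) -> None:
        async def delayed_failure() -> None:
            await self.clock.sleep(0.3)
            raise RuntimeError("boom")

        failure = self.get_failure(delayed_failure(), RuntimeError, by=0.5)
        self.assertIn("boom", str(failure.value))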