Mirror of https://github.com/element-hq/synapse.git, synced 2025-12-09 01:30:18 +00:00

Compare commits: ts/spam-er...dmr/oidc-c (80 commits)
Commits:
bfa3ae3d3b, 44ac98422c, 245d5d66d3, 26cb343d40, 9ee0dab921, be593787de, fd92608b32, 4c66f55279, e2d9072543, 88f603f845,
bbaba3c27f, 3ba21b61a0, 3097172832, 2179c6376a, 51d9e34412, 9b9b51be6a, 348b53fe9c, 8f1b555ec6, 6ff99e3bea, a1cb05b3e8,
d38c73e9ab, 0fce474a40, 19d79b6ebe, 3d8839c30c, 50ae4eafe1, 682431efbe, 635f0d916b, df4963548b, a167304c8b, deca250e3f,
d24a1486e5, 1aa30f7b3e, c22314c4e8, d4713d3e33, 8afb7b55d0, 37935b5183, 0d17357fcd, 182ca78a12, 5331fb5b47, 6edefef602,
942c30b16b, 24b590de32, a34a41f135, 1402159bb8, 32ef24fbd7, fcf951d5dc, 44d7bb13c3, 5c3d525cad, 1fe202a1a3, 6d8d1218dd,
3eafee629d, e24c11afd6, 83be72d76c, 4ea546067d, 3ce15cc7be, b4eb163434, 8060034612, a5c26750b5, 86a515ccbf, 6f04ae7033,
c3b232cb39, 8689230a55, cde8af9a49, e8ae472d3b, 9013104429, aec69d2481, 39bed28b28, c9fc2c0d22, 57f6c496d0, 17e1eb7749,
de1e599b9d, 409573f6d0, bf7ce92bf7, db10f2c037, 6ee61b9052, d38d242411, a559c8b0d9, 9d8e380d2e, dffecade7d, a4c75918b3

CHANGES.md (38 lines changed)

@@ -1,7 +1,18 @@
-Synapse 1.59.0rc1 (2022-05-10)
-==============================
+Synapse 1.59.1 (2022-05-18)
+===========================
 
-This release makes several changes that server administrators should be aware of:
+This release fixes a long-standing issue which could prevent Synapse's user directory from updating properly.
+
+Bugfixes
+--------
+
+- Fix a long-standing bug where the user directory background process would fail to make forward progress if a user included a null codepoint in their display name or avatar. Contributed by Nick @ Beeper. ([\#12762](https://github.com/matrix-org/synapse/issues/12762))
+
+
+Synapse 1.59.0 (2022-05-17)
+===========================
+
+Synapse 1.59 makes several changes that server administrators should be aware of:
+
 - Device name lookup over federation is now disabled by default. ([\#12616](https://github.com/matrix-org/synapse/issues/12616))
 - The `synapse.app.appservice` and `synapse.app.user_dir` worker application types are now deprecated. ([\#12452](https://github.com/matrix-org/synapse/issues/12452), [\#12654](https://github.com/matrix-org/synapse/issues/12654))

@@ -10,6 +21,27 @@ See [the upgrade notes](https://github.com/matrix-org/synapse/blob/develop/docs/
 
 Additionally, this release removes the non-standard `m.login.jwt` login type from Synapse. It can be replaced with `org.matrix.login.jwt` for identical behaviour. This is only used if `jwt_config.enabled` is set to `true` in the configuration. ([\#12597](https://github.com/matrix-org/synapse/issues/12597))
 
 
+Bugfixes
+--------
+
+- Fix DB performance regression introduced in Synapse 1.59.0rc2. ([\#12745](https://github.com/matrix-org/synapse/issues/12745))
+
+
+Synapse 1.59.0rc2 (2022-05-16)
+==============================
+
+Note: this release candidate includes a performance regression which can cause database disruption. Other release candidates in the v1.59.0 series are not affected, and a fix will be included in the v1.59.0 final release.
+
+Bugfixes
+--------
+
+- Fix a bug introduced in Synapse 1.58.0 where `/sync` would fail if the most recent event in a room was rejected. ([\#12729](https://github.com/matrix-org/synapse/issues/12729))
+
+
+Synapse 1.59.0rc1 (2022-05-10)
+==============================
+
 Features
 --------

changelog.d/10533.misc (new file, 1 line)
@@ -0,0 +1 @@
Improve event caching mechanism to avoid having multiple copies of an event in memory at a time.

changelog.d/12498.misc (new file, 1 line)
@@ -0,0 +1 @@
Preparation for faster-room-join work: return subsets of room state which we already have, immediately.

changelog.d/12513.feature (new file, 1 line)
@@ -0,0 +1 @@
Measure the time taken in spam-checking callbacks and expose those measurements as metrics.

changelog.d/12567.misc (new file, 1 line)
@@ -0,0 +1 @@
Replace string literal instances of stream key types with typed constants.

changelog.d/12618.feature (new file, 1 line)
@@ -0,0 +1 @@
Add a `default_power_level_content_override` config option to set default room power levels per room preset.

changelog.d/12623.feature (new file, 1 line)
@@ -0,0 +1 @@
Add support for [MSC3787: Allowing knocks to restricted rooms](https://github.com/matrix-org/matrix-spec-proposals/pull/3787).

changelog.d/12673.feature (new file, 1 line)
@@ -0,0 +1 @@
Synapse will now reload [cache config](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#caching) when it receives a [SIGHUP](https://en.wikipedia.org/wiki/SIGHUP) signal.

changelog.d/12680.misc (new file, 1 line)
@@ -0,0 +1 @@
Remove code which updates unused database column `application_services_state.last_txn`.

changelog.d/12691.misc (new file, 1 line)
@@ -0,0 +1 @@
Remove an unneeded class in the push code.

changelog.d/12693.misc (new file, 1 line)
@@ -0,0 +1 @@
Consolidate parsing of relation information from events.

changelog.d/12696.bugfix (new file, 1 line)
@@ -0,0 +1 @@
Fix a long-standing bug where an empty room would be created when a user with an insufficient power level tried to upgrade a room.

changelog.d/12698.misc (new file, 1 line)
@@ -0,0 +1 @@
Respect the `@cancellable` flag for `DirectServe{Html,Json}Resource`s.

changelog.d/12699.misc (new file, 1 line)
@@ -0,0 +1 @@
Respect the `@cancellable` flag for `RestServlet`s and `BaseFederationServlet`s.

changelog.d/12700.misc (new file, 1 line)
@@ -0,0 +1 @@
Respect the `@cancellable` flag for `ReplicationEndpoint`s.

changelog.d/12701.feature (new file, 1 line)
@@ -0,0 +1 @@
Add a config option to allow for auto-tuning of caches.

changelog.d/12705.misc (new file, 1 line)
@@ -0,0 +1 @@
Complain if a federation endpoint has the `@cancellable` flag, since some of the wrapper code may not handle cancellation correctly yet.

changelog.d/12708.misc (new file, 1 line)
@@ -0,0 +1 @@
Enable cancellation of `GET /rooms/$room_id/members`, `GET /rooms/$room_id/state` and `GET /rooms/$room_id/state/$event_type/*` requests.

changelog.d/12709.removal (new file, 1 line)
@@ -0,0 +1 @@
Require a body in POST requests to `/rooms/{roomId}/receipt/{receiptType}/{eventId}`, as required by the [Matrix specification](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3roomsroomidreceiptreceipttypeeventid). This breaks compatibility with Element Android 1.2.0 and earlier: users of those clients will be unable to send read receipts.

changelog.d/12711.misc (new file, 1 line)
@@ -0,0 +1 @@
Optimize private read receipt filtering.

changelog.d/12713.bugfix (new file, 1 line)
@@ -0,0 +1 @@
Fix a bug introduced in Synapse 1.30.0 where empty rooms could be automatically created if a monthly active users limit is set.

changelog.d/12715.doc (new file, 1 line)
@@ -0,0 +1 @@
Fix a typo in the Media Admin API documentation.

changelog.d/12716.misc (new file, 1 line)
@@ -0,0 +1 @@
Add type annotations to increase the number of modules passing `disallow-untyped-defs`.

changelog.d/12717.misc (new file, 1 line)
@@ -0,0 +1 @@
Add some type hints to datastore.

changelog.d/12720.misc (new file, 1 line)
@@ -0,0 +1 @@
Drop the logging level of status messages for the URL preview cache expiry job from INFO to DEBUG.

changelog.d/12721.bugfix (new file, 1 line)
@@ -0,0 +1 @@
Fix push to dismiss notifications when read on another client. Contributed by @SpiritCroc @ Beeper.

changelog.d/12723.misc (new file, 1 line)
@@ -0,0 +1 @@
Downgrade some OIDC errors to warnings in the logs, to reduce the noise of Sentry reports.

changelog.d/12726.misc (new file, 1 line)
@@ -0,0 +1 @@
Add type annotations to increase the number of modules passing `disallow-untyped-defs`.

changelog.d/12727.doc (new file, 1 line)
@@ -0,0 +1 @@
Update the OpenID Connect example for Keycloak to be compatible with newer versions of Keycloak. Contributed by @nhh.

changelog.d/12731.misc (new file, 1 line)
@@ -0,0 +1 @@
Update configs used by Complement to allow more invites/3PID validations during tests.

changelog.d/12734.misc (new file, 1 line)
@@ -0,0 +1 @@
Tidy up and type-hint the database engine modules.

changelog.d/12742.doc (new file, 1 line)
@@ -0,0 +1 @@
Fix a typo in the server listener documentation.

changelog.d/12747.bugfix (new file, 1 line)
@@ -0,0 +1 @@
Fix poor database performance when reading the cache invalidation stream for large servers with lots of workers.

changelog.d/12748.doc (new file, 1 line)
@@ -0,0 +1 @@
Link to the configuration manual from the welcome page of the documentation.

changelog.d/12749.doc (new file, 1 line)
@@ -0,0 +1 @@
Fix a typo in the `run_background_tasks_on` option name in the configuration manual documentation.

changelog.d/12753.misc (new file, 1 line)
@@ -0,0 +1 @@
Add some type hints to datastore.

changelog.d/12759.doc (new file, 1 line)
@@ -0,0 +1 @@
Add information regarding the `rc_invites` ratelimiting option to the configuration docs.

changelog.d/12761.doc (new file, 1 line)
@@ -0,0 +1 @@
Add documentation for cancellation of request processing.

changelog.d/12762.misc (new file, 1 line)
@@ -0,0 +1 @@
Fix a long-standing bug where the user directory background process would fail to make forward progress if a user included a null codepoint in their display name or avatar.

changelog.d/12765.doc (new file, 1 line)
@@ -0,0 +1 @@
Recommend using docker to run tests against postgres.

changelog.d/12769.misc (new file, 1 line)
@@ -0,0 +1 @@
Tweak the mypy plugin so that `@cached` can accept `on_invalidate=None`.

changelog.d/12770.bugfix (new file, 1 line)
@@ -0,0 +1 @@
Delete events from the `federation_inbound_events_staging` table when a room is purged through the admin API.

changelog.d/12772.misc (new file, 1 line)
@@ -0,0 +1 @@
Move methods that call `add_push_rule` to the `PushRuleStore` class.

changelog.d/12774.misc (new file, 1 line)
@@ -0,0 +1 @@
Make handling of the federation Authorization header (more) compliant with RFC 7230.

changelog.d/12775.misc (new file, 1 line)
@@ -0,0 +1 @@
Refactor `resolve_state_groups_for_events` to not pull out full state when no state resolution happens.

changelog.d/12779.bugfix (new file, 1 line)
@@ -0,0 +1 @@
Give a meaningful error message when a client tries to create a room with an invalid alias localpart.

changelog.d/12781.misc (new file, 1 line)
@@ -0,0 +1 @@
Do not keep going if there are 5 back-to-back background update failures.

changelog.d/12783.misc (new file, 1 line)
@@ -0,0 +1 @@
Fix federation when using the demo scripts.

changelog.d/12785.doc (new file, 1 line)
@@ -0,0 +1 @@
Fix invalid YAML syntax in the example documentation for the `url_preview_accept_language` config option.

debian/changelog (18 lines changed, vendored)

@@ -1,3 +1,21 @@
+matrix-synapse-py3 (1.59.1) stable; urgency=medium
+
+  * New Synapse release 1.59.1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Wed, 18 May 2022 11:41:46 +0100
+
+matrix-synapse-py3 (1.59.0) stable; urgency=medium
+
+  * New Synapse release 1.59.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 17 May 2022 10:26:50 +0100
+
+matrix-synapse-py3 (1.59.0~rc2) stable; urgency=medium
+
+  * New Synapse release 1.59.0rc2.
+
+ -- Synapse Packaging team <packages@matrix.org>  Mon, 16 May 2022 12:52:15 +0100
+
 matrix-synapse-py3 (1.59.0~rc1) stable; urgency=medium
 
   * Adjust how the `exported-requirements.txt` file is generated as part of

@@ -12,6 +12,7 @@ export PYTHONPATH
 
 echo "$PYTHONPATH"
 
+# Create servers which listen on HTTP at 808x and HTTPS at 848x.
 for port in 8080 8081 8082; do
     echo "Starting server on port $port... "

@@ -19,10 +20,12 @@ for port in 8080 8081 8082; do
     mkdir -p demo/$port
     pushd demo/$port || exit
 
-    # Generate the configuration for the homeserver at localhost:848x.
+    # Generate the configuration for the homeserver at localhost:848x, note that
+    # the homeserver name needs to match the HTTPS listening port for federation
+    # to work properly.
     python3 -m synapse.app.homeserver \
         --generate-config \
-        --server-name "localhost:$port" \
+        --server-name "localhost:$https_port" \
        --config-path "$port.config" \
        --report-stats no

@@ -53,6 +53,18 @@ rc_joins:
   per_second: 9999
   burst_count: 9999
 
+rc_3pid_validation:
+  per_second: 1000
+  burst_count: 1000
+
+rc_invites:
+  per_room:
+    per_second: 1000
+    burst_count: 1000
+  per_user:
+    per_second: 1000
+    burst_count: 1000
+
 federation_rr_transactions_per_room_per_second: 9999
 
 ## Experimental Features ##

@@ -87,6 +87,18 @@ rc_joins:
   per_second: 9999
   burst_count: 9999
 
+rc_3pid_validation:
+  per_second: 1000
+  burst_count: 1000
+
+rc_invites:
+  per_room:
+    per_second: 1000
+    burst_count: 1000
+  per_user:
+    per_second: 1000
+    burst_count: 1000
+
 federation_rr_transactions_per_room_per_second: 9999
 
 ## API Configuration ##

@@ -89,6 +89,7 @@
   - [Database Schemas](development/database_schema.md)
   - [Experimental features](development/experimental_features.md)
   - [Synapse Architecture]()
+    - [Cancellation](development/synapse_architecture/cancellation.md)
   - [Log Contexts](log_contexts.md)
   - [Replication](replication.md)
   - [TCP Replication](tcp_replication.md)

@@ -289,7 +289,7 @@ POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>
 
 URL Parameters
 
-* `unix_timestamp_in_ms`: string representing a positive integer - Unix timestamp in milliseconds.
+* `before_ts`: string representing a positive integer - Unix timestamp in milliseconds.
   All cached media that was last accessed before this timestamp will be removed.
 
 Response:

@@ -206,7 +206,32 @@ This means that we need to run our unit tests against PostgreSQL too. Our CI does
 this automatically for pull requests and release candidates, but it's sometimes
 useful to reproduce this locally.
 
-To do so, [configure Postgres](../postgres.md) and run `trial` with the
+#### Using Docker
+
+The easiest way to do so is to run Postgres via a docker container. In one
+terminal:
+
+```shell
+docker run --rm -e POSTGRES_PASSWORD=mysecretpassword -e POSTGRES_USER=postgres -e POSTGRES_DB=postgress -p 5432:5432 postgres:14
+```
+
+If you see an error like
+
+```
+docker: Error response from daemon: driver failed programming external connectivity on endpoint nice_ride (b57bbe2e251b70015518d00c9981e8cb8346b5c785250341a6c53e3c899875f1): Error starting userland proxy: listen tcp4 0.0.0.0:5432: bind: address already in use.
+```
+
+then something is already bound to port 5432. You're probably already running postgres locally.
+
+Once you have a postgres server running, invoke `trial` in a second terminal:
+
+```shell
+SYNAPSE_POSTGRES=1 SYNAPSE_POSTGRES_HOST=127.0.0.1 SYNAPSE_POSTGRES_USER=postgres SYNAPSE_POSTGRES_PASSWORD=mysecretpassword poetry run trial tests
+```
+
+#### Using an existing Postgres installation
+
+If you have postgres already installed on your system, you can run `trial` with the
 following environment variables matching your configuration:
 
 - `SYNAPSE_POSTGRES` to anything nonempty

@@ -229,8 +254,8 @@ You don't need to specify the host, user, port or password if your Postgres
 server is set to authenticate you over the UNIX socket (i.e. if the `psql` command
 works without further arguments).
 
-Your Postgres account needs to be able to create databases.
+Your Postgres account needs to be able to create databases; see the postgres
+docs for [`ALTER ROLE`](https://www.postgresql.org/docs/current/sql-alterrole.html).
 
 ## Run the integration tests ([Sytest](https://github.com/matrix-org/sytest)).

@@ -5,7 +5,7 @@
 
 Requires you to have a [Synapse development environment setup](https://matrix-org.github.io/synapse/develop/development/contributing_guide.html#4-install-the-dependencies).
 
 The demo setup allows running three federation Synapse servers, with server
-names `localhost:8080`, `localhost:8081`, and `localhost:8082`.
+names `localhost:8480`, `localhost:8481`, and `localhost:8482`.
 
 You can access them via any Matrix client over HTTP at `localhost:8080`,
 `localhost:8081`, and `localhost:8082` or over HTTPS at `localhost:8480`,

@@ -20,9 +20,10 @@ and the servers are configured in a highly insecure way, including:
 The servers are configured to store their data under `demo/8080`, `demo/8081`, and
 `demo/8082`. This includes configuration, logs, SQLite databases, and media.
 
-Note that when joining a public room on a different HS via "#foo:bar.net", then
-you are (in the current impl) joining a room with room_id "foo". This means that
-it won't work if your HS already has a room with that name.
+Note that when joining a public room on a different homeserver via "#foo:bar.net",
+then you are (in the current implementation) joining a room with room_id "foo".
+This means that it won't work if your homeserver already has a room with that
+name.
 
 ## Using the demo scripts

docs/development/synapse_architecture/cancellation.md (new file, 392 lines)
@@ -0,0 +1,392 @@
# Cancellation
Sometimes, requests take a long time to service and clients disconnect
before Synapse produces a response. To avoid wasting resources, Synapse
can cancel request processing for select endpoints marked with the
`@cancellable` decorator.

Synapse makes use of Twisted's `Deferred.cancel()` feature to make
cancellation work. The `@cancellable` decorator does nothing by itself
and merely acts as a flag, signalling to developers and other code alike
that a method can be cancelled.
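
In practice, such a flag can be implemented by setting an attribute on the decorated method and checking it from the request-handling machinery. A minimal sketch (illustrative, not necessarily Synapse's exact code):

```python
from typing import Any, Callable, TypeVar

F = TypeVar("F", bound=Callable[..., Any])

def cancellable(method: F) -> F:
    """Flag a servlet method as cancellable; the decorator does nothing else."""
    method.cancellable = True  # type: ignore[attr-defined]
    return method

# Request-handling code can later test the flag, e.g.:
#     if getattr(handler, "cancellable", False):
#         request_deferred.cancel()
```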

## Enabling cancellation for an endpoint
1. Check that the endpoint method, and any `async` functions in its call
   tree handle cancellation correctly. See
   [Handling cancellation correctly](#handling-cancellation-correctly)
   for a list of things to look out for.
2. Add the `@cancellable` decorator to the `on_GET/POST/PUT/DELETE`
   method. It's not recommended to make non-`GET` methods cancellable,
   since cancellation midway through some database updates is less
   likely to be handled correctly.

## Mechanics
There are two stages to cancellation: downward propagation of a
`cancel()` call, followed by upwards propagation of a `CancelledError`
out of a blocked `await`.
Both Twisted and asyncio have a cancellation mechanism.

|         | Method              | Exception                               | Exception inherits from |
|---------|---------------------|-----------------------------------------|-------------------------|
| Twisted | `Deferred.cancel()` | `twisted.internet.defer.CancelledError` | `Exception` (!)         |
| asyncio | `Task.cancel()`     | `asyncio.CancelledError`                | `BaseException`         |
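
The difference in the last column can be checked directly; a small sketch, assuming Twisted is installed:

```python
import asyncio

from twisted.internet import defer

# Twisted's CancelledError is a plain Exception subclass, so a broad
# `except Exception:` will catch (and potentially swallow) it.
print(issubclass(defer.CancelledError, Exception))        # True

# asyncio's CancelledError derives from BaseException on Python 3.8+,
# so `except Exception:` lets it propagate as intended.
print(issubclass(asyncio.CancelledError, Exception))      # False
print(issubclass(asyncio.CancelledError, BaseException))  # True
```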

### Deferred.cancel()
When Synapse starts handling a request, it runs the async method
responsible for handling it using `defer.ensureDeferred`, which returns
a `Deferred`. For example:

```python
def do_something() -> Deferred[None]:
    ...

@cancellable
async def on_GET() -> Tuple[int, JsonDict]:
    d = make_deferred_yieldable(do_something())
    await d
    return 200, {}

request = defer.ensureDeferred(on_GET())
```

When a client disconnects early, Synapse checks for the presence of the
`@cancellable` decorator on `on_GET`. Since `on_GET` is cancellable,
`Deferred.cancel()` is called on the `Deferred` from
`defer.ensureDeferred`, i.e. `request`. Twisted knows which `Deferred`
`request` is waiting on and passes the `cancel()` call on to `d`.

The `Deferred` being waited on, `d`, may have its own handling for
`cancel()` and pass the call on to other `Deferred`s.

Eventually, a `Deferred` handles the `cancel()` call by resolving itself
with a `CancelledError`.

### CancelledError
The `CancelledError` gets raised out of the `await` and bubbles up, as
per normal Python exception handling.

## Handling cancellation correctly
In general, when writing code that might be subject to cancellation, two
things must be considered:
* The effect of `CancelledError`s raised out of `await`s.
* The effect of `Deferred`s being `cancel()`ed.

Examples of code that handles cancellation incorrectly include:
* `try-except` blocks which swallow `CancelledError`s.
* Code that shares the same `Deferred`, which may be cancelled, between
  multiple requests.
* Code that starts some processing that's exempt from cancellation, but
  uses a logging context from cancellable code. The logging context
  will be finished upon cancellation, while the uncancelled processing
  is still using it.

Some common patterns are listed below in more detail.

### `async` function calls
Most functions in Synapse are relatively straightforward from a
cancellation standpoint: they don't do anything with `Deferred`s and
purely call and `await` other `async` functions.

An `async` function handles cancellation correctly if its own code
handles cancellation correctly and all the `async` functions it calls
handle cancellation correctly. For example:
```python
async def do_two_things() -> None:
    check_something()
    await do_something()
    await do_something_else()
```
`do_two_things` handles cancellation correctly if `do_something` and
`do_something_else` handle cancellation correctly.

That is, when checking whether a function handles cancellation
correctly, its implementation and all its `async` function calls need to
be checked, recursively.

As `check_something` is not `async`, it does not need to be checked.

### CancelledErrors
Because Twisted's `CancelledError`s are `Exception`s, it's easy to
accidentally catch and suppress them. Care must be taken to ensure that
`CancelledError`s are allowed to propagate upwards.

<table width="100%">
<tr>
<td width="50%" valign="top">

**Bad**:
```python
try:
    await do_something()
except Exception:
    # `CancelledError` gets swallowed here.
    logger.info(...)
```
</td>
<td width="50%" valign="top">

**Good**:
```python
try:
    await do_something()
except CancelledError:
    raise
except Exception:
    logger.info(...)
```
</td>
</tr>
<tr>
<td width="50%" valign="top">

**OK**:
```python
try:
    check_something()
    # A `CancelledError` won't ever be raised here.
except Exception:
    logger.info(...)
```
</td>
<td width="50%" valign="top">

**Good**:
```python
try:
    await do_something()
except ValueError:
    logger.info(...)
```
</td>
</tr>
</table>

#### defer.gatherResults
`defer.gatherResults` produces a `Deferred` which:
* broadcasts `cancel()` calls to every `Deferred` being waited on.
* wraps the first exception it sees in a `FirstError`.

Together, this means that `CancelledError`s will be wrapped in
a `FirstError` unless unwrapped. Such `FirstError`s are liable to be
swallowed, so they must be unwrapped.
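
A small standalone illustration of the wrapping, using only Twisted's public API:

```python
from twisted.internet import defer
from twisted.python.failure import Failure

d1: defer.Deferred = defer.Deferred()
d2: defer.Deferred = defer.Deferred()
gathered = defer.gatherResults([d1, d2], consumeErrors=True)

def report(failure: Failure) -> None:
    # The CancelledError from `d1` arrives wrapped in a FirstError.
    print(failure.type)  # <class 'twisted.internet.defer.FirstError'>

gathered.addErrback(report)
d1.cancel()
```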
<table width="100%">
<tr>
<td width="50%" valign="top">

**Bad**:
```python
async def do_something() -> None:
    await make_deferred_yieldable(
        defer.gatherResults([...], consumeErrors=True)
    )

try:
    await do_something()
except CancelledError:
    raise
except Exception:
    # `FirstError(CancelledError)` gets swallowed here.
    logger.info(...)
```

</td>
<td width="50%" valign="top">

**Good**:
```python
async def do_something() -> None:
    await make_deferred_yieldable(
        defer.gatherResults([...], consumeErrors=True)
    ).addErrback(unwrapFirstError)

try:
    await do_something()
except CancelledError:
    raise
except Exception:
    logger.info(...)
```
</td>
</tr>
</table>

### Creation of `Deferred`s
If a function creates a `Deferred`, the effect of cancelling it must be considered. `Deferred`s that get shared are likely to have unintended behaviour when cancelled.

<table width="100%">
<tr>
<td width="50%" valign="top">

**Bad**:
```python
cache: Dict[str, Deferred[None]] = {}

def wait_for_room(room_id: str) -> Deferred[None]:
    deferred = cache.get(room_id)
    if deferred is None:
        deferred = Deferred()
        cache[room_id] = deferred
    # `deferred` can have multiple waiters.
    # All of them will observe a `CancelledError`
    # if any one of them is cancelled.
    return make_deferred_yieldable(deferred)

# Request 1
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
# Request 2
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
```
</td>
<td width="50%" valign="top">

**Good**:
```python
cache: Dict[str, Deferred[None]] = {}

def wait_for_room(room_id: str) -> Deferred[None]:
    deferred = cache.get(room_id)
    if deferred is None:
        deferred = Deferred()
        cache[room_id] = deferred
    # `deferred` will never be cancelled now.
    # A `CancelledError` will still come out of
    # the `await`.
    # `delay_cancellation` may also be used.
    return make_deferred_yieldable(stop_cancellation(deferred))

# Request 1
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
# Request 2
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
```
</td>
</tr>
<tr>
<td width="50%" valign="top">
</td>
<td width="50%" valign="top">

**Good**:
```python
cache: Dict[str, List[Deferred[None]]] = {}

def wait_for_room(room_id: str) -> Deferred[None]:
    if room_id not in cache:
        cache[room_id] = []
    # Each request gets its own `Deferred` to wait on.
    deferred = Deferred()
    cache[room_id].append(deferred)
    return make_deferred_yieldable(deferred)

# Request 1
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
# Request 2
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
```
</td>
</table>

### Uncancelled processing
Some `async` functions may kick off some `async` processing which is
intentionally protected from cancellation, by `stop_cancellation` or
other means. If the `async` processing inherits the logcontext of the
request which initiated it, care must be taken to ensure that the
logcontext is not finished before the `async` processing completes.

<table width="100%">
<tr>
<td width="50%" valign="top">

**Bad**:
```python
cache: Optional[ObservableDeferred[None]] = None

async def do_something_else(
    to_resolve: Deferred[None]
) -> None:
    await ...
    logger.info("done!")
    to_resolve.callback(None)

async def do_something() -> None:
    if not cache:
        to_resolve = Deferred()
        cache = ObservableDeferred(to_resolve)
        # `do_something_else` will never be cancelled and
        # can outlive the `request-1` logging context.
        run_in_background(do_something_else, to_resolve)

    await make_deferred_yieldable(cache.observe())

with LoggingContext("request-1"):
    await do_something()
```
</td>
<td width="50%" valign="top">

**Good**:
```python
cache: Optional[ObservableDeferred[None]] = None

async def do_something_else(
    to_resolve: Deferred[None]
) -> None:
    await ...
    logger.info("done!")
    to_resolve.callback(None)

async def do_something() -> None:
    if not cache:
        to_resolve = Deferred()
        cache = ObservableDeferred(to_resolve)
        run_in_background(do_something_else, to_resolve)
        # We'll wait until `do_something_else` is
        # done before raising a `CancelledError`.
        await make_deferred_yieldable(
            delay_cancellation(cache.observe())
        )
    else:
        await make_deferred_yieldable(cache.observe())

with LoggingContext("request-1"):
    await do_something()
```
</td>
</tr>
<tr>
<td width="50%">

**OK**:
```python
cache: Optional[ObservableDeferred[None]] = None

async def do_something_else(
    to_resolve: Deferred[None]
) -> None:
    await ...
    logger.info("done!")
    to_resolve.callback(None)

async def do_something() -> None:
    if not cache:
        to_resolve = Deferred()
        cache = ObservableDeferred(to_resolve)
        # `do_something_else` will get its own independent
        # logging context. `request-1` will not count any
        # metrics from `do_something_else`.
        run_as_background_process(
            "do_something_else",
            do_something_else,
            to_resolve,
        )

    await make_deferred_yieldable(cache.observe())

with LoggingContext("request-1"):
    await do_something()
```
</td>
<td width="50%">
</td>
</tr>
</table>

@@ -18,17 +18,6 @@ async def check_event_for_spam(event: "synapse.events.EventBase") -> Union[bool,
 
 Called when receiving an event from a client or via federation. The callback must return
 either:
-  - on `Decision.ALLOW`, the action is permitted.
-  - on `Decision.DENY`, the action is rejected with a default error message/code.
-  - on `Codes`, the action is rejected with a specific error message/code. In case
-    of doubt, use `Codes.FORBIDDEN`.
-  - (deprecated) on `False`, behave as `Decision.ALLOW`. Deprecated as methods in
-    this API are inconsistent, some expect `True` for `ALLOW` and others `True`
-    for `DENY`.
-  - (deprecated) on `True`, behave as `Decision.DENY`. Deprecated as methods in
-    this API are inconsistent, some expect `True` for `ALLOW` and others `True`
-    for `DENY`.
 
 - an error message string, to indicate the event must be rejected because of spam and
   give a rejection reason to forward to clients;
 - the boolean `True`, to indicate that the event is spammy, but not provide further details; or

@@ -159,7 +159,7 @@ Follow the [Getting Started Guide](https://www.keycloak.org/getting-started) to
 oidc_providers:
   - idp_id: keycloak
     idp_name: "My KeyCloak server"
-    issuer: "https://127.0.0.1:8443/auth/realms/{realm_name}"
+    issuer: "https://127.0.0.1:8443/realms/{realm_name}"
     client_id: "synapse"
     client_secret: "copy secret generated from above"
     scopes: ["openid", "profile"]

@@ -289,7 +289,7 @@ presence:
 # federation: the server-server API (/_matrix/federation). Also implies
 #   'media', 'keys', 'openid'
 #
-# keys: the key discovery API (/_matrix/keys).
+# keys: the key discovery API (/_matrix/key).
 #
 # media: the media API (/_matrix/media).
 #

@@ -730,6 +730,12 @@ retention:
 # A cache 'factor' is a multiplier that can be applied to each of
 # Synapse's caches in order to increase or decrease the maximum
 # number of entries that can be stored.
 #
+# The configuration for cache factors (caches.global_factor and
+# caches.per_cache_factors) can be reloaded while the application is running,
+# by sending a SIGHUP signal to the Synapse process. Changes to other parts of
+# the caching config will NOT be applied after a SIGHUP is received; a restart
+# is necessary.
+#
 # The number of events to cache in memory. Not affected by
 # caches.global_factor.

@@ -778,6 +784,24 @@ caches:
 #
 #cache_entry_ttl: 30m
 
+# This flag enables cache autotuning, and is further specified by the sub-options `max_cache_memory_usage`,
+# `target_cache_memory_usage`, `min_cache_ttl`. These flags work in conjunction with each other to maintain
+# a balance between cache memory usage and cache entry availability. You must be using jemalloc to utilize
+# this option, and all three of the options must be specified for this feature to work.
+#cache_autotuning:
+  # This flag sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
+  # They will continue to be evicted until the memory usage drops below the `target_memory_usage`, set in
+  # the flag below, or until the `min_cache_ttl` is hit.
+  #max_cache_memory_usage: 1024M
+
+  # This flag sets a rough target for the desired memory usage of the caches.
+  #target_cache_memory_usage: 758M
+
+  # `min_cache_ttl` sets a limit under which newer cache entries are not evicted and is only applied when
+  # caches are actively being evicted/`max_cache_memory_usage` has been exceeded. This is to protect hot caches
+  # from being emptied while Synapse is evicting due to memory.
+  #min_cache_ttl: 5m
+
 # Controls how long the results of a /sync request are cached for after
 # a successful response is returned. A higher duration can help clients with
 # intermittent connections, at the cost of higher memory usage.

@@ -2462,6 +2486,40 @@ push:
 #
 #encryption_enabled_by_default_for_room_type: invite
 
+# Override the default power levels for rooms created on this server, per
+# room creation preset.
+#
+# The appropriate dictionary for the room preset will be applied on top
+# of the existing power levels content.
+#
+# Useful if you know that your users need special permissions in rooms
+# that they create (e.g. to send particular types of state events without
+# needing an elevated power level). This takes the same shape as the
+# `power_level_content_override` parameter in the /createRoom API, but
+# is applied before that parameter.
+#
+# Valid keys are some or all of `private_chat`, `trusted_private_chat`
+# and `public_chat`. Inside each of those should be any of the
+# properties allowed in `power_level_content_override` in the
+# /createRoom API. If any property is missing, its default value will
+# continue to be used. If any property is present, it will overwrite
+# the existing default completely (so if the `events` property exists,
+# the default event power levels will be ignored).
+#
+#default_power_level_content_override:
+#   private_chat:
+#       "events":
+#           "com.example.myeventtype" : 0
+#           "m.room.avatar": 50
+#           "m.room.canonical_alias": 50
+#           "m.room.encryption": 100
+#           "m.room.history_visibility": 100
+#           "m.room.name": 50
+#           "m.room.power_levels": 100
+#           "m.room.server_acl": 100
+#           "m.room.tombstone": 100
+#       "events_default": 1
+
 
 # Uncomment to allow non-server-admin users to create groups on this server
 #

@@ -467,13 +467,13 @@ Sub-options for each listener include:
 
 Valid resource names are:
 
-* `client`: the client-server API (/_matrix/client), and the synapse admin API (/_synapse/admin). Also implies 'media' and 'static'.
+* `client`: the client-server API (/_matrix/client), and the synapse admin API (/_synapse/admin). Also implies `media` and `static`.
 
 * `consent`: user consent forms (/_matrix/consent). See [here](../../consent_tracking.md) for more.
 
 * `federation`: the server-server API (/_matrix/federation). Also implies `media`, `keys`, `openid`
 
-* `keys`: the key discovery API (/_matrix/keys).
+* `keys`: the key discovery API (/_matrix/key).
 
 * `media`: the media API (/_matrix/media).

@@ -1119,7 +1119,17 @@ Caching can be configured through the following sub-options:
   with intermittent connections, at the cost of higher memory usage.
   By default, this is zero, which means that sync responses are not cached
   at all.
+* `cache_autotuning` and its sub-options `max_cache_memory_usage`, `target_cache_memory_usage`, and
+  `min_cache_ttl` work in conjunction with each other to maintain a balance between cache memory
+  usage and cache entry availability. You must be using [jemalloc](https://github.com/matrix-org/synapse#help-synapse-is-slow-and-eats-all-my-ramcpu)
+  to utilize this option, and all three of the options must be specified for this feature to work.
+  * `max_cache_memory_usage` sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
+    They will continue to be evicted until the memory usage drops below the `target_memory_usage`, set in
+    the flag below, or until the `min_cache_ttl` is hit.
+  * `target_memory_usage` sets a rough target for the desired memory usage of the caches.
+  * `min_cache_ttl` sets a limit under which newer cache entries are not evicted and is only applied when
+    caches are actively being evicted/`max_cache_memory_usage` has been exceeded. This is to protect hot caches
+    from being emptied while Synapse is evicting due to memory.
 
 Example configuration:
 ```yaml
@@ -1127,9 +1137,29 @@ caches:
   global_factor: 1.0
   per_cache_factors:
     get_users_who_share_room_with_user: 2.0
   expire_caches: false
   sync_response_cache_duration: 2m
+  cache_autotuning:
+    max_cache_memory_usage: 1024M
+    target_cache_memory_usage: 758M
+    min_cache_ttl: 5m
 ```
 
+### Reloading cache factors
+
+The cache factors (i.e. `caches.global_factor` and `caches.per_cache_factors`) may be reloaded at any time by sending a
+[`SIGHUP`](https://en.wikipedia.org/wiki/SIGHUP) signal to Synapse using e.g.
+
+```commandline
+kill -HUP [PID_OF_SYNAPSE_PROCESS]
+```
+
+If you are running multiple workers, you must individually update the worker
+config file and send this signal to each worker process.
+
+If you're using the [example systemd service](https://github.com/matrix-org/synapse/blob/develop/contrib/systemd/matrix-synapse.service)
+file in Synapse's `contrib` directory, you can send a `SIGHUP` signal by using
+`systemctl reload matrix-synapse`.
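
For intuition, this is roughly what reacting to `SIGHUP` looks like at the process level; a generic Python sketch, not Synapse's actual handler:

```python
import signal
import types
from typing import Optional

def on_sighup(signum: int, frame: Optional[types.FrameType]) -> None:
    # A real server would re-read its configuration file here and
    # resize its caches accordingly.
    print("SIGHUP received: reloading cache factors")

signal.signal(signal.SIGHUP, on_sighup)  # Unix-only
```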
---
 ## Database ##
 Config options related to database settings.

@@ -1164,7 +1194,7 @@ For more information on using Synapse with Postgres,
 see [here](../../postgres.md).
 
 Example SQLite configuration:
-```
+```yaml
 database:
   name: sqlite3
   args:

@@ -1172,7 +1202,7 @@ database:
 ```
 
 Example Postgres configuration:
-```
+```yaml
 database:
   name: psycopg2
   txn_limit: 10000

@@ -1327,6 +1357,20 @@ This option sets ratelimiting how often invites can be sent in a room or to a
 specific user. `per_room` defaults to `per_second: 0.3`, `burst_count: 10` and
 `per_user` defaults to `per_second: 0.003`, `burst_count: 5`.
 
+Client requests that invite user(s) when [creating a
+room](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3createroom)
+will count against the `rc_invites.per_room` limit, whereas
+client requests to [invite a single user to a
+room](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3roomsroomidinvite)
+will count against both the `rc_invites.per_user` and `rc_invites.per_room` limits.
+
+Federation requests to invite a user will count against the `rc_invites.per_user`
+limit only, as Synapse presumes ratelimiting by room will be done by the sending server.
+
+The `rc_invites.per_user` limit applies to the *receiver* of the invite, rather than the
+sender, meaning that a `rc_invite.per_user.burst_count` of 5 mandates that a single user
+cannot *receive* more than a burst of 5 invites at a time.
+
 Example configuration:
 ```yaml
 rc_invites:

@@ -1635,10 +1679,10 @@ Defaults to "en".
 Example configuration:
 ```yaml
 url_preview_accept_language:
-  - en-UK
-  - en-US;q=0.9
-  - fr;q=0.8
-  - *;q=0.7
+  - 'en-UK'
+  - 'en-US;q=0.9'
+  - 'fr;q=0.8'
+  - '*;q=0.7'
 ```
 ----
 Config option: `oembed`

@@ -3298,6 +3342,32 @@ room_list_publication_rules:
     room_id: "*"
     action: allow
 ```
 
+---
+Config option: `default_power_level_content_override`
+
+The `default_power_level_content_override` option controls the default power
+levels for rooms.
+
+Useful if you know that your users need special permissions in rooms
+that they create (e.g. to send particular types of state events without
+needing an elevated power level). This takes the same shape as the
+`power_level_content_override` parameter in the /createRoom API, but
+is applied before that parameter.
+
+Note that each key provided inside a preset (for example `events` in the example
+below) will overwrite all existing defaults inside that key. So in the example
+below, newly-created private_chat rooms will have no rules for any event types
+except `com.example.foo`.
+
+Example configuration:
+```yaml
+default_power_level_content_override:
+  private_chat: { "events": { "com.example.foo" : 0 } }
+  trusted_private_chat: null
+  public_chat: null
+```
+
 ---
 ## Opentracing ##
 Configuration options related to Opentracing support.

@@ -3398,7 +3468,7 @@ stream_writers:
   typing: worker1
 ```
 ---
-Config option: `run_background_task_on`
+Config option: `run_background_tasks_on`
 
 The worker that is used to run background tasks (e.g. cleaning up expired
 data). If not provided this defaults to the main process.

@@ -7,10 +7,10 @@ team.
 ## Installing and using Synapse
 
 This documentation covers topics for **installation**, **configuration** and
-**maintainence** of your Synapse process:
+**maintenance** of your Synapse process:
 
 * Learn how to [install](setup/installation.md) and
-  [configure](usage/configuration/index.html) your own instance, perhaps with [Single
+  [configure](usage/configuration/config_documentation.md) your own instance, perhaps with [Single
   Sign-On](usage/configuration/user_authentication/index.html).
 
 * See how to [upgrade](upgrade.md) between Synapse versions.

@@ -65,7 +65,7 @@ following documentation:
 
 Want to help keep Synapse going but don't know how to code? Synapse is a
 [Matrix.org Foundation](https://matrix.org) project. Consider becoming a
-supportor on [Liberapay](https://liberapay.com/matrixdotorg),
+supporter on [Liberapay](https://liberapay.com/matrixdotorg),
 [Patreon](https://patreon.com/matrixdotorg) or through
 [PayPal](https://paypal.me/matrixdotorg) via a one-time donation.

mypy.ini (59 lines changed)

@@ -1,6 +1,6 @@
 [mypy]
 namespace_packages = True
-plugins = mypy_zope:plugin, scripts-dev/mypy_synapse_plugin.py
+plugins = pydantic.mypy, mypy_zope:plugin, scripts-dev/mypy_synapse_plugin.py
 follow_imports = normal
 check_untyped_defs = True
 show_error_codes = True

@@ -27,9 +27,6 @@ exclude = (?x)
|synapse/storage/databases/__init__.py
|synapse/storage/databases/main/cache.py
|synapse/storage/databases/main/devices.py
|synapse/storage/databases/main/event_federation.py
|synapse/storage/databases/main/push_rule.py
|synapse/storage/databases/main/roommember.py
|synapse/storage/schema/

|tests/api/test_auth.py

@@ -89,6 +86,12 @@ exclude = (?x)
 |tests/utils.py
 )$
 
+[pydantic-mypy]
+init_forbid_extra = True
+init_typed = True
+warn_required_dynamic_aliases = True
+warn_untyped_fields = True
+
 [mypy-synapse._scripts.*]
 disallow_untyped_defs = True

@@ -119,15 +122,39 @@ disallow_untyped_defs = True
[mypy-synapse.federation.transport.client]
disallow_untyped_defs = False

[mypy-synapse.groups.*]
disallow_untyped_defs = True

[mypy-synapse.handlers.*]
disallow_untyped_defs = True

[mypy-synapse.http.federation.*]
disallow_untyped_defs = True

[mypy-synapse.http.connectproxyclient]
disallow_untyped_defs = True

[mypy-synapse.http.proxyagent]
disallow_untyped_defs = True

[mypy-synapse.http.request_metrics]
disallow_untyped_defs = True

[mypy-synapse.http.server]
disallow_untyped_defs = True

[mypy-synapse.logging._remote]
disallow_untyped_defs = True

[mypy-synapse.logging.context]
disallow_untyped_defs = True

[mypy-synapse.logging.formatter]
disallow_untyped_defs = True

[mypy-synapse.logging.handlers]
disallow_untyped_defs = True

[mypy-synapse.metrics.*]
disallow_untyped_defs = True

@@ -157,6 +184,9 @@ disallow_untyped_defs = True
 [mypy-synapse.state.*]
 disallow_untyped_defs = True
 
+[mypy-synapse.storage.databases.background_updates]
+disallow_untyped_defs = True
+
 [mypy-synapse.storage.databases.main.account_data]
 disallow_untyped_defs = True

@@ -196,18 +226,39 @@ disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.state_deltas]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.stream]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.transactions]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.user_erasure_store]
disallow_untyped_defs = True

[mypy-synapse.storage.engines.*]
disallow_untyped_defs = True

[mypy-synapse.storage.prepare_database]
disallow_untyped_defs = True

[mypy-synapse.storage.persist_events]
disallow_untyped_defs = True

[mypy-synapse.storage.state]
disallow_untyped_defs = True

[mypy-synapse.storage.types]
disallow_untyped_defs = True

[mypy-synapse.storage.util.*]
disallow_untyped_defs = True

[mypy-synapse.streams.*]
disallow_untyped_defs = True

[mypy-synapse.types]
disallow_untyped_defs = True

[mypy-synapse.util.*]
disallow_untyped_defs = True

poetry.lock (54 lines changed, generated)

@@ -778,6 +778,21 @@ category = "main"
 optional = false
 python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
 
+[[package]]
+name = "pydantic"
+version = "1.9.0"
+description = "Data validation and settings management using python 3.6 type hinting"
+category = "main"
+optional = false
+python-versions = ">=3.6.1"
+
+[package.dependencies]
+typing-extensions = ">=3.7.4.3"
+
+[package.extras]
+dotenv = ["python-dotenv (>=0.10.4)"]
+email = ["email-validator (>=1.0.3)"]
+
 [[package]]
 name = "pyflakes"
 version = "2.4.0"

@@ -1563,7 +1578,7 @@ url_preview = ["lxml"]
 [metadata]
 lock-version = "1.1"
 python-versions = "^3.7.1"
-content-hash = "d39d5ac5d51c014581186b7691999b861058b569084c525523baf70b77f292b1"
+content-hash = "54ec27d5187386653b8d0d13ed843f86ae68b3ebbee633c82dfffc7605b99f74"
 
 [metadata.files]
 attrs = [

@@ -2251,6 +2266,43 @@ pycparser = [
     {file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
     {file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
 ]
+pydantic = [
+    {file = "pydantic-1.9.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:cb23bcc093697cdea2708baae4f9ba0e972960a835af22560f6ae4e7e47d33f5"},
+    {file = "pydantic-1.9.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:1d5278bd9f0eee04a44c712982343103bba63507480bfd2fc2790fa70cd64cf4"},
+    {file = "pydantic-1.9.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ab624700dc145aa809e6f3ec93fb8e7d0f99d9023b713f6a953637429b437d37"},
+    {file = "pydantic-1.9.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c8d7da6f1c1049eefb718d43d99ad73100c958a5367d30b9321b092771e96c25"},
+    {file = "pydantic-1.9.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:3c3b035103bd4e2e4a28da9da7ef2fa47b00ee4a9cf4f1a735214c1bcd05e0f6"},
+    {file = "pydantic-1.9.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:3011b975c973819883842c5ab925a4e4298dffccf7782c55ec3580ed17dc464c"},
+    {file = "pydantic-1.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:086254884d10d3ba16da0588604ffdc5aab3f7f09557b998373e885c690dd398"},
+    {file = "pydantic-1.9.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:0fe476769acaa7fcddd17cadd172b156b53546ec3614a4d880e5d29ea5fbce65"},
+    {file = "pydantic-1.9.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c8e9dcf1ac499679aceedac7e7ca6d8641f0193c591a2d090282aaf8e9445a46"},
+    {file = "pydantic-1.9.0-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d1e4c28f30e767fd07f2ddc6f74f41f034d1dd6bc526cd59e63a82fe8bb9ef4c"},
+    {file = "pydantic-1.9.0-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:c86229333cabaaa8c51cf971496f10318c4734cf7b641f08af0a6fbf17ca3054"},
+    {file = "pydantic-1.9.0-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:c0727bda6e38144d464daec31dff936a82917f431d9c39c39c60a26567eae3ed"},
+    {file = "pydantic-1.9.0-cp36-cp36m-win_amd64.whl", hash = "sha256:dee5ef83a76ac31ab0c78c10bd7d5437bfdb6358c95b91f1ba7ff7b76f9996a1"},
+    {file = "pydantic-1.9.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:d9c9bdb3af48e242838f9f6e6127de9be7063aad17b32215ccc36a09c5cf1070"},
+    {file = "pydantic-1.9.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2ee7e3209db1e468341ef41fe263eb655f67f5c5a76c924044314e139a1103a2"},
+    {file = "pydantic-1.9.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0b6037175234850ffd094ca77bf60fb54b08b5b22bc85865331dd3bda7a02fa1"},
+    {file = "pydantic-1.9.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:b2571db88c636d862b35090ccf92bf24004393f85c8870a37f42d9f23d13e032"},
+    {file = "pydantic-1.9.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8b5ac0f1c83d31b324e57a273da59197c83d1bb18171e512908fe5dc7278a1d6"},
+    {file = "pydantic-1.9.0-cp37-cp37m-win_amd64.whl", hash = "sha256:bbbc94d0c94dd80b3340fc4f04fd4d701f4b038ebad72c39693c794fd3bc2d9d"},
+    {file = "pydantic-1.9.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:e0896200b6a40197405af18828da49f067c2fa1f821491bc8f5bde241ef3f7d7"},
+    {file = "pydantic-1.9.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:7bdfdadb5994b44bd5579cfa7c9b0e1b0e540c952d56f627eb227851cda9db77"},
+    {file = "pydantic-1.9.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:574936363cd4b9eed8acdd6b80d0143162f2eb654d96cb3a8ee91d3e64bf4cf9"},
+    {file = "pydantic-1.9.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c556695b699f648c58373b542534308922c46a1cda06ea47bc9ca45ef5b39ae6"},
+    {file = "pydantic-1.9.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:f947352c3434e8b937e3aa8f96f47bdfe6d92779e44bb3f41e4c213ba6a32145"},
+    {file = "pydantic-1.9.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:5e48ef4a8b8c066c4a31409d91d7ca372a774d0212da2787c0d32f8045b1e034"},
+    {file = "pydantic-1.9.0-cp38-cp38-win_amd64.whl", hash = "sha256:96f240bce182ca7fe045c76bcebfa0b0534a1bf402ed05914a6f1dadff91877f"},
+    {file = "pydantic-1.9.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:815ddebb2792efd4bba5488bc8fde09c29e8ca3227d27cf1c6990fc830fd292b"},
+    {file = "pydantic-1.9.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:6c5b77947b9e85a54848343928b597b4f74fc364b70926b3c4441ff52620640c"},
+    {file = "pydantic-1.9.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4c68c3bc88dbda2a6805e9a142ce84782d3930f8fdd9655430d8576315ad97ce"},
+    {file = "pydantic-1.9.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5a79330f8571faf71bf93667d3ee054609816f10a259a109a0738dac983b23c3"},
+    {file = "pydantic-1.9.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:f5a64b64ddf4c99fe201ac2724daada8595ada0d102ab96d019c1555c2d6441d"},
+    {file = "pydantic-1.9.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a733965f1a2b4090a5238d40d983dcd78f3ecea221c7af1497b845a9709c1721"},
+    {file = "pydantic-1.9.0-cp39-cp39-win_amd64.whl", hash = "sha256:2cc6a4cb8a118ffec2ca5fcb47afbacb4f16d0ab8b7350ddea5e8ef7bcc53a16"},
+    {file = "pydantic-1.9.0-py3-none-any.whl", hash = "sha256:085ca1de245782e9b46cefcf99deecc67d418737a1fd3f6a4f511344b613a5b3"},
+    {file = "pydantic-1.9.0.tar.gz", hash = "sha256:742645059757a56ecd886faf4ed2441b9c0cd406079c2b4bee51bcc3fbcd510a"},
+]
 pyflakes = [
     {file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
     {file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
@@ -54,7 +54,7 @@ skip_gitignore = true

[tool.poetry]
name = "matrix-synapse"
version = "1.59.0rc1"
version = "1.59.1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"

@@ -182,6 +182,7 @@ hiredis = { version = "*", optional = true }
Pympler = { version = "*", optional = true }
parameterized = { version = ">=0.7.4", optional = true }
idna = { version = ">=2.5", optional = true }
pydantic = ">=1.9.0"

[tool.poetry.extras]
# NB: Packages that should be part of `pip install matrix-synapse[all]` need to be specified

@@ -21,7 +21,7 @@ from typing import Callable, Optional, Type

from mypy.nodes import ARG_NAMED_OPT
from mypy.plugin import MethodSigContext, Plugin
from mypy.typeops import bind_self
from mypy.types import CallableType, NoneType
from mypy.types import CallableType, NoneType, UnionType


class SynapsePlugin(Plugin):

@@ -72,13 +72,20 @@ def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:

    # Third, we add an optional "on_invalidate" argument.
    #
    # This is a callable which accepts no input and returns nothing.
    calltyp = CallableType(
        arg_types=[],
        arg_kinds=[],
        arg_names=[],
        ret_type=NoneType(),
        fallback=ctx.api.named_generic_type("builtins.function", []),
    # This is either
    # - a callable which accepts no input and returns nothing, or
    # - None.
    calltyp = UnionType(
        [
            NoneType(),
            CallableType(
                arg_types=[],
                arg_kinds=[],
                arg_names=[],
                ret_type=NoneType(),
                fallback=ctx.api.named_generic_type("builtins.function", []),
            ),
        ]
    )

    arg_types.append(calltyp)
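In effect, the plugin now tells mypy that the `on_invalidate` argument accepted by `@cached`-decorated methods may be either a zero-argument callback or `None`. A minimal sketch of what that corresponds to; the method and its body are illustrative, not Synapse's real declarations:

```python
from typing import Callable, Optional

# Hypothetical example: after the plugin change, mypy treats a @cached
# method as if its cache-invalidation hook were declared like this.
def get_user_by_id(
    user_id: str,
    on_invalidate: Optional[Callable[[], None]] = None,
) -> Optional[dict]:
    ...  # on_invalidate, if not None, is a zero-arg callback run on eviction
```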
@@ -95,7 +102,7 @@ def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:


def plugin(version: str) -> Type[SynapsePlugin]:
    # This is the entry point of the plugin, and let's us deal with the fact
    # This is the entry point of the plugin, and lets us deal with the fact
    # that the mypy plugin interface is *not* stable by looking at the version
    # string.
    #

@@ -65,6 +65,8 @@ class JoinRules:

    PRIVATE: Final = "private"
    # As defined for MSC3083.
    RESTRICTED: Final = "restricted"
    # As defined for MSC3787.
    KNOCK_RESTRICTED: Final = "knock_restricted"


class RestrictedJoinRuleTypes:

@@ -81,6 +81,9 @@ class RoomVersion:

    msc2716_historical: bool
    # MSC2716: Adds support for redacting "insertion", "chunk", and "marker" events
    msc2716_redactions: bool
    # MSC3787: Adds support for a `knock_restricted` join rule, mixing concepts of
    # knocks and restricted join rules into the same join condition.
    msc3787_knock_restricted_join_rule: bool


class RoomVersions:

@@ -99,6 +102,7 @@ class RoomVersions:
        msc2403_knocking=False,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    V2 = RoomVersion(
        "2",
@@ -115,6 +119,7 @@ class RoomVersions:
        msc2403_knocking=False,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    V3 = RoomVersion(
        "3",
@@ -131,6 +136,7 @@ class RoomVersions:
        msc2403_knocking=False,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    V4 = RoomVersion(
        "4",
@@ -147,6 +153,7 @@ class RoomVersions:
        msc2403_knocking=False,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    V5 = RoomVersion(
        "5",
@@ -163,6 +170,7 @@ class RoomVersions:
        msc2403_knocking=False,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    V6 = RoomVersion(
        "6",
@@ -179,6 +187,7 @@ class RoomVersions:
        msc2403_knocking=False,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    MSC2176 = RoomVersion(
        "org.matrix.msc2176",
@@ -195,6 +204,7 @@ class RoomVersions:
        msc2403_knocking=False,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    V7 = RoomVersion(
        "7",
@@ -211,6 +221,7 @@ class RoomVersions:
        msc2403_knocking=True,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    V8 = RoomVersion(
        "8",
@@ -227,6 +238,7 @@ class RoomVersions:
        msc2403_knocking=True,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    V9 = RoomVersion(
        "9",
@@ -243,6 +255,7 @@ class RoomVersions:
        msc2403_knocking=True,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    MSC2716v3 = RoomVersion(
        "org.matrix.msc2716v3",
@@ -259,6 +272,24 @@ class RoomVersions:
        msc2403_knocking=True,
        msc2716_historical=True,
        msc2716_redactions=True,
        msc3787_knock_restricted_join_rule=False,
    )
    MSC3787 = RoomVersion(
        "org.matrix.msc3787",
        RoomDisposition.UNSTABLE,
        EventFormatVersions.V3,
        StateResolutionVersions.V2,
        enforce_key_validity=True,
        special_case_aliases_auth=False,
        strict_canonicaljson=True,
        limit_notifications_power_levels=True,
        msc2176_redaction_rules=False,
        msc3083_join_rules=True,
        msc3375_redaction_rules=True,
        msc2403_knocking=True,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=True,
    )


@@ -276,6 +307,7 @@ KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
        RoomVersions.V8,
        RoomVersions.V9,
        RoomVersions.MSC2716v3,
        RoomVersions.MSC3787,
    )
}

@@ -49,9 +49,12 @@ from twisted.logger import LoggingFile, LogLevel
from twisted.protocols.tls import TLSMemoryBIOFactory
from twisted.python.threadpool import ThreadPool

import synapse.util.caches
from synapse.api.constants import MAX_PDU_SIZE
from synapse.app import check_bind_error
from synapse.app.phone_stats_home import start_phone_stats_home
from synapse.config import ConfigError
from synapse.config._base import format_config_error
from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import ManholeConfig
from synapse.crypto import context_factory

@@ -432,6 +435,10 @@ async def start(hs: "HomeServer") -> None:
    signal.signal(signal.SIGHUP, run_sighup)

    register_sighup(refresh_certificate, hs)
    register_sighup(reload_cache_config, hs.config)

    # Apply the cache config.
    hs.config.caches.resize_all_caches()

    # Load the certificate from disk.
    refresh_certificate(hs)

@@ -486,6 +493,43 @@ async def start(hs: "HomeServer") -> None:
    atexit.register(gc.freeze)


def reload_cache_config(config: HomeServerConfig) -> None:
    """Reload cache config from disk and immediately apply it, resizing caches accordingly.

    If the config is invalid, a `ConfigError` is logged and no changes are made.

    Otherwise, this:
        - replaces the `caches` section on the given `config` object, and
        - resizes all caches according to the new cache factors.

    Note that the following cache config keys are read, but not applied:
        - event_cache_size: used to set a max_size and _original_max_size on
          EventsWorkerStore._get_event_cache when it is created. We'd have to update
          the _original_max_size (and maybe
        - sync_response_cache_duration: would have to update the timeout_sec attribute on
          HomeServer -> SyncHandler -> ResponseCache.
        - track_memory_usage. This affects synapse.util.caches.TRACK_MEMORY_USAGE which
          influences Synapse's self-reported metrics.

    Also, the HTTPConnectionPool in SimpleHTTPClient sets its maxPersistentPerHost
    parameter based on the global_factor. This won't be applied on a config reload.
    """
    try:
        previous_cache_config = config.reload_config_section("caches")
    except ConfigError as e:
        logger.warning("Failed to reload cache config")
        for f in format_config_error(e):
            logger.warning(f)
    else:
        logger.debug(
            "New cache config. Was:\n %s\nNow:\n %s",
            previous_cache_config.__dict__,
            config.caches.__dict__,
        )
        synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage
        config.caches.resize_all_caches()
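The reload path above only runs when the process receives SIGHUP (per the handler registered in `start`). A minimal sketch of exercising it from outside; the PID-file location is an assumption about the deployment, not something this diff defines:

```python
import os
import signal

# Illustrative only: signal a running Synapse so its registered SIGHUP
# handlers (certificate refresh, cache-config reload) run.
with open("/var/run/matrix-synapse.pid") as f:  # path is deployment-specific
    synapse_pid = int(f.read().strip())

os.kill(synapse_pid, signal.SIGHUP)  # invokes reload_cache_config(hs.config)
```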


def setup_sentry(hs: "HomeServer") -> None:
    """Enable sentry integration, if enabled in configuration"""


@@ -16,7 +16,7 @@
import logging
import os
import sys
from typing import Dict, Iterable, Iterator, List
from typing import Dict, Iterable, List

from matrix_common.versionstring import get_distribution_version_string

@@ -45,7 +45,7 @@ from synapse.app._base import (
    redirect_stdio_to_logs,
    register_start,
)
from synapse.config._base import ConfigError
from synapse.config._base import ConfigError, format_config_error
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import ListenerConfig

@@ -399,38 +399,6 @@ def setup(config_options: List[str]) -> SynapseHomeServer:
    return hs


def format_config_error(e: ConfigError) -> Iterator[str]:
    """
    Formats a config error neatly

    The idea is to format the immediate error, plus the "causes" of those errors,
    hopefully in a way that makes sense to the user. For example:

        Error in configuration at 'oidc_config.user_mapping_provider.config.display_name_template':
          Failed to parse config for module 'JinjaOidcMappingProvider':
            invalid jinja template:
              unexpected end of template, expected 'end of print statement'.

    Args:
        e: the error to be formatted

    Returns: An iterator which yields string fragments to be formatted
    """
    yield "Error in configuration"

    if e.path:
        yield " at '%s'" % (".".join(e.path),)

    yield ":\n %s" % (e.msg,)

    parent_e = e.__cause__
    indent = 1
    while parent_e:
        indent += 1
        yield ":\n%s%s" % (" " * indent, str(parent_e))
        parent_e = parent_e.__cause__


def run(hs: HomeServer) -> None:
    _base.start_reactor(
        "synapse-homeserver",

@@ -16,14 +16,18 @@

import argparse
import errno
import logging
import os
from collections import OrderedDict
from hashlib import sha256
from textwrap import dedent
from typing import (
    Any,
    ClassVar,
    Collection,
    Dict,
    Iterable,
    Iterator,
    List,
    MutableMapping,
    Optional,
@@ -40,6 +44,8 @@ import yaml

from synapse.util.templates import _create_mxc_to_http_filter, _format_ts_filter

logger = logging.getLogger(__name__)


class ConfigError(Exception):
    """Represents a problem parsing the configuration

@@ -55,6 +61,38 @@ class ConfigError(Exception):
        self.path = path


def format_config_error(e: ConfigError) -> Iterator[str]:
    """
    Formats a config error neatly

    The idea is to format the immediate error, plus the "causes" of those errors,
    hopefully in a way that makes sense to the user. For example:

        Error in configuration at 'oidc_config.user_mapping_provider.config.display_name_template':
          Failed to parse config for module 'JinjaOidcMappingProvider':
            invalid jinja template:
              unexpected end of template, expected 'end of print statement'.

    Args:
        e: the error to be formatted

    Returns: An iterator which yields string fragments to be formatted
    """
    yield "Error in configuration"

    if e.path:
        yield " at '%s'" % (".".join(e.path),)

    yield ":\n %s" % (e.msg,)

    parent_e = e.__cause__
    indent = 1
    while parent_e:
        indent += 1
        yield ":\n%s%s" % (" " * indent, str(parent_e))
        parent_e = parent_e.__cause__
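For reference, a sketch of how the yielded fragments read once joined; the chained errors below are invented for illustration (`ConfigError` and `format_config_error` are the definitions above):

```python
# Illustrative use of format_config_error; the nested errors are made up.
try:
    try:
        raise ValueError("invalid jinja template")
    except ValueError as cause:
        raise ConfigError(
            "Failed to parse config for module 'JinjaOidcMappingProvider'",
            path=["oidc_config", "user_mapping_provider", "config"],
        ) from cause
except ConfigError as e:
    print("".join(format_config_error(e)))
    # Error in configuration at 'oidc_config.user_mapping_provider.config':
    #  Failed to parse config for module 'JinjaOidcMappingProvider':
    #   invalid jinja template
```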


# We split these messages out to allow packages to override with package
# specific instructions.
MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS = """\

@@ -119,7 +157,7 @@ class Config:
    defined in subclasses.
    """

    section: str
    section: ClassVar[str]

    def __init__(self, root_config: "RootConfig" = None):
        self.root = root_config

@@ -309,9 +347,12 @@ class RootConfig:
    class, lower-cased and with "Config" removed.
    """

    config_classes = []
    config_classes: List[Type[Config]] = []

    def __init__(self, config_files: Collection[str] = ()):
        # Capture absolute paths here, so we can reload config after we daemonize.
        self.config_files = [os.path.abspath(path) for path in config_files]

    def __init__(self):
        for config_class in self.config_classes:
            if config_class.section is None:
                raise ValueError("%r requires a section name" % (config_class,))

@@ -512,12 +553,10 @@ class RootConfig:
            object from parser.parse_args(..)`
        """

        obj = cls()

        config_args = parser.parse_args(argv)

        config_files = find_config_files(search_paths=config_args.config_path)

        obj = cls(config_files)
        if not config_files:
            parser.error("Must supply a config file.")

@@ -627,7 +666,7 @@ class RootConfig:

        generate_missing_configs = config_args.generate_missing_configs

        obj = cls()
        obj = cls(config_files)

        if config_args.generate_config:
            if config_args.report_stats is None:

@@ -727,6 +766,34 @@ class RootConfig:
    ) -> None:
        self.invoke_all("generate_files", config_dict, config_dir_path)

    def reload_config_section(self, section_name: str) -> Config:
        """Reconstruct the given config section, leaving all others unchanged.

        This works in three steps:

        1. Create a new instance of the relevant `Config` subclass.
        2. Call `read_config` on that instance to parse the new config.
        3. Replace the existing config instance with the new one.

        :raises ValueError: if the given `section` does not exist.
        :raises ConfigError: for any other problems reloading config.

        :returns: the previous config object, which no longer has a reference to this
            RootConfig.
        """
        existing_config: Optional[Config] = getattr(self, section_name, None)
        if existing_config is None:
            raise ValueError(f"Unknown config section '{section_name}'")
        logger.info("Reloading config section '%s'", section_name)

        new_config_data = read_config_files(self.config_files)
        new_config = type(existing_config)(self)
        new_config.read_config(new_config_data)
        setattr(self, section_name, new_config)

        existing_config.root = None
        return existing_config
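In other words, a running server can hot-swap a single config section. A minimal usage sketch, assuming `root_config` is an already-loaded `RootConfig`:

```python
# Sketch: reload just the cache section after editing homeserver.yaml.
old_caches = root_config.reload_config_section("caches")
root_config.caches.resize_all_caches()  # apply the freshly parsed factors
# old_caches.root is now None, so the stale object cannot mutate root_config.
```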


def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]:
    """Read the config files into a dict

@@ -1,15 +1,19 @@
import argparse
from typing import (
    Any,
    Collection,
    Dict,
    Iterable,
    Iterator,
    List,
    Literal,
    MutableMapping,
    Optional,
    Tuple,
    Type,
    TypeVar,
    Union,
    overload,
)

import jinja2
@@ -64,6 +68,8 @@ class ConfigError(Exception):
        self.msg = msg
        self.path = path

def format_config_error(e: ConfigError) -> Iterator[str]: ...

MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS: str
MISSING_REPORT_STATS_SPIEL: str
MISSING_SERVER_NAME: str
@@ -117,7 +123,8 @@ class RootConfig:
    background_updates: background_updates.BackgroundUpdateConfig

    config_classes: List[Type["Config"]] = ...
    def __init__(self) -> None: ...
    config_files: List[str]
    def __init__(self, config_files: Collection[str] = ...) -> None: ...
    def invoke_all(
        self, func_name: str, *args: Any, **kwargs: Any
    ) -> MutableMapping[str, Any]: ...
@@ -157,6 +164,12 @@ class RootConfig:
    def generate_missing_files(
        self, config_dict: dict, config_dir_path: str
    ) -> None: ...
    @overload
    def reload_config_section(
        self, section_name: Literal["caches"]
    ) -> cache.CacheConfig: ...
    @overload
    def reload_config_section(self, section_name: str) -> Config: ...

class Config:
    root: RootConfig

@@ -69,11 +69,11 @@ def _canonicalise_cache_name(cache_name: str) -> str:
def add_resizable_cache(
    cache_name: str, cache_resize_callback: Callable[[float], None]
) -> None:
    """Register a cache that's size can dynamically change
    """Register a cache whose size can dynamically change

    Args:
        cache_name: A reference to the cache
        cache_resize_callback: A callback function that will be ran whenever
        cache_resize_callback: A callback function that will run whenever
            the cache needs to be resized
    """
    # Some caches have '*' in them which we strip out.

@@ -96,6 +96,13 @@ class CacheConfig(Config):
    section = "caches"
    _environ = os.environ

    event_cache_size: int
    cache_factors: Dict[str, float]
    global_factor: float
    track_memory_usage: bool
    expiry_time_msec: Optional[int]
    sync_response_cache_duration: int

    @staticmethod
    def reset() -> None:
        """Resets the caches to their defaults. Used for tests."""

@@ -115,6 +122,12 @@ class CacheConfig(Config):
        # A cache 'factor' is a multiplier that can be applied to each of
        # Synapse's caches in order to increase or decrease the maximum
        # number of entries that can be stored.
        #
        # The configuration for cache factors (caches.global_factor and
        # caches.per_cache_factors) can be reloaded while the application is running,
        # by sending a SIGHUP signal to the Synapse process. Changes to other parts of
        # the caching config will NOT be applied after a SIGHUP is received; a restart
        # is necessary.

        # The number of events to cache in memory. Not affected by
        # caches.global_factor.

@@ -163,6 +176,24 @@ class CacheConfig(Config):
        #
        #cache_entry_ttl: 30m

        # This flag enables cache autotuning, and is further specified by the sub-options `max_cache_memory_usage`,
        # `target_cache_memory_usage`, `min_cache_ttl`. These flags work in conjunction with each other to maintain
        # a balance between cache memory usage and cache entry availability. You must be using jemalloc to utilize
        # this option, and all three of the options must be specified for this feature to work.
        #cache_autotuning:
        # This flag sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
        # They will continue to be evicted until the memory usage drops below the `target_memory_usage`, set in
        # the flag below, or until the `min_cache_ttl` is hit.
        #max_cache_memory_usage: 1024M

        # This flag sets a rough target for the desired memory usage of the caches.
        #target_cache_memory_usage: 758M

        # `min_cache_ttl` sets a limit under which newer cache entries are not evicted and is only applied when
        # caches are actively being evicted/`max_cache_memory_usage` has been exceeded. This is to protect hot caches
        # from being emptied while Synapse is evicting due to memory.
        #min_cache_ttl: 5m

        # Controls how long the results of a /sync request are cached for after
        # a successful response is returned. A higher duration can help clients with
        # intermittent connections, at the cost of higher memory usage.

@@ -174,21 +205,21 @@ class CacheConfig(Config):
        """

    def read_config(self, config: JsonDict, **kwargs: Any) -> None:
        """Populate this config object with values from `config`.

        This method does NOT resize existing or future caches: use `resize_all_caches`.
        We use two separate methods so that we can reject bad config before applying it.
        """
        self.event_cache_size = self.parse_size(
            config.get("event_cache_size", _DEFAULT_EVENT_CACHE_SIZE)
        )
        self.cache_factors: Dict[str, float] = {}
        self.cache_factors = {}

        cache_config = config.get("caches") or {}
        self.global_factor = cache_config.get(
            "global_factor", properties.default_factor_size
        )
        self.global_factor = cache_config.get("global_factor", _DEFAULT_FACTOR_SIZE)
        if not isinstance(self.global_factor, (int, float)):
            raise ConfigError("caches.global_factor must be a number.")

        # Set the global one so that it's reflected in new caches
        properties.default_factor_size = self.global_factor

        # Load cache factors from the config
        individual_factors = cache_config.get("per_cache_factors") or {}
        if not isinstance(individual_factors, dict):

@@ -230,7 +261,7 @@ class CacheConfig(Config):
        cache_entry_ttl = cache_config.get("cache_entry_ttl", "30m")

        if expire_caches:
            self.expiry_time_msec: Optional[int] = self.parse_duration(cache_entry_ttl)
            self.expiry_time_msec = self.parse_duration(cache_entry_ttl)
        else:
            self.expiry_time_msec = None

@@ -250,23 +281,38 @@ class CacheConfig(Config):
            )
            self.expiry_time_msec = self.parse_duration(expiry_time)

        self.cache_autotuning = cache_config.get("cache_autotuning")
        if self.cache_autotuning:
            max_memory_usage = self.cache_autotuning.get("max_cache_memory_usage")
            self.cache_autotuning["max_cache_memory_usage"] = self.parse_size(
                max_memory_usage
            )

            target_mem_size = self.cache_autotuning.get("target_cache_memory_usage")
            self.cache_autotuning["target_cache_memory_usage"] = self.parse_size(
                target_mem_size
            )

            min_cache_ttl = self.cache_autotuning.get("min_cache_ttl")
            self.cache_autotuning["min_cache_ttl"] = self.parse_duration(min_cache_ttl)

        self.sync_response_cache_duration = self.parse_duration(
            cache_config.get("sync_response_cache_duration", 0)
        )

        # Resize all caches (if necessary) with the new factors we've loaded
        self.resize_all_caches()

        # Store this function so that it can be called from other classes without
        # needing an instance of Config
        properties.resize_all_caches_func = self.resize_all_caches

    def resize_all_caches(self) -> None:
        """Ensure all cache sizes are up to date
        """Ensure all cache sizes are up-to-date.

        For each cache, run the mapped callback function with either
        a specific cache factor or the default, global one.
        """
        # Set the global factor size, so that new caches are appropriately sized.
        properties.default_factor_size = self.global_factor

        # Store this function so that it can be called from other classes without
        # needing an instance of CacheConfig
        properties.resize_all_caches_func = self.resize_all_caches

        # block other threads from modifying _CACHES while we iterate it.
        with _CACHES_LOCK:
            for cache_name, callback in _CACHES.items():

synapse/config/oidc2.py (new file)
@@ -0,0 +1,240 @@
from enum import Enum
from typing import TYPE_CHECKING, Any, Mapping, Optional, Tuple

from pydantic import BaseModel, StrictBool, StrictStr, constr, validator
from pydantic.fields import ModelField

from synapse.util.stringutils import parse_and_validate_mxc_uri

# Ugly workaround for https://github.com/samuelcolvin/pydantic/issues/156. Mypy doesn't
# consider expressions like `constr(...)` to be valid types.
if TYPE_CHECKING:
    IDP_ID_TYPE = str
    IDP_BRAND_TYPE = str
else:
    IDP_ID_TYPE = constr(
        strict=True,
        min_length=1,
        max_length=250,
        regex="^[A-Za-z0-9._~-]+$",  # noqa: F722
    )
    IDP_BRAND_TYPE = constr(
        strict=True,
        min_length=1,
        max_length=255,
        regex="^[a-z][a-z0-9_.-]*$",  # noqa: F722
    )


# the following list of enum members is the same as the keys of
# authlib.oauth2.auth.ClientAuth.DEFAULT_AUTH_METHODS. We inline it
# to avoid importing authlib here.
class ClientAuthMethods(str, Enum):
    # The duplication is unfortunate. 3.11 should have StrEnum though,
    # and there is a backport available for 3.8.6.
    client_secret_basic = "client_secret_basic"
    client_secret_post = "client_secret_post"
    none = "none"


class UserProfileMethod(str, Enum):
    # The duplication is unfortunate. 3.11 should have StrEnum though,
    # and there is a backport available for 3.8.6.
    auto = "auto"
    userinfo_endpoint = "userinfo_endpoint"


class SSOAttributeRequirement(BaseModel):
    class Config:
        # Complain if someone provides a field that's not one of those listed here.
        # Pydantic suggests making your own BaseModel subclass if you want to do this,
        # see https://pydantic-docs.helpmanual.io/usage/model_config/#change-behaviour-globally
        extra = "forbid"

    attribute: StrictStr
    # Note: a comment in config/oidc.py suggests that `value` may be optional, but
    # the JSON schema seems to forbid this.
    value: StrictStr
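With `extra = "forbid"`, an unrecognised key becomes a hard validation error instead of being silently dropped (standard pydantic v1 behaviour); a small illustrative check:

```python
from pydantic import ValidationError

# Illustrative: 'typo_field' is not declared on the model, so it fails loudly.
try:
    SSOAttributeRequirement(attribute="groups", value="admin", typo_field=1)
except ValidationError as e:
    print(e)  # reports "extra fields not permitted" for typo_field
```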


class ClientSecretJWTKey(BaseModel):
    class Config:
        extra = "forbid"

    # a pem-encoded signing key
    # TODO: how should we handle key_file?
    key: StrictStr

    # properties to include in the JWT header
    # TODO: validator should enforce that jwt_header contains an 'alg'.
    jwt_header: Mapping[str, str]

    # properties to include in the JWT payload.
    jwt_payload: Mapping[str, str] = {}


class OIDCProviderModel(BaseModel):
    """
    Notes on Pydantic:
    - I've used StrictStr because a plain `str` e.g. accepts integers and calls str()
      on them
    - pulling out constr() into IDP_ID_TYPE is a little awkward, but necessary to keep
      mypy happy
    -
    """

    # a unique identifier for this identity provider. Used in the 'user_external_ids'
    # table, as well as the query/path parameter used in the login protocol.
    idp_id: IDP_ID_TYPE

    @validator("idp_id")
    def ensure_idp_id_prefix(cls, idp_id: str) -> str:
        """Prefix the given IDP with a prefix specific to the SSO mechanism, to avoid
        clashes with other mechs (such as SAML, CAS).

        We allow "oidc" as an exception so that people migrating from old-style
        "oidc_config" format (which has long used "oidc" as its idp_id) can migrate to
        a new-style "oidc_providers" entry without changing the idp_id for their provider
        (and thereby invalidating their user_external_ids data).
        """
        if idp_id != "oidc":
            return "oidc-" + idp_id
        return idp_id
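A sketch of the validator's effect. All field values are invented, and the `user_mapping_provider_*` arguments are passed explicitly only because the draft model gives them no defaults:

```python
# Illustrative instantiation (values are made up):
p = OIDCProviderModel(
    idp_id="github",
    idp_name="GitHub",
    issuer="https://example.com",
    client_id="my-client-id",
    user_mapping_provider_class=None,
    user_mapping_provider_config=None,
)
print(p.idp_id)  # "oidc-github" -- prefix added by the validator

# idp_id="oidc" would be kept as-is, preserving legacy user_external_ids rows.
```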

    # user-facing name for this identity provider.
    idp_name: StrictStr

    # Optional MXC URI for icon for this IdP.
    idp_icon: Optional[StrictStr]

    @validator("idp_icon")
    def idp_icon_is_an_mxc_url(cls, idp_icon: str) -> str:
        parse_and_validate_mxc_uri(idp_icon)
        return idp_icon

    # Optional brand identifier for this IdP.
    idp_brand: Optional[StrictStr]

    # whether the OIDC discovery mechanism is used to discover endpoints
    discover: StrictBool = True

    # the OIDC issuer. Used to validate tokens and (if discovery is enabled) to
    # discover the provider's endpoints.
    issuer: StrictStr

    # oauth2 client id to use
    client_id: StrictStr

    # oauth2 client secret to use. if `None`, use client_secret_jwt_key to generate
    # a secret.
    client_secret: Optional[StrictStr]

    # key to use to construct a JWT to use as a client secret. May be `None` if
    # `client_secret` is set.
    # TODO: test that ClientSecretJWTKey is being parsed correctly
    client_secret_jwt_key: Optional[ClientSecretJWTKey]

    # TODO: what is the precise relationship between client_auth_method, client_secret
    # and client_secret_jwt_key? Is there anything we should enforce with a validator?
    # auth method to use when exchanging the token.
    # Valid values are 'client_secret_basic', 'client_secret_post' and
    # 'none'.
    client_auth_method: ClientAuthMethods = ClientAuthMethods.client_secret_basic

    # list of scopes to request
    scopes: Tuple[StrictStr, ...] = ("openid",)

    # the oauth2 authorization endpoint. Required if discovery is disabled.
    authorization_endpoint: Optional[StrictStr]

    # the oauth2 token endpoint. Required if discovery is disabled.
    token_endpoint: Optional[StrictStr]

    # Normally, validators aren't run when fields don't have a value provided.
    # Using always=True ensures we run the validator even in that situation.
    @validator("authorization_endpoint", "token_endpoint", always=True)
    def endpoints_required_if_discovery_disabled(
        cls,
        endpoint_url: Optional[str],
        values: Mapping[str, Any],
        field: ModelField,
    ) -> Optional[str]:
        # `if "discover" in values` means: don't run our checks if "discover" didn't
        # pass validation. (NB: validation order is the field definition order)
        if "discover" in values and not values["discover"] and endpoint_url is None:
            raise ValueError(f"{field.name} is required if discovery is disabled")
        return endpoint_url
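So disabling discovery without supplying the endpoints should fail validation. An illustrative check (values invented):

```python
from pydantic import ValidationError

try:
    OIDCProviderModel(
        idp_id="example",
        idp_name="Example",
        discover=False,  # endpoints now become mandatory
        issuer="https://example.com",
        client_id="my-client-id",
        user_mapping_provider_class=None,
        user_mapping_provider_config=None,
        # authorization_endpoint / token_endpoint deliberately omitted
    )
except ValidationError as e:
    print(e)  # "authorization_endpoint is required if discovery is disabled", etc.
```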

    # the OIDC userinfo endpoint. Required if discovery is disabled and the
    # "openid" scope is not requested.
    userinfo_endpoint: Optional[StrictStr]

    @validator("userinfo_endpoint", always=True)
    def userinfo_endpoint_required_without_discovery_and_without_openid_scope(
        cls, userinfo_endpoint: Optional[str], values: Mapping[str, Any]
    ) -> Optional[str]:
        discovery_disabled = "discover" in values and not values["discover"]
        openid_scope_not_requested = (
            "scopes" in values and "openid" not in values["scopes"]
        )
        if (
            discovery_disabled
            and openid_scope_not_requested
            and userinfo_endpoint is None
        ):
            raise ValueError(
                "userinfo_endpoint is required if discovery is disabled and "
                "the 'openid' scope is not requested"
            )
        return userinfo_endpoint

    # URI where to fetch the JWKS. Required if discovery is disabled and the
    # "openid" scope is used.
    jwks_uri: Optional[StrictStr]

    @validator("jwks_uri", always=True)
    def jwks_uri_required_without_discovery_but_with_openid_scope(
        cls, jwks_uri: Optional[str], values: Mapping[str, Any]
    ) -> Optional[str]:
        discovery_disabled = "discover" in values and not values["discover"]
        openid_scope_requested = "scopes" in values and "openid" in values["scopes"]
        if discovery_disabled and openid_scope_requested and jwks_uri is None:
            raise ValueError(
                "jwks_uri is required if discovery is disabled and "
                "the 'openid' scope is requested"
            )
        return jwks_uri

    # Whether to skip metadata verification
    skip_verification: StrictBool = False

    # Whether to fetch the user profile from the userinfo endpoint. Valid
    # values are: "auto" or "userinfo_endpoint".
    user_profile_method: UserProfileMethod = UserProfileMethod.auto

    # whether to allow a user logging in via OIDC to match a pre-existing account
    # instead of failing
    allow_existing_users: StrictBool = False

    # the class of the user mapping provider
    # TODO there was logic for this
    user_mapping_provider_class: Any  # TODO: Type

    # the config of the user mapping provider
    # TODO
    user_mapping_provider_config: Any

    # attributes required in userinfo to allow login/registration
    # TODO: wouldn't this be better expressed as a Mapping[str, str]?
    attribute_requirements: Tuple[SSOAttributeRequirement, ...] = ()


class LegacyOIDCProviderModel(OIDCProviderModel):
    # These fields could be omitted in the old scheme.
    idp_id: IDP_ID_TYPE = "oidc"
    idp_name: StrictStr = "OIDC"


# TODO
# top-level config: check we don't have any duplicate idp_ids now
# compute callback url

@@ -63,6 +63,19 @@ class RoomConfig(Config):
                "Invalid value for encryption_enabled_by_default_for_room_type"
            )

        self.default_power_level_content_override = config.get(
            "default_power_level_content_override",
            None,
        )
        if self.default_power_level_content_override is not None:
            for preset in self.default_power_level_content_override:
                if preset not in vars(RoomCreationPreset).values():
                    raise ConfigError(
                        "Unrecognised room preset %s in default_power_level_content_override"
                        % preset
                    )
                # We validate the actual overrides when we try to apply them.

    def generate_config_section(self, **kwargs: Any) -> str:
        return """\
        ## Rooms ##

@@ -83,4 +96,38 @@ class RoomConfig(Config):
        # will also not affect rooms created by other servers.
        #
        #encryption_enabled_by_default_for_room_type: invite

        # Override the default power levels for rooms created on this server, per
        # room creation preset.
        #
        # The appropriate dictionary for the room preset will be applied on top
        # of the existing power levels content.
        #
        # Useful if you know that your users need special permissions in rooms
        # that they create (e.g. to send particular types of state events without
        # needing an elevated power level). This takes the same shape as the
        # `power_level_content_override` parameter in the /createRoom API, but
        # is applied before that parameter.
        #
        # Valid keys are some or all of `private_chat`, `trusted_private_chat`
        # and `public_chat`. Inside each of those should be any of the
        # properties allowed in `power_level_content_override` in the
        # /createRoom API. If any property is missing, its default value will
        # continue to be used. If any property is present, it will overwrite
        # the existing default completely (so if the `events` property exists,
        # the default event power levels will be ignored).
        #
        #default_power_level_content_override:
        #   private_chat:
        #       "events":
        #           "com.example.myeventtype" : 0
        #           "m.room.avatar": 50
        #           "m.room.canonical_alias": 50
        #           "m.room.encryption": 100
        #           "m.room.history_visibility": 100
        #           "m.room.name": 50
        #           "m.room.power_levels": 100
        #           "m.room.server_acl": 100
        #           "m.room.tombstone": 100
        #       "events_default": 1
        """
@@ -996,7 +996,7 @@ class ServerConfig(Config):
        #   federation: the server-server API (/_matrix/federation). Also implies
        #       'media', 'keys', 'openid'
        #
        #   keys: the key discovery API (/_matrix/keys).
        #   keys: the key discovery API (/_matrix/key).
        #
        #   media: the media API (/_matrix/media).
        #

@@ -414,7 +414,12 @@ def _is_membership_change_allowed(
            raise AuthError(403, "You are banned from this room")
        elif join_rule == JoinRules.PUBLIC:
            pass
        elif room_version.msc3083_join_rules and join_rule == JoinRules.RESTRICTED:
        elif (
            room_version.msc3083_join_rules and join_rule == JoinRules.RESTRICTED
        ) or (
            room_version.msc3787_knock_restricted_join_rule
            and join_rule == JoinRules.KNOCK_RESTRICTED
        ):
            # This is the same as public, but the event must contain a reference
            # to the server who authorised the join. If the event does not contain
            # the proper content it is rejected.

@@ -440,8 +445,13 @@ def _is_membership_change_allowed(
            if authorising_user_level < invite_level:
                raise AuthError(403, "Join event authorised by invalid server.")

        elif join_rule == JoinRules.INVITE or (
            room_version.msc2403_knocking and join_rule == JoinRules.KNOCK
        elif (
            join_rule == JoinRules.INVITE
            or (room_version.msc2403_knocking and join_rule == JoinRules.KNOCK)
            or (
                room_version.msc3787_knock_restricted_join_rule
                and join_rule == JoinRules.KNOCK_RESTRICTED
            )
        ):
            if not caller_in_room and not caller_invited:
                raise AuthError(403, "You are not invited to this room.")

@@ -462,7 +472,10 @@ def _is_membership_change_allowed(
        if user_level < ban_level or user_level <= target_level:
            raise AuthError(403, "You don't have permission to ban")
    elif room_version.msc2403_knocking and Membership.KNOCK == membership:
        if join_rule != JoinRules.KNOCK:
        if join_rule != JoinRules.KNOCK and (
            not room_version.msc3787_knock_restricted_join_rule
            or join_rule != JoinRules.KNOCK_RESTRICTED
        ):
            raise AuthError(403, "You don't have permission to knock")
        elif target_user_id != event.user_id:
            raise AuthError(403, "You cannot knock for other users")

@@ -15,6 +15,7 @@
# limitations under the License.

import abc
import collections.abc
import os
from typing import (
    TYPE_CHECKING,
@@ -32,9 +33,11 @@ from typing import (
    overload,
)

import attr
from typing_extensions import Literal
from unpaddedbase64 import encode_base64

from synapse.api.constants import RelationTypes
from synapse.api.room_versions import EventFormatVersions, RoomVersion, RoomVersions
from synapse.types import JsonDict, RoomStreamToken
from synapse.util.caches import intern_dict

@@ -615,3 +618,45 @@ def make_event_from_dict(
    return event_type(
        event_dict, room_version, internal_metadata_dict or {}, rejected_reason
    )


@attr.s(slots=True, frozen=True, auto_attribs=True)
class _EventRelation:
    # The target event of the relation.
    parent_id: str
    # The relation type.
    rel_type: str
    # The aggregation key. Will be None if the rel_type is not m.annotation or is
    # not a string.
    aggregation_key: Optional[str]


def relation_from_event(event: EventBase) -> Optional[_EventRelation]:
    """
    Attempt to parse relation information from an event.

    Returns:
        The event relation information, if it is valid. None, otherwise.
    """
    relation = event.content.get("m.relates_to")
    if not relation or not isinstance(relation, collections.abc.Mapping):
        # No relation information.
        return None

    # Relations must have a type and parent event ID.
    rel_type = relation.get("rel_type")
    if not isinstance(rel_type, str):
        return None

    parent_id = relation.get("event_id")
    if not isinstance(parent_id, str):
        return None

    # Annotations have a key field.
    aggregation_key = None
    if rel_type == RelationTypes.ANNOTATION:
        aggregation_key = relation.get("key")
        if not isinstance(aggregation_key, str):
            aggregation_key = None

    return _EventRelation(parent_id, rel_type, aggregation_key)
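For reference, a sketch of what `relation_from_event` extracts from typical annotation content; the event itself is elided, since only its `content` dict matters here:

```python
# Illustrative content for an annotation ("reaction") event:
content = {
    "m.relates_to": {
        "rel_type": "m.annotation",  # RelationTypes.ANNOTATION
        "event_id": "$parent:example.com",
        "key": "👍",
    }
}
# For an EventBase whose .content is the above, relation_from_event(event)
# would return:
#   _EventRelation(parent_id="$parent:example.com",
#                  rel_type="m.annotation", aggregation_key="👍")
```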
@@ -27,12 +27,12 @@ from typing import (
    Union,
)

from synapse.api.errors import Codes
from synapse.rest.media.v1._base import FileInfo
from synapse.rest.media.v1.media_storage import ReadableFileWrapper
from synapse.spam_checker_api import ALLOW, Decision, RegistrationBehaviour
from synapse.spam_checker_api import RegistrationBehaviour
from synapse.types import RoomAlias, UserProfile
from synapse.util.async_helpers import delay_cancellation, maybe_awaitable
from synapse.util.metrics import Measure

if TYPE_CHECKING:
    import synapse.events

@@ -40,34 +40,17 @@ if TYPE_CHECKING:

logger = logging.getLogger(__name__)


DEPRECATED_BOOL = bool

CHECK_EVENT_FOR_SPAM_CALLBACK = Callable[
    ["synapse.events.EventBase"],
    Awaitable[Union[ALLOW, Codes, str, DEPRECATED_BOOL]],
]
USER_MAY_JOIN_ROOM_CALLBACK = Callable[
    [str, str, bool], Awaitable[Union[ALLOW, Codes, DEPRECATED_BOOL]]
]
USER_MAY_INVITE_CALLBACK = Callable[
    [str, str, str], Awaitable[Union[ALLOW, Codes, DEPRECATED_BOOL]]
]
USER_MAY_SEND_3PID_INVITE_CALLBACK = Callable[
    [str, str, str, str], Awaitable[Union[ALLOW, Codes, DEPRECATED_BOOL]]
]
USER_MAY_CREATE_ROOM_CALLBACK = Callable[
    [str], Awaitable[Union[ALLOW, Codes, DEPRECATED_BOOL]]
]
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK = Callable[
    [str, RoomAlias], Awaitable[Union[ALLOW, Codes, DEPRECATED_BOOL]]
]
USER_MAY_PUBLISH_ROOM_CALLBACK = Callable[
    [str, str], Awaitable[Union[ALLOW, Codes, DEPRECATED_BOOL]]
]
CHECK_USERNAME_FOR_SPAM_CALLBACK = Callable[
    [UserProfile], Awaitable[Union[ALLOW, Codes, DEPRECATED_BOOL]]
    Awaitable[Union[bool, str]],
]
USER_MAY_JOIN_ROOM_CALLBACK = Callable[[str, str, bool], Awaitable[bool]]
USER_MAY_INVITE_CALLBACK = Callable[[str, str, str], Awaitable[bool]]
USER_MAY_SEND_3PID_INVITE_CALLBACK = Callable[[str, str, str, str], Awaitable[bool]]
USER_MAY_CREATE_ROOM_CALLBACK = Callable[[str], Awaitable[bool]]
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK = Callable[[str, RoomAlias], Awaitable[bool]]
USER_MAY_PUBLISH_ROOM_CALLBACK = Callable[[str, str], Awaitable[bool]]
CHECK_USERNAME_FOR_SPAM_CALLBACK = Callable[[UserProfile], Awaitable[bool]]
LEGACY_CHECK_REGISTRATION_FOR_SPAM_CALLBACK = Callable[
    [
        Optional[dict],
@@ -83,11 +66,11 @@ CHECK_REGISTRATION_FOR_SPAM_CALLBACK = Callable[
        Collection[Tuple[str, str]],
        Optional[str],
    ],
    Awaitable[Union[RegistrationBehaviour, Codes]],
    Awaitable[RegistrationBehaviour],
]
CHECK_MEDIA_FILE_FOR_SPAM_CALLBACK = Callable[
    [ReadableFileWrapper, FileInfo],
    Awaitable[Union[ALLOW, Codes, DEPRECATED_BOOL]],
    Awaitable[bool],
]


@@ -180,7 +163,10 @@ def load_legacy_spam_checkers(hs: "synapse.server.HomeServer") -> None:


class SpamChecker:
    def __init__(self) -> None:
    def __init__(self, hs: "synapse.server.HomeServer") -> None:
        self.hs = hs
        self.clock = hs.get_clock()

        self._check_event_for_spam_callbacks: List[CHECK_EVENT_FOR_SPAM_CALLBACK] = []
        self._user_may_join_room_callbacks: List[USER_MAY_JOIN_ROOM_CALLBACK] = []
        self._user_may_invite_callbacks: List[USER_MAY_INVITE_CALLBACK] = []

@@ -258,7 +244,7 @@ class SpamChecker:

    async def check_event_for_spam(
        self, event: "synapse.events.EventBase"
    ) -> Union[ALLOW, Codes, str]:
    ) -> Union[bool, str]:
        """Checks if a given event is considered "spammy" by this server.

        If the server considers an event spammy, then it will be rejected if
@@ -269,29 +255,22 @@ class SpamChecker:
            event: the event to be checked

        Returns:
            - on `ALLOW`, the event is considered good (non-spammy) and should
              be let through. Other spamcheck filters may still reject it.
            - on `Codes`, the event is considered spammy and is rejected with a specific
              error message/code.
            - on `str`, the event is considered spammy and the string is used as error
              message.
            True or a string if the event is spammy. If a string is returned it
            will be used as the error message returned to the user.
        """
        for callback in self._check_event_for_spam_callbacks:
            res: Union[ALLOW, Codes, str, DEPRECATED_BOOL] = await delay_cancellation(
                callback(event)
            )
            if res is False or res is ALLOW:
                continue
            elif res is True:
                return Codes.FORBIDDEN
            else:
            with Measure(
                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
            ):
                res: Union[bool, str] = await delay_cancellation(callback(event))
            if res:
                return res

        return ALLOW
        return False
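For module authors, a minimal sketch of a callback matching the `Union[bool, str]` contract above; the heuristic and the module-registration plumbing are invented:

```python
from typing import Union

import synapse.events

# Hypothetical spam-check callback; the keyword heuristic is made up.
async def my_check_event_for_spam(
    event: "synapse.events.EventBase",
) -> Union[bool, str]:
    body = event.content.get("body", "")
    if "cheap watches" in body:
        return "Message rejected as spam"  # str: reject with this message
    return False  # False: not spam; later callbacks may still reject it
```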
|
||||
|
||||
async def user_may_join_room(
|
||||
self, user_id: str, room_id: str, is_invited: bool
|
||||
) -> Decision:
|
||||
) -> bool:
|
||||
"""Checks if a given users is allowed to join a room.
|
||||
Not called when a user creates a room.
|
||||
|
||||
@@ -301,54 +280,54 @@ class SpamChecker:
|
||||
is_invited: Whether the user is invited into the room
|
||||
|
||||
Returns:
|
||||
- on `ALLOW`, the action is permitted.
|
||||
- on `Codes`, the action is rejected with a specific error message/code.
|
||||
Whether the user may join the room
|
||||
"""
|
||||
for callback in self._user_may_join_room_callbacks:
|
||||
may_join_room = await delay_cancellation(
|
||||
callback(user_id, room_id, is_invited)
|
||||
)
|
||||
if may_join_room is True or may_join_room is ALLOW:
|
||||
continue
|
||||
elif may_join_room is False:
|
||||
return Codes.FORBIDDEN
|
||||
else:
|
||||
return may_join_room
|
||||
with Measure(
|
||||
self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
|
||||
):
|
||||
may_join_room = await delay_cancellation(
|
||||
callback(user_id, room_id, is_invited)
|
||||
)
|
||||
if may_join_room is False:
|
||||
return False
|
||||
|
||||
return ALLOW
|
||||
return True
|
||||
|
||||
async def user_may_invite(
|
||||
self, inviter_userid: str, invitee_userid: str, room_id: str
|
||||
) -> Decision:
|
||||
) -> bool:
|
||||
"""Checks if a given user may send an invite
|
||||
|
||||
If this method returns false, the invite will be rejected.
|
||||
|
||||
Args:
|
||||
inviter_userid: The user ID of the sender of the invitation
|
||||
invitee_userid: The user ID targeted in the invitation
|
||||
room_id: The room ID
|
||||
|
||||
Returns:
|
||||
- on `ALLOW`, the action is permitted.
|
||||
- on `Codes`, the action is rejected with a specific error message/code.
|
||||
True if the user may send an invite, otherwise False
|
||||
"""
|
||||
for callback in self._user_may_invite_callbacks:
|
||||
may_invite = await delay_cancellation(
|
||||
callback(inviter_userid, invitee_userid, room_id)
|
||||
)
|
||||
if may_invite is True or may_invite is ALLOW:
|
||||
continue
|
||||
elif may_invite is False:
|
||||
return Codes.FORBIDDEN
|
||||
else:
|
||||
return may_invite
|
||||
with Measure(
|
||||
self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
|
||||
):
|
||||
may_invite = await delay_cancellation(
|
||||
callback(inviter_userid, invitee_userid, room_id)
|
||||
)
|
||||
if may_invite is False:
|
||||
return False
|
||||
|
||||
return ALLOW
|
||||
return True
|
||||
|
||||
async def user_may_send_3pid_invite(
|
||||
self, inviter_userid: str, medium: str, address: str, room_id: str
|
||||
) -> Decision:
|
||||
) -> bool:
|
||||
"""Checks if a given user may invite a given threepid into the room
|
||||
|
||||
If this method returns false, the threepid invite will be rejected.
|
||||
|
||||
Note that if the threepid is already associated with a Matrix user ID, Synapse
|
||||
will call user_may_invite with said user ID instead.
|
||||
|
||||
@@ -359,94 +338,90 @@ class SpamChecker:
|
||||
room_id: The room ID
|
||||
|
||||
Returns:
|
||||
- on `ALLOW`, the action is permitted.
|
||||
- on `Codes`, the action is rejected with a specific error message/code.
|
||||
True if the user may send the invite, otherwise False
|
||||
"""
|
||||
for callback in self._user_may_send_3pid_invite_callbacks:
|
||||
may_send_3pid_invite = await delay_cancellation(
|
||||
callback(inviter_userid, medium, address, room_id)
|
||||
)
|
||||
if may_send_3pid_invite is True or may_send_3pid_invite is ALLOW:
|
||||
continue
|
||||
elif may_send_3pid_invite is False:
|
||||
return Codes.FORBIDDEN
|
||||
else:
|
||||
return may_send_3pid_invite
|
||||
with Measure(
|
||||
self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
|
||||
):
|
||||
may_send_3pid_invite = await delay_cancellation(
|
||||
callback(inviter_userid, medium, address, room_id)
|
||||
)
|
||||
if may_send_3pid_invite is False:
|
||||
return False
|
||||
|
||||
return ALLOW
|
||||
return True
|
||||
|
||||
async def user_may_create_room(self, userid: str) -> Decision:
|
||||
async def user_may_create_room(self, userid: str) -> bool:
|
||||
"""Checks if a given user may create a room
|
||||
|
||||
If this method returns false, the creation request will be rejected.
|
||||
|
||||
Args:
|
||||
userid: The ID of the user attempting to create a room
|
||||
|
||||
Returns:
|
||||
- on `ALLOW`, the action is permitted.
|
||||
- on `Codes`, the action is rejected with a specific error message/code.
|
||||
True if the user may create a room, otherwise False
|
||||
"""
|
||||
for callback in self._user_may_create_room_callbacks:
|
||||
may_create_room = await delay_cancellation(callback(userid))
|
||||
if may_create_room is True or may_create_room is ALLOW:
|
||||
continue
|
||||
elif may_create_room is False:
|
||||
return Codes.FORBIDDEN
|
||||
else:
|
||||
return may_create_room
|
||||
with Measure(
|
||||
self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
|
||||
):
|
||||
may_create_room = await delay_cancellation(callback(userid))
|
||||
if may_create_room is False:
|
||||
return False
|
||||
|
||||
return ALLOW
|
||||
return True
|
||||
|
||||
     async def user_may_create_room_alias(
         self, userid: str, room_alias: RoomAlias
-    ) -> Decision:
+    ) -> bool:
         """Checks if a given user may create a room alias

         If this method returns false, the association request will be rejected.

         Args:
             userid: The ID of the user attempting to create a room alias
             room_alias: The alias to be created

         Returns:
-            - on `ALLOW`, the action is permitted.
-            - on `Codes`, the action is rejected with a specific error message/code.
+            True if the user may create a room alias, otherwise False
         """
         for callback in self._user_may_create_room_alias_callbacks:
-            may_create_room_alias = await delay_cancellation(
-                callback(userid, room_alias)
-            )
-            if may_create_room_alias is True or may_create_room_alias is ALLOW:
-                continue
-            elif may_create_room_alias is False:
-                return Codes.FORBIDDEN
-            else:
-                return may_create_room_alias
+            with Measure(
+                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
+            ):
+                may_create_room_alias = await delay_cancellation(
+                    callback(userid, room_alias)
+                )
+                if may_create_room_alias is False:
+                    return False

-        return ALLOW
+        return True

-    async def user_may_publish_room(
-        self, userid: str, room_id: str
-    ) -> Union[ALLOW, Codes, DEPRECATED_BOOL]:
+    async def user_may_publish_room(self, userid: str, room_id: str) -> bool:
         """Checks if a given user may publish a room to the directory

         If this method returns false, the publish request will be rejected.

         Args:
             userid: The user ID attempting to publish the room
             room_id: The ID of the room that would be published

         Returns:
-            - on `ALLOW`, the action is permitted.
-            - on `Codes`, the action is rejected with a specific error message/code.
+            True if the user may publish the room, otherwise False
         """
         for callback in self._user_may_publish_room_callbacks:
-            may_publish_room = await delay_cancellation(callback(userid, room_id))
-            if may_publish_room is True or may_publish_room is ALLOW:
-                continue
-            elif may_publish_room is False:
-                return Codes.FORBIDDEN
-            else:
-                return may_publish_room
+            with Measure(
+                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
+            ):
+                may_publish_room = await delay_cancellation(callback(userid, room_id))
+                if may_publish_room is False:
+                    return False

-        return ALLOW
+        return True

-    async def check_username_for_spam(self, user_profile: UserProfile) -> Decision:
+    async def check_username_for_spam(self, user_profile: UserProfile) -> bool:
         """Checks if a user ID or display name are considered "spammy" by this server.

         If the server considers a username spammy, then it will not be included in
@@ -459,21 +434,19 @@ class SpamChecker:
             * avatar_url

         Returns:
-            - on `ALLOW`, the action is permitted.
-            - on `Codes`, the action is rejected with a specific error message/code.
+            True if the user is spammy.
         """
         for callback in self._check_username_for_spam_callbacks:
-            # Make a copy of the user profile object to ensure the spam checker cannot
-            # modify it.
-            is_spam = await delay_cancellation(callback(user_profile.copy()))
-            if is_spam is False or is_spam is ALLOW:
-                continue
-            elif is_spam is True:
-                return Codes.FORBIDDEN
-            else:
-                return is_spam
+            with Measure(
+                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
+            ):
+                # Make a copy of the user profile object to ensure the spam checker
+                # cannot modify it.
+                res = await delay_cancellation(callback(user_profile.copy()))
+                if res:
+                    return True

-        return ALLOW
+        return False

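A callback implementing the bool contract above could look like the sketch below. It is illustrative only: the keyword list is made up, and registration would go through `register_spam_checker_callbacks(check_username_for_spam=...)` as with the earlier example.

```python
# A hedged sketch of a check_username_for_spam callback (bool contract).
from typing import Any, Dict

BAD_WORDS = {"viagra", "casino"}  # invented for illustration


async def check_username_for_spam(user_profile: Dict[str, Any]) -> bool:
    # The profile carries user_id, display_name and avatar_url; Synapse
    # passes a copy, so it can be inspected freely without side effects.
    display_name = (user_profile.get("display_name") or "").lower()
    return any(word in display_name for word in BAD_WORDS)
```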
     async def check_registration_for_spam(
         self,
@@ -498,11 +471,12 @@ class SpamChecker:
         """

         for callback in self._check_registration_for_spam_callbacks:
-            behaviour = await delay_cancellation(
-                callback(email_threepid, username, request_info, auth_provider_id)
-            )
-            if isinstance(behaviour, Codes):
-                return behaviour
+            with Measure(
+                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
+            ):
+                behaviour = await delay_cancellation(
+                    callback(email_threepid, username, request_info, auth_provider_id)
+                )
+            assert isinstance(behaviour, RegistrationBehaviour)
+            if behaviour != RegistrationBehaviour.ALLOW:
+                return behaviour
@@ -511,7 +485,7 @@ class SpamChecker:

     async def check_media_file_for_spam(
         self, file_wrapper: ReadableFileWrapper, file_info: FileInfo
-    ) -> Decision:
+    ) -> bool:
         """Checks if a piece of newly uploaded media should be blocked.

         This will be called for local uploads, downloads of remote media, each
@@ -533,22 +507,22 @@ class SpamChecker:

                 return False


         Args:
             file: An object that allows reading the contents of the media.
             file_info: Metadata about the file.

         Returns:
-            - on `ALLOW`, the action is permitted.
-            - on `Codes`, the action is rejected with a specific error message/code.
+            True if the media should be blocked or False if it should be
+            allowed.
         """

         for callback in self._check_media_file_for_spam_callbacks:
-            is_spam = await delay_cancellation(callback(file_wrapper, file_info))
-            if is_spam is False or is_spam is ALLOW:
-                continue
-            elif is_spam is True:
-                return Codes.FORBIDDEN
-            else:
-                return is_spam
+            with Measure(
+                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
+            ):
+                spam = await delay_cancellation(callback(file_wrapper, file_info))
+                if spam:
+                    return True

-        return ALLOW
+        return False

@@ -15,7 +15,6 @@
 import logging
 from typing import TYPE_CHECKING

-import synapse
 from synapse.api.constants import MAX_DEPTH, EventContentFields, EventTypes, Membership
 from synapse.api.errors import Codes, SynapseError
 from synapse.api.room_versions import EventFormatVersions, RoomVersion
@@ -99,9 +98,9 @@ class FederationBase:
             )
             return redacted_event

-        spam_check = await self.spam_checker.check_event_for_spam(pdu)
+        result = await self.spam_checker.check_event_for_spam(pdu)

-        if spam_check is not synapse.spam_checker_api.ALLOW:
+        if result:
             logger.warning("Event contains spam, soft-failing %s", pdu.event_id)
             # we redact (to save disk space) as well as soft-failing (to stop
             # using the event in prev_events).

@@ -15,7 +15,17 @@
 import abc
 import logging
 from collections import OrderedDict
-from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Set, Tuple
+from typing import (
+    TYPE_CHECKING,
+    Collection,
+    Dict,
+    Hashable,
+    Iterable,
+    List,
+    Optional,
+    Set,
+    Tuple,
+)

 import attr
 from prometheus_client import Counter
@@ -409,7 +419,7 @@ class FederationSender(AbstractFederationSender):
                 )
                 return

-            destinations: Optional[Set[str]] = None
+            destinations: Optional[Collection[str]] = None
             if not event.prev_event_ids():
                 # If there are no prev event IDs then the state is empty
                 # and so no remote servers in the room
@@ -444,7 +454,7 @@ class FederationSender(AbstractFederationSender):
                 )
                 return

-            destinations = {
+            sharded_destinations = {
                 d
                 for d in destinations
                 if self._federation_shard_config.should_handle(
@@ -456,12 +466,12 @@ class FederationSender(AbstractFederationSender):
                 # If we are sending the event on behalf of another server
                 # then it already has the event and there is no reason to
                 # send the event to it.
-                destinations.discard(send_on_behalf_of)
+                sharded_destinations.discard(send_on_behalf_of)

-            logger.debug("Sending %s to %r", event, destinations)
+            logger.debug("Sending %s to %r", event, sharded_destinations)

-            if destinations:
-                await self._send_pdu(event, destinations)
+            if sharded_destinations:
+                await self._send_pdu(event, sharded_destinations)

             now = self.clock.time_msec()
             ts = await self.store.get_received_ts(event.event_id)

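The rename to `sharded_destinations` keeps the unfiltered destination set distinct from the subset this sender instance is responsible for. The sketch below shows the general idea of a stable-hash partitioner in the spirit of `should_handle(instance_name, destination)`; it is not Synapse's implementation (the real logic lives in `ShardedWorkerHandlingConfig`), and the hash choice here is arbitrary.

```python
# Illustrative only: agree on a destination split across sender instances
# by hashing the destination name, so every worker computes the same owner.
import binascii
from typing import List


def should_handle(instances: List[str], my_instance: str, key: str) -> bool:
    if len(instances) <= 1:
        return True
    # CRC32 is cheap and stable, so all instances agree on the assignment.
    index = binascii.crc32(key.encode("utf-8")) % len(instances)
    return instances[index] == my_instance


instances = ["sender1", "sender2"]
destinations = {"matrix.org", "example.com", "chat.example.org"}
sharded = {d for d in destinations if should_handle(instances, "sender1", d)}
```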
@@ -21,7 +21,7 @@ from typing import TYPE_CHECKING, Any, Awaitable, Callable, Dict, Optional, Tupl

 from synapse.api.errors import Codes, FederationDeniedError, SynapseError
 from synapse.api.urls import FEDERATION_V1_PREFIX
-from synapse.http.server import HttpServer, ServletCallback
+from synapse.http.server import HttpServer, ServletCallback, is_method_cancellable
 from synapse.http.servlet import parse_json_object_from_request
 from synapse.http.site import SynapseRequest
 from synapse.logging.context import run_in_background
@@ -169,14 +169,16 @@ def _parse_auth_header(header_bytes: bytes) -> Tuple[str, str, str, Optional[str
     """
     try:
         header_str = header_bytes.decode("utf-8")
-        params = header_str.split(" ")[1].split(",")
+        params = re.split(" +", header_str)[1].split(",")
         param_dict: Dict[str, str] = {
-            k: v for k, v in [param.split("=", maxsplit=1) for param in params]
+            k.lower(): v for k, v in [param.split("=", maxsplit=1) for param in params]
         }

         def strip_quotes(value: str) -> str:
             if value.startswith('"'):
-                return value[1:-1]
+                return re.sub(
+                    "\\\\(.)", lambda matchobj: matchobj.group(1), value[1:-1]
+                )
             else:
                 return value

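The hunk above makes the federation auth-header parser tolerate runs of spaces, lower-case parameter names, and unescape backslash-escaped characters inside quoted values. A standalone sketch of the resulting parsing rules (mirroring the lines shown, with an invented example header):

```python
# Standalone sketch of the fixed X-Matrix header parsing rules.
import re

header = 'X-Matrix  origin=origin.example.com,key="ed25519:1",sig="ABC\\"DEF"'

# re.split(" +", ...) tolerates multiple spaces after the scheme name.
params = re.split(" +", header)[1].split(",")
# Parameter names are treated case-insensitively.
param_dict = {k.lower(): v for k, v in (p.split("=", maxsplit=1) for p in params)}


def strip_quotes(value: str) -> str:
    if value.startswith('"'):
        # Drop the surrounding quotes and undo backslash-escaping.
        return re.sub("\\\\(.)", lambda m: m.group(1), value[1:-1])
    return value


assert strip_quotes(param_dict["origin"]) == "origin.example.com"
assert strip_quotes(param_dict["sig"]) == 'ABC"DEF'
```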
@@ -373,6 +375,17 @@ class BaseFederationServlet:
             if code is None:
                 continue

+            if is_method_cancellable(code):
+                # The wrapper added by `self._wrap` will inherit the cancellable flag,
+                # but the wrapper itself does not support cancellation yet.
+                # Once resolved, the cancellation tests in
+                # `tests/federation/transport/server/test__base.py` can be re-enabled.
+                raise Exception(
+                    f"{self.__class__.__name__}.on_{method} has been marked as "
+                    "cancellable, but federation servlets do not support cancellation "
+                    "yet."
+                )
+
             server.register_paths(
                 method,
                 (pattern,),

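The guard fails fast at registration time rather than mis-handling a cancelled request later. As a sketch of the underlying mechanism (in Synapse the flag is set by a decorator in `synapse.http.server` and read with `is_method_cancellable`; this standalone version only mirrors the idea):

```python
# Hedged sketch: a "cancellable" marker stored as a function attribute.
from typing import Any, Callable


def cancellable(method: Callable) -> Callable:
    # Mark the handler as safe to cancel when the client disconnects.
    method.cancellable = True  # type: ignore[attr-defined]
    return method


def is_method_cancellable(method: Any) -> bool:
    return getattr(method, "cancellable", False)


@cancellable
async def on_GET() -> None:
    ...


assert is_method_cancellable(on_GET)
```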
@@ -934,7 +934,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
         # Before deleting the group lets kick everyone out of it
         users = await self.store.get_users_in_group(group_id, include_private=True)

-        async def _kick_user_from_group(user_id):
+        async def _kick_user_from_group(user_id: str) -> None:
             if self.hs.is_mine_id(user_id):
                 groups_local = self.hs.get_groups_local_handler()
                 assert isinstance(

@@ -23,7 +23,7 @@ from synapse.replication.http.account_data import (
     ReplicationUserAccountDataRestServlet,
 )
 from synapse.streams import EventSource
-from synapse.types import JsonDict, UserID
+from synapse.types import JsonDict, StreamKeyType, UserID

 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -105,7 +105,7 @@ class AccountDataHandler:
             )

             self._notifier.on_new_event(
-                "account_data_key", max_stream_id, users=[user_id]
+                StreamKeyType.ACCOUNT_DATA, max_stream_id, users=[user_id]
             )

             await self._notify_modules(user_id, room_id, account_data_type, content)
@@ -141,7 +141,7 @@ class AccountDataHandler:
             )

             self._notifier.on_new_event(
-                "account_data_key", max_stream_id, users=[user_id]
+                StreamKeyType.ACCOUNT_DATA, max_stream_id, users=[user_id]
             )

             await self._notify_modules(user_id, None, account_data_type, content)
@@ -176,7 +176,7 @@ class AccountDataHandler:
             )

             self._notifier.on_new_event(
-                "account_data_key", max_stream_id, users=[user_id]
+                StreamKeyType.ACCOUNT_DATA, max_stream_id, users=[user_id]
             )
             return max_stream_id
         else:
@@ -201,7 +201,7 @@ class AccountDataHandler:
             )

             self._notifier.on_new_event(
-                "account_data_key", max_stream_id, users=[user_id]
+                StreamKeyType.ACCOUNT_DATA, max_stream_id, users=[user_id]
             )
             return max_stream_id
         else:

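These hunks (and the ones that follow for devices, presence, receipts and pagination) replace bare string stream keys with named constants. The constants keep the old string values, so every existing comparison and `on_new_event` call keeps working; the sketch below reflects the values visible in the before/after lines of this diff, though the exact definition in `synapse.types` may differ.

```python
# Sketch of the constant class these hunks rely on; values mirror the old
# string literals, so StreamKeyType.ACCOUNT_DATA == "account_data_key".
class StreamKeyType:
    ROOM = "room_key"
    PRESENCE = "presence_key"
    TYPING = "typing_key"
    RECEIPT = "receipt_key"
    ACCOUNT_DATA = "account_data_key"
    TO_DEVICE = "to_device_key"
    DEVICE_LIST = "device_list_key"
```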
@@ -38,6 +38,7 @@ from synapse.types import (
     JsonDict,
     RoomAlias,
     RoomStreamToken,
+    StreamKeyType,
     UserID,
 )
 from synapse.util.async_helpers import Linearizer
@@ -213,8 +214,8 @@ class ApplicationServicesHandler:
         Args:
             stream_key: The stream the event came from.

-                `stream_key` can be "typing_key", "receipt_key", "presence_key",
-                "to_device_key" or "device_list_key". Any other value for `stream_key`
+                `stream_key` can be StreamKeyType.TYPING, StreamKeyType.RECEIPT, StreamKeyType.PRESENCE,
+                StreamKeyType.TO_DEVICE or StreamKeyType.DEVICE_LIST. Any other value for `stream_key`
                 will cause this function to return early.

             Ephemeral events will only be pushed to appservices that have opted into
@@ -235,11 +236,11 @@ class ApplicationServicesHandler:
         # Only the following streams are currently supported.
         # FIXME: We should use constants for these values.
         if stream_key not in (
-            "typing_key",
-            "receipt_key",
-            "presence_key",
-            "to_device_key",
-            "device_list_key",
+            StreamKeyType.TYPING,
+            StreamKeyType.RECEIPT,
+            StreamKeyType.PRESENCE,
+            StreamKeyType.TO_DEVICE,
+            StreamKeyType.DEVICE_LIST,
         ):
             return
@@ -258,14 +259,14 @@ class ApplicationServicesHandler:

         # Ignore to-device messages if the feature flag is not enabled
         if (
-            stream_key == "to_device_key"
+            stream_key == StreamKeyType.TO_DEVICE
             and not self._msc2409_to_device_messages_enabled
         ):
             return

         # Ignore device lists if the feature flag is not enabled
         if (
-            stream_key == "device_list_key"
+            stream_key == StreamKeyType.DEVICE_LIST
             and not self._msc3202_transaction_extensions_enabled
         ):
             return
@@ -283,15 +284,15 @@ class ApplicationServicesHandler:
             if (
                 stream_key
                 in (
-                    "typing_key",
-                    "receipt_key",
-                    "presence_key",
-                    "to_device_key",
+                    StreamKeyType.TYPING,
+                    StreamKeyType.RECEIPT,
+                    StreamKeyType.PRESENCE,
+                    StreamKeyType.TO_DEVICE,
                 )
                 and service.supports_ephemeral
             )
             or (
-                stream_key == "device_list_key"
+                stream_key == StreamKeyType.DEVICE_LIST
                 and service.msc3202_transaction_extensions
             )
         ]
@@ -317,7 +318,7 @@ class ApplicationServicesHandler:
         logger.debug("Checking interested services for %s", stream_key)
         with Measure(self.clock, "notify_interested_services_ephemeral"):
             for service in services:
-                if stream_key == "typing_key":
+                if stream_key == StreamKeyType.TYPING:
                     # Note that we don't persist the token (via set_appservice_stream_type_pos)
                     # for typing_key due to performance reasons and due to their highly
                     # ephemeral nature.
@@ -333,7 +334,7 @@ class ApplicationServicesHandler:
                 async with self._ephemeral_events_linearizer.queue(
                     (service.id, stream_key)
                 ):
-                    if stream_key == "receipt_key":
+                    if stream_key == StreamKeyType.RECEIPT:
                         events = await self._handle_receipts(service, new_token)
                         self.scheduler.enqueue_for_appservice(service, ephemeral=events)

@@ -342,7 +343,7 @@ class ApplicationServicesHandler:
                             service, "read_receipt", new_token
                         )

-                    elif stream_key == "presence_key":
+                    elif stream_key == StreamKeyType.PRESENCE:
                         events = await self._handle_presence(service, users, new_token)
                         self.scheduler.enqueue_for_appservice(service, ephemeral=events)

@@ -351,7 +352,7 @@ class ApplicationServicesHandler:
                             service, "presence", new_token
                         )

-                    elif stream_key == "to_device_key":
+                    elif stream_key == StreamKeyType.TO_DEVICE:
                         # Retrieve a list of to-device message events, as well as the
                         # maximum stream token of the messages we were able to retrieve.
                         to_device_messages = await self._get_to_device_messages(
@@ -366,7 +367,7 @@ class ApplicationServicesHandler:
                             service, "to_device", new_token
                         )

-                    elif stream_key == "device_list_key":
+                    elif stream_key == StreamKeyType.DEVICE_LIST:
                         device_list_summary = await self._get_device_list_summary(
                             service, new_token
                         )

@@ -43,6 +43,7 @@ from synapse.metrics.background_process_metrics import (
 )
 from synapse.types import (
     JsonDict,
+    StreamKeyType,
     StreamToken,
     UserID,
     get_domain_from_id,
@@ -502,7 +503,7 @@ class DeviceHandler(DeviceWorkerHandler):
         # specify the user ID too since the user should always get their own device list
         # updates, even if they aren't in any rooms.
         self.notifier.on_new_event(
-            "device_list_key", position, users={user_id}, rooms=room_ids
+            StreamKeyType.DEVICE_LIST, position, users={user_id}, rooms=room_ids
         )

         # We may need to do some processing asynchronously for local user IDs.
@@ -523,7 +524,9 @@ class DeviceHandler(DeviceWorkerHandler):
             from_user_id, user_ids
         )

-        self.notifier.on_new_event("device_list_key", position, users=[from_user_id])
+        self.notifier.on_new_event(
+            StreamKeyType.DEVICE_LIST, position, users=[from_user_id]
+        )

     async def user_left_room(self, user: UserID, room_id: str) -> None:
         user_id = user.to_string()

@@ -26,7 +26,7 @@ from synapse.logging.opentracing import (
     set_tag,
 )
 from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
-from synapse.types import JsonDict, Requester, UserID, get_domain_from_id
+from synapse.types import JsonDict, Requester, StreamKeyType, UserID, get_domain_from_id
 from synapse.util import json_encoder
 from synapse.util.stringutils import random_string

@@ -151,7 +151,7 @@ class DeviceMessageHandler:
         # Notify listeners that there are new to-device messages to process,
         # handing them the latest stream id.
         self.notifier.on_new_event(
-            "to_device_key", last_stream_id, users=local_messages.keys()
+            StreamKeyType.TO_DEVICE, last_stream_id, users=local_messages.keys()
         )

     async def _check_for_unknown_devices(
@@ -285,7 +285,7 @@ class DeviceMessageHandler:
         # Notify listeners that there are new to-device messages to process,
         # handing them the latest stream id.
         self.notifier.on_new_event(
-            "to_device_key", last_stream_id, users=local_messages.keys()
+            StreamKeyType.TO_DEVICE, last_stream_id, users=local_messages.keys()
         )

         if self.federation_sender:

@@ -16,7 +16,6 @@ import logging
 import string
 from typing import TYPE_CHECKING, Iterable, List, Optional

-import synapse
 from synapse.api.constants import MAX_ALIAS_LENGTH, EventTypes
 from synapse.api.errors import (
     AuthError,
@@ -72,6 +71,9 @@ class DirectoryHandler:
         if wchar in room_alias.localpart:
             raise SynapseError(400, "Invalid characters in room alias")

+        if ":" in room_alias.localpart:
+            raise SynapseError(400, "Invalid character in room alias localpart: ':'.")
+
         if not self.hs.is_mine(room_alias):
             raise SynapseError(400, "Room alias must be local")
             # TODO(erikj): Change this.
@@ -138,13 +140,10 @@ class DirectoryHandler:
                     403, "You must be in the room to create an alias for it"
                 )

-            spam_check = await self.spam_checker.user_may_create_room_alias(
+            if not await self.spam_checker.user_may_create_room_alias(
                 user_id, room_alias
-            )
-            if spam_check is not synapse.spam_checker_api.ALLOW:
-                raise AuthError(
-                    403, "This alias creation request has been rejected", spam_check
-                )
+            ):
+                raise AuthError(403, "This user is not permitted to create this alias")

             if not self.config.roomdirectory.is_alias_creation_allowed(
                 user_id, room_id, room_alias_str
@@ -430,12 +429,9 @@ class DirectoryHandler:
         """
         user_id = requester.user.to_string()

-        spam_check = await self.spam_checker.user_may_publish_room(user_id, room_id)
-        if spam_check is not synapse.spam_checker_api.ALLOW:
+        if not await self.spam_checker.user_may_publish_room(user_id, room_id):
             raise AuthError(
-                403,
-                "This request to publish a room to the room list has been rejected",
-                spam_check,
+                403, "This user is not permitted to publish rooms to the room list"
             )

         if requester.is_guest:

@@ -15,7 +15,7 @@
 # limitations under the License.

 import logging
-from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple
+from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Tuple

 import attr
 from canonicaljson import encode_canonical_json
@@ -1105,22 +1105,19 @@ class E2eKeysHandler:
             # can request over federation
             raise NotFoundError("No %s key found for %s" % (key_type, user_id))

-        (
-            key,
-            key_id,
-            verify_key,
-        ) = await self._retrieve_cross_signing_keys_for_remote_user(user, key_type)
-
-        if key is None:
+        cross_signing_keys = await self._retrieve_cross_signing_keys_for_remote_user(
+            user, key_type
+        )
+        if cross_signing_keys is None:
             raise NotFoundError("No %s key found for %s" % (key_type, user_id))

-        return key, key_id, verify_key
+        return cross_signing_keys

     async def _retrieve_cross_signing_keys_for_remote_user(
         self,
         user: UserID,
         desired_key_type: str,
-    ) -> Tuple[Optional[dict], Optional[str], Optional[VerifyKey]]:
+    ) -> Optional[Tuple[Dict[str, Any], str, VerifyKey]]:
         """Queries cross-signing keys for a remote user and saves them to the database

         Only the key specified by `key_type` will be returned, while all retrieved keys
@@ -1146,12 +1143,10 @@ class E2eKeysHandler:
                 type(e),
                 e,
             )
-            return None, None, None
+            return None

         # Process each of the retrieved cross-signing keys
-        desired_key = None
-        desired_key_id = None
-        desired_verify_key = None
+        desired_key_data = None
         retrieved_device_ids = []
         for key_type in ["master", "self_signing"]:
             key_content = remote_result.get(key_type + "_key")
@@ -1196,9 +1191,7 @@ class E2eKeysHandler:

             # If this is the desired key type, save it and its ID/VerifyKey
             if key_type == desired_key_type:
-                desired_key = key_content
-                desired_verify_key = verify_key
-                desired_key_id = key_id
+                desired_key_data = key_content, key_id, verify_key

             # At the same time, store this key in the db for subsequent queries
             await self.store.set_e2e_cross_signing_key(
@@ -1212,7 +1205,7 @@ class E2eKeysHandler:
             user.to_string(), retrieved_device_ids
         )

-        return desired_key, desired_key_id, desired_verify_key
+        return desired_key_data


 def _check_cross_signing_key(

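The typing change here is worth spelling out: returning `Optional[Tuple[...]]` instead of a tuple of three `Optional`s means one `None` check narrows all three values at once under mypy, and a half-populated result can never exist. A minimal illustration (the payload values are invented):

```python
# Why Optional[Tuple[...]] beats Tuple[Optional[...], ...]: one None check
# narrows everything; callers never see a partially-missing result.
from typing import Optional, Tuple


def lookup() -> Optional[Tuple[dict, str, bytes]]:
    # None signals "not found"; success returns all three values together.
    return ({"keys": {}}, "ed25519:1", b"\x00" * 32)


result = lookup()
if result is None:
    raise LookupError("no key found")
key, key_id, verify_key = result  # all three are non-Optional here
```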
@@ -241,7 +241,15 @@ class EventAuthHandler:

         # If the join rule is not restricted, this doesn't apply.
         join_rules_event = await self._store.get_event(join_rules_event_id)
-        return join_rules_event.content.get("join_rule") == JoinRules.RESTRICTED
+        content_join_rule = join_rules_event.content.get("join_rule")
+        if content_join_rule == JoinRules.RESTRICTED:
+            return True
+
+        # also check for MSC3787 behaviour
+        if room_version.msc3787_knock_restricted_join_rule:
+            return content_join_rule == JoinRules.KNOCK_RESTRICTED
+
+        return False

     async def get_rooms_that_allow_join(
         self, state_ids: StateMap[str]

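With this hunk, a `knock_restricted` join rule also counts as restricted, but only on room versions that support MSC3787. A standalone sketch of the widened check (the function name and flag parameter are illustrative):

```python
# Standalone sketch of the widened join-rule check from the hunk above.
def is_restricted(join_rule: str, msc3787_supported: bool) -> bool:
    if join_rule == "restricted":
        return True
    # MSC3787 introduces "knock_restricted", honoured only on room
    # versions that declare support for it.
    if msc3787_supported:
        return join_rule == "knock_restricted"
    return False


assert is_restricted("knock_restricted", msc3787_supported=True)
assert not is_restricted("knock_restricted", msc3787_supported=False)
```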
@@ -27,7 +27,6 @@ from signedjson.key import decode_verify_key_bytes
 from signedjson.sign import verify_signed_json
 from unpaddedbase64 import decode_base64

-import synapse
 from synapse import event_auth
 from synapse.api.constants import EventContentFields, EventTypes, Membership
 from synapse.api.errors import (
@@ -800,14 +799,11 @@ class FederationHandler:
         if self.hs.config.server.block_non_admin_invites:
             raise SynapseError(403, "This server does not accept room invites")

-        spam_check = await self.spam_checker.user_may_invite(
+        if not await self.spam_checker.user_may_invite(
             event.sender, event.state_key, event.room_id
-        )
-        if spam_check is not synapse.spam_checker_api.ALLOW:
+        ):
             raise SynapseError(
-                403,
-                "This user is not permitted to send invites to this server/user",
-                spam_check,
+                403, "This user is not permitted to send invites to this server/user"
             )

         membership = event.content.get("membership")

@@ -103,7 +103,7 @@ class FederationEventHandler:
         self._event_creation_handler = hs.get_event_creation_handler()
         self._event_auth_handler = hs.get_event_auth_handler()
         self._message_handler = hs.get_message_handler()
-        self._action_generator = hs.get_action_generator()
+        self._bulk_push_rule_evaluator = hs.get_bulk_push_rule_evaluator()
         self._state_resolution_handler = hs.get_state_resolution_handler()
         # avoid a circular dependency by deferring execution here
         self._get_room_member_handler = hs.get_room_member_handler
@@ -1913,7 +1913,7 @@ class FederationEventHandler:
                 min_depth,
             )
         else:
-            await self._action_generator.handle_push_actions_for_event(
+            await self._bulk_push_rule_evaluator.action_for_event_by_user(
                 event, context
             )

@@ -30,6 +30,7 @@ from synapse.types import (
     Requester,
     RoomStreamToken,
     StateMap,
+    StreamKeyType,
     StreamToken,
     UserID,
 )
@@ -143,7 +144,7 @@ class InitialSyncHandler:
             to_key=int(now_token.receipt_key),
         )
         if self.hs.config.experimental.msc2285_enabled:
-            receipt = ReceiptEventSource.filter_out_private(receipt, user_id)
+            receipt = ReceiptEventSource.filter_out_private_receipts(receipt, user_id)

         tags_by_room = await self.store.get_tags_for_user(user_id)

@@ -220,8 +221,10 @@ class InitialSyncHandler:
                 self.storage, user_id, messages
             )

-            start_token = now_token.copy_and_replace("room_key", token)
-            end_token = now_token.copy_and_replace("room_key", room_end_token)
+            start_token = now_token.copy_and_replace(StreamKeyType.ROOM, token)
+            end_token = now_token.copy_and_replace(
+                StreamKeyType.ROOM, room_end_token
+            )
             time_now = self.clock.time_msec()

             d["messages"] = {
@@ -369,8 +372,8 @@ class InitialSyncHandler:
             self.storage, user_id, messages, is_peeking=is_peeking
         )

-        start_token = StreamToken.START.copy_and_replace("room_key", token)
-        end_token = StreamToken.START.copy_and_replace("room_key", stream_token)
+        start_token = StreamToken.START.copy_and_replace(StreamKeyType.ROOM, token)
+        end_token = StreamToken.START.copy_and_replace(StreamKeyType.ROOM, stream_token)

         time_now = self.clock.time_msec()

@@ -449,7 +452,9 @@ class InitialSyncHandler:
             if not receipts:
                 return []
             if self.hs.config.experimental.msc2285_enabled:
-                receipts = ReceiptEventSource.filter_out_private(receipts, user_id)
+                receipts = ReceiptEventSource.filter_out_private_receipts(
+                    receipts, user_id
+                )
             return receipts

         presence, receipts, (messages, token) = await make_deferred_yieldable(
@@ -472,7 +477,7 @@ class InitialSyncHandler:
             self.storage, user_id, messages, is_peeking=is_peeking
         )

-        start_token = now_token.copy_and_replace("room_key", token)
+        start_token = now_token.copy_and_replace(StreamKeyType.ROOM, token)
         end_token = now_token

         time_now = self.clock.time_msec()

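These `copy_and_replace` call sites work unchanged because `StreamKeyType.ROOM` is the string `"room_key"`, i.e. the attribute name on the token. A hedged, attrs-style illustration of that contract (this is not Synapse's actual `StreamToken`):

```python
# Hedged sketch: copy_and_replace(key, value) swaps one component of an
# immutable token, where `key` is the attribute name as a string.
import attr


@attr.s(frozen=True, auto_attribs=True)
class Token:
    room_key: int
    receipt_key: int

    def copy_and_replace(self, key: str, new_value: int) -> "Token":
        return attr.evolve(self, **{key: new_value})


t = Token(room_key=1, receipt_key=7)
t2 = t.copy_and_replace("room_key", 42)  # StreamKeyType.ROOM == "room_key"
assert (t2.room_key, t2.receipt_key) == (42, 7)
```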
@@ -23,7 +23,6 @@ from canonicaljson import encode_canonical_json

 from twisted.internet.interfaces import IDelayedCall

-import synapse
 from synapse import event_auth
 from synapse.api.constants import (
     EventContentFields,
@@ -45,7 +44,7 @@ from synapse.api.errors import (
 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
 from synapse.api.urls import ConsentURIBuilder
 from synapse.event_auth import validate_event_for_room_version
-from synapse.events import EventBase
+from synapse.events import EventBase, relation_from_event
 from synapse.events.builder import EventBuilder
 from synapse.events.snapshot import EventContext
 from synapse.events.validator import EventValidator
@@ -427,7 +426,7 @@ class EventCreationHandler:
         # This is to stop us from diverging history *too* much.
         self.limiter = Linearizer(max_count=5, name="room_event_creation_limit")

-        self.action_generator = hs.get_action_generator()
+        self._bulk_push_rule_evaluator = hs.get_bulk_push_rule_evaluator()

         self.spam_checker = hs.get_spam_checker()
         self.third_party_event_rules: "ThirdPartyEventRules" = (
@@ -882,11 +881,11 @@ class EventCreationHandler:
                 event.sender,
             )

-            spam_check = await self.spam_checker.check_event_for_spam(event)
-            if spam_check is not synapse.spam_checker_api.ALLOW:
-                raise SynapseError(
-                    403, "This message had been rejected as probable spam", spam_check
-                )
+            spam_error = await self.spam_checker.check_event_for_spam(event)
+            if spam_error:
+                if not isinstance(spam_error, str):
+                    spam_error = "Spam is not permitted here"
+                raise SynapseError(403, spam_error, Codes.FORBIDDEN)

             ev = await self.handle_new_client_event(
                 requester=requester,
@@ -1061,20 +1060,11 @@ class EventCreationHandler:
             SynapseError if the event is invalid.
         """

-        relation = event.content.get("m.relates_to")
+        relation = relation_from_event(event)
         if not relation:
             return

-        relation_type = relation.get("rel_type")
-        if not relation_type:
-            return
-
-        # Ensure the parent is real.
-        relates_to = relation.get("event_id")
-        if not relates_to:
-            return
-
-        parent_event = await self.store.get_event(relates_to, allow_none=True)
+        parent_event = await self.store.get_event(relation.parent_id, allow_none=True)
         if parent_event:
             # And in the same room.
             if parent_event.room_id != event.room_id:
@@ -1083,28 +1073,31 @@ class EventCreationHandler:
         else:
             # There must be some reason that the client knows the event exists,
             # see if there are existing relations. If so, assume everything is fine.
-            if not await self.store.event_is_target_of_relation(relates_to):
+            if not await self.store.event_is_target_of_relation(relation.parent_id):
                 # Otherwise, the client can't know about the parent event!
                 raise SynapseError(400, "Can't send relation to unknown event")

         # If this event is an annotation then we check that that the sender
         # can't annotate the same way twice (e.g. stops users from liking an
         # event multiple times).
-        if relation_type == RelationTypes.ANNOTATION:
-            aggregation_key = relation["key"]
+        if relation.rel_type == RelationTypes.ANNOTATION:
+            aggregation_key = relation.aggregation_key
+
+            if aggregation_key is None:
+                raise SynapseError(400, "Missing aggregation key")

             if len(aggregation_key) > 500:
                 raise SynapseError(400, "Aggregation key is too long")

             already_exists = await self.store.has_user_annotated_event(
-                relates_to, event.type, aggregation_key, event.sender
+                relation.parent_id, event.type, aggregation_key, event.sender
             )
             if already_exists:
                 raise SynapseError(400, "Can't send same reaction twice")

         # Don't attempt to start a thread if the parent event is a relation.
-        elif relation_type == RelationTypes.THREAD:
-            if await self.store.event_includes_relation(relates_to):
+        elif relation.rel_type == RelationTypes.THREAD:
+            if await self.store.event_includes_relation(relation.parent_id):
                 raise SynapseError(
                     400, "Cannot start threads from an event with a relation"
                 )
@@ -1250,7 +1243,9 @@ class EventCreationHandler:
         # and `state_groups` because they have `prev_events` that aren't persisted yet
         # (historical messages persisted in reverse-chronological order).
         if not event.internal_metadata.is_historical():
-            await self.action_generator.handle_push_actions_for_event(event, context)
+            await self._bulk_push_rule_evaluator.action_for_event_by_user(
+                event, context
+            )

         try:
             # If we're a worker we need to hit out to the master.

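The `relation_from_event` helper replaces the hand-rolled probing of `m.relates_to` removed above. A standalone sketch of what it condenses is shown below; the class and function names here are invented (Synapse's own helper returns its internal relation type), but the extraction logic mirrors the removed dict lookups.

```python
# What relation_from_event condenses, sketched standalone: pull rel_type,
# parent event ID, and aggregation key out of m.relates_to, returning None
# when the content is not a usable relation.
from typing import Any, Dict, Optional

import attr


@attr.s(slots=True, frozen=True, auto_attribs=True)
class EventRelation:
    parent_id: str
    rel_type: str
    aggregation_key: Optional[str]


def relation_from_content(content: Dict[str, Any]) -> Optional[EventRelation]:
    relates_to = content.get("m.relates_to")
    if not isinstance(relates_to, dict):
        return None
    rel_type = relates_to.get("rel_type")
    parent_id = relates_to.get("event_id")
    if not isinstance(rel_type, str) or not isinstance(parent_id, str):
        return None
    key = relates_to.get("key")
    return EventRelation(parent_id, rel_type, key if isinstance(key, str) else None)


rel = relation_from_content(
    {"m.relates_to": {"rel_type": "m.annotation", "event_id": "$parent", "key": "+1"}}
)
assert rel is not None and rel.aggregation_key == "+1"
```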
@@ -224,7 +224,7 @@ class OidcHandler:
             self._sso_handler.render_error(request, "invalid_session", str(e))
             return
         except MacaroonInvalidSignatureException as e:
-            logger.exception("Could not verify session for OIDC callback")
+            logger.warning("Could not verify session for OIDC callback: %s", e)
             self._sso_handler.render_error(request, "mismatching_session", str(e))
             return

@@ -827,7 +827,7 @@ class OidcProvider:
             logger.debug("Exchanging OAuth2 code for a token")
             token = await self._exchange_code(code)
         except OidcError as e:
-            logger.exception("Could not exchange OAuth2 code")
+            logger.warning("Could not exchange OAuth2 code: %s", e)
             self._sso_handler.render_error(request, e.error, e.error_description)
             return

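Both OIDC hunks apply the same logging pattern: expected failures (a mismatched session, a rejected code exchange) get a one-line warning with the error interpolated, while `logger.exception` and its full traceback are reserved for genuine bugs. In plain Python:

```python
# Expected failures log a one-line warning; tracebacks are for real bugs.
import logging

logger = logging.getLogger(__name__)

try:
    raise ValueError("state mismatch")  # stand-in for an expected error
except ValueError as e:
    logger.warning("Could not verify session for OIDC callback: %s", e)
```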
@@ -27,7 +27,7 @@ from synapse.handlers.room import ShutdownRoomResponse
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.storage.state import StateFilter
 from synapse.streams.config import PaginationConfig
-from synapse.types import JsonDict, Requester
+from synapse.types import JsonDict, Requester, StreamKeyType
 from synapse.util.async_helpers import ReadWriteLock
 from synapse.util.stringutils import random_string
 from synapse.visibility import filter_events_for_client
@@ -491,7 +491,7 @@ class PaginationHandler:

                 if leave_token.topological < curr_topo:
                     from_token = from_token.copy_and_replace(
-                        "room_key", leave_token
+                        StreamKeyType.ROOM, leave_token
                     )

             await self.hs.get_federation_handler().maybe_backfill(
@@ -513,7 +513,7 @@ class PaginationHandler:
             event_filter=event_filter,
         )

-        next_token = from_token.copy_and_replace("room_key", next_key)
+        next_token = from_token.copy_and_replace(StreamKeyType.ROOM, next_key)

         if events:
             if event_filter:

@@ -66,7 +66,7 @@ from synapse.replication.tcp.commands import ClearUserSyncsCommand
 from synapse.replication.tcp.streams import PresenceFederationStream, PresenceStream
 from synapse.storage.databases.main import DataStore
 from synapse.streams import EventSource
-from synapse.types import JsonDict, UserID, get_domain_from_id
+from synapse.types import JsonDict, StreamKeyType, UserID, get_domain_from_id
 from synapse.util.async_helpers import Linearizer
 from synapse.util.caches.descriptors import _CacheContext, cached
 from synapse.util.metrics import Measure
@@ -522,7 +522,7 @@ class WorkerPresenceHandler(BasePresenceHandler):
         room_ids_to_states, users_to_states = parties

         self.notifier.on_new_event(
-            "presence_key",
+            StreamKeyType.PRESENCE,
             stream_id,
             rooms=room_ids_to_states.keys(),
             users=users_to_states.keys(),
@@ -1145,7 +1145,7 @@ class PresenceHandler(BasePresenceHandler):
         room_ids_to_states, users_to_states = parties

         self.notifier.on_new_event(
-            "presence_key",
+            StreamKeyType.PRESENCE,
             stream_id,
             rooms=room_ids_to_states.keys(),
             users=[UserID.from_string(u) for u in users_to_states],

@@ -17,7 +17,13 @@ from typing import TYPE_CHECKING, Iterable, List, Optional, Tuple
 from synapse.api.constants import ReceiptTypes
 from synapse.appservice import ApplicationService
 from synapse.streams import EventSource
-from synapse.types import JsonDict, ReadReceipt, UserID, get_domain_from_id
+from synapse.types import (
+    JsonDict,
+    ReadReceipt,
+    StreamKeyType,
+    UserID,
+    get_domain_from_id,
+)

 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -129,7 +135,9 @@ class ReceiptsHandler:

         affected_room_ids = list({r.room_id for r in receipts})

-        self.notifier.on_new_event("receipt_key", max_batch_id, rooms=affected_room_ids)
+        self.notifier.on_new_event(
+            StreamKeyType.RECEIPT, max_batch_id, rooms=affected_room_ids
+        )
         # Note that the min here shouldn't be relied upon to be accurate.
         await self.hs.get_pusherpool().on_new_receipts(
             min_batch_id, max_batch_id, affected_room_ids
@@ -165,43 +173,69 @@ class ReceiptEventSource(EventSource[int, JsonDict]):
         self.config = hs.config

     @staticmethod
-    def filter_out_private(events: List[JsonDict], user_id: str) -> List[JsonDict]:
+    def filter_out_private_receipts(
+        rooms: List[JsonDict], user_id: str
+    ) -> List[JsonDict]:
         """
-        This method takes in what is returned by
-        get_linearized_receipts_for_rooms() and goes through read receipts
-        filtering out m.read.private receipts if they were not sent by the
-        current user.
+        Filters a list of serialized receipts (as returned by /sync and /initialSync)
+        and removes private read receipts of other users.
+
+        This operates on the return value of get_linearized_receipts_for_rooms(),
+        which is wrapped in a cache. Care must be taken to ensure that the input
+        values are not modified.
+
+        Args:
+            rooms: A list of mappings, each mapping has a `content` field, which
+                is a map of event ID -> receipt type -> user ID -> receipt information.
+
+        Returns:
+            The same as rooms, but filtered.
         """

-        visible_events = []
+        result = []

-        # filter out private receipts the user shouldn't see
-        for event in events:
-            content = event.get("content", {})
-            new_event = event.copy()
-            new_event["content"] = {}
+        # Iterate through each room's receipt content.
+        for room in rooms:
+            # The receipt content with other user's private read receipts removed.
+            content = {}

-            for event_id, event_content in content.items():
-                receipt_event = {}
-                for receipt_type, receipt_content in event_content.items():
-                    if receipt_type == ReceiptTypes.READ_PRIVATE:
-                        user_rr = receipt_content.get(user_id, None)
-                        if user_rr:
-                            receipt_event[ReceiptTypes.READ_PRIVATE] = {
-                                user_id: user_rr.copy()
-                            }
-                    else:
-                        receipt_event[receipt_type] = receipt_content.copy()
+            # Iterate over each event ID / receipts for that event.
+            for event_id, orig_event_content in room.get("content", {}).items():
+                event_content = orig_event_content
+                # If there are private read receipts, additional logic is necessary.
+                if ReceiptTypes.READ_PRIVATE in event_content:
+                    # Make a copy without private read receipts to avoid leaking
+                    # other user's private read receipts..
+                    event_content = {
+                        receipt_type: receipt_value
+                        for receipt_type, receipt_value in event_content.items()
+                        if receipt_type != ReceiptTypes.READ_PRIVATE
+                    }

-                # Only include the receipt event if it is non-empty.
-                if receipt_event:
-                    new_event["content"][event_id] = receipt_event
+                    # Copy the current user's private read receipt from the
+                    # original content, if it exists.
+                    user_private_read_receipt = orig_event_content[
+                        ReceiptTypes.READ_PRIVATE
+                    ].get(user_id, None)
+                    if user_private_read_receipt:
+                        event_content[ReceiptTypes.READ_PRIVATE] = {
+                            user_id: user_private_read_receipt
+                        }

-            # Append new_event to visible_events unless empty
-            if len(new_event["content"].keys()) > 0:
-                visible_events.append(new_event)
+                # Include the event if there is at least one non-private read
+                # receipt or the current user has a private read receipt.
+                if event_content:
+                    content[event_id] = event_content

-        return visible_events
+            # Include the event if there is at least one non-private read receipt
+            # or the current user has a private read receipt.
+            if content:
+                # Build a new event to avoid mutating the cache.
+                new_room = {k: v for k, v in room.items() if k != "content"}
+                new_room["content"] = content
+                result.append(new_room)

+        return result

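A worked example of the filtering contract helps here. The payload shape follows the docstring above (room ID, event ID, timestamps, and the private receipt type string are illustrative; the constant's exact value comes from `ReceiptTypes.READ_PRIVATE`):

```python
# Input to filter_out_private_receipts, as described in the docstring above.
rooms = [
    {
        "room_id": "!room:example.com",
        "type": "m.receipt",
        "content": {
            "$event1": {
                "m.read": {"@alice:example.com": {"ts": 1}},
                # Assumed value of ReceiptTypes.READ_PRIVATE at this time.
                "org.matrix.msc2285.read.private": {
                    "@alice:example.com": {"ts": 2},
                    "@bob:example.com": {"ts": 3},
                },
            },
        },
    },
]

# Filtering for "@alice:example.com" keeps all public receipts plus only
# Alice's own private read receipt; Bob's private receipt is dropped from
# the copy Alice sees, and the cached input list is left untouched.
```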
     async def get_new_events(
         self,
@@ -223,7 +257,9 @@ class ReceiptEventSource(EventSource[int, JsonDict]):
         )

         if self.config.experimental.msc2285_enabled:
-            events = ReceiptEventSource.filter_out_private(events, user.to_string())
+            events = ReceiptEventSource.filter_out_private_receipts(
+                events, user.to_string()
+            )

         return events, to_key

@@ -11,7 +11,6 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-import collections.abc
 import logging
 from typing import (
     TYPE_CHECKING,
@@ -28,7 +27,7 @@ import attr

 from synapse.api.constants import RelationTypes
 from synapse.api.errors import SynapseError
-from synapse.events import EventBase
+from synapse.events import EventBase, relation_from_event
 from synapse.storage.databases.main.relations import _RelatedEvent
 from synapse.types import JsonDict, Requester, StreamToken, UserID
 from synapse.visibility import filter_events_for_client
@@ -373,20 +372,21 @@ class RelationsHandler:
             if event.is_state():
                 continue

-            relates_to = event.content.get("m.relates_to")
-            relation_type = None
-            if isinstance(relates_to, collections.abc.Mapping):
-                relation_type = relates_to.get("rel_type")
+            relates_to = relation_from_event(event)
+            if relates_to:
                 # An event which is a replacement (ie edit) or annotation (ie,
                 # reaction) may not have any other event related to it.
-                if relation_type in (RelationTypes.ANNOTATION, RelationTypes.REPLACE):
+                if relates_to.rel_type in (
+                    RelationTypes.ANNOTATION,
+                    RelationTypes.REPLACE,
+                ):
                     continue

+                # Track the event's relation information for later.
+                relations_by_id[event.event_id] = relates_to.rel_type
+
             # The event should get bundled aggregations.
             events_by_id[event.event_id] = event
-            # Track the event's relation information for later.
-            if isinstance(relation_type, str):
-                relations_by_id[event.event_id] = relation_type

         # event ID -> bundled aggregation in non-serialized form.
         results: Dict[str, BundledAggregations] = {}
Some files were not shown because too many files have changed in this diff.