Compare commits


171 Commits

Author SHA1 Message Date
Andrew Morgan
7cf88d743e Install gcc, lower poetry verbosity 2023-08-04 16:46:05 +01:00
Andrew Morgan
2f83b4f206 Enable debug logging for poetry 2023-08-04 15:43:10 +01:00
Andrew Morgan
0f8e3337ca changelog 2023-08-04 13:45:30 +01:00
Andrew Morgan
27efb8a38f Switch devenv from development branch to v0.6.3
Broke in https://github.com/matrix-org/synapse/pull/16019. This would have been caught
if someone ran 'devenv up' during manual testing of that PR, or better yet, if we had
CI for the nix developer environment.

Switch back to the latest release version, which doesn't have the upstream issue:
https://github.com/cachix/devenv/issues/756
2023-08-04 13:41:54 +01:00
Patrick Cloke
d98a43d922 Stabilize support for MSC3970: updated transaction semantics (scope to device_id) (#15629)
For now this maintains compatibility with old Synapses by falling back
to using transaction semantics on a per-access-token basis. A future version
of Synapse will drop support for this.
2023-08-04 07:47:18 -04:00
Shay
0a5f4f7665 Move support for application service query parameter authorization behind a configuration option (#16017) 2023-08-03 11:43:51 -07:00
Mathieu Velten
f0a860908b Allow config of the backoff algorithm for the federation client. (#15754)
Adds three new configuration variables:

* destination_min_retry_interval is identical to before (10 min).
* destination_retry_multiplier is now 2 instead of 5, so the maximum value
  will be reached more slowly.
* destination_max_retry_interval is one day instead of (essentially) infinity.

Capping this will cause destinations to continue to be retried sometimes instead
of being lost forever. The previous value was 2 ^ 62 milliseconds.
2023-08-03 14:36:55 -04:00
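
The backoff these options describe is a simple capped exponential. A minimal sketch, assuming the defaults quoted above (the option names come from the commit; Synapse's actual implementation may differ in detail):

```python
# Hedged sketch of the capped exponential backoff for failing destinations.
MIN_RETRY_INTERVAL_MS = 10 * 60 * 1000        # destination_min_retry_interval: 10 min
RETRY_MULTIPLIER = 2                          # destination_retry_multiplier: was 5
MAX_RETRY_INTERVAL_MS = 24 * 60 * 60 * 1000   # destination_max_retry_interval: 1 day (was ~2^62 ms)

def next_retry_interval_ms(current_ms: int) -> int:
    """Compute the next retry interval after another failed attempt."""
    if current_ms == 0:
        return MIN_RETRY_INTERVAL_MS
    return min(current_ms * RETRY_MULTIPLIER, MAX_RETRY_INTERVAL_MS)
```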
reivilibre
9c462f18a4 Allow modules to check whether the current worker is configured to run background tasks. (#15991) 2023-08-03 08:42:19 -04:00
Patrick Cloke
4f5bccbbba Add forward-compatibility for the redacts property (MSC2174). (#16013)
The location of the redacts field changes in room version 11. Ensure
it is copied to the *new* location for *old* room versions for
forwards-compatibility with clients.

Note that copying it to the *old* location for the *new* room version
was previously handled.
2023-08-02 15:35:54 +00:00
Patrick Cloke
01a45869f0 Update MSC3958 support to interact with intentional mentions. (#15992)
* Updates the rule ID.
* Use `event_property_is` instead of `event_match`.

This updates the implementation of MSC3958 to match the latest
text from the MSC.
2023-08-02 08:41:32 -04:00
dependabot[bot]
ca5d5de79b Bump cryptography from 41.0.2 to 41.0.3 (#16048)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-02 09:46:32 +00:00
Andrew Morgan
a51b0862a1 Update flake.lock to fix running the nix developer environment on MacOS (#16019) 2023-08-02 07:47:16 +01:00
Patrick Cloke
8fe1fd906a Update certifi to 2023.7.22 and pygments to 2.15.1. (#16044) 2023-08-01 15:55:58 +00:00
Patrick Cloke
90ad836ed8 Properly setup the additional sequences in the portdb script. (#16043)
The un_partial_stated_event_stream_sequence and
application_services_txn_id_seq were never properly configured
in the portdb script, resulting in an error on start-up.
2023-08-01 10:36:33 -04:00
Mohit Rathee
5eb3fd785b Trim whitespace when setting display names (#16031) 2023-08-01 09:14:02 -04:00
Jason Little
7cbb2a00d1 Add metrics tracking for eviction to ResponseCache (#16028)
Track whether the ResponseCache is evicting due to invalidation
or due to time.
2023-08-01 08:10:49 -04:00
David Robertson
a4102d2a5f Merge branch 'master' into develop 2023-08-01 12:01:34 +01:00
David Robertson
190c990a76 1.89.0 2023-08-01 11:09:30 +01:00
Patrick Cloke
b7695ac388 Combine duplicated code for calculating an event ID from a txn ID (#16023)
Refactoring related to stabilization of MSC3970, refactor to combine
code which has the same logic.
2023-07-31 08:44:45 -04:00
dependabot[bot]
1fb5a7ad5d Bump serde from 1.0.175 to 1.0.179 (#16033)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.175 to 1.0.179.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.175...v1.0.179)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-31 14:08:35 +02:00
dependabot[bot]
fa2c116bef Bump immutabledict from 2.2.4 to 3.0.0 (#16034)
Bumps [immutabledict](https://github.com/corenting/immutabledict) from 2.2.4 to 3.0.0.
- [Release notes](https://github.com/corenting/immutabledict/releases)
- [Changelog](https://github.com/corenting/immutabledict/blob/master/CHANGELOG.md)
- [Commits](https://github.com/corenting/immutabledict/compare/v2.2.4...v3.0.0)

---
updated-dependencies:
- dependency-name: immutabledict
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-31 13:27:17 +02:00
Nils
e02f4b7de2 Do not expose Admin API in caddy reverse proxy example (#16027)
Signed-off-by: Nils ANDRÉ-CHANG <nils@nilsand.re>
2023-07-31 13:25:06 +02:00
dependabot[bot]
21407c6709 Bump service-identity from 21.1.0 to 23.1.0 (#16038)
Bumps [service-identity](https://github.com/pyca/service-identity) from 21.1.0 to 23.1.0.
- [Release notes](https://github.com/pyca/service-identity/releases)
- [Changelog](https://github.com/pyca/service-identity/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pyca/service-identity/compare/21.1.0...23.1.0)

---
updated-dependencies:
- dependency-name: service-identity
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-31 13:24:32 +02:00
Erik Johnston
ae55cc1e6b Add ability to wait for locks and add locks to purge history / room deletion (#15791)
c.f. #13476
2023-07-31 10:58:03 +01:00
dependabot[bot]
0c6142c4a1 Bump types-commonmark from 0.9.2.3 to 0.9.2.4 (#16037)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-31 10:47:25 +01:00
dependabot[bot]
fee0195b27 Bump serde_json from 1.0.103 to 1.0.104 (#16032)
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.103 to 1.0.104.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.103...v1.0.104)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-31 10:23:00 +02:00
dependabot[bot]
76b2218599 Bump types-jsonschema from 4.17.0.8 to 4.17.0.10 (#16036)
Bumps [types-jsonschema](https://github.com/python/typeshed) from 4.17.0.8 to 4.17.0.10.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-jsonschema
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-31 10:21:48 +02:00
dependabot[bot]
ea4ece3fcc Bump types-netaddr from 0.8.0.8 to 0.8.0.9 (#16035)
Bumps [types-netaddr](https://github.com/python/typeshed) from 0.8.0.8 to 0.8.0.9.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-netaddr
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-31 10:21:34 +02:00
Shay
68b2611783 Clarify comment on key uploads over replication (#16016) 2023-07-27 15:08:46 -07:00
Mathieu Velten
a719b703d9 Fix 404 on /profile when the display name is empty but not the avatar (#16012) 2023-07-27 15:45:05 +02:00
Mathieu Velten
a461f1f846 Update PyYAML to 6.0.1 (#16011) 2023-07-27 14:51:26 +02:00
David Robertson
f9f3e89354 Attempt to fix labelling in docker workflow (#16009) 2023-07-27 13:47:48 +01:00
Shay
f98f4f2e16 Remove support for legacy application service paths (#15964) 2023-07-26 12:59:47 -07:00
Anshul Madnawat
58f8305114 Inline SQL queries using boolean parameters (#15525)
SQLite now supports TRUE and FALSE constants; simplify some
queries by inlining these instead of passing them as arguments.
2023-07-26 18:45:47 +00:00
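
As an illustration of the inlining, a standalone example (assumes an SQLite new enough for the TRUE/FALSE keywords, which Python's bundled sqlite3 generally is):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, deactivated BOOLEAN)")
conn.execute("INSERT INTO users VALUES ('alice', TRUE), ('bob', FALSE)")

# Before: the boolean travels as a bound parameter.
conn.execute("SELECT name FROM users WHERE deactivated = ?", (False,))
# After: the boolean constant is inlined into the query text.
conn.execute("SELECT name FROM users WHERE deactivated = FALSE")
```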
Mo Balaa
96529c4236 Add synapse version as Docker container label (#15972)
Co-authored-by: Mo Balaa <balaa@fractalnetworks.co>
2023-07-26 16:16:12 +00:00
Mathieu Velten
6dc019d9dd Merge branch 'release-v1.89' into develop 2023-07-26 17:07:42 +02:00
dependabot[bot]
8d2a5586f7 Bump serde from 1.0.171 to 1.0.175 (#15982)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.171 to 1.0.175.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.171...v1.0.175)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-26 15:16:39 +01:00
Mathieu Velten
76e392b0fa Edit changelog 2023-07-26 16:13:39 +02:00
Mathieu Velten
d4ea465496 Remove changelog file 2023-07-26 14:54:37 +02:00
Mathieu Velten
8ebfd577e2 Bump DB version to 79 since synapse v1.88 was already there (#15998) 2023-07-26 14:51:44 +02:00
Mathieu Velten
dbee081d14 1.89.0rc1 2023-07-25 14:32:47 +02:00
dependabot[bot]
99b7b801c3 Bump pygithub from 1.58.2 to 1.59.0 (#15834)
Bumps [pygithub](https://github.com/pygithub/pygithub) from 1.58.2 to 1.59.0.
- [Release notes](https://github.com/pygithub/pygithub/releases)
- [Changelog](https://github.com/PyGithub/PyGithub/blob/main/doc/changes.rst)
- [Commits](https://github.com/pygithub/pygithub/compare/v1.58.2...v1.59.0)

---
updated-dependencies:
- dependency-name: pygithub
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-25 14:19:46 +02:00
Shay
641ff9ef7e Support MSC3814: Dehydrated Devices (#15929)
Signed-off-by: Nicolas Werner <n.werner@famedly.com>
Co-authored-by: Nicolas Werner <n.werner@famedly.com>
Co-authored-by: Nicolas Werner <89468146+nico-famedly@users.noreply.github.com>
Co-authored-by: Hubert Chathi <hubert@uhoreg.ca>
2023-07-24 08:23:19 -07:00
SnipeX_
05f8dada8b Fix broken Arch Linux package link (#15981) 2023-07-24 09:06:10 -04:00
Erik Johnston
654902a758 Resync stale devices in background (#15975)
This is so we don't block responding to a federation transaction while we
try to fetch the device lists.
2023-07-24 13:43:43 +01:00
dependabot[bot]
4a711bf379 Bump click from 8.1.3 to 8.1.6 (#15984)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 10:17:02 +01:00
dependabot[bot]
fc566cdf0a Bump sentry-sdk from 1.26.0 to 1.28.1 (#15985)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 10:16:03 +01:00
dependabot[bot]
3b6208b835 Bump pillow from 9.4.0 to 10.0.0 (#15986)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 10:12:02 +01:00
dependabot[bot]
3b8348b06e Bump types-requests from 2.31.0.1 to 2.31.0.2 (#15983)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 10:03:05 +01:00
Shay
5c7364fea5 Properly handle redactions of creation events (#15973) 2023-07-23 16:32:01 -07:00
Shay
f08d05dd2c Actually stop reading from column user_id of tables profiles (#15955) 2023-07-23 16:30:54 -07:00
Shay
e1fa42249c Build packages for Debian Trixie (#15961) 2023-07-23 16:30:05 -07:00
Erik Johnston
fc1e534e41 Speed up updating state in large rooms (#15971)
This should speed up updating state in rooms with lots of state.
2023-07-20 15:51:28 +01:00
Will Lewis
835174180b Fixed grafana deploy annotations in the dashboard config, so it shows for those not managing matrix.org (#15957)
Removed the hardcoded 'matrix.org' instance setting

Originally introduced in #15674

Co-authored-by: wrjlewis <will.lewis@askattest.com>
2023-07-20 12:33:06 +00:00
Erik Johnston
fd44053b84 Don't log exceptions for every non-200 response (#15969)
Introduced in #15913
2023-07-20 11:07:58 +01:00
Erik Johnston
ad52db3b5c Reduce the amount of state we pull out (#15968) 2023-07-20 10:46:37 +01:00
Erik Johnston
67f9e5293e Ensure a long state res does not starve CPU (#15960)
We do this by yielding the reactor in hot loops.
2023-07-19 17:00:33 +00:00
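
A rough sketch of that yielding pattern under Twisted (the batch size and `process_event` helper are illustrative, not Synapse's actual code):

```python
from twisted.internet import reactor
from twisted.internet.task import deferLater

def process_event(event):
    ...  # stand-in for the real CPU-bound state-resolution work

async def resolve_state(events):
    for i, event in enumerate(events):
        process_event(event)
        if i % 100 == 99:
            # Hand control back to the reactor periodically so a long
            # state resolution does not starve other requests of CPU.
            await deferLater(reactor, 0, lambda: None)
```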
Erik Johnston
19796e20aa Fix bad merge of #15933 (#15958)
This was because we reverted the bump of the schema version, so we were not applying the new deltas.
2023-07-19 12:17:08 +00:00
Erik Johnston
40a3583ba1 Fix race in triggers for read/write locks. (#15933) 2023-07-19 12:06:38 +01:00
Shay
cb6e2c6cc7 Fix background schema updates failing over a large upgrade gap (#15887) 2023-07-18 16:59:27 -07:00
Olivier Wilkinson (reivilibre)
8e8431bc6e Merge branch 'master' into develop 2023-07-18 16:45:39 +01:00
Olivier Wilkinson (reivilibre)
69699a9bd1 1.88.0 2023-07-18 14:06:00 +01:00
Patrick Cloke
6d81aec09f Support room version 11 (#15912)
And fix a bug in the implementation of the updated redaction
format (MSC2174) where the top-level redacts field was not
properly added for backwards-compatibility.
2023-07-18 08:44:59 -04:00
Shay
e625c3dca0 Revert "Stop writing to column user_id of tables profiles and user_filters" (#15953)
* Revert "Stop writing to column `user_id` of tables `profiles` and `user_filters` (#15787)"

This reverts commit f25b0f8808.

* newsfragment
2023-07-18 11:44:09 +01:00
Jason Little
199c270947 Add a locality to a few presence metrics (#15952) 2023-07-18 10:36:40 +01:00
Eric Eastwood
1c802de626 Re-introduce the outbound federation proxy (#15913)
Allow configuring the set of workers to proxy outbound federation traffic through (`outbound_federation_restricted_to`).

This is useful when you have a worker setup with `federation_sender` instances responsible for sending outbound federation requests and want to make sure *all* outbound federation traffic goes through those instances. Before this change, the generic workers would still contact federation themselves for things like profile lookups, backfill, etc. This PR allows you to set stricter access controls/firewalls for all workers and only allow the `federation_sender` instances to contact the outside world.
2023-07-18 09:49:21 +01:00
dependabot[bot]
c692283751 Bump anyhow from 1.0.71 to 1.0.72 (#15949)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-17 13:20:34 +01:00
dependabot[bot]
43ee5d5bac Bump pyo3-log from 0.8.2 to 0.8.3 (#15951)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-17 10:46:26 +01:00
dependabot[bot]
1768dd3c27 Bump serde_json from 1.0.100 to 1.0.103 (#15950)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-17 10:45:46 +01:00
dependabot[bot]
0d522b58a6 Bump jsonschema from 4.17.3 to 4.18.3 (#15948)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-17 10:39:51 +01:00
dependabot[bot]
b0e66721a5 Bump typing-extensions from 4.5.0 to 4.7.1 (#15947)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-17 10:33:47 +01:00
dependabot[bot]
6396527015 Bump pydantic from 1.10.10 to 1.10.11 (#15946)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-17 10:30:46 +01:00
dependabot[bot]
d2f46ae370 Bump prometheus-client from 0.17.0 to 0.17.1 (#15945)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-17 10:30:10 +01:00
Andrew Morgan
85e0541db1 Pin the rust version in flake.nix, and bump to 1.70.0 to fix installing ruff (#15940) 2023-07-17 09:36:12 +01:00
dependabot[bot]
cba2df20b5 Bump cryptography from 41.0.1 to 41.0.2 (#15943)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-15 21:37:59 +01:00
Will Hunt
8d3656b994 Document that you cannot login as yourself on /_synapse/admin/v1/users/<user_id>/login (#15938) 2023-07-14 08:32:13 -04:00
Patrick Cloke
20ae617d14 Stop accepting 'user' parameter for application service registration. (#15928)
This is unspecced, but has existed for a very long time.
2023-07-13 07:23:56 -04:00
dependabot[bot]
2cacd0849a Bump types-pillow from 9.5.0.4 to 10.0.0.1 (#15932)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-13 11:21:28 +01:00
Patrick Cloke
204b66c203 Remove unneeded __init__. (#15926)
Remove an __init__ which only calls super() without changing the
input arguments.
2023-07-12 14:30:05 +00:00
Patrick Cloke
5bdf01fccd Fix running with an empty experimental features section. (#15925) 2023-07-12 12:39:25 +00:00
Erik Johnston
36c6b92bfc Fix push for invites received over federation (#15820) 2023-07-12 11:02:11 +00:00
Mathieu Velten
8eb7bb975e Mark get_user_in_directory private since only used in tests (#15884) 2023-07-12 11:09:13 +02:00
Eric Eastwood
3bdb9b07fd Make it more obvious which Python version runs on a given Linux distribution (#15909)
Make it more obvious which Python version runs on a given Linux distribution, so that when we drop support for a Python version we can easily find and remove any references to the distributions that ship it. We don't want to be running tests or building packages on a distribution that no longer has a supported Python version.

This way, we can avoid another situation like when we dropped support for Python 3.7 but forgot to drop the Debian Buster references everywhere (https://github.com/matrix-org/synapse/pull/15893)
2023-07-11 17:15:06 -05:00
Eric Eastwood
0371a354cf Better clarify how to run a worker instance (pass both configs) (#15921)
Previously, if you followed the instructions in the docs, you ran into an error:

```sh
$ poetry run synapse_worker --config-path homeserver_generic_worker1.yaml

Missing mandatory `server_name` config option.
```
2023-07-11 17:13:54 -05:00
Eric Eastwood
ae391db777 Better warning in logs when we fail to fetch an alias (#15922)
**Before:**
```
Error retrieving alias
```

**After:**
```
Error retrieving alias #foo:bar -> 401 Unauthorized
```

*Spawning from creating the [manual testing strategy for the outbound federation proxy](https://github.com/matrix-org/synapse/pull/15773).*
2023-07-11 17:12:41 -05:00
Eric Eastwood
d7fc87d973 Bump Unix sockets intro version (#15924)
https://github.com/matrix-org/synapse/pull/15708 didn't quite make the cut for `1.88.0` this morning.
2023-07-11 15:32:50 -05:00
Jason Little
224ef0b669 Unix Sockets for HTTP Replication (#15708)
Unix socket support for `federation` and `client` listeners has existed for a little while (since [1.81.0](https://github.com/matrix-org/synapse/pull/15353)), but there was one last holdout before it could be complete: HTTP replication communication. This should finish it up. The listeners would always have worked, but there would have been no way to talk to them.

---------

Co-authored-by: Eric Eastwood <madlittlemods@gmail.com>
Co-authored-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
Co-authored-by: Eric Eastwood <erice@element.io>
2023-07-11 13:08:06 -05:00
Patrick Cloke
a4243183f0 Add + as an allowed character for Matrix IDs (MSC4009) (#15911) 2023-07-11 12:21:00 -04:00
David Robertson
92014fbf72 Don't build wheels for Python 3.7 (#15917)
* Don't build wheels for CPython or PyPy 3.7

* Update pyproject.toml comments

* Manually update the changelog
2023-07-11 15:16:19 +01:00
David Robertson
4ccfa16081 Call out upgrade notes in README 2023-07-11 10:34:09 +01:00
David Robertson
7c7bd9898b 1.88.0rc1 2023-07-11 10:28:11 +01:00
Michael Telatynski
b516d91999 Add Server to Access-Control-Expose-Headers header (#15908) 2023-07-11 09:18:50 +01:00
Eric Eastwood
2328e90fbb Make the media /upload tracing less ambiguous (#15888)
A lot of the functions in this space have the same name, like `store_file`,
and we also call them multiple times for different reasons (main media repo,
other storage providers, thumbnails, etc.), so it's good to differentiate
them so your head doesn't explode.

Follow-up to https://github.com/matrix-org/synapse/pull/15850

Adds tracing instrumentation to the media `/upload` code paths to investigate https://github.com/matrix-org/synapse/issues/15841
2023-07-10 17:23:11 -05:00
Shay
5e82b07d2c Drop debian buster (#15893) 2023-07-10 10:39:36 -07:00
Eric Eastwood
c9bf644fa0 Revert "Federation outbound proxy" (#15910)
Revert "Federation outbound proxy (#15773)"

This reverts commit b07b14b494.
2023-07-10 11:10:20 -05:00
Eric Eastwood
a704a35dd7 Revert "Placeholder changelog"
This reverts commit 6e731e86bf.
2023-07-10 10:26:04 -05:00
Erik Johnston
e55a9b3e41 Fix downgrading to previous version of Synapse (#15907)
We do this by marking the constraint as deferrable.
2023-07-10 16:24:42 +01:00
Erik Johnston
6774f265b4 Fix building rust with nightly (#15906)
Also fix up a warning.
2023-07-10 16:24:04 +01:00
Eric Eastwood
6e731e86bf Placeholder changelog 2023-07-10 10:23:30 -05:00
dependabot[bot]
c971698bff Bump regex from 1.8.4 to 1.9.1 (#15902)
Bumps [regex](https://github.com/rust-lang/regex) from 1.8.4 to 1.9.1.
- [Release notes](https://github.com/rust-lang/regex/releases)
- [Changelog](https://github.com/rust-lang/regex/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/regex/compare/1.8.4...1.9.1)

---
updated-dependencies:
- dependency-name: regex
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-10 15:55:39 +01:00
dependabot[bot]
7477f43fd8 Bump serde_json from 1.0.99 to 1.0.100 (#15901)
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.99 to 1.0.100.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.99...v1.0.100)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-10 15:34:26 +02:00
dependabot[bot]
3710fea19d Bump ruff from 0.0.275 to 0.0.277 (#15900)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.0.275 to 0.0.277.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/BREAKING_CHANGES.md)
- [Commits](https://github.com/astral-sh/ruff/compare/v0.0.275...v0.0.277)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-10 10:16:03 +00:00
dependabot[bot]
df8c8a4f45 Bump lxml from 4.9.2 to 4.9.3 (#15897)
Bumps [lxml](https://github.com/lxml/lxml) from 4.9.2 to 4.9.3.
- [Release notes](https://github.com/lxml/lxml/releases)
- [Changelog](https://github.com/lxml/lxml/blob/master/CHANGES.txt)
- [Commits](https://github.com/lxml/lxml/compare/lxml-4.9.2...lxml-4.9.3)

---
updated-dependencies:
- dependency-name: lxml
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-10 10:24:11 +01:00
Shay
8a529e4fb6 Stop running sytest on buster/python3.7 (#15892) 2023-07-07 12:04:55 -07:00
Shay
f25b0f8808 Stop writing to column user_id of tables profiles and user_filters (#15787) 2023-07-07 09:23:27 -07:00
Dirk Klimpel
677272caed Remove worker_replication_* settings from worker doc (#15872)
Co-authored-by: Mathieu Velten <mathieuv@matrix.org>
2023-07-07 08:09:41 +00:00
Jason Little
2481b7dfa4 Remove worker_replication_* deprecated settings, with helpful errors on startup (#15860)
Co-authored-by: reivilibre <oliverw@matrix.org>
2023-07-07 07:45:25 +00:00
sarthak shah
f19dd39dfc Update link to the clients webpage, fix #15825 (#15874) 2023-07-06 17:28:09 +02:00
Eric Eastwood
b07b14b494 Federation outbound proxy (#15773)
Allow configuring the set of workers to proxy outbound federation traffic through (`outbound_federation_restricted_to`).

This is useful when you have a worker setup with `federation_sender` instances responsible for sending outbound federation requests and want to make sure *all* outbound federation traffic goes through those instances. Before this change, the generic workers would still contact federation themselves for things like profile lookups, backfill, etc. This PR allows you to set stricter access controls/firewalls for all workers and only allow the `federation_sender` instances to contact the outside world.

The original code is from @erikjohnston's branches, which I've gotten into shape to merge.
2023-07-05 18:53:55 -05:00
Eric Eastwood
561d06b481 Remove support for Python 3.7 (#15851)
Fix https://github.com/matrix-org/synapse/issues/15836
2023-07-05 18:45:42 -05:00
Erik Johnston
39d131b016 Add basic read/write lock (#15782) 2023-07-05 17:25:00 +01:00
Eric Eastwood
ce857c05d5 Add tracing to media /upload endpoint (#15850)
Add tracing instrumentation to media `/upload` code paths to investigate https://github.com/matrix-org/synapse/issues/15841
2023-07-05 10:22:21 -05:00
Sumner Evans
cc780b3f77 docs/admin_api: fix header level on 'Users' page (#15852)
Signed-off-by: Sumner Evans <sumner@beeper.com>
2023-07-05 16:15:56 +02:00
Jason Little
4cf9f92f39 Fix could not serialize access due to concurrent DELETE from presence_stream (#15826)
* Change update_presence to have an isolation level of READ_COMMITTED

* changelog
2023-07-05 11:44:02 +01:00
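
For context, what dropping to READ COMMITTED looks like at the driver level; a sketch using psycopg2 directly (Synapse itself sets isolation through its own database API):

```python
import psycopg2
from psycopg2 import extensions

conn = psycopg2.connect("dbname=synapse")
# Under REPEATABLE READ, an UPDATE racing a concurrent DELETE on
# presence_stream can fail with "could not serialize access".
# READ COMMITTED takes a fresh snapshot per statement, avoiding that.
conn.set_isolation_level(extensions.ISOLATION_LEVEL_READ_COMMITTED)
```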
Erik Johnston
95a96b21eb Add foreign key constraint to event_forward_extremities. (#15751) 2023-07-05 09:43:19 +00:00
an0nfunc
c303eca8cc use Image.LANCZOS instead of Image.ANTIALIAS for thumbnail resize (#15876)
Image.ANTIALIAS is no longer defined in current Pillow releases. Since ANTIALIAS was just an alias for LANCZOS anyway, this is a cosmetic change, but it makes Synapse work with the most recent Pillow releases.

Signed-off-by: Giovanni Harting <539@idlegandalf.com>
2023-07-05 10:52:12 +02:00
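
The change as a standalone Pillow example (`Image.ANTIALIAS` had been an alias of `Image.LANCZOS` since Pillow 2.7 and was removed in Pillow 10):

```python
from PIL import Image

img = Image.open("avatar.png")
# Old: img.thumbnail((320, 240), Image.ANTIALIAS)  # removed in Pillow 10
img.thumbnail((320, 240), Image.LANCZOS)  # same filter under its current name
img.save("avatar-thumb.png")
```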
Michael Weimann
c8e81898b6 Add not_user_type param to the list accounts admin API (#15844)
Signed-off-by: Michael Weimann <michaelw@element.io>
2023-07-04 15:03:20 -07:00
Olivier Wilkinson (reivilibre)
861752b3aa Merge branch 'master' into develop 2023-07-04 17:40:37 +01:00
Olivier Wilkinson (reivilibre)
1294d10c70 Add notes about Python 3.7 EOL 2023-07-04 16:34:41 +01:00
Olivier Wilkinson (reivilibre)
718d7dfef2 Move warning up to the top 2023-07-04 16:26:50 +01:00
Olivier Wilkinson (reivilibre)
664ba14080 1.87.0 2023-07-04 16:25:33 +01:00
Paarth Shah
649848627c Pin pydantic to <2.0.0 (#15862)
Signed-off-by: Paarth Shah <mail@shahpaarth.com>
2023-07-04 16:22:33 +01:00
Paarth Shah
670d590f8a Pin pydantic to <2.0.0 (#15862)
Signed-off-by: Paarth Shah <mail@shahpaarth.com>
2023-07-04 09:33:24 +02:00
pacien
07d7cbfe69 devices: use combined ANY clause for faster cleanup (#15861)
Old device entries for the same user were being removed in individual
SQL commands, making the batch take way longer than necessary.

This combines the commands into a single one with an IN/ANY clause.

Example of log entry before the change, regularly observed with
"log_min_duration_statement = 10000" in PostgreSQL's config:

    LOG:  duration: 42538.282 ms  statement:
    DELETE FROM device_lists_stream
    WHERE user_id = '@someone' AND device_id = 'someid1'
    AND stream_id < 123456789
    ;
    DELETE FROM device_lists_stream
    WHERE user_id = '@someone' AND device_id = 'someid2'
    AND stream_id < 123456789
    ;
    [repeated for each device ID of that user, potentially a lot...]

With the patch applied on my instance for the past couple of days, I
no longer notice overly long statements of that particular kind.

Signed-off-by: pacien <pacien.trangirard@pacien.net>
2023-07-03 16:39:38 +02:00
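
A sketch of the combined form (psycopg2-style placeholders assumed: the driver adapts a Python list to a Postgres array, so `= ANY(%s)` covers every device in one statement):

```python
import psycopg2

conn = psycopg2.connect("dbname=synapse")
with conn.cursor() as cur:
    device_ids = ["someid1", "someid2"]  # potentially a lot...
    cur.execute(
        """
        DELETE FROM device_lists_stream
        WHERE user_id = %s AND device_id = ANY(%s) AND stream_id < %s
        """,
        ("@someone", device_ids, 123456789),
    )
conn.commit()
```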
reivilibre
cd8b73aa97 Fix the devenv up configuration which was ignoring the config overrides. (#15854)
* Fix use of config override directory in `devenv up`

`--config-directory` is for the generate config script; `-c` is for usage

* Add homeserver config override directory to gitignore

* Newsfile

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>

---------

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
2023-07-03 11:39:52 +01:00
reivilibre
53aa26eddc Add a timeout that aborts any Postgres statement taking more than 1 hour. (#15853)
* Add a timeout to Postgres statements

* Newsfile

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>

---------

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
2023-07-03 11:38:57 +01:00
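
At the SQL level this amounts to setting `statement_timeout` for Synapse's sessions; a sketch of the equivalent statement (where exactly Synapse applies it is part of its connection setup):

```python
import psycopg2

conn = psycopg2.connect("dbname=synapse")
with conn.cursor() as cur:
    # Abort any statement in this session that runs for more than one hour.
    cur.execute("SET statement_timeout = '1h'")
```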
dependabot[bot]
a587de96b8 Bump sentry-sdk from 1.25.1 to 1.26.0 (#15867)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 1.25.1 to 1.26.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/1.25.1...1.26.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-03 12:34:57 +02:00
dependabot[bot]
411ba44790 Bump types-pyopenssl from 23.2.0.0 to 23.2.0.1 (#15866)
Bumps [types-pyopenssl](https://github.com/python/typeshed) from 23.2.0.0 to 23.2.0.1.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pyopenssl
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-03 12:34:20 +02:00
dependabot[bot]
aea94ca8cd Bump importlib-metadata from 6.6.0 to 6.7.0 (#15865)
Bumps [importlib-metadata](https://github.com/python/importlib_metadata) from 6.6.0 to 6.7.0.
- [Release notes](https://github.com/python/importlib_metadata/releases)
- [Changelog](https://github.com/python/importlib_metadata/blob/main/NEWS.rst)
- [Commits](https://github.com/python/importlib_metadata/compare/v6.6.0...v6.7.0)

---
updated-dependencies:
- dependency-name: importlib-metadata
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-03 12:33:47 +02:00
dependabot[bot]
9345361c6b Bump authlib from 1.2.0 to 1.2.1 (#15864)
Bumps [authlib](https://github.com/lepture/authlib) from 1.2.0 to 1.2.1.
- [Release notes](https://github.com/lepture/authlib/releases)
- [Changelog](https://github.com/lepture/authlib/blob/master/docs/changelog.rst)
- [Commits](https://github.com/lepture/authlib/compare/v1.2.0...v1.2.1)

---
updated-dependencies:
- dependency-name: authlib
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-03 12:33:27 +02:00
Eric Eastwood
13fc89148c Split out 2022 changes from the changelog (#15846)
Split out 2022 changes from the changelog so the rendered version in GitHub doesn't time out as much.
2023-06-28 15:10:33 -05:00
Eric Eastwood
10ed3e233e Note last release with Python 3.7 support
See https://github.com/matrix-org/synapse/issues/15836
2023-06-27 10:34:11 -05:00
Eric Eastwood
472c2c72f6 Prepare changelog for v1.87.0rc1 2023-06-27 10:29:20 -05:00
Shay
78cfa55dad Fix sqlite user_filters upgrade (#15817) 2023-06-27 09:41:42 +01:00
dependabot[bot]
14c1bfd534 Bump serde_json from 1.0.97 to 1.0.99 (#15832)
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.97 to 1.0.99.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.97...v1.0.99)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-26 15:36:33 +01:00
dependabot[bot]
70dc44f667 Bump towncrier from 22.12.0 to 23.6.0 (#15831)
Bumps [towncrier](https://github.com/twisted/towncrier) from 22.12.0 to 23.6.0.
- [Release notes](https://github.com/twisted/towncrier/releases)
- [Changelog](https://github.com/twisted/towncrier/blob/trunk/NEWS.rst)
- [Commits](https://github.com/twisted/towncrier/compare/22.12.0...23.6.0)

---
updated-dependencies:
- dependency-name: towncrier
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-26 15:36:07 +01:00
Erik Johnston
25c55a9d22 Add login spam checker API (#15838) 2023-06-26 14:12:20 +00:00
dependabot[bot]
52d8131e87 Bump types-opentracing from 2.4.10.4 to 2.4.10.5 (#15830)
Bumps [types-opentracing](https://github.com/python/typeshed) from 2.4.10.4 to 2.4.10.5.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-opentracing
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-26 09:26:01 +01:00
dependabot[bot]
53ea381ec3 Bump ruff from 0.0.272 to 0.0.275 (#15833)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.0.272 to 0.0.275.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/BREAKING_CHANGES.md)
- [Commits](https://github.com/astral-sh/ruff/compare/v0.0.272...v0.0.275)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-26 09:14:20 +01:00
dependabot[bot]
6e65ca0b36 Bump types-setuptools from 67.8.0.0 to 68.0.0.0 (#15835)
Bumps [types-setuptools](https://github.com/python/typeshed) from 67.8.0.0 to 68.0.0.0.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-setuptools
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-26 09:09:57 +01:00
dependabot[bot]
d535473520 Bump cryptography from 40.0.2 to 41.0.1 (#15800)
Bumps [cryptography](https://github.com/pyca/cryptography) from 40.0.2 to 41.0.1.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/40.0.2...41.0.1)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-22 16:32:53 +01:00
Nicolas Werner
e0c39d6bb5 Fix forgotten rooms missing in initial sync (#15815)
If you left a room and forgot it, then rejoined it, the room would be
missing from the next initial sync.

fixes #13262

Signed-off-by: Nicolas Werner <n.werner@famedly.com>
2023-06-21 14:56:31 +01:00
Erik Johnston
289ce3b8d9 Fix harmless exception in port DB script (#15814)
The port DB script would try and run database background tasks, which
could fail if the data they acted on was in the process of being ported.
These exceptions were non-fatal.

Fixes #15789
2023-06-21 13:20:46 +00:00
Erik Johnston
6c749c5124 Fix typo in faster join docs (#15812)
Fixes #15756
2023-06-21 11:34:32 +01:00
Mathieu Velten
496f73103d Allow for the configuration of max request retries and min/max retry delays in the matrix federation client (#15783) 2023-06-21 10:41:11 +02:00
Erik Johnston
1fcefd8f3e Merge branch 'master' into develop 2023-06-20 18:56:18 +01:00
Mathieu Velten
7d3da399dd 1.86.0 2023-06-20 17:22:50 +02:00
Shay
6a5cf1a759 Fix Sytest environmental variable evaluation in CI (#15804) 2023-06-20 07:55:46 -07:00
ew-at-vier
2301a09d7a Fix admin api documentation typo (#15805)
* Fix admin api documentation typo

Signed-off-by: Eric Wolf <eric.wolf@vier.ai>
2023-06-20 10:45:26 +00:00
Eric Eastwood
887fa4b66b Switch from matrix:// to matrix-federation:// scheme for internal Synapse routing of outbound federation traffic (#15806)
`matrix://` is a registered, specced scheme nowadays and doesn't make sense for
our internal-to-Synapse use case anymore
([discussion](https://github.com/matrix-org/synapse/pull/15773#discussion_r1227598679)).
2023-06-20 10:05:31 +01:00
dependabot[bot]
4ba528d9c3 Bump ijson from 3.2.0.post0 to 3.2.1 (#15802)
Bumps [ijson](https://github.com/ICRAR/ijson) from 3.2.0.post0 to 3.2.1.
- [Changelog](https://github.com/ICRAR/ijson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ICRAR/ijson/compare/v3.2.0.post0...v3.2.1)

---
updated-dependencies:
- dependency-name: ijson
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-19 10:30:17 +01:00
dependabot[bot]
5f9d5190aa Bump attrs from 22.2.0 to 23.1.0 (#15801)
Bumps [attrs](https://github.com/python-attrs/attrs) from 22.2.0 to 23.1.0.
- [Release notes](https://github.com/python-attrs/attrs/releases)
- [Changelog](https://github.com/python-attrs/attrs/blob/main/CHANGELOG.md)
- [Commits](https://github.com/python-attrs/attrs/compare/22.2.0...23.1.0)

---
updated-dependencies:
- dependency-name: attrs
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-19 10:30:03 +01:00
dependabot[bot]
207cbe519d Bump phonenumbers from 8.13.13 to 8.13.14 (#15798)
Bumps [phonenumbers](https://github.com/daviddrysdale/python-phonenumbers) from 8.13.13 to 8.13.14.
- [Commits](https://github.com/daviddrysdale/python-phonenumbers/compare/v8.13.13...v8.13.14)

---
updated-dependencies:
- dependency-name: phonenumbers
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-19 10:29:10 +01:00
dependabot[bot]
d3cd9881c0 Bump ruff from 0.0.265 to 0.0.272 (#15799)
Bumps [ruff](https://github.com/charliermarsh/ruff) from 0.0.265 to 0.0.272.
- [Release notes](https://github.com/charliermarsh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/BREAKING_CHANGES.md)
- [Commits](https://github.com/charliermarsh/ruff/compare/v0.0.265...v0.0.272)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-19 10:28:57 +01:00
dependabot[bot]
10c509425f Bump serde_json from 1.0.96 to 1.0.97 (#15797)
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.96 to 1.0.97.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.96...v1.0.97)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-19 10:28:43 +01:00
Eric Eastwood
0f02f0b4da Remove experimental MSC2716 implementation to incrementally import history into existing rooms (#15748)
Context for why we're removing the implementation:

 - https://github.com/matrix-org/matrix-spec-proposals/pull/2716#issuecomment-1487441010
 - https://github.com/matrix-org/matrix-spec-proposals/pull/2716#issuecomment-1504262734

Anyone wanting to continue MSC2716 should also address these leftover tasks: https://github.com/matrix-org/synapse/issues/10737

Closes https://github.com/matrix-org/synapse/issues/10737 in that it is no longer necessary to track those things.
2023-06-16 14:12:24 -05:00
Andrew Morgan
2ac6c3bbb5 Don't always lock "user_ips" table when performing non-native upsert (#15788) 2023-06-16 15:25:44 +01:00
Mathieu Velten
0618bf94cd push rules: fix internal conversion from _type to value (#15781)
Also fix wrong rule names for `is_user_mention` and `is_room_mention`.
2023-06-16 14:17:02 +02:00
Mathieu Velten
f63d4a3a65 Regularly try to wake up dests instead of waiting for next PDU/EDU (#15743) 2023-06-16 10:15:12 +00:00
Josh Qou
d939120421 Fix unsafe hotserving behaviour for non-multimedia uploads. (#15680)
* Fix unsafe hotserving behaviour for non-multimedia uploads.

* invert disposition assert

* test_media_storage.py: run lint

* test_base.py: /inline/attachment/s

* Only return attachment for disposition type, update tests

* Update synapse/media/_base.py

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>

* Update changelog.d/15680.bugfix

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>

* add attribution

* Update changelog.

---------

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2023-06-15 14:23:27 +01:00
Tulir Asokan
1404f68a03 Fix joining rooms through aliases where the alias server isn't a real homeserver (#15776) 2023-06-14 15:42:33 +01:00
Mathieu Velten
87e5df9a6e Merge branch 'release-v1.86' into develop 2023-06-14 14:54:19 +02:00
Mathieu Velten
825c5909de 1.86.0rc2 2023-06-14 12:17:29 +02:00
Mathieu Velten
ef0d3d7bd9 Revert "Allow for the configuration of max request retries and min/max retry delays in the matrix federation client (#12504)"
This reverts commit d84e66144d.
2023-06-14 11:55:57 +02:00
Mathieu Velten
14f9d9b452 Fix empty scope when having version mismatch between workers (#15774) 2023-06-14 11:53:55 +02:00
Jason Little
21fea6b749 Prefill events after invalidate not before when persisting events (#15758)
Fixes #15757
2023-06-14 09:42:18 +01:00
Eric Eastwood
8ddb2de553 Document looping_call() functionality that will wait for the given function to finish before scheduling another (#15772)
Thanks to @erikjohnston for clarifying, https://github.com/matrix-org/synapse/pull/15743#discussion_r1226544457

We don't have to worry about calls stacking up if the given function takes longer than the scheduled time.
2023-06-13 16:34:54 -05:00
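
Synapse's `looping_call` wraps Twisted's `LoopingCall`, which only schedules the next run once a Deferred returned by the function has fired, e.g.:

```python
from twisted.internet import defer, task

@defer.inlineCallbacks
def slow_job():
    # If this takes longer than the interval, the next run is simply
    # delayed until the Deferred fires -- calls never stack up.
    yield defer.succeed(None)  # stand-in for real asynchronous work

loop = task.LoopingCall(slow_job)
loop.start(30.0)  # seconds between runs; requires a running reactor
```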
Shay
553f2f53e7 Replace EventContext fields prev_group and delta_ids with field state_group_deltas (#15233) 2023-06-13 13:22:06 -07:00
Mathieu Velten
59ec4a0dc1 Fix MSC3983 support: only one OTK per device was returned through federation (#15770) 2023-06-13 19:51:47 +02:00
Eric Eastwood
0757d59ec4 Avoid backfill when we already have messages to return (#15737)
We now only block the client to backfill when we see a large gap in the events (more than 2 events missing in a row according to `depth`), more than 3 single-event holes, or not enough messages to fill the response. Otherwise, we return the messages directly to the client and backfill in the background for eventual consistency's sake.

Fix https://github.com/matrix-org/synapse/issues/15696
2023-06-13 12:31:08 -05:00
Patrick Cloke
df945e0d7c Fix MSC3983 support: Use the unstable /keys/claim federation endpoint if multiple keys are requested (#15755) 2023-06-13 18:07:55 +02:00
239 changed files with 10404 additions and 7096 deletions


@@ -29,11 +29,12 @@ IS_PR = os.environ["GITHUB_REF"].startswith("refs/pull/")

 # First calculate the various trial jobs.
 #
-# For each type of test we only run on Py3.7 on PRs
+# For PRs, we only run each type of test with the oldest Python version supported (which
+# is Python 3.8 right now)

 trial_sqlite_tests = [
     {
-        "python-version": "3.7",
+        "python-version": "3.8",
         "database": "sqlite",
         "extras": "all",
     }
@@ -46,13 +47,13 @@ if not IS_PR:
             "database": "sqlite",
             "extras": "all",
         }
-        for version in ("3.8", "3.9", "3.10", "3.11")
+        for version in ("3.9", "3.10", "3.11")
     )

 trial_postgres_tests = [
     {
-        "python-version": "3.7",
+        "python-version": "3.8",
         "database": "postgres",
         "postgres-version": "11",
         "extras": "all",
     }
@@ -71,7 +72,7 @@ if not IS_PR:
 trial_no_extra_tests = [
     {
-        "python-version": "3.7",
+        "python-version": "3.8",
         "database": "sqlite",
         "extras": "",
     }
@@ -133,11 +134,6 @@ if not IS_PR:
             "sytest-tag": "testing",
             "postgres": "postgres",
         },
-        {
-            "sytest-tag": "buster",
-            "postgres": "multi-postgres",
-            "workers": "workers",
-        },
     ]
 )


@@ -29,6 +29,16 @@ jobs:
       - name: Inspect builder
         run: docker buildx inspect

+      - name: Checkout repository
+        uses: actions/checkout@v3
+
+      - name: Extract version from pyproject.toml
+        # Note: explicitly requesting bash will mean bash is invoked with `-eo pipefail`, see
+        # https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsshell
+        shell: bash
+        run: |
+          echo "SYNAPSE_VERSION=$(grep "^version" pyproject.toml | sed -E 's/version\s*=\s*["]([^"]*)["]/\1/')" >> $GITHUB_ENV
+
       - name: Log in to DockerHub
         uses: docker/login-action@v2
         with:
@@ -61,7 +71,9 @@ jobs:
         uses: docker/build-push-action@v4
         with:
           push: true
-          labels: "gitsha1=${{ github.sha }}"
+          labels: |
+            gitsha1=${{ github.sha }}
+            org.opencontainers.image.version=${{ env.SYNAPSE_VERSION }}
           tags: "${{ steps.set-tag.outputs.tags }}"
           file: "docker/Dockerfile"
           platforms: linux/amd64,linux/arm64


@@ -144,7 +144,7 @@ jobs:
       - name: Only build a single wheel on PR
         if: startsWith(github.ref, 'refs/pull/')
-        run: echo "CIBW_BUILD="cp37-manylinux_${{ matrix.arch }}"" >> $GITHUB_ENV
+        run: echo "CIBW_BUILD="cp38-manylinux_${{ matrix.arch }}"" >> $GITHUB_ENV

       - name: Build wheels
         run: python -m cibuildwheel --output-dir wheelhouse


@@ -320,7 +320,7 @@ jobs:
       - uses: actions/setup-python@v4
         with:
-          python-version: '3.7'
+          python-version: '3.8'

       - name: Prepare old deps
         if: steps.cache-poetry-old-deps.outputs.cache-hit != 'true'
@@ -362,7 +362,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: ["pypy-3.7"]
+        python-version: ["pypy-3.8"]
         extras: ["all"]

     steps:
@@ -399,8 +399,8 @@ jobs:
       env:
         SYTEST_BRANCH: ${{ github.head_ref }}
         POSTGRES: ${{ matrix.job.postgres && 1}}
-        MULTI_POSTGRES: ${{ (matrix.job.postgres == 'multi-postgres') && 1}}
-        ASYNCIO_REACTOR: ${{ (matrix.job.reactor == 'asyncio') && 1 }}
+        MULTI_POSTGRES: ${{ (matrix.job.postgres == 'multi-postgres') || '' }}
+        ASYNCIO_REACTOR: ${{ (matrix.job.reactor == 'asyncio') || '' }}
         WORKERS: ${{ matrix.job.workers && 1 }}
         BLACKLIST: ${{ matrix.job.workers && 'synapse-blacklist-with-workers' }}
         TOP: ${{ github.workspace }}
@@ -477,7 +477,7 @@ jobs:
     strategy:
       matrix:
         include:
-          - python-version: "3.7"
+          - python-version: "3.8"
            postgres-version: "11"

          - python-version: "3.11"


@@ -96,7 +96,11 @@ jobs:
     if: needs.check_repo.outputs.should_run_workflow == 'true'
     runs-on: ubuntu-latest
     container:
-      image: matrixdotorg/sytest-synapse:buster
+      # We're using ubuntu:focal because it uses Python 3.8 which is our minimum supported Python version.
+      # This job is a canary to warn us about unreleased twisted changes that would cause problems for us if
+      # they were to be released immediately. For simplicity's sake (and to save CI runners) we use the oldest
+      # version, assuming that any incompatibilities on newer versions would also be present on the oldest.
+      image: matrixdotorg/sytest-synapse:focal
       volumes:
         - ${{ github.workspace }}:/src

.gitignore

@@ -34,6 +34,7 @@ __pycache__/
 /logs
 /media_store/
 /uploads
+/homeserver-config-overrides.d

 # For direnv users
 /.envrc

CHANGES.md

File diff suppressed because it is too large.

Cargo.lock

@@ -13,9 +13,9 @@ dependencies = [

 [[package]]
 name = "anyhow"
-version = "1.0.71"
+version = "1.0.72"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9c7d0618f0e0b7e8ff11427422b64564d5fb0be1940354bfe2e0529b18a9d9b8"
+checksum = "3b13c32d80ecc7ab747b80c3784bce54ee8a7a0cc4fbda9bf4cda2cf6fe90854"

 [[package]]
 name = "arc-swap"
@@ -182,9 +182,9 @@ dependencies = [

 [[package]]
 name = "proc-macro2"
-version = "1.0.52"
+version = "1.0.64"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "1d0e1ae9e836cc3beddd63db0df682593d7e2d3d891ae8c9083d2113e1744224"
+checksum = "78803b62cbf1f46fde80d7c0e803111524b9877184cfe7c3033659490ac7a7da"
 dependencies = [
  "unicode-ident",
 ]
@@ -229,9 +229,9 @@ dependencies = [

 [[package]]
 name = "pyo3-log"
-version = "0.8.2"
+version = "0.8.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c94ff6535a6bae58d7d0b85e60d4c53f7f84d0d0aa35d6a28c3f3e70bfe51444"
+checksum = "f47b0777feb17f61eea78667d61103758b243a871edc09a7786500a50467b605"
 dependencies = [
  "arc-swap",
  "log",
@@ -273,9 +273,9 @@ dependencies = [

 [[package]]
 name = "quote"
-version = "1.0.26"
+version = "1.0.29"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4424af4bf778aae2051a77b60283332f386554255d722233d09fbfc7e30da2fc"
+checksum = "573015e8ab27661678357f27dc26460738fd2b6c86e46f386fde94cb5d913105"
 dependencies = [
  "proc-macro2",
 ]
@@ -291,9 +291,21 @@ dependencies = [

 [[package]]
 name = "regex"
-version = "1.8.4"
+version = "1.9.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d0ab3ca65655bb1e41f2a8c8cd662eb4fb035e67c3f78da1d61dffe89d07300f"
+checksum = "b2eae68fc220f7cf2532e4494aded17545fce192d59cd996e0fe7887f4ceb575"
 dependencies = [
  "aho-corasick",
  "memchr",
+ "regex-automata",
  "regex-syntax",
 ]

+[[package]]
+name = "regex-automata"
+version = "0.3.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "83d3daa6976cffb758ec878f108ba0e062a45b2d6ca3a2cca965338855476caf"
+dependencies = [
+ "aho-corasick",
+ "memchr",
@@ -302,9 +314,9 @@ dependencies = [

 [[package]]
 name = "regex-syntax"
-version = "0.7.2"
+version = "0.7.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "436b050e76ed2903236f032a59761c1eb99e1b0aead2c257922771dab1fc8c78"
+checksum = "2ab07dc67230e4a4718e70fd5c20055a4334b121f1f9db8fe63ef39ce9b8c846"

 [[package]]
 name = "ryu"
@@ -320,29 +332,29 @@ checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd"

 [[package]]
 name = "serde"
-version = "1.0.164"
+version = "1.0.179"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9e8c8cf938e98f769bc164923b06dce91cea1751522f46f8466461af04c9027d"
+checksum = "0a5bf42b8d227d4abf38a1ddb08602e229108a517cd4e5bb28f9c7eaafdce5c0"
 dependencies = [
  "serde_derive",
 ]

 [[package]]
 name = "serde_derive"
-version = "1.0.164"
+version = "1.0.179"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d9735b638ccc51c28bf6914d90a2e9725b377144fc612c49a611fddd1b631d68"
+checksum = "741e124f5485c7e60c03b043f79f320bff3527f4bbf12cf3831750dc46a0ec2c"
 dependencies = [
  "proc-macro2",
  "quote",
- "syn 2.0.10",
+ "syn 2.0.25",
 ]

 [[package]]
 name = "serde_json"
-version = "1.0.96"
+version = "1.0.104"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "057d394a50403bcac12672b2b18fb387ab6d289d957dab67dd201875391e52f1"
+checksum = "076066c5f1078eac5b722a31827a8832fe108bed65dfa75e233c89f8206e976c"
 dependencies = [
  "itoa",
  "ryu",
@@ -374,9 +386,9 @@ dependencies = [

 [[package]]
 name = "syn"
-version = "2.0.10"
+version = "2.0.25"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5aad1363ed6d37b84299588d62d3a7d95b5a5c2d9aad5c85609fda12afaa1f40"
+checksum = "15e3fc8c0c74267e2df136e5e5fb656a464158aa57624053375eb9c8c6e25ae2"
 dependencies = [
  "proc-macro2",
  "quote",


@@ -3,3 +3,4 @@

 [workspace]
 members = ["rust"]
+resolver = "2"

changelog.d/15525.misc

@@ -0,0 +1 @@
Update SQL queries to inline boolean parameters as supported in SQLite 3.27.


@@ -0,0 +1 @@
Scope transaction IDs to devices (implement [MSC3970](https://github.com/matrix-org/matrix-spec-proposals/pull/3970)).

changelog.d/15754.misc

@@ -0,0 +1 @@
Allow for the configuration of the backoff algorithm for federation destinations.

changelog.d/15791.bugfix

@@ -0,0 +1 @@
Fix bug where purging history and paginating simultaneously could lead to database corruption when using workers.


@@ -0,0 +1 @@
Remove support for legacy application service paths.

changelog.d/15972.docker

@@ -0,0 +1 @@
Add `org.opencontainers.image.version` labels to Docker containers [published by Matrix.org](https://hub.docker.com/r/matrixdotorg/synapse). Contributed by Mo Balaa.

changelog.d/15991.misc

@@ -0,0 +1 @@
Allow modules to check whether the current worker is configured to run background tasks.

changelog.d/15992.misc

@@ -0,0 +1 @@
Update support for [MSC3958](https://github.com/matrix-org/matrix-spec-proposals/pull/3958) to match the latest revision of the MSC.

changelog.d/16009.docker

@@ -0,0 +1 @@
Add `org.opencontainers.image.version` labels to Docker containers [published by Matrix.org](https://hub.docker.com/r/matrixdotorg/synapse). Contributed by Mo Balaa.

changelog.d/16011.misc

@@ -0,0 +1 @@
Update PyYAML to 6.0.1.

changelog.d/16012.bugfix

@@ -0,0 +1 @@
Fix 404 not found code returned on profile endpoint when the display name is empty but not the avatar URL.

changelog.d/16013.misc

@@ -0,0 +1 @@
Properly overwrite the `redacts` content-property for forwards-compatibility with room versions 1 through 10.

changelog.d/16016.doc

@@ -0,0 +1,2 @@
Clarify comment on the keys/upload over replication endpoint.


@@ -0,0 +1 @@
Move support for application service query parameter authorization behind a configuration option.

changelog.d/16019.misc

@@ -0,0 +1 @@
Fix building the nix development environment on MacOS systems.

changelog.d/16023.misc

@@ -0,0 +1 @@
Combine duplicated code.

changelog.d/16027.doc

@@ -0,0 +1 @@
Do not expose Admin API in caddy reverse proxy example. Contributed by @NilsIrl.

changelog.d/16028.misc

@@ -0,0 +1 @@
Collect additional metrics from `ResponseCache` for eviction.

changelog.d/16031.bugfix

@@ -0,0 +1 @@
Remove leading and trailing spaces when setting a display name.

changelog.d/16043.bugfix

@@ -0,0 +1 @@
Fix a long-standing bug where the `synapse_port_db` script failed to configure sequences for application services and partially-stated rooms.

changelog.d/16044.misc

@@ -0,0 +1 @@
Update certifi to 2023.7.22 and pygments to 2.15.1.

changelog.d/16063.misc

@@ -0,0 +1 @@
Fix building the nix development environment on MacOS systems.


@@ -63,7 +63,7 @@
           "uid": "${DS_PROMETHEUS}"
         },
         "enable": true,
-        "expr": "changes(process_start_time_seconds{instance=\"matrix.org\",job=~\"synapse\"}[$bucket_size]) * on (instance, job) group_left(version) synapse_build_info{instance=\"matrix.org\",job=\"synapse\"}",
+        "expr": "changes(process_start_time_seconds{instance=\"$instance\",job=~\"synapse\"}[$bucket_size]) * on (instance, job) group_left(version) synapse_build_info{instance=\"$instance\",job=\"synapse\"}",
         "iconColor": "purple",
         "name": "deploys",
         "titleFormat": "Deployed {{version}}"


@@ -29,7 +29,7 @@
"level": "error"
},
{
"line": "my-matrix-server-federation-sender-1 | 2023-01-25 20:56:20,995 - synapse.http.matrixfederationclient - 709 - WARNING - federation_transaction_transmission_loop-3 - {PUT-O-3} [example.com] Request failed: PUT matrix://example.com/_matrix/federation/v1/send/1674680155797: HttpResponseException('403: Forbidden')",
"line": "my-matrix-server-federation-sender-1 | 2023-01-25 20:56:20,995 - synapse.http.matrixfederationclient - 709 - WARNING - federation_transaction_transmission_loop-3 - {PUT-O-3} [example.com] Request failed: PUT matrix-federation://example.com/_matrix/federation/v1/send/1674680155797: HttpResponseException('403: Forbidden')",
"level": "warning"
},
{

debian/changelog (vendored)

@@ -1,3 +1,51 @@
matrix-synapse-py3 (1.89.0) stable; urgency=medium
* New Synapse release 1.89.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 01 Aug 2023 11:07:15 +0100
matrix-synapse-py3 (1.89.0~rc1) stable; urgency=medium
* New Synapse release 1.89.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 25 Jul 2023 14:31:07 +0200
matrix-synapse-py3 (1.88.0) stable; urgency=medium
* New Synapse release 1.88.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 18 Jul 2023 13:59:28 +0100
matrix-synapse-py3 (1.88.0~rc1) stable; urgency=medium
* New Synapse release 1.88.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 11 Jul 2023 10:20:19 +0100
matrix-synapse-py3 (1.87.0) stable; urgency=medium
* New Synapse release 1.87.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 04 Jul 2023 16:24:00 +0100
matrix-synapse-py3 (1.87.0~rc1) stable; urgency=medium
* New synapse release 1.87.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 27 Jun 2023 15:27:04 +0000
matrix-synapse-py3 (1.86.0) stable; urgency=medium
* New Synapse release 1.86.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 20 Jun 2023 17:22:46 +0200
matrix-synapse-py3 (1.86.0~rc2) stable; urgency=medium
* New Synapse release 1.86.0rc2.
-- Synapse Packaging team <packages@matrix.org> Wed, 14 Jun 2023 12:16:27 +0200
matrix-synapse-py3 (1.86.0~rc1) stable; urgency=medium
* New Synapse release 1.86.0rc1.


@@ -28,12 +28,12 @@ FROM docker.io/library/${distro} as builder
RUN apt-get update -qq -o Acquire::Languages=none
RUN env DEBIAN_FRONTEND=noninteractive apt-get install \
-yqq --no-install-recommends \
build-essential \
ca-certificates \
devscripts \
equivs \
wget
-yqq --no-install-recommends \
build-essential \
ca-certificates \
devscripts \
equivs \
wget
# fetch and unpack the package
# We are temporarily using a fork of dh-virtualenv due to an incompatibility with Python 3.11, which ships with
@@ -62,33 +62,29 @@ FROM docker.io/library/${distro}
ARG distro=""
ENV distro ${distro}
# Python < 3.7 assumes LANG="C" means ASCII-only and throws on printing unicode
# http://bugs.python.org/issue19846
ENV LANG C.UTF-8
# Install the build dependencies
#
# NB: keep this list in sync with the list of build-deps in debian/control
# TODO: it would be nice to do that automatically.
RUN apt-get update -qq -o Acquire::Languages=none \
&& env DEBIAN_FRONTEND=noninteractive apt-get install \
-yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io \
build-essential \
curl \
debhelper \
devscripts \
libsystemd-dev \
lsb-release \
pkg-config \
python3-dev \
python3-pip \
python3-setuptools \
python3-venv \
sqlite3 \
libpq-dev \
libicu-dev \
pkg-config \
xmlsec1
-yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io \
build-essential \
curl \
debhelper \
devscripts \
libsystemd-dev \
lsb-release \
pkg-config \
python3-dev \
python3-pip \
python3-setuptools \
python3-venv \
sqlite3 \
libpq-dev \
libicu-dev \
pkg-config \
xmlsec1
# Install rust and ensure it's in the PATH
ENV RUSTUP_HOME=/rust


@@ -92,8 +92,6 @@ allow_device_name_lookup_over_federation: true
## Experimental Features ##
experimental_features:
# Enable history backfilling support
msc2716_enabled: true
# client-side support for partial state in /send_join responses
faster_joins: true
# Enable support for polls


@@ -35,7 +35,11 @@ server {
# Send all other traffic to the main process
location ~* ^(\\/_matrix|\\/_synapse) {
{% if using_unix_sockets %}
proxy_pass http://unix:/run/main_public.sock;
{% else %}
proxy_pass http://localhost:8080;
{% endif %}
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;


@@ -6,6 +6,9 @@
{% if enable_redis %}
redis:
enabled: true
{% if using_unix_sockets %}
path: /tmp/redis.sock
{% endif %}
{% endif %}
{% if appservice_registrations is not none %}


@@ -19,7 +19,11 @@ username=www-data
autorestart=true
[program:redis]
{% if using_unix_sockets %}
command=/usr/local/bin/prefix-log /usr/local/bin/redis-server --unixsocket /tmp/redis.sock
{% else %}
command=/usr/local/bin/prefix-log /usr/local/bin/redis-server
{% endif %}
priority=1
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0


@@ -8,7 +8,11 @@ worker_name: "{{ name }}"
worker_listeners:
- type: http
{% if using_unix_sockets %}
path: "/run/worker.{{ port }}"
{% else %}
port: {{ port }}
{% endif %}
{% if listener_resources %}
resources:
- names:


@@ -36,12 +36,17 @@ listeners:
# Allow configuring in case we want to reverse proxy 8008
# using another process in the same container
{% if SYNAPSE_USE_UNIX_SOCKET %}
# Unix sockets don't care about TLS or IP addresses or ports
- path: '/run/main_public.sock'
type: http
{% else %}
- port: {{ SYNAPSE_HTTP_PORT or 8008 }}
tls: false
bind_addresses: ['::']
type: http
x_forwarded: false
{% endif %}
resources:
- names: [client]
compress: true
@@ -57,8 +62,11 @@ database:
user: "{{ POSTGRES_USER or "synapse" }}"
password: "{{ POSTGRES_PASSWORD }}"
database: "{{ POSTGRES_DB or "synapse" }}"
{% if not SYNAPSE_USE_UNIX_SOCKET %}
{# Synapse will use a default unix socket for Postgres when host/port is not specified (behavior from `psycopg2`). #}
host: "{{ POSTGRES_HOST or "db" }}"
port: "{{ POSTGRES_PORT or "5432" }}"
{% endif %}
cp_min: 5
cp_max: 10
{% else %}


@@ -74,6 +74,9 @@ MAIN_PROCESS_HTTP_LISTENER_PORT = 8080
MAIN_PROCESS_INSTANCE_NAME = "main"
MAIN_PROCESS_LOCALHOST_ADDRESS = "127.0.0.1"
MAIN_PROCESS_REPLICATION_PORT = 9093
# Obviously, these would only be used with the UNIX socket option
MAIN_PROCESS_UNIX_SOCKET_PUBLIC_PATH = "/run/main_public.sock"
MAIN_PROCESS_UNIX_SOCKET_PRIVATE_PATH = "/run/main_private.sock"
# A simple name used as a placeholder in the WORKERS_CONFIG below. This will be replaced
# during processing with the name of the worker.
@@ -244,7 +247,6 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
"^/_matrix/client/(api/v1|r0|v3|unstable)/join/",
"^/_matrix/client/(api/v1|r0|v3|unstable)/knock/",
"^/_matrix/client/(api/v1|r0|v3|unstable)/profile/",
"^/_matrix/client/(v1|unstable/org.matrix.msc2716)/rooms/.*/batch_send",
],
"shared_extra_conf": {},
"worker_extra_conf": "",
@@ -408,11 +410,15 @@ def add_worker_roles_to_shared_config(
)
# Map of stream writer instance names to host/ports combos
instance_map[worker_name] = {
"host": "localhost",
"port": worker_port,
}
if os.environ.get("SYNAPSE_USE_UNIX_SOCKET", False):
instance_map[worker_name] = {
"path": f"/run/worker.{worker_port}",
}
else:
instance_map[worker_name] = {
"host": "localhost",
"port": worker_port,
}
# Update the list of stream writers. It's convenient that the name of the worker
# type is the same as the stream to write. Iterate over the whole list in case there
# is more than one.
@@ -424,10 +430,15 @@ def add_worker_roles_to_shared_config(
# Map of stream writer instance names to host/ports combos
# For now, all stream writers need http replication ports
instance_map[worker_name] = {
"host": "localhost",
"port": worker_port,
}
if os.environ.get("SYNAPSE_USE_UNIX_SOCKET", False):
instance_map[worker_name] = {
"path": f"/run/worker.{worker_port}",
}
else:
instance_map[worker_name] = {
"host": "localhost",
"port": worker_port,
}
def merge_worker_template_configs(
@@ -719,17 +730,29 @@ def generate_worker_files(
# Note that yaml cares about indentation, so care should be taken to insert lines
# into files at the correct indentation below.
# Convenience helper for if using unix sockets instead of host:port
using_unix_sockets = environ.get("SYNAPSE_USE_UNIX_SOCKET", False)
# First read the original config file and extract the listeners block. Then we'll
# add another listener for replication. Later we'll write out the result to the
# shared config file.
listeners = [
{
"port": MAIN_PROCESS_REPLICATION_PORT,
"bind_address": MAIN_PROCESS_LOCALHOST_ADDRESS,
"type": "http",
"resources": [{"names": ["replication"]}],
}
]
listeners: List[Any]
if using_unix_sockets:
listeners = [
{
"path": MAIN_PROCESS_UNIX_SOCKET_PRIVATE_PATH,
"type": "http",
"resources": [{"names": ["replication"]}],
}
]
else:
listeners = [
{
"port": MAIN_PROCESS_REPLICATION_PORT,
"bind_address": MAIN_PROCESS_LOCALHOST_ADDRESS,
"type": "http",
"resources": [{"names": ["replication"]}],
}
]
with open(config_path) as file_stream:
original_config = yaml.safe_load(file_stream)
original_listeners = original_config.get("listeners")
@@ -770,7 +793,17 @@ def generate_worker_files(
# A list of internal endpoints to healthcheck, starting with the main process
# which exists even if no workers do.
healthcheck_urls = ["http://localhost:8080/health"]
# This list ends up being part of the command line to curl (curl added support for
# Unix sockets in version 7.40).
if using_unix_sockets:
healthcheck_urls = [
f"--unix-socket {MAIN_PROCESS_UNIX_SOCKET_PUBLIC_PATH} "
# The scheme and hostname from the following URL are ignored.
# The only thing that matters is the path `/health`
"http://localhost/health"
]
else:
healthcheck_urls = ["http://localhost:8080/health"]
# Get the set of all worker types that we have configured
all_worker_types_in_use = set(chain(*requested_worker_types.values()))
@@ -807,8 +840,12 @@ def generate_worker_files(
# given worker_type needs to stay assigned and not be replaced.
worker_config["shared_extra_conf"].update(shared_config)
shared_config = worker_config["shared_extra_conf"]
healthcheck_urls.append("http://localhost:%d/health" % (worker_port,))
if using_unix_sockets:
healthcheck_urls.append(
f"--unix-socket /run/worker.{worker_port} http://localhost/health"
)
else:
healthcheck_urls.append("http://localhost:%d/health" % (worker_port,))
# Update the shared config with sharding-related options if necessary
add_worker_roles_to_shared_config(
@@ -827,6 +864,7 @@ def generate_worker_files(
"/conf/workers/{name}.yaml".format(name=worker_name),
**worker_config,
worker_log_config_filepath=log_config_filepath,
using_unix_sockets=using_unix_sockets,
)
# Save this worker's port number to the correct nginx upstreams
@@ -847,8 +885,13 @@ def generate_worker_files(
nginx_upstream_config = ""
for upstream_worker_base_name, upstream_worker_ports in nginx_upstreams.items():
body = ""
for port in upstream_worker_ports:
body += f" server localhost:{port};\n"
if using_unix_sockets:
for port in upstream_worker_ports:
body += f" server unix:/run/worker.{port};\n"
else:
for port in upstream_worker_ports:
body += f" server localhost:{port};\n"
# Add to the list of configured upstreams
nginx_upstream_config += NGINX_UPSTREAM_CONFIG_BLOCK.format(
@@ -878,10 +921,15 @@ def generate_worker_files(
# If there are workers, add the main process to the instance_map too.
if workers_in_use:
instance_map = shared_config.setdefault("instance_map", {})
instance_map[MAIN_PROCESS_INSTANCE_NAME] = {
"host": MAIN_PROCESS_LOCALHOST_ADDRESS,
"port": MAIN_PROCESS_REPLICATION_PORT,
}
if using_unix_sockets:
instance_map[MAIN_PROCESS_INSTANCE_NAME] = {
"path": MAIN_PROCESS_UNIX_SOCKET_PRIVATE_PATH,
}
else:
instance_map[MAIN_PROCESS_INSTANCE_NAME] = {
"host": MAIN_PROCESS_LOCALHOST_ADDRESS,
"port": MAIN_PROCESS_REPLICATION_PORT,
}
# Shared homeserver config
convert(
@@ -891,6 +939,7 @@ def generate_worker_files(
appservice_registrations=appservice_registrations,
enable_redis=workers_in_use,
workers_in_use=workers_in_use,
using_unix_sockets=using_unix_sockets,
)
# Nginx config
@@ -901,6 +950,7 @@ def generate_worker_files(
upstream_directives=nginx_upstream_config,
tls_cert_path=os.environ.get("SYNAPSE_TLS_CERT"),
tls_key_path=os.environ.get("SYNAPSE_TLS_KEY"),
using_unix_sockets=using_unix_sockets,
)
# Supervisord config
@@ -910,6 +960,7 @@ def generate_worker_files(
"/etc/supervisor/supervisord.conf",
main_config_path=config_path,
enable_redis=workers_in_use,
using_unix_sockets=using_unix_sockets,
)
convert(
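
With these changes, the generated healthchecks and replication connections go over Unix sockets whenever `SYNAPSE_USE_UNIX_SOCKET` is set. For manual debugging, such a listener can also be probed without curl; a minimal sketch using only the Python standard library (the socket path is the default defined in the script above):

```python
import http.client
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """An HTTPConnection that talks to a Unix domain socket instead of TCP."""

    def __init__(self, socket_path: str):
        super().__init__("localhost")  # the host is ignored for Unix sockets
        self._socket_path = socket_path

    def connect(self) -> None:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._socket_path)
        self.sock = sock


conn = UnixHTTPConnection("/run/main_public.sock")
conn.request("GET", "/health")
print(conn.getresponse().status)  # expect 200 once Synapse is up
```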


@@ -419,7 +419,7 @@ The following query parameters are available:
* `from` (required) - The token to start returning events from. This token can be obtained from a prev_batch
or next_batch token returned by the /sync endpoint, or from an end token returned by a previous request to this endpoint.
* `to` - The token to spot returning events at.
* `to` - The token to stop returning events at.
* `limit` - The maximum number of events to return. Defaults to `10`.
* `filter` - A JSON RoomEventFilter to filter returned events with.
* `dir` - The direction to return events from. Either `f` for forwards or `b` for backwards. Setting


@@ -242,6 +242,9 @@ The following parameters should be set in the URL:
- `dir` - Direction of media order. Either `f` for forwards or `b` for backwards.
Setting this value to `b` will reverse the above sort order. Defaults to `f`.
- `not_user_type` - Exclude certain user types, such as bot users, from the request.
Can be provided multiple times. Possible values are `bot`, `support` or "empty string".
"empty string" here means to exclude users without a type.
Caution. The database only has indexes on the columns `name` and `creation_ts`.
This means that if a different sort order is used (`is_guest`, `admin`,
@@ -729,7 +732,8 @@ POST /_synapse/admin/v1/users/<user_id>/login
An optional `valid_until_ms` field can be specified in the request body as an
integer timestamp that specifies when the token should expire. By default tokens
do not expire.
do not expire. Note that this API does not allow a user to log in as themselves
(to create more tokens).
A response body like the following is returned:
@@ -1180,7 +1184,7 @@ The following parameters should be set in the URL:
- `user_id` - The fully qualified MXID: for example, `@user:server.com`. The user must
be local.
### Check username availability
## Check username availability
Checks to see if a username is available, and valid, for the server. See [the client-server
API](https://matrix.org/docs/spec/client_server/r0.6.0#get-matrix-client-r0-register-available)
@@ -1198,7 +1202,7 @@ GET /_synapse/admin/v1/username_available?username=$localpart
The request and response format is the same as the
[/_matrix/client/r0/register/available](https://matrix.org/docs/spec/client_server/r0.6.0#get-matrix-client-r0-register-available) API.
### Find a user based on their ID in an auth provider
## Find a user based on their ID in an auth provider
The API is:
@@ -1237,7 +1241,7 @@ Returns a `404` HTTP status code if no user was found, with a response body like
_Added in Synapse 1.68.0._
### Find a user based on their Third Party ID (ThreePID or 3PID)
## Find a user based on their Third Party ID (ThreePID or 3PID)
The API is:

File diff suppressed because it is too large.


@@ -23,7 +23,7 @@ people building from source should ensure they can fetch recent versions of Rust
(e.g. by using [rustup](https://rustup.rs/)).
The oldest supported version of SQLite is the version
[provided](https://packages.debian.org/buster/libsqlite3-0) by
[provided](https://packages.debian.org/bullseye/libsqlite3-0) by
[Debian oldstable](https://wiki.debian.org/DebianOldStable).


@@ -322,7 +322,7 @@ The following command will let you run the integration test with the most common
configuration:
```sh
$ docker run --rm -it -v /path/where/you/have/cloned/the/repository\:/src:ro -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:buster
$ docker run --rm -it -v /path/where/you/have/cloned/the/repository\:/src:ro -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:focal
```
(Note that the paths must be full paths! You could also write `$(realpath relative/path)` if needed.)
@@ -370,6 +370,7 @@ The above will run a monolithic (single-process) Synapse with SQLite as the data
See the [worker documentation](../workers.md) for additional information on workers.
- Passing `ASYNCIO_REACTOR=1` as an environment variable to use the Twisted asyncio reactor instead of the default one.
- Passing `PODMAN=1` will use the [podman](https://podman.io/) container runtime, instead of docker.
- Passing `UNIX_SOCKETS=1` will utilise Unix socket functionality for Synapse, Redis, and Postgres(when applicable).
To increase the log level for the tests, set `SYNAPSE_TEST_LOG_LEVEL`, e.g:
```sh


@@ -6,7 +6,7 @@ This is a work-in-progress set of notes with two goals:
See also [MSC3902](https://github.com/matrix-org/matrix-spec-proposals/pull/3902).
The key idea is described by [MSC706](https://github.com/matrix-org/matrix-spec-proposals/pull/3902). This allows servers to
The key idea is described by [MSC3706](https://github.com/matrix-org/matrix-spec-proposals/pull/3706). This allows servers to
request a lightweight response to the federation `/send_join` endpoint.
This is called a **faster join**, also known as a **partial join**. In these
notes we'll usually use the word "partial" as it matches the database schema.


@@ -348,6 +348,42 @@ callback returns `False`, Synapse falls through to the next one. The value of the first
callback that does not return `False` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback.
### `check_login_for_spam`
_First introduced in Synapse v1.87.0_
```python
async def check_login_for_spam(
user_id: str,
device_id: Optional[str],
initial_display_name: Optional[str],
request_info: Collection[Tuple[Optional[str], str]],
auth_provider_id: Optional[str] = None,
) -> Union["synapse.module_api.NOT_SPAM", "synapse.module_api.errors.Codes"]
```
Called when a user logs in.
The arguments passed to this callback are:
* `user_id`: The user ID the user is logging in with
* `device_id`: The device ID the user is re-logging into.
* `initial_display_name`: The device display name, if any.
* `request_info`: A collection of tuples, whose first item is a user agent and whose
second item is an IP address. These user agents and IP addresses are the ones that were
used during the login process.
* `auth_provider_id`: The identifier of the SSO authentication provider, if any.
If multiple modules implement this callback, they will be considered in order. If a
callback returns `synapse.module_api.NOT_SPAM`, Synapse falls through to the next one.
The value of the first callback that does not return `synapse.module_api.NOT_SPAM` will
be used. If this happens, Synapse will not call any of the subsequent implementations of
this callback.
*Note:* This will not be called when a user registers.
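As a concrete illustration, here is a minimal module sketch that registers this callback and rejects logins made with a suspicious user agent (the class name, config handling, and the `BadBot` check are invented for the example):

```python
from typing import Collection, Optional, Tuple, Union

import synapse.module_api


class ExampleLoginSpamChecker:
    def __init__(self, config: dict, api: synapse.module_api.ModuleApi):
        self._api = api
        api.register_spam_checker_callbacks(
            check_login_for_spam=self.check_login_for_spam,
        )

    async def check_login_for_spam(
        self,
        user_id: str,
        device_id: Optional[str],
        initial_display_name: Optional[str],
        request_info: Collection[Tuple[Optional[str], str]],
        auth_provider_id: Optional[str] = None,
    ) -> Union["synapse.module_api.NOT_SPAM", "synapse.module_api.errors.Codes"]:
        # Reject any login attempt made with a known-bad user agent.
        for user_agent, ip in request_info:
            if user_agent and "BadBot" in user_agent:
                return synapse.module_api.errors.Codes.FORBIDDEN
        return self._api.NOT_SPAM
```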
## Example
The example below is a module that implements the spam checker callback


@@ -95,7 +95,7 @@ matrix.example.com {
}
example.com:8448 {
reverse_proxy localhost:8008
reverse_proxy /_matrix/* localhost:8008
}
```


@@ -135,8 +135,8 @@ Unofficial package are built for SLES 15 in the openSUSE:Backports:SLE-15 reposi
#### ArchLinux
The quickest way to get up and running with ArchLinux is probably with the community package
<https://archlinux.org/packages/community/x86_64/matrix-synapse/>, which should pull in most of
The quickest way to get up and running with ArchLinux is probably with the package provided by ArchLinux
<https://archlinux.org/packages/extra/x86_64/matrix-synapse/>, which should pull in most of
the necessary dependencies.
pip may be outdated (6.0.7-1 and needs to be upgraded to 6.0.8-1):
@@ -200,7 +200,7 @@ When following this route please make sure that the [Platform-specific prerequis
System requirements:
- POSIX-compliant system (tested on Linux & OS X)
- Python 3.7 or later, up to Python 3.11.
- Python 3.8 or later, up to Python 3.11.
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
If building on an uncommon architecture for which pre-built wheels are


@@ -1,8 +1,4 @@
worker_app: synapse.app.generic_worker
worker_name: background_worker
# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_log_config: /etc/matrix-synapse/background-worker-log.yaml


@@ -1,9 +1,5 @@
worker_app: synapse.app.generic_worker
worker_name: event_persister1
# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_name: event_persister1
worker_listeners:
- type: http


@@ -1,8 +1,4 @@
worker_app: synapse.app.federation_sender
worker_name: federation_sender1
# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_log_config: /etc/matrix-synapse/federation-sender-log.yaml


@@ -1,10 +1,6 @@
worker_app: synapse.app.media_repository
worker_name: media_worker
# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_listeners:
- type: http
port: 8085


@@ -1,8 +1,4 @@
worker_app: synapse.app.pusher
worker_name: pusher_worker1
# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_log_config: /etc/matrix-synapse/pusher-worker-log.yaml


@@ -87,6 +87,57 @@ process, for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.90.0
## App service query parameter authorization is now a configuration option
Synapse v1.81.0 deprecated application service authorization via query parameters as this is
considered insecure - and from Synapse v1.71.0 onwards the application service token has also been sent via
[the `Authorization` header](https://spec.matrix.org/v1.6/application-service-api/#authorization), making the insecure
query parameter authorization redundant. Since removing the ability to continue to use query parameters could break
backwards compatibility, it has now been put behind a configuration option, `use_appservice_legacy_authorization`.
This option defaults to false, but can be activated by adding
```yaml
use_appservice_legacy_authorization: true
```
to your configuration.
# Upgrading to v1.89.0
## Removal of unspecced `user` property for `/register`
Application services can no longer call `/register` with a `user` property to create new users.
The standard `username` property should be used instead. See the
[Application Service specification](https://spec.matrix.org/v1.7/application-service-api/#server-admin-style-permissions)
for more information.
# Upgrading to v1.88.0
## Minimum supported Python version
The minimum supported Python version has been increased from v3.7 to v3.8.
You will need Python 3.8 to run Synapse v1.88.0 (due out July 18th, 2023).
If you use current versions of the Matrix.org-distributed Debian
packages or Docker images, no action is required.
## Removal of `worker_replication_*` settings
As mentioned previously in [Upgrading to v1.84.0](#upgrading-to-v1840), the following deprecated settings
are being removed in this release of Synapse:
* [`worker_replication_host`](https://matrix-org.github.io/synapse/v1.86/usage/configuration/config_documentation.html#worker_replication_host)
* [`worker_replication_http_port`](https://matrix-org.github.io/synapse/v1.86/usage/configuration/config_documentation.html#worker_replication_http_port)
* [`worker_replication_http_tls`](https://matrix-org.github.io/synapse/v1.86/usage/configuration/config_documentation.html#worker_replication_http_tls)
Please ensure that you have migrated to using `main` on your shared configuration's `instance_map`
(or create one if necessary). This is required if you have ***any*** workers at all;
administrators of single-process (monolith) installations don't need to do anything.
For an illustrative example, please see [Upgrading to v1.84.0](#upgrading-to-v1840) below.
# Upgrading to v1.86.0
## Minimum supported Rust version


@@ -462,6 +462,20 @@ See the docs [request log format](../administration/request_log.md).
* `additional_resources`: Only valid for an 'http' listener. A map of
additional endpoints which should be loaded via dynamic modules.
Unix socket support (_Added in Synapse 1.89.0_):
* `path`: A path and filename for a Unix socket. Make sure it is located in a
directory with read and write permissions, and that it already exists (the directory
will not be created). Defaults to `None`.
* **Note**: The `path` and `port` options cannot be combined on the same `listener`.
* The `x_forwarded` option defaults to true when using Unix sockets and can be omitted.
* Other options that would not make sense to use with a UNIX socket, such as
`bind_addresses` and `tls`, will be ignored and can be removed.
* `mode`: The file permissions to set on the UNIX socket. Defaults to `666`.
* **Note:** Must be set as `type: http` (does not support `metrics` and `manhole`).
Also make sure that `metrics` is not included in `resources` -> `names`.
Valid resource names are:
* `client`: the client-server API (/_matrix/client), and the synapse admin API (/_synapse/admin). Also implies `media` and `static`.
@@ -474,7 +488,7 @@ Valid resource names are:
* `media`: the media API (/_matrix/media).
* `metrics`: the metrics interface. See [here](../../metrics-howto.md).
* `metrics`: the metrics interface. See [here](../../metrics-howto.md). (Not compatible with Unix sockets)
* `openid`: OpenID authentication. See [here](../../openid.md).
@@ -533,6 +547,22 @@ listeners:
bind_addresses: ['::1', '127.0.0.1']
type: manhole
```
Example configuration #3:
```yaml
listeners:
# Unix socket listener: Ideal for Synapse deployments behind a reverse proxy, offering
# lightweight interprocess communication without TCP/IP overhead, avoiding port
# conflicts, and providing enhanced security through system file permissions.
#
# Note that x_forwarded will default to true when using a UNIX socket. Please see
# https://matrix-org.github.io/synapse/latest/reverse_proxy.html.
#
- path: /var/run/synapse/main_public.sock
type: http
resources:
- names: [client, federation]
```
---
### `manhole_settings`
@@ -1206,20 +1236,31 @@ Short retry algorithm is used when something or someone will wait for the reques
answer, while long retry is used for requests that happen in the background,
like sending a federation transaction.
* `client_timeout`: timeout for the federation requests in seconds. Default to 60s.
* `max_short_retry_delay`: maximum delay to be used for the short retry algo in seconds. Default to 2s.
* `max_long_retry_delay`: maximum delay to be used for the short retry algo in seconds. Default to 60s.
* `client_timeout`: timeout for the federation requests. Defaults to 60s.
* `max_short_retry_delay`: maximum delay to be used for the short retry algo. Defaults to 2s.
* `max_long_retry_delay`: maximum delay to be used for the long retry algo. Defaults to 60s.
* `max_short_retries`: maximum number of retries for the short retry algo. Defaults to 3 attempts.
* `max_long_retries`: maximum number of retries for the long retry algo. Defaults to 10 attempts.
The following options control the retry logic when communicating with a specific homeserver destination.
Unlike the previous configuration options, these values apply across all requests
for a given destination and the state of the backoff is stored in the database.
* `destination_min_retry_interval`: the initial backoff, after the first request fails. Defaults to 10m.
* `destination_retry_multiplier`: how much we multiply the backoff by after each subsequent fail. Defaults to 2.
* `destination_max_retry_interval`: a cap on the backoff. Defaults to a week.
Example configuration:
```yaml
federation:
client_timeout: 180
max_short_retry_delay: 7
max_long_retry_delay: 100
client_timeout: 180s
max_short_retry_delay: 7s
max_long_retry_delay: 100s
max_short_retries: 5
max_long_retries: 20
destination_min_retry_interval: 30s
destination_retry_multiplier: 5
destination_max_retry_interval: 12h
```
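Under the `destination_*` options above, the per-destination backoff grows geometrically from the minimum interval and is capped at the maximum. A quick sketch of the progression, using the example values above (30s start, 5x multiplier, 12h cap); this is a simplified model of the documented behaviour, not Synapse's actual code:

```python
def destination_retry_intervals(
    min_interval_s: float, multiplier: float, max_interval_s: float, attempts: int
) -> list:
    """Model the capped exponential backoff between retries of a destination."""
    intervals = []
    delay = min_interval_s
    for _ in range(attempts):
        intervals.append(min(delay, max_interval_s))
        delay *= multiplier
    return intervals

print(destination_retry_intervals(30, 5, 12 * 3600, 6))
# [30, 150, 750, 3750, 18750, 43200]
```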
---
## Caching
@@ -2807,6 +2848,20 @@ Example configuration:
```yaml
track_appservice_user_ips: true
```
---
### `use_appservice_legacy_authorization`
Whether to send the application service access tokens via the `access_token` query parameter
per older versions of the Matrix specification. Defaults to false. Set to true to enable sending
access tokens via a query parameter.
**Enabling this option is considered insecure and is not recommended.**
Example configuration:
```yaml
use_appservice_legacy_authorization: true
```
---
### `macaroon_secret_key`
@@ -3930,13 +3985,14 @@ federation_sender_instances:
---
### `instance_map`
When using workers this should be a map from [`worker_name`](#worker_name) to the
HTTP replication listener of the worker, if configured, and to the main process.
Each worker declared under [`stream_writers`](../../workers.md#stream-writers) needs
a HTTP replication listener, and that listener should be included in the `instance_map`.
The main process also needs an entry on the `instance_map`, and it should be listed under
`main` **if even one other worker exists**. Ensure the port matches with what is declared
inside the `listener` block for a `replication` listener.
When using workers this should be a map from [`worker_name`](#worker_name) to the HTTP
replication listener of the worker, if configured, and to the main process. Each worker
declared under [`stream_writers`](../../workers.md#stream-writers) and
[`outbound_federation_restricted_to`](#outbound_federation_restricted_to) needs an HTTP
replication listener, and that listener should be included in the `instance_map`. The
main process also needs an entry on the `instance_map`, and it should be listed under
`main` **if even one other worker exists**. Ensure the port matches with what is
declared inside the `listener` block for a `replication` listener.
Example configuration:
@@ -3949,6 +4005,14 @@ instance_map:
host: localhost
port: 8034
```
Example configuration (#2, for UNIX sockets):
```yaml
instance_map:
main:
path: /var/run/synapse/main_replication.sock
worker1:
path: /var/run/synapse/worker1_replication.sock
```
---
### `stream_writers`
@@ -3966,6 +4030,24 @@ stream_writers:
typing: worker1
```
---
### `outbound_federation_restricted_to`
When using workers, you can restrict outbound federation traffic to only go through a
specific subset of workers. Any worker specified here must also be in the
[`instance_map`](#instance_map).
[`worker_replication_secret`](#worker_replication_secret) must also be configured to
authorize inter-worker communication.
```yaml
outbound_federation_restricted_to:
- federation_sender1
- federation_sender2
```
Also see the [worker
documentation](../../workers.md#restrict-outbound-federation-traffic-to-a-specific-set-of-workers)
for more info.
---
### `run_background_tasks_on`
The [worker](../../workers.md#background-tasks) that is used to run
@@ -4090,51 +4172,6 @@ Example configuration:
worker_name: generic_worker1
```
---
### `worker_replication_host`
*Deprecated as of version 1.84.0. Place `host` under `main` entry on the [`instance_map`](#instance_map) in your shared yaml configuration instead.*
The HTTP replication endpoint that it should talk to on the main Synapse process.
The main Synapse process defines this with a `replication` resource in
[`listeners` option](#listeners).
Example configuration:
```yaml
worker_replication_host: 127.0.0.1
```
---
### `worker_replication_http_port`
*Deprecated as of version 1.84.0. Place `port` under `main` entry on the [`instance_map`](#instance_map) in your shared yaml configuration instead.*
The HTTP replication port that it should talk to on the main Synapse process.
The main Synapse process defines this with a `replication` resource in
[`listeners` option](#listeners).
Example configuration:
```yaml
worker_replication_http_port: 9093
```
---
### `worker_replication_http_tls`
*Deprecated as of version 1.84.0. Place `tls` under `main` entry on the [`instance_map`](#instance_map) in your shared yaml configuration instead.*
Whether TLS should be used for talking to the HTTP replication port on the main
Synapse process.
The main Synapse process defines this with the `tls` option on its [listener](#listeners) that
has the `replication` resource enabled.
**Please note:** by default, it is not safe to expose replication ports to the
public Internet, even with TLS enabled.
See [`worker_replication_secret`](#worker_replication_secret).
Defaults to `false`.
*Added in Synapse 1.72.0.*
Example configuration:
```yaml
worker_replication_http_tls: true
```
---
### `worker_listeners`
A worker can handle HTTP requests. To do so, a `worker_listeners` option
@@ -4153,6 +4190,18 @@ worker_listeners:
resources:
- names: [client, federation]
```
Example configuration (#2, using UNIX sockets with a `replication` listener):
```yaml
worker_listeners:
- type: http
path: /var/run/synapse/worker_public.sock
resources:
- names: [client, federation]
- type: http
path: /var/run/synapse/worker_replication.sock
resources:
- names: [replication]
```
---
### `worker_manhole`


@@ -95,9 +95,12 @@ for the main process
* Secondly, you need to enable
[redis-based replication](usage/configuration/config_documentation.md#redis)
* You will need to add an [`instance_map`](usage/configuration/config_documentation.md#instance_map)
with the `main` process defined, as well as the relevant connection information from
it's HTTP `replication` listener (defined in step 1 above). Note that the `host` defined
is the address the worker needs to look for the `main` process at, not necessarily the same address that is bound to.
with the `main` process defined, as well as the relevant connection information from
its HTTP `replication` listener (defined in step 1 above).
* Note that the `host` defined is the address the worker needs to look for the `main`
process at, not necessarily the same address that is bound to.
* If you are using Unix sockets for the `replication` resource, make sure to
use a `path` to the socket file instead of a `port`.
* Optionally, a [shared secret](usage/configuration/config_documentation.md#worker_replication_secret)
can be used to authenticate HTTP traffic between workers. For example:
@@ -145,9 +148,6 @@ In the config file for each worker, you must specify:
with an `http` listener.
* **Synapse 1.72 and older:** if handling the `^/_matrix/client/v3/keys/upload` endpoint, the HTTP URI for
the main process (`worker_main_http_uri`). This config option is no longer required and is ignored when running Synapse 1.73 and newer.
* **Synapse 1.83 and older:** The HTTP replication endpoint that the worker should talk to on the main synapse process
([`worker_replication_host`](usage/configuration/config_documentation.md#worker_replication_host) and
[`worker_replication_http_port`](usage/configuration/config_documentation.md#worker_replication_http_port)). If using Synapse 1.84 and newer, these are not needed if `main` is defined on the [shared configuration](#shared-configuration) `instance_map`
For example:
@@ -177,11 +177,11 @@ The following applies to Synapse installations that have been installed from sou
You can start the main Synapse process with Poetry by running the following command:
```console
poetry run synapse_homeserver -c [your homeserver.yaml]
poetry run synapse_homeserver --config-file [your homeserver.yaml]
```
For worker setups, you can run the following command
```console
poetry run synapse_worker -c [your worker.yaml]
poetry run synapse_worker --config-file [your homeserver.yaml] --config-file [your worker.yaml]
```
## Available worker applications
@@ -232,7 +232,6 @@ information.
^/_matrix/client/v1/rooms/.*/hierarchy$
^/_matrix/client/(v1|unstable)/rooms/.*/relations/
^/_matrix/client/v1/rooms/.*/threads$
^/_matrix/client/unstable/org.matrix.msc2716/rooms/.*/batch_send$
^/_matrix/client/unstable/im.nheko.summary/rooms/.*/summary$
^/_matrix/client/(r0|v3|unstable)/account/3pid$
^/_matrix/client/(r0|v3|unstable)/account/whoami$
@@ -532,6 +531,30 @@ the stream writer for the `presence` stream:
^/_matrix/client/(api/v1|r0|v3|unstable)/presence/
#### Restrict outbound federation traffic to a specific set of workers
The
[`outbound_federation_restricted_to`](usage/configuration/config_documentation.md#outbound_federation_restricted_to)
configuration is useful to make sure outbound federation traffic only goes through a
specified subset of workers. This allows you to set more strict access controls (like a
firewall) for all workers and only allow the `federation_sender`s to contact the
outside world.
```yaml
instance_map:
main:
host: localhost
port: 8030
federation_sender1:
host: localhost
port: 8034
outbound_federation_restricted_to:
- federation_sender1
worker_replication_secret: "secret_secret"
```
#### Background tasks
There is also support for moving background tasks to a separate

flake.lock (generated)

@@ -8,41 +8,20 @@
"pre-commit-hooks": "pre-commit-hooks"
},
"locked": {
"lastModified": 1683102061,
"narHash": "sha256-kOphT6V0uQUlFNBP3GBjs7DAU7fyZGGqCs9ue1gNY6E=",
"lastModified": 1688058187,
"narHash": "sha256-ipDcc7qrucpJ0+0eYNlwnE+ISTcq4m03qW+CWUshRXI=",
"owner": "cachix",
"repo": "devenv",
"rev": "ff1f29e41756553174d596cafe3a9fa77595100b",
"rev": "c8778e3dc30eb9043e218aaa3861d42d4992de77",
"type": "github"
},
"original": {
"owner": "cachix",
"ref": "main",
"ref": "v0.6.3",
"repo": "devenv",
"type": "github"
}
},
"fenix": {
"inputs": {
"nixpkgs": [
"nixpkgs"
],
"rust-analyzer-src": "rust-analyzer-src"
},
"locked": {
"lastModified": 1682490133,
"narHash": "sha256-tR2Qx0uuk97WySpSSk4rGS/oH7xb5LykbjATcw1vw1I=",
"owner": "nix-community",
"repo": "fenix",
"rev": "4e9412753ab75ef0e038a5fe54a062fb44c27c6a",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "fenix",
"type": "github"
}
},
"flake-compat": {
"flake": false,
"locked": {
@@ -60,12 +39,33 @@
}
},
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1667395993,
"narHash": "sha256-nuEHfE/LcWyuSWnS8t12N1wc105Qtau+/OdUAjtQ0rA=",
"lastModified": 1685518550,
"narHash": "sha256-o2d0KcvaXzTrPRIo0kOLV0/QXHhDQ5DTi+OxcjO8xqY=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "5aed5285a952e0b949eb3ba02c12fa4fcfef535f",
"rev": "a1720a10a6cfe8234c0e93907ffe81be440f4cef",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"flake-utils_2": {
"inputs": {
"systems": "systems_2"
},
"locked": {
"lastModified": 1681202837,
"narHash": "sha256-H+Rh19JDwRtpVPAWp64F+rlEtxUWBAQW28eAi3SRSzg=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "cfacdce06f30d2b68473a46042957675eebb3401",
"type": "github"
},
"original": {
@@ -170,27 +170,27 @@
},
"nixpkgs-stable": {
"locked": {
"lastModified": 1673800717,
"narHash": "sha256-SFHraUqLSu5cC6IxTprex/nTsI81ZQAtDvlBvGDWfnA=",
"lastModified": 1685801374,
"narHash": "sha256-otaSUoFEMM+LjBI1XL/xGB5ao6IwnZOXc47qhIgJe8U=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "2f9fd351ec37f5d479556cd48be4ca340da59b8f",
"rev": "c37ca420157f4abc31e26f436c1145f8951ff373",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-22.11",
"ref": "nixos-23.05",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_2": {
"locked": {
"lastModified": 1682519441,
"narHash": "sha256-Vsq/8NOtvW1AoC6shCBxRxZyMQ+LhvPuJT6ltbzuv+Y=",
"lastModified": 1690535733,
"narHash": "sha256-WgjUPscQOw3cB8yySDGlyzo6cZNihnRzUwE9kadv/5I=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "7a32a141db568abde9bc389845949dc2a454dfd3",
"rev": "8cacc05fbfffeaab910e8c2c9e2a7c6b32ce881a",
"type": "github"
},
"original": {
@@ -200,6 +200,22 @@
"type": "github"
}
},
"nixpkgs_3": {
"locked": {
"lastModified": 1681358109,
"narHash": "sha256-eKyxW4OohHQx9Urxi7TQlFBTDWII+F+x2hklDOQPB50=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "96ba1c52e54e74c3197f4d43026b3f3d92e83ff9",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"pre-commit-hooks": {
"inputs": {
"flake-compat": [
@@ -215,11 +231,11 @@
"nixpkgs-stable": "nixpkgs-stable"
},
"locked": {
"lastModified": 1678376203,
"narHash": "sha256-3tyYGyC8h7fBwncLZy5nCUjTJPrHbmNwp47LlNLOHSM=",
"lastModified": 1688056373,
"narHash": "sha256-2+SDlNRTKsgo3LBRiMUcoEUb6sDViRNQhzJquZ4koOI=",
"owner": "cachix",
"repo": "pre-commit-hooks.nix",
"rev": "1a20b9708962096ec2481eeb2ddca29ed747770a",
"rev": "5843cf069272d92b60c3ed9e55b7a8989c01d4c7",
"type": "github"
},
"original": {
@@ -231,25 +247,27 @@
"root": {
"inputs": {
"devenv": "devenv",
"fenix": "fenix",
"nixpkgs": "nixpkgs_2",
"systems": "systems"
"rust-overlay": "rust-overlay",
"systems": "systems_3"
}
},
"rust-analyzer-src": {
"flake": false,
"rust-overlay": {
"inputs": {
"flake-utils": "flake-utils_2",
"nixpkgs": "nixpkgs_3"
},
"locked": {
"lastModified": 1682426789,
"narHash": "sha256-UqnLmJESRZE0tTEaGbRAw05Hm19TWIPA+R3meqi5I4w=",
"owner": "rust-lang",
"repo": "rust-analyzer",
"rev": "943d2a8a1ca15e8b28a1f51f5a5c135e3728da04",
"lastModified": 1690510705,
"narHash": "sha256-6mjs3Gl9/xrseFh9iNcNq1u5yJ/MIoAmjoaG7SXZDIE=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "851ae4c128905a62834d53ce7704ebc1ba481bea",
"type": "github"
},
"original": {
"owner": "rust-lang",
"ref": "nightly",
"repo": "rust-analyzer",
"owner": "oxalica",
"repo": "rust-overlay",
"type": "github"
}
},
@@ -267,6 +285,36 @@
"repo": "default",
"type": "github"
}
},
"systems_2": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_3": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",


@@ -39,27 +39,27 @@
{
inputs = {
# Use the master/unstable branch of nixpkgs. The latest stable, 22.11,
# does not contain 'perl536Packages.NetAsyncHTTP', needed by Sytest.
# Use the master/unstable branch of nixpkgs. Used to fetch the latest
# available versions of packages.
nixpkgs.url = "github:NixOS/nixpkgs/master";
# Output a development shell for x86_64/aarch64 Linux/Darwin (MacOS).
systems.url = "github:nix-systems/default";
# A development environment manager built on Nix. See https://devenv.sh.
devenv.url = "github:cachix/devenv/main";
# Rust toolchains and rust-analyzer nightly.
fenix = {
url = "github:nix-community/fenix";
inputs.nixpkgs.follows = "nixpkgs";
};
devenv.url = "github:cachix/devenv/v0.6.3";
# Rust toolchain.
rust-overlay.url = "github:oxalica/rust-overlay";
};
outputs = { self, nixpkgs, devenv, systems, ... } @ inputs:
outputs = { self, nixpkgs, devenv, systems, rust-overlay, ... } @ inputs:
let
forEachSystem = nixpkgs.lib.genAttrs (import systems);
in {
devShells = forEachSystem (system:
let
pkgs = nixpkgs.legacyPackages.${system};
overlays = [ (import rust-overlay) ];
pkgs = import nixpkgs {
inherit system overlays;
};
in {
# Everything is configured via devenv - a Nix module for creating declarative
# developer environments. See https://devenv.sh/reference/options/ for a list
@@ -76,6 +76,23 @@
# Configure packages to install.
# Search for package names at https://search.nixos.org/packages?channel=unstable
packages = with pkgs; [
# The rust toolchain and related tools.
# This will install the "default" profile of rust components.
# https://rust-lang.github.io/rustup/concepts/profiles.html
#
# NOTE: We currently need to set the Rust version unnecessarily high
# in order to work around https://github.com/matrix-org/synapse/issues/15939
(rust-bin.stable."1.70.0".default.override {
# Additionally install the "rust-src" extension to allow diving into the
# Rust source code in an IDE (rust-analyzer will also make use of it).
extensions = [ "rust-src" ];
})
# The rust-analyzer language server implementation.
rust-analyzer
# For building any Python bindings to C or Rust code.
gcc
# Native dependencies for running Synapse.
icu
libffi
@@ -108,7 +125,7 @@
languages.python.poetry.activate.enable = true;
# Install all extra Python dependencies; this is needed to run the unit
# tests and utilitise all Synapse features.
languages.python.poetry.install.arguments = ["--extras all"];
languages.python.poetry.install.arguments = ["-v" "--extras all"];
# Install the 'matrix-synapse' package from the local checkout.
languages.python.poetry.install.installRootPackage = true;
@@ -124,12 +141,11 @@
# Install dependencies for the additional programming languages
# involved with Synapse development.
#
# * Rust is used for developing and running Synapse.
# * Golang is needed to run the Complement test suite.
# * Perl is needed to run the SyTest test suite.
# * Rust is used for developing and running Synapse.
# It is installed manually with `packages` above.
languages.go.enable = true;
languages.rust.enable = true;
languages.rust.version = "stable";
languages.perl.enable = true;
# Postgres is needed to run Synapse with postgres support and
@@ -178,7 +194,7 @@
EOF
'';
# Start synapse when `devenv up` is run.
processes.synapse.exec = "poetry run python -m synapse.app.homeserver -c homeserver.yaml --config-directory homeserver-config-overrides.d";
processes.synapse.exec = "poetry run python -m synapse.app.homeserver -c homeserver.yaml -c homeserver-config-overrides.d";
# Define the perl modules we require to run SyTest.
#

poetry.lock (generated): file diff suppressed because it is too large.


@@ -89,7 +89,7 @@ manifest-path = "rust/Cargo.toml"
[tool.poetry]
name = "matrix-synapse"
version = "1.86.0rc1"
version = "1.89.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"
@@ -147,7 +147,7 @@ synapse_review_recent_signups = "synapse._scripts.review_recent_signups:main"
update_synapse_database = "synapse._scripts.update_synapse_database:main"
[tool.poetry.dependencies]
python = "^3.7.1"
python = "^3.8.0"
# Mandatory Dependencies
# ----------------------
@@ -203,11 +203,9 @@ ijson = ">=3.1.4"
matrix-common = "^1.3.0"
# We need packaging.requirements.Requirement, added in 16.1.
packaging = ">=16.1"
# At the time of writing, we only use functions from the version `importlib.metadata`
# which shipped in Python 3.8. This corresponds to version 1.4 of the backport.
importlib_metadata = { version = ">=1.4", python = "<3.8" }
# This is the most recent version of Pydantic with available on common distros.
pydantic = ">=1.7.4"
# We are currently incompatible with >=2.0.0: (https://github.com/matrix-org/synapse/issues/15858)
pydantic = "^1.7.4"
# This is for building the rust components during "poetry install", which
# currently ignores the `build-system.requires` directive (c.f.
@@ -311,7 +309,7 @@ all = [
# We pin black so that our tests don't start failing on new releases.
isort = ">=5.10.1"
black = ">=22.3.0"
ruff = "0.0.265"
ruff = "0.0.277"
# Typechecking
lxml-stubs = ">=0.4.0"
@@ -375,7 +373,15 @@ build-backend = "poetry.core.masonry.api"
[tool.cibuildwheel]
# Skip unsupported platforms (by us or by Rust).
skip = "cp36* *-musllinux_i686 pp*aarch64 *-musllinux_aarch64"
# See https://cibuildwheel.readthedocs.io/en/stable/options/#build-skip for the list of build targets.
# We skip:
# - CPython 3.6 and 3.7: EOLed
# - PyPy 3.7: we only support Python 3.8+
# - musllinux i686: excluded to reduce number of wheels we build.
# c.f. https://github.com/matrix-org/synapse/pull/12595#discussion_r963107677
# - PyPy on Aarch64 and musllinux on aarch64: too slow to build.
# c.f. https://github.com/matrix-org/synapse/pull/14259
skip = "cp36* cp37* pp37* *-musllinux_i686 pp*aarch64 *-musllinux_aarch64"
# We need a rust compiler
before-all = "curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain stable -y --profile minimal"


@@ -13,6 +13,9 @@
// limitations under the License.
#![feature(test)]
use std::borrow::Cow;
use synapse::push::{
evaluator::PushRuleEvaluator, Condition, EventMatchCondition, FilteredPushRules, JsonValue,
PushRules, SimpleJsonValue,
@@ -26,15 +29,15 @@ fn bench_match_exact(b: &mut Bencher) {
let flattened_keys = [
(
"type".to_string(),
JsonValue::Value(SimpleJsonValue::Str("m.text".to_string())),
JsonValue::Value(SimpleJsonValue::Str(Cow::Borrowed("m.text"))),
),
(
"room_id".to_string(),
JsonValue::Value(SimpleJsonValue::Str("!room:server".to_string())),
JsonValue::Value(SimpleJsonValue::Str(Cow::Borrowed("!room:server"))),
),
(
"content.body".to_string(),
JsonValue::Value(SimpleJsonValue::Str("test message".to_string())),
JsonValue::Value(SimpleJsonValue::Str(Cow::Borrowed("test message"))),
),
]
.into_iter()
@@ -71,15 +74,15 @@ fn bench_match_word(b: &mut Bencher) {
let flattened_keys = [
(
"type".to_string(),
JsonValue::Value(SimpleJsonValue::Str("m.text".to_string())),
JsonValue::Value(SimpleJsonValue::Str(Cow::Borrowed("m.text"))),
),
(
"room_id".to_string(),
JsonValue::Value(SimpleJsonValue::Str("!room:server".to_string())),
JsonValue::Value(SimpleJsonValue::Str(Cow::Borrowed("!room:server"))),
),
(
"content.body".to_string(),
JsonValue::Value(SimpleJsonValue::Str("test message".to_string())),
JsonValue::Value(SimpleJsonValue::Str(Cow::Borrowed("test message"))),
),
]
.into_iter()
@@ -116,15 +119,15 @@ fn bench_match_word_miss(b: &mut Bencher) {
let flattened_keys = [
(
"type".to_string(),
JsonValue::Value(SimpleJsonValue::Str("m.text".to_string())),
JsonValue::Value(SimpleJsonValue::Str(Cow::Borrowed("m.text"))),
),
(
"room_id".to_string(),
JsonValue::Value(SimpleJsonValue::Str("!room:server".to_string())),
JsonValue::Value(SimpleJsonValue::Str(Cow::Borrowed("!room:server"))),
),
(
"content.body".to_string(),
JsonValue::Value(SimpleJsonValue::Str("test message".to_string())),
JsonValue::Value(SimpleJsonValue::Str(Cow::Borrowed("test message"))),
),
]
.into_iter()
@@ -161,15 +164,15 @@ fn bench_eval_message(b: &mut Bencher) {
let flattened_keys = [
(
"type".to_string(),
JsonValue::Value(SimpleJsonValue::Str("m.text".to_string())),
JsonValue::Value(SimpleJsonValue::Str(Cow::Borrowed("m.text"))),
),
(
"room_id".to_string(),
JsonValue::Value(SimpleJsonValue::Str("!room:server".to_string())),
JsonValue::Value(SimpleJsonValue::Str(Cow::Borrowed("!room:server"))),
),
(
"content.body".to_string(),
JsonValue::Value(SimpleJsonValue::Str("test message".to_string())),
JsonValue::Value(SimpleJsonValue::Str(Cow::Borrowed("test message"))),
),
]
.into_iter()


@@ -63,22 +63,6 @@ pub const BASE_PREPEND_OVERRIDE_RULES: &[PushRule] = &[PushRule {
}];
pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
// We don't want to notify on edits. Not only can this be confusing in real
// time (2 notifications, one message) but it's especially confusing
// if a bridge needs to edit a previously backfilled message.
PushRule {
rule_id: Cow::Borrowed("global/override/.com.beeper.suppress_edits"),
priority_class: 5,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::EventMatch(
EventMatchCondition {
key: Cow::Borrowed("content.m\\.relates_to.rel_type"),
pattern: Cow::Borrowed("m.replace"),
},
))]),
actions: Cow::Borrowed(&[]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.suppress_notices"),
priority_class: 5,
@@ -142,11 +126,11 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.m.is_user_mention"),
rule_id: Cow::Borrowed("global/override/.m.rule.is_user_mention"),
priority_class: 5,
conditions: Cow::Borrowed(&[Condition::Known(
KnownCondition::ExactEventPropertyContainsType(EventPropertyIsTypeCondition {
key: Cow::Borrowed("content.m\\.mentions.user_ids"),
key: Cow::Borrowed(r"content.m\.mentions.user_ids"),
value_type: Cow::Borrowed(&EventMatchPatternType::UserId),
}),
)]),
@@ -163,12 +147,12 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.m.is_room_mention"),
rule_id: Cow::Borrowed("global/override/.m.rule.is_room_mention"),
priority_class: 5,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventPropertyIs(EventPropertyIsCondition {
key: Cow::Borrowed("content.m\\.mentions.room"),
value: Cow::Borrowed(&SimpleJsonValue::Bool(true)),
key: Cow::Borrowed(r"content.m\.mentions.room"),
value: Cow::Owned(SimpleJsonValue::Bool(true)),
})),
Condition::Known(KnownCondition::SenderNotificationPermission {
key: Cow::Borrowed("room"),
@@ -241,6 +225,21 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
default: true,
default_enabled: true,
},
// We don't want to notify on edits *unless* the edit directly mentions a
// user, which is handled above.
PushRule {
rule_id: Cow::Borrowed("global/override/.org.matrix.msc3958.suppress_edits"),
priority_class: 5,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::EventPropertyIs(
EventPropertyIsCondition {
key: Cow::Borrowed(r"content.m\.relates_to.rel_type"),
value: Cow::Owned(SimpleJsonValue::Str(Cow::Borrowed("m.replace"))),
},
))]),
actions: Cow::Borrowed(&[]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.org.matrix.msc3930.rule.poll_response"),
priority_class: 5,


@@ -117,7 +117,7 @@ impl PushRuleEvaluator {
msc3931_enabled: bool,
) -> Result<Self, Error> {
let body = match flattened_keys.get("content.body") {
Some(JsonValue::Value(SimpleJsonValue::Str(s))) => s.clone(),
Some(JsonValue::Value(SimpleJsonValue::Str(s))) => s.clone().into_owned(),
_ => String::new(),
};
@@ -313,13 +313,15 @@ impl PushRuleEvaluator {
};
let pattern = match &*exact_event_match.value_type {
EventMatchPatternType::UserId => user_id,
EventMatchPatternType::UserLocalpart => get_localpart_from_id(user_id)?,
EventMatchPatternType::UserId => user_id.to_owned(),
EventMatchPatternType::UserLocalpart => {
get_localpart_from_id(user_id)?.to_owned()
}
};
self.match_event_property_contains(
exact_event_match.key.clone(),
Cow::Borrowed(&SimpleJsonValue::Str(pattern.to_string())),
Cow::Borrowed(&SimpleJsonValue::Str(Cow::Owned(pattern))),
)?
}
KnownCondition::ContainsDisplayName => {
@@ -494,7 +496,7 @@ fn push_rule_evaluator() {
let mut flattened_keys = BTreeMap::new();
flattened_keys.insert(
"content.body".to_string(),
JsonValue::Value(SimpleJsonValue::Str("foo bar bob hello".to_string())),
JsonValue::Value(SimpleJsonValue::Str(Cow::Borrowed("foo bar bob hello"))),
);
let evaluator = PushRuleEvaluator::py_new(
flattened_keys,
@@ -522,7 +524,7 @@ fn test_requires_room_version_supports_condition() {
let mut flattened_keys = BTreeMap::new();
flattened_keys.insert(
"content.body".to_string(),
JsonValue::Value(SimpleJsonValue::Str("foo bar bob hello".to_string())),
JsonValue::Value(SimpleJsonValue::Str(Cow::Borrowed("foo bar bob hello"))),
);
let flags = vec![RoomVersionFeatures::ExtensibleEvents.as_str().to_string()];
let evaluator = PushRuleEvaluator::py_new(


@@ -256,7 +256,7 @@ impl<'de> Deserialize<'de> for Action {
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq)]
#[serde(untagged)]
pub enum SimpleJsonValue {
Str(String),
Str(Cow<'static, str>),
Int(i64),
Bool(bool),
Null,
@@ -265,7 +265,7 @@ pub enum SimpleJsonValue {
impl<'source> FromPyObject<'source> for SimpleJsonValue {
fn extract(ob: &'source PyAny) -> PyResult<Self> {
if let Ok(s) = <PyString as pyo3::PyTryFrom>::try_from(ob) {
Ok(SimpleJsonValue::Str(s.to_string()))
Ok(SimpleJsonValue::Str(Cow::Owned(s.to_string())))
// A bool *is* an int, ensure we try bool first.
} else if let Ok(b) = <PyBool as pyo3::PyTryFrom>::try_from(ob) {
Ok(SimpleJsonValue::Bool(b.extract()?))
@@ -585,7 +585,7 @@ impl FilteredPushRules {
}
if !self.msc3958_suppress_edits_enabled
&& rule.rule_id == "global/override/.com.beeper.suppress_edits"
&& rule.rule_id == "global/override/.org.matrix.msc3958.suppress_edits"
{
return false;
}

View File

@@ -22,15 +22,19 @@ from typing import Collection, Optional, Sequence, Set
# These are expanded inside the dockerfile to be a fully qualified image name.
# e.g. docker.io/library/debian:bullseye
#
# If an EOL is forced by a Python version and we're dropping support for it, make sure
# to remove references to the distribution across Synapse (search for "bullseye" for
# example)
DISTS = (
"debian:buster", # oldstable: EOL 2022-08
"debian:bullseye",
"debian:bookworm",
"debian:sid",
"ubuntu:focal", # 20.04 LTS (our EOL forced by Py38 on 2024-10-14)
"ubuntu:jammy", # 22.04 LTS (EOL 2027-04)
"ubuntu:kinetic", # 22.10 (EOL 2023-07-20)
"ubuntu:lunar", # 23.04 (EOL 2024-01)
"debian:bullseye", # (EOL ~2024-07) (our EOL forced by Python 3.9 is 2025-10-05)
"debian:bookworm", # (EOL not specified yet) (our EOL forced by Python 3.11 is 2027-10-24)
"debian:sid", # (EOL not specified yet) (our EOL forced by Python 3.11 is 2027-10-24)
"ubuntu:focal", # 20.04 LTS (EOL 2025-04) (our EOL forced by Python 3.8 is 2024-10-14)
"ubuntu:jammy", # 22.04 LTS (EOL 2027-04) (our EOL forced by Python 3.10 is 2026-10-04)
"ubuntu:kinetic", # 22.10 (EOL 2023-07-20) (our EOL forced by Python 3.10 is 2026-10-04)
"ubuntu:lunar", # 23.04 (EOL 2024-01) (our EOL forced by Python 3.11 is 2027-10-24)
"debian:trixie", # (EOL not specified yet)
)
DESC = """\

View File

@@ -214,7 +214,7 @@ fi
extra_test_args=()
test_tags="synapse_blacklist,msc3787,msc3874,msc3890,msc3391,msc3930,faster_joins"
test_tags="synapse_blacklist,msc3874,msc3890,msc3391,msc3930,faster_joins"
# All environment variables starting with PASS_ will be shared.
# (The prefix is stripped off before reaching the container.)
@@ -246,10 +246,6 @@ else
else
export PASS_SYNAPSE_COMPLEMENT_DATABASE=sqlite
fi
# The tests for importing historical messages (MSC2716)
# only pass with monoliths, currently.
test_tags="$test_tags,msc2716"
fi
if [[ -n "$ASYNCIO_REACTOR" ]]; then
@@ -257,6 +253,10 @@ if [[ -n "$ASYNCIO_REACTOR" ]]; then
export PASS_SYNAPSE_COMPLEMENT_USE_ASYNCIO_REACTOR=true
fi
if [[ -n "$UNIX_SOCKETS" ]]; then
# Enable full on Unix socket mode for Synapse, Redis and Postgresql
export PASS_SYNAPSE_USE_UNIX_SOCKET=1
fi
if [[ -n "$SYNAPSE_TEST_LOG_LEVEL" ]]; then
# Set the log level to what is desired

View File

@@ -136,11 +136,11 @@ def request(
authorization_headers.append(header)
print("Authorization: %s" % header, file=sys.stderr)
dest = "matrix://%s%s" % (destination, path)
dest = "matrix-federation://%s%s" % (destination, path)
print("Requesting %s" % dest, file=sys.stderr)
s = requests.Session()
s.mount("matrix://", MatrixConnectionAdapter())
s.mount("matrix-federation://", MatrixConnectionAdapter())
headers: Dict[str, str] = {
"Authorization": authorization_headers[0],

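The mount is the load-bearing part of this rename: requests only routes a URL through a transport adapter whose mounted prefix matches it, so the internal scheme and the mount must change together. A minimal sketch, assuming MatrixConnectionAdapter derives from requests' HTTPAdapter:

import requests
from requests.adapters import HTTPAdapter

class MatrixConnectionAdapter(HTTPAdapter):
    # Assumption: the real adapter overrides connection setup to do Matrix
    # server discovery (.well-known / SRV lookups) instead of plain DNS.
    pass

s = requests.Session()
s.mount("matrix-federation://", MatrixConnectionAdapter())
# URLs starting with the mounted prefix are now handled by the adapter.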
View File

@@ -25,8 +25,8 @@ from synapse.util.rust import check_rust_lib_up_to_date
from synapse.util.stringutils import strtobool
# Check that we're not running on an unsupported Python version.
if sys.version_info < (3, 7):
print("Synapse requires Python 3.7 or above.")
if sys.version_info < (3, 8):
print("Synapse requires Python 3.8 or above.")
sys.exit(1)
# Allow using the asyncio reactor via env var.

View File

@@ -61,6 +61,7 @@ from synapse.storage.databases.main.deviceinbox import DeviceInboxBackgroundUpda
from synapse.storage.databases.main.devices import DeviceBackgroundUpdateStore
from synapse.storage.databases.main.e2e_room_keys import EndToEndRoomKeyBackgroundStore
from synapse.storage.databases.main.end_to_end_keys import EndToEndKeyBackgroundStore
from synapse.storage.databases.main.event_federation import EventFederationWorkerStore
from synapse.storage.databases.main.event_push_actions import EventPushActionsStore
from synapse.storage.databases.main.events_bg_updates import (
EventsBackgroundUpdatesStore,
@@ -196,6 +197,11 @@ IGNORED_TABLES = {
"ui_auth_sessions",
"ui_auth_sessions_credentials",
"ui_auth_sessions_ips",
# Ignore the worker locks table, as a) there shouldn't be any acquired locks
# after porting, and b) the circular foreign key constraints make it hard to
# port.
"worker_read_write_locks_mode",
"worker_read_write_locks",
}
@@ -239,6 +245,7 @@ class Store(
PresenceBackgroundUpdateStore,
ReceiptsBackgroundUpdateStore,
RelationsWorkerStore,
EventFederationWorkerStore,
):
def execute(self, f: Callable[..., R], *args: Any, **kwargs: Any) -> Awaitable[R]:
return self.db_pool.runInteraction(f.__name__, f, *args, **kwargs)
@@ -754,7 +761,7 @@ class Porter:
# Step 2. Set up sequences
#
# We do this before porting the tables so that event if we fail half
# We do this before porting the tables so that even if we fail half
# way through the postgres DB always have sequences that are greater
# than their respective tables. If we don't then creating the
# `DataStore` object will fail due to the inconsistency.
@@ -762,6 +769,10 @@ class Porter:
await self._setup_state_group_id_seq()
await self._setup_user_id_seq()
await self._setup_events_stream_seqs()
await self._setup_sequence(
"un_partial_stated_event_stream_sequence",
("un_partial_stated_event_stream",),
)
await self._setup_sequence(
"device_inbox_sequence", ("device_inbox", "device_federation_outbox")
)
@@ -772,6 +783,11 @@ class Porter:
await self._setup_sequence("receipts_sequence", ("receipts_linearized",))
await self._setup_sequence("presence_stream_sequence", ("presence_stream",))
await self._setup_auth_chain_sequence()
await self._setup_sequence(
"application_services_txn_id_seq",
("application_services_txns",),
"txn_id",
)
# Step 3. Get tables.
self.progress.set_state("Fetching tables")
@@ -803,7 +819,9 @@ class Porter:
)
# Map from table name to args passed to `handle_table`, i.e. a tuple
# of: `postgres_size`, `table_size`, `forward_chunk`, `backward_chunk`.
tables_to_port_info_map = {r[0]: r[1:] for r in setup_res}
tables_to_port_info_map = {
r[0]: r[1:] for r in setup_res if r[0] not in IGNORED_TABLES
}
# Step 5. Do the copying.
#
@@ -1074,7 +1092,10 @@ class Porter:
)
async def _setup_sequence(
self, sequence_name: str, stream_id_tables: Iterable[str]
self,
sequence_name: str,
stream_id_tables: Iterable[str],
column_name: str = "stream_id",
) -> None:
"""Set a sequence to the correct value."""
current_stream_ids = []
@@ -1084,7 +1105,7 @@ class Porter:
await self.sqlite_store.db_pool.simple_select_one_onecol(
table=stream_id_table,
keyvalues={},
retcol="COALESCE(MAX(stream_id), 1)",
retcol=f"COALESCE(MAX({column_name}), 1)",
allow_none=True,
),
)
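A hedged sketch of what the generalized helper amounts to: take the highest value of the named column across every table fed by the sequence (gathered into current_stream_ids above), then advance the Postgres sequence past it. The setval call is an assumption about the implementation, not copied from it:

async def _set_sequence_value(postgres_store, sequence_name, current_stream_ids):
    # Advance the sequence past every ported ID so rows allocated after the
    # port cannot collide with rows copied from SQLite.
    next_id = max(current_stream_ids) + 1

    def r(txn):
        # setval() makes subsequent nextval() calls return values above next_id.
        txn.execute("SELECT setval(%s, %s)", (sequence_name, next_id))

    await postgres_store.db_pool.runInteraction("setup_sequence", r)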
@@ -1369,6 +1390,9 @@ def main() -> None:
sys.stderr.write("Database must use the 'psycopg2' connector.\n")
sys.exit(3)
# Don't run the background tasks that get started by the data stores.
hs_config["run_background_tasks_on"] = "some_other_process"
config = HomeServerConfig()
config.parse_config_dict(hs_config, "", "")

View File

@@ -123,10 +123,6 @@ class EventTypes:
SpaceChild: Final = "m.space.child"
SpaceParent: Final = "m.space.parent"
MSC2716_INSERTION: Final = "org.matrix.msc2716.insertion"
MSC2716_BATCH: Final = "org.matrix.msc2716.batch"
MSC2716_MARKER: Final = "org.matrix.msc2716.marker"
Reaction: Final = "m.reaction"
@@ -222,16 +218,6 @@ class EventContentFields:
# Used in m.room.guest_access events.
GUEST_ACCESS: Final = "guest_access"
# Used on normal messages to indicate they were historically imported after the fact
MSC2716_HISTORICAL: Final = "org.matrix.msc2716.historical"
# For "insertion" events to indicate what the next batch ID should be in
# order to connect to it
MSC2716_NEXT_BATCH_ID: Final = "next_batch_id"
# Used on "batch" events to indicate which insertion event it connects to
MSC2716_BATCH_ID: Final = "batch_id"
# For "marker" events
MSC2716_INSERTION_EVENT_REFERENCE: Final = "insertion_event_reference"
# The authorising user for joining a restricted room.
AUTHORISING_USER: Final = "join_authorised_via_users_server"

View File

@@ -217,6 +217,13 @@ class InvalidAPICallError(SynapseError):
super().__init__(HTTPStatus.BAD_REQUEST, msg, Codes.BAD_JSON)
class InvalidProxyCredentialsError(SynapseError):
"""Error raised when the proxy credentials are invalid."""
def __init__(self, msg: str, errcode: str = Codes.UNKNOWN):
super().__init__(401, msg, errcode)
class ProxiedRequestError(SynapseError):
"""An error from a general matrix endpoint, eg. from a proxied Matrix API call.

View File

@@ -78,41 +78,29 @@ class RoomVersion:
# MSC2209: Check 'notifications' key while verifying
# m.room.power_levels auth rules.
limit_notifications_power_levels: bool
# MSC2175: No longer include the creator in m.room.create events.
msc2175_implicit_room_creator: bool
# MSC2174/MSC2176: Apply updated redaction rules algorithm, move redacts to
# content property.
msc2176_redaction_rules: bool
# MSC3083: Support the 'restricted' join_rule.
msc3083_join_rules: bool
# MSC3375: Support for the proper redaction rules for MSC3083. This mustn't
# be enabled if MSC3083 is not.
msc3375_redaction_rules: bool
# MSC2403: Allows join_rules to be set to 'knock', changes auth rules to allow sending
# m.room.membership event with membership 'knock'.
msc2403_knocking: bool
# MSC2716: Adds m.room.power_levels -> content.historical field to control
# whether "insertion", "chunk", "marker" events can be sent
msc2716_historical: bool
# MSC2716: Adds support for redacting "insertion", "chunk", and "marker" events
msc2716_redactions: bool
# No longer include the creator in m.room.create events.
implicit_room_creator: bool
# Apply updated redaction rules algorithm from room version 11.
updated_redaction_rules: bool
# Support the 'restricted' join rule.
restricted_join_rule: bool
# Support for the proper redaction rules for the restricted join rule. This requires
# restricted_join_rule to be enabled.
restricted_join_rule_fix: bool
# Support the 'knock' join rule.
knock_join_rule: bool
# MSC3389: Protect relation information from redaction.
msc3389_relation_redactions: bool
# MSC3787: Adds support for a `knock_restricted` join rule, mixing concepts of
# knocks and restricted join rules into the same join condition.
msc3787_knock_restricted_join_rule: bool
# MSC3667: Enforce integer power levels
msc3667_int_only_power_levels: bool
# MSC3821: Do not redact the third_party_invite content field for membership events.
msc3821_redaction_rules: bool
# Support the 'knock_restricted' join rule.
knock_restricted_join_rule: bool
# Enforce integer power levels
enforce_int_power_levels: bool
# MSC3931: Adds a push rule condition for "room version feature flags", making
# some push rules room version dependent. Note that adding a flag to this list
# is not enough to mark it "supported": the push rule evaluator also needs to
# support the flag. Unknown flags are ignored by the evaluator, making conditions
# fail if used.
msc3931_push_features: Tuple[str, ...] # values from PushRuleRoomFlag
# MSC3989: Redact the origin field.
msc3989_redaction_rules: bool
class RoomVersions:
@@ -125,19 +113,15 @@ class RoomVersions:
special_case_aliases_auth=True,
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
implicit_room_creator=False,
updated_redaction_rules=False,
restricted_join_rule=False,
restricted_join_rule_fix=False,
knock_join_rule=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3821_redaction_rules=False,
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3989_redaction_rules=False,
)
V2 = RoomVersion(
"2",
@@ -148,19 +132,15 @@ class RoomVersions:
special_case_aliases_auth=True,
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
implicit_room_creator=False,
updated_redaction_rules=False,
restricted_join_rule=False,
restricted_join_rule_fix=False,
knock_join_rule=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3821_redaction_rules=False,
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3989_redaction_rules=False,
)
V3 = RoomVersion(
"3",
@@ -171,19 +151,15 @@ class RoomVersions:
special_case_aliases_auth=True,
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
implicit_room_creator=False,
updated_redaction_rules=False,
restricted_join_rule=False,
restricted_join_rule_fix=False,
knock_join_rule=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3821_redaction_rules=False,
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3989_redaction_rules=False,
)
V4 = RoomVersion(
"4",
@@ -194,19 +170,15 @@ class RoomVersions:
special_case_aliases_auth=True,
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
implicit_room_creator=False,
updated_redaction_rules=False,
restricted_join_rule=False,
restricted_join_rule_fix=False,
knock_join_rule=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3821_redaction_rules=False,
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3989_redaction_rules=False,
)
V5 = RoomVersion(
"5",
@@ -217,19 +189,15 @@ class RoomVersions:
special_case_aliases_auth=True,
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
implicit_room_creator=False,
updated_redaction_rules=False,
restricted_join_rule=False,
restricted_join_rule_fix=False,
knock_join_rule=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3821_redaction_rules=False,
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3989_redaction_rules=False,
)
V6 = RoomVersion(
"6",
@@ -240,42 +208,15 @@ class RoomVersions:
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
implicit_room_creator=False,
updated_redaction_rules=False,
restricted_join_rule=False,
restricted_join_rule_fix=False,
knock_join_rule=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3821_redaction_rules=False,
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3989_redaction_rules=False,
)
MSC2176 = RoomVersion(
"org.matrix.msc2176",
RoomDisposition.UNSTABLE,
EventFormatVersions.ROOM_V4_PLUS,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=True,
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3821_redaction_rules=False,
msc3931_push_features=(),
msc3989_redaction_rules=False,
)
V7 = RoomVersion(
"7",
@@ -286,19 +227,15 @@ class RoomVersions:
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
implicit_room_creator=False,
updated_redaction_rules=False,
restricted_join_rule=False,
restricted_join_rule_fix=False,
knock_join_rule=True,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3821_redaction_rules=False,
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3989_redaction_rules=False,
)
V8 = RoomVersion(
"8",
@@ -309,19 +246,15 @@ class RoomVersions:
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=False,
msc3083_join_rules=True,
msc3375_redaction_rules=False,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
implicit_room_creator=False,
updated_redaction_rules=False,
restricted_join_rule=True,
restricted_join_rule_fix=False,
knock_join_rule=True,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3821_redaction_rules=False,
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3989_redaction_rules=False,
)
V9 = RoomVersion(
"9",
@@ -332,65 +265,15 @@ class RoomVersions:
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=False,
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
implicit_room_creator=False,
updated_redaction_rules=False,
restricted_join_rule=True,
restricted_join_rule_fix=True,
knock_join_rule=True,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3821_redaction_rules=False,
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3989_redaction_rules=False,
)
MSC3787 = RoomVersion(
"org.matrix.msc3787",
RoomDisposition.UNSTABLE,
EventFormatVersions.ROOM_V4_PLUS,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=False,
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=False,
msc3821_redaction_rules=False,
msc3931_push_features=(),
msc3989_redaction_rules=False,
)
MSC3821 = RoomVersion(
"org.matrix.msc3821.opt1",
RoomDisposition.UNSTABLE,
EventFormatVersions.ROOM_V4_PLUS,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=False,
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3821_redaction_rules=True,
msc3931_push_features=(),
msc3989_redaction_rules=False,
)
V10 = RoomVersion(
"10",
@@ -401,42 +284,15 @@ class RoomVersions:
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=False,
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
implicit_room_creator=False,
updated_redaction_rules=False,
restricted_join_rule=True,
restricted_join_rule_fix=True,
knock_join_rule=True,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=True,
msc3821_redaction_rules=False,
knock_restricted_join_rule=True,
enforce_int_power_levels=True,
msc3931_push_features=(),
msc3989_redaction_rules=False,
)
MSC2716v4 = RoomVersion(
"org.matrix.msc2716v4",
RoomDisposition.UNSTABLE,
EventFormatVersions.ROOM_V4_PLUS,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=True,
msc2716_historical=True,
msc2716_redactions=True,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3821_redaction_rules=False,
msc3931_push_features=(),
msc3989_redaction_rules=False,
)
MSC1767v10 = RoomVersion(
# MSC1767 (Extensible Events) based on room version "10"
@@ -448,66 +304,34 @@ class RoomVersions:
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=False,
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
implicit_room_creator=False,
updated_redaction_rules=False,
restricted_join_rule=True,
restricted_join_rule_fix=True,
knock_join_rule=True,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=True,
msc3821_redaction_rules=False,
knock_restricted_join_rule=True,
enforce_int_power_levels=True,
msc3931_push_features=(PushRuleRoomFlag.EXTENSIBLE_EVENTS,),
msc3989_redaction_rules=False,
)
MSC3989 = RoomVersion(
"org.matrix.msc3989",
RoomDisposition.UNSTABLE,
V11 = RoomVersion(
"11",
RoomDisposition.STABLE,
EventFormatVersions.ROOM_V4_PLUS,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=False,
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
implicit_room_creator=True, # Used by MSC3820
updated_redaction_rules=True, # Used by MSC3820
restricted_join_rule=True,
restricted_join_rule_fix=True,
knock_join_rule=True,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=True,
msc3821_redaction_rules=False,
knock_restricted_join_rule=True,
enforce_int_power_levels=True,
msc3931_push_features=(),
msc3989_redaction_rules=True,
)
MSC3820opt2 = RoomVersion(
# Based upon v10
"org.matrix.msc3820.opt2",
RoomDisposition.UNSTABLE,
EventFormatVersions.ROOM_V4_PLUS,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2175_implicit_room_creator=True, # Used by MSC3820
msc2176_redaction_rules=True, # Used by MSC3820
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=True,
msc3821_redaction_rules=True, # Used by MSC3820
msc3931_push_features=(),
msc3989_redaction_rules=True, # Used by MSC3820
)
@@ -520,15 +344,11 @@ KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
RoomVersions.V4,
RoomVersions.V5,
RoomVersions.V6,
RoomVersions.MSC2176,
RoomVersions.V7,
RoomVersions.V8,
RoomVersions.V9,
RoomVersions.MSC3787,
RoomVersions.V10,
RoomVersions.MSC2716v4,
RoomVersions.MSC3989,
RoomVersions.MSC3820opt2,
RoomVersions.V11,
)
}
@@ -557,12 +377,12 @@ MSC3244_CAPABILITIES = {
RoomVersionCapability(
"knock",
RoomVersions.V7,
lambda room_version: room_version.msc2403_knocking,
lambda room_version: room_version.knock_join_rule,
),
RoomVersionCapability(
"restricted",
RoomVersions.V9,
lambda room_version: room_version.msc3083_join_rules,
lambda room_version: room_version.restricted_join_rule,
),
)
}

View File

@@ -386,6 +386,7 @@ def listen_unix(
def listen_http(
hs: "HomeServer",
listener_config: ListenerConfig,
root_resource: Resource,
version_string: str,
@@ -406,6 +407,7 @@ def listen_http(
version_string,
max_request_body_size=max_request_body_size,
reactor=reactor,
hs=hs,
)
if isinstance(listener_config, TCPListenerConfig):

View File

@@ -83,7 +83,6 @@ from synapse.storage.databases.main.receipts import ReceiptsWorkerStore
from synapse.storage.databases.main.registration import RegistrationWorkerStore
from synapse.storage.databases.main.relations import RelationsWorkerStore
from synapse.storage.databases.main.room import RoomWorkerStore
from synapse.storage.databases.main.room_batch import RoomBatchStore
from synapse.storage.databases.main.roommember import RoomMemberWorkerStore
from synapse.storage.databases.main.search import SearchStore
from synapse.storage.databases.main.session import SessionStore
@@ -120,7 +119,6 @@ class GenericWorkerStore(
# the races it creates aren't too bad.
KeyStore,
RoomWorkerStore,
RoomBatchStore,
DirectoryWorkerStore,
PushRulesWorkerStore,
ApplicationServiceTransactionWorkerStore,
@@ -223,6 +221,7 @@ class GenericWorkerServer(HomeServer):
root_resource = create_resource_tree(resources, OptionsResource())
_base.listen_http(
self,
listener_config,
root_resource,
self.version_string,

View File

@@ -139,6 +139,7 @@ class SynapseHomeServer(HomeServer):
root_resource = OptionsResource()
ports = listen_http(
self,
listener_config,
create_resource_tree(resources, root_resource),
self.version_string,

View File

@@ -16,9 +16,6 @@ import logging
import urllib.parse
from typing import (
TYPE_CHECKING,
Any,
Awaitable,
Callable,
Dict,
Iterable,
List,
@@ -27,10 +24,11 @@ from typing import (
Sequence,
Tuple,
TypeVar,
Union,
)
from prometheus_client import Counter
from typing_extensions import Concatenate, ParamSpec, TypeGuard
from typing_extensions import ParamSpec, TypeGuard
from synapse.api.constants import EventTypes, Membership, ThirdPartyEntityKind
from synapse.api.errors import CodeMessageException, HttpResponseException
@@ -80,9 +78,7 @@ sent_todevice_counter = Counter(
HOUR_IN_MS = 60 * 60 * 1000
APP_SERVICE_PREFIX = "/_matrix/app/v1"
APP_SERVICE_UNSTABLE_PREFIX = "/_matrix/app/unstable"
P = ParamSpec("P")
R = TypeVar("R")
@@ -123,52 +119,12 @@ class ApplicationServiceApi(SimpleHttpClient):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.clock = hs.get_clock()
self.config = hs.config.appservice
self.protocol_meta_cache: ResponseCache[Tuple[str, str]] = ResponseCache(
hs.get_clock(), "as_protocol_meta", timeout_ms=HOUR_IN_MS
)
async def _send_with_fallbacks(
self,
service: "ApplicationService",
prefixes: List[str],
path: str,
func: Callable[Concatenate[str, P], Awaitable[R]],
*args: P.args,
**kwargs: P.kwargs,
) -> R:
"""
Attempt to call an application service with multiple paths, falling back
until one succeeds.
Args:
service: The application service; this provides the base URL.
prefixes: A list of paths to try in order for the requests.
path: A suffix to append to each prefix.
func: The function to call; the first argument will be the full
endpoint to fetch. Other arguments are provided by args/kwargs.
Returns:
The return value of func.
"""
for i, prefix in enumerate(prefixes, start=1):
uri = f"{service.url}{prefix}{path}"
try:
return await func(uri, *args, **kwargs)
except HttpResponseException as e:
# If an error is received that is due to an unrecognised path,
# fallback to next path (if one exists). Otherwise, consider it
# a legitimate error and raise.
if i < len(prefixes) and is_unknown_endpoint(e):
continue
raise
except Exception:
# Unexpected exceptions get sent to the caller.
raise
# The function should always exit via the return or raise above this.
raise RuntimeError("Unexpected fallback behaviour. This should never be seen.")
async def query_user(self, service: "ApplicationService", user_id: str) -> bool:
if service.url is None:
return False
@@ -177,12 +133,12 @@ class ApplicationServiceApi(SimpleHttpClient):
assert service.hs_token is not None
try:
response = await self._send_with_fallbacks(
service,
[APP_SERVICE_PREFIX, ""],
f"/users/{urllib.parse.quote(user_id)}",
self.get_json,
{"access_token": service.hs_token},
args = None
if self.config.use_appservice_legacy_authorization:
args = {"access_token": service.hs_token}
response = await self.get_json(
f"{service.url}{APP_SERVICE_PREFIX}/users/{urllib.parse.quote(user_id)}",
args,
headers={"Authorization": [f"Bearer {service.hs_token}"]},
)
if response is not None: # just an empty json object
@@ -203,12 +159,12 @@ class ApplicationServiceApi(SimpleHttpClient):
assert service.hs_token is not None
try:
response = await self._send_with_fallbacks(
service,
[APP_SERVICE_PREFIX, ""],
f"/rooms/{urllib.parse.quote(alias)}",
self.get_json,
{"access_token": service.hs_token},
args = None
if self.config.use_appservice_legacy_authorization:
args = {"access_token": service.hs_token}
response = await self.get_json(
f"{service.url}{APP_SERVICE_PREFIX}/rooms/{urllib.parse.quote(alias)}",
args,
headers={"Authorization": [f"Bearer {service.hs_token}"]},
)
if response is not None: # just an empty json object
@@ -241,15 +197,14 @@ class ApplicationServiceApi(SimpleHttpClient):
assert service.hs_token is not None
try:
args: Mapping[Any, Any] = {
**fields,
b"access_token": service.hs_token,
}
response = await self._send_with_fallbacks(
service,
[APP_SERVICE_PREFIX, APP_SERVICE_UNSTABLE_PREFIX],
f"/thirdparty/{kind}/{urllib.parse.quote(protocol)}",
self.get_json,
args: Mapping[bytes, Union[List[bytes], str]] = fields
if self.config.use_appservice_legacy_authorization:
args = {
**fields,
b"access_token": service.hs_token,
}
response = await self.get_json(
f"{service.url}{APP_SERVICE_PREFIX}/thirdparty/{kind}/{urllib.parse.quote(protocol)}",
args=args,
headers={"Authorization": [f"Bearer {service.hs_token}"]},
)
@@ -285,12 +240,12 @@ class ApplicationServiceApi(SimpleHttpClient):
# This is required by the configuration.
assert service.hs_token is not None
try:
info = await self._send_with_fallbacks(
service,
[APP_SERVICE_PREFIX, APP_SERVICE_UNSTABLE_PREFIX],
f"/thirdparty/protocol/{urllib.parse.quote(protocol)}",
self.get_json,
{"access_token": service.hs_token},
args = None
if self.config.use_appservice_legacy_authorization:
args = {"access_token": service.hs_token}
info = await self.get_json(
f"{service.url}{APP_SERVICE_PREFIX}/thirdparty/protocol/{urllib.parse.quote(protocol)}",
args,
headers={"Authorization": [f"Bearer {service.hs_token}"]},
)
@@ -401,13 +356,14 @@ class ApplicationServiceApi(SimpleHttpClient):
}
try:
await self._send_with_fallbacks(
service,
[APP_SERVICE_PREFIX, ""],
f"/transactions/{urllib.parse.quote(str(txn_id))}",
self.put_json,
args = None
if self.config.use_appservice_legacy_authorization:
args = {"access_token": service.hs_token}
await self.put_json(
f"{service.url}{APP_SERVICE_PREFIX}/transactions/{urllib.parse.quote(str(txn_id))}",
json_body=body,
args={"access_token": service.hs_token},
args=args,
headers={"Authorization": [f"Bearer {service.hs_token}"]},
)
if logger.isEnabledFor(logging.DEBUG):

View File
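The same three lines now recur at every call site above; a hedged sketch of a helper capturing the pattern (the helper's name and shape are illustrative, not part of this change):

from typing import Any, Dict, List, Optional, Tuple

def auth_args_and_headers(
    config: Any, hs_token: str, extra_args: Optional[Dict[Any, Any]] = None
) -> Tuple[Optional[Dict[Any, Any]], Dict[str, List[str]]]:
    """Always authenticate with a Bearer header; additionally send the legacy
    access_token query parameter only if the deprecated option is enabled."""
    args: Optional[Dict[Any, Any]] = dict(extra_args) if extra_args else None
    if config.use_appservice_legacy_authorization:
        args = {**(args or {}), "access_token": hs_token}
    headers = {"Authorization": [f"Bearer {hs_token}"]}
    return args, headers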

@@ -43,6 +43,14 @@ class AppServiceConfig(Config):
)
self.track_appservice_user_ips = config.get("track_appservice_user_ips", False)
self.use_appservice_legacy_authorization = config.get(
"use_appservice_legacy_authorization", False
)
if self.use_appservice_legacy_authorization:
logger.warning(
"The use of appservice legacy authorization via query params is deprecated"
" and should be considered insecure."
)
def load_appservices(

View File

@@ -31,7 +31,7 @@ class AuthConfig(Config):
# The default value of password_config.enabled is True, unless msc3861 is enabled.
msc3861_enabled = (
config.get("experimental_features", {})
(config.get("experimental_features") or {})
.get("msc3861", {})
.get("enabled", False)
)
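The `or {}` matters because YAML allows the key to be present with an explicit null value, in which case `dict.get` returns None rather than the supplied default:

config = {"experimental_features": None}  # e.g. a bare "experimental_features:" line in YAML
config.get("experimental_features", {})      # -> None; chaining .get("msc3861", {}) would raise
(config.get("experimental_features") or {})  # -> {}; the chained .get calls are safe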

View File

@@ -216,12 +216,6 @@ class MSC3861:
("session_lifetime",),
)
if not root.experimental.msc3970_enabled:
raise ConfigError(
"experimental_features.msc3970_enabled must be 'true' when OAuth delegation is enabled",
("experimental_features", "msc3970_enabled"),
)
@attr.s(auto_attribs=True, frozen=True, slots=True)
class MSC3866Config:
@@ -247,8 +241,26 @@ class ExperimentalConfig(Config):
# MSC3026 (busy presence state)
self.msc3026_enabled: bool = experimental.get("msc3026_enabled", False)
# MSC2716 (importing historical messages)
self.msc2716_enabled: bool = experimental.get("msc2716_enabled", False)
# MSC2697 (device dehydration)
# Enabled by default since this option was added after adding the feature.
# It is not recommended that MSC2697 and MSC3814 both be enabled at
# once.
self.msc2697_enabled: bool = experimental.get("msc2697_enabled", True)
# MSC3814 (dehydrated devices with SSSS)
# This is an alternative method to achieve the same goals as MSC2697.
# It is not recommended that MSC2697 and MSC3814 both be enabled at
# once.
self.msc3814_enabled: bool = experimental.get("msc3814_enabled", False)
if self.msc2697_enabled and self.msc3814_enabled:
raise ConfigError(
"MSC2697 and MSC3814 should not both be enabled.",
(
"experimental_features",
"msc3814_enabled",
),
)
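Spelled out, the combinations the check above accepts and rejects (a sketch; the defaults are those given in the code):

accepted = [
    {},                                                   # defaults: msc2697 on, msc3814 off
    {"msc2697_enabled": False, "msc3814_enabled": True},  # SSSS-based dehydration only
]
rejected = [
    {"msc3814_enabled": True},  # msc2697 still defaults to True -> ConfigError
]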
# MSC3244 (room version capabilities)
self.msc3244_enabled: bool = experimental.get("msc3244_enabled", True)
@@ -379,15 +391,9 @@ class ExperimentalConfig(Config):
"Invalid MSC3861 configuration", ("experimental", "msc3861")
) from exc
# MSC3970: Scope transaction IDs to devices
self.msc3970_enabled = experimental.get("msc3970_enabled", self.msc3861.enabled)
# Check that none of the other config options conflict with MSC3861 when enabled
self.msc3861.check_config_conflicts(self.root)
# MSC4009: E.164 Matrix IDs
self.msc4009_e164_mxids = experimental.get("msc4009_e164_mxids", False)
# MSC4010: Do not allow setting m.push_rules account data.
self.msc4010_push_rules_account_data = experimental.get(
"msc4010_push_rules_account_data", False

View File

@@ -53,11 +53,35 @@ class FederationConfig(Config):
# Allow for the configuration of timeout, max request retries
# and min/max retry delays in the matrix federation client.
self.client_timeout = federation_config.get("client_timeout", 60)
self.max_long_retry_delay = federation_config.get("max_long_retry_delay", 60)
self.max_short_retry_delay = federation_config.get("max_short_retry_delay", 2)
self.client_timeout_ms = Config.parse_duration(
federation_config.get("client_timeout", "60s")
)
self.max_long_retry_delay_ms = Config.parse_duration(
federation_config.get("max_long_retry_delay", "60s")
)
self.max_short_retry_delay_ms = Config.parse_duration(
federation_config.get("max_short_retry_delay", "2s")
)
self.max_long_retries = federation_config.get("max_long_retries", 10)
self.max_short_retries = federation_config.get("max_short_retries", 3)
# Allow for the configuration of the backoff algorithm used
# when trying to reach an unavailable destination.
# Unlike the previous configuration, these values apply across
# multiple requests and the state of the backoff is stored in the database.
self.destination_min_retry_interval_ms = Config.parse_duration(
federation_config.get("destination_min_retry_interval", "10m")
)
self.destination_retry_multiplier = federation_config.get(
"destination_retry_multiplier", 2
)
self.destination_max_retry_interval_ms = min(
Config.parse_duration(
federation_config.get("destination_max_retry_interval", "7d")
),
# Set a hard-limit to not overflow the database column.
2**62,
)
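These options now take Synapse duration strings instead of bare integers of seconds. A minimal sketch of such a parser, assuming the s/m/h/d/w/y suffix set and that bare integers are already milliseconds (the real Config.parse_duration may differ in detail):

DURATION_UNITS_MS = {
    "s": 1_000, "m": 60_000, "h": 3_600_000,
    "d": 86_400_000, "w": 604_800_000, "y": 31_536_000_000,
}

def parse_duration(value) -> int:
    """Return a duration in milliseconds, e.g. "10m" -> 600000."""
    if isinstance(value, int):
        return value
    value = str(value)
    if value[-1] in DURATION_UNITS_MS:
        return int(value[:-1]) * DURATION_UNITS_MS[value[-1]]
    return int(value)

assert parse_duration("10m") == 600_000      # destination_min_retry_interval default
assert parse_duration("7d") == 604_800_000   # comfortably below the 2**62 ms hard cap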
_METRICS_FOR_DOMAINS_SCHEMA = {"type": "array", "items": {"type": "string"}}

View File

@@ -15,7 +15,7 @@
import argparse
import logging
from typing import Any, Dict, List, Union
from typing import Any, Dict, List, Optional, Union
import attr
from pydantic import BaseModel, Extra, StrictBool, StrictInt, StrictStr
@@ -41,11 +41,17 @@ Synapse version. Please use ``%s: name_of_worker`` instead.
_MISSING_MAIN_PROCESS_INSTANCE_MAP_DATA = """
Missing data for a worker to connect to main process. Please include '%s' in the
`instance_map` declared in your shared yaml configuration, or optionally (as a deprecated
solution) in every worker's yaml as various `worker_replication_*` settings as defined
in workers documentation here:
`instance_map` declared in your shared yaml configuration as defined in configuration
documentation here:
`https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#instance_map`
"""
WORKER_REPLICATION_SETTING_DEPRECATED_MESSAGE = """
'%s' is no longer a supported worker setting, please place '%s' onto your shared
configuration under `main` inside the `instance_map`. See workers documentation here:
`https://matrix-org.github.io/synapse/latest/workers.html#worker-configuration`
"""
# This allows for a handy knob when it's time to change from 'master' to
# something with less 'history'
MAIN_PROCESS_INSTANCE_NAME = "master"
@@ -88,7 +94,7 @@ class ConfigModel(BaseModel):
allow_mutation = False
class InstanceLocationConfig(ConfigModel):
class InstanceTcpLocationConfig(ConfigModel):
"""The host and port to talk to an instance via HTTP replication."""
host: StrictStr
@@ -104,6 +110,23 @@ class InstanceLocationConfig(ConfigModel):
return f"{self.host}:{self.port}"
class InstanceUnixLocationConfig(ConfigModel):
"""The socket file to talk to an instance via HTTP replication."""
path: StrictStr
def scheme(self) -> str:
"""Hardcode a retrievable scheme"""
return "unix"
def netloc(self) -> str:
"""Nicely format the address location data"""
return f"{self.path}"
InstanceLocationConfig = Union[InstanceTcpLocationConfig, InstanceUnixLocationConfig]
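A hedged illustration of the two shapes this union admits (the TCP model's port/tls fields are partly elided from this hunk, so the constructor arguments are assumptions):

tcp = InstanceTcpLocationConfig(host="worker1", port=9034)
unix = InstanceUnixLocationConfig(path="/run/synapse/worker1.sock")
tcp.netloc()    # -> "worker1:9034"
unix.netloc()   # -> "/run/synapse/worker1.sock"
unix.scheme()   # -> "unix"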
@attr.s
class WriterLocations:
"""Specifies the instances that write various streams.
@@ -148,6 +171,27 @@ class WriterLocations:
)
@attr.s(auto_attribs=True)
class OutboundFederationRestrictedTo:
"""Whether we limit outbound federation to a certain set of instances.
Attributes:
instances: optional list of instances that can make outbound federation
requests. If None then all instances can make federation requests.
locations: list of instance locations to connect to proxy via.
"""
instances: Optional[List[str]]
locations: List[InstanceLocationConfig] = attr.Factory(list)
def __contains__(self, instance: str) -> bool:
# It feels a bit dirty to return `True` if `instances` is `None`, but it
# matches downstream usage: if `outbound_federation_restricted_to` is not
# configured, then any instance can talk to federation (no restrictions,
# so always return `True`).
return self.instances is None or instance in self.instances
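That override keeps call sites readable; the semantics, sketched:

unrestricted = OutboundFederationRestrictedTo(instances=None)
"any_worker" in unrestricted        # True: nothing configured, no restriction
restricted = OutboundFederationRestrictedTo(instances=["federation_sender1"])
"federation_sender1" in restricted  # True
"main" in restricted                # False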
class WorkerConfig(Config):
"""The workers are processes run separately to the main synapse process.
They have their own pid_file and listener configuration. They use the
@@ -216,22 +260,37 @@ class WorkerConfig(Config):
)
# A map from instance name to host/port of their HTTP replication endpoint.
# Check if the main process is declared. Inject it into the map if it's not,
# based first on if a 'main' block is declared then on 'worker_replication_*'
# data. If both are available, default to instance_map. The main process
# itself doesn't need this data as it would never have to talk to itself.
# Check if the main process is declared. The main process itself doesn't need
# this data as it would never have to talk to itself.
instance_map: Dict[str, Any] = config.get("instance_map", {})
if self.instance_name is not MAIN_PROCESS_INSTANCE_NAME:
# TODO: The next 3 condition blocks can be deleted after some time has
# passed and we're ready to stop checking for these settings.
# The host used to connect to the main synapse
main_host = config.get("worker_replication_host", None)
if main_host:
raise ConfigError(
WORKER_REPLICATION_SETTING_DEPRECATED_MESSAGE
% ("worker_replication_host", main_host)
)
# The port on the main synapse for HTTP replication endpoint
main_port = config.get("worker_replication_http_port")
if main_port:
raise ConfigError(
WORKER_REPLICATION_SETTING_DEPRECATED_MESSAGE
% ("worker_replication_http_port", main_port)
)
# The tls mode on the main synapse for HTTP replication endpoint.
# For backward compatibility this defaults to False.
main_tls = config.get("worker_replication_http_tls", False)
if main_tls:
raise ConfigError(
WORKER_REPLICATION_SETTING_DEPRECATED_MESSAGE
% ("worker_replication_http_tls", main_tls)
)
# For now, accept 'main' in the instance_map, but the replication system
# expects 'master', force that into being until it's changed later.
@@ -241,30 +300,20 @@ class WorkerConfig(Config):
]
del instance_map[MAIN_PROCESS_INSTANCE_MAP_NAME]
# This is the backwards compatibility bit that handles the
# worker_replication_* bits using setdefault() to not overwrite anything.
elif main_host is not None and main_port is not None:
instance_map.setdefault(
MAIN_PROCESS_INSTANCE_NAME,
{
"host": main_host,
"port": main_port,
"tls": main_tls,
},
)
else:
# If we've gotten here, it means that the main process is not on the
# instance_map and that not enough worker_replication_* variables
# were declared in the worker's yaml.
# instance_map.
raise ConfigError(
_MISSING_MAIN_PROCESS_INSTANCE_MAP_DATA
% MAIN_PROCESS_INSTANCE_MAP_NAME
)
# type-ignore: the expression `Union[A, B]` is not a Type[Union[A, B]] currently
self.instance_map: Dict[
str, InstanceLocationConfig
] = parse_and_validate_mapping(instance_map, InstanceLocationConfig)
] = parse_and_validate_mapping(
instance_map, InstanceLocationConfig # type: ignore[arg-type]
)
# Map from type of streams to source, c.f. WriterLocations.
writers = config.get("stream_writers") or {}
@@ -357,6 +406,28 @@ class WorkerConfig(Config):
new_option_name="update_user_directory_from_worker",
)
outbound_federation_restricted_to = config.get(
"outbound_federation_restricted_to", None
)
self.outbound_federation_restricted_to = OutboundFederationRestrictedTo(
outbound_federation_restricted_to
)
if outbound_federation_restricted_to:
if not self.worker_replication_secret:
raise ConfigError(
"`worker_replication_secret` must be configured when using `outbound_federation_restricted_to`."
)
for instance in outbound_federation_restricted_to:
if instance not in self.instance_map:
raise ConfigError(
"Instance %r is configured in 'outbound_federation_restricted_to' but does not appear in `instance_map` config."
% (instance,)
)
self.outbound_federation_restricted_to.locations.append(
self.instance_map[instance]
)
def _should_this_worker_perform_duty(
self,
config: Dict[str, Any],

View File

@@ -126,7 +126,7 @@ def validate_event_for_room_version(event: "EventBase") -> None:
raise AuthError(403, "Event not signed by sending server")
is_invite_via_allow_rule = (
event.room_version.msc3083_join_rules
event.room_version.restricted_join_rule
and event.type == EventTypes.Member
and event.membership == Membership.JOIN
and EventContentFields.AUTHORISING_USER in event.content
@@ -339,13 +339,6 @@ def check_state_dependent_auth_rules(
if event.type == EventTypes.Redaction:
check_redaction(event.room_version, event, auth_dict)
if (
event.type == EventTypes.MSC2716_INSERTION
or event.type == EventTypes.MSC2716_BATCH
or event.type == EventTypes.MSC2716_MARKER
):
check_historical(event.room_version, event, auth_dict)
logger.debug("Allowing! %s", event)
@@ -359,13 +352,10 @@ LENIENT_EVENT_BYTE_LIMITS_ROOM_VERSIONS = {
RoomVersions.V4,
RoomVersions.V5,
RoomVersions.V6,
RoomVersions.MSC2176,
RoomVersions.V7,
RoomVersions.V8,
RoomVersions.V9,
RoomVersions.MSC3787,
RoomVersions.V10,
RoomVersions.MSC2716v4,
RoomVersions.MSC1767v10,
}
@@ -457,7 +447,7 @@ def _check_create(event: "EventBase") -> None:
# 1.4 If content has no creator field, reject if the room version requires it.
if (
not event.room_version.msc2175_implicit_room_creator
not event.room_version.implicit_room_creator
and EventContentFields.ROOM_CREATOR not in event.content
):
raise AuthError(403, "Create event lacks a 'creator' property")
@@ -494,7 +484,7 @@ def _is_membership_change_allowed(
key = (EventTypes.Create, "")
create = auth_events.get(key)
if create and event.prev_event_ids()[0] == create.event_id:
if room_version.msc2175_implicit_room_creator:
if room_version.implicit_room_creator:
creator = create.sender
else:
creator = create.content[EventContentFields.ROOM_CREATOR]
@@ -517,7 +507,7 @@ def _is_membership_change_allowed(
caller_invited = caller and caller.membership == Membership.INVITE
caller_knocked = (
caller
and room_version.msc2403_knocking
and room_version.knock_join_rule
and caller.membership == Membership.KNOCK
)
@@ -617,9 +607,9 @@ def _is_membership_change_allowed(
elif join_rule == JoinRules.PUBLIC:
pass
elif (
room_version.msc3083_join_rules and join_rule == JoinRules.RESTRICTED
room_version.restricted_join_rule and join_rule == JoinRules.RESTRICTED
) or (
room_version.msc3787_knock_restricted_join_rule
room_version.knock_restricted_join_rule
and join_rule == JoinRules.KNOCK_RESTRICTED
):
# This is the same as public, but the event must contain a reference
@@ -649,9 +639,9 @@ def _is_membership_change_allowed(
elif (
join_rule == JoinRules.INVITE
or (room_version.msc2403_knocking and join_rule == JoinRules.KNOCK)
or (room_version.knock_join_rule and join_rule == JoinRules.KNOCK)
or (
room_version.msc3787_knock_restricted_join_rule
room_version.knock_restricted_join_rule
and join_rule == JoinRules.KNOCK_RESTRICTED
)
):
@@ -685,9 +675,9 @@ def _is_membership_change_allowed(
"You don't have permission to ban",
errcode=Codes.INSUFFICIENT_POWER,
)
elif room_version.msc2403_knocking and Membership.KNOCK == membership:
elif room_version.knock_join_rule and Membership.KNOCK == membership:
if join_rule != JoinRules.KNOCK and (
not room_version.msc3787_knock_restricted_join_rule
not room_version.knock_restricted_join_rule
or join_rule != JoinRules.KNOCK_RESTRICTED
):
raise AuthError(403, "You don't have permission to knock")
@@ -823,38 +813,6 @@ def check_redaction(
raise AuthError(403, "You don't have permission to redact events")
def check_historical(
room_version_obj: RoomVersion,
event: "EventBase",
auth_events: StateMap["EventBase"],
) -> None:
"""Check whether the event sender is allowed to send historical related
events like "insertion", "batch", and "marker".
Returns:
None
Raises:
AuthError if the event sender is not allowed to send historical related events
("insertion", "batch", and "marker").
"""
# Ignore the auth checks in room versions that do not support historical
# events
if not room_version_obj.msc2716_historical:
return
user_level = get_user_power_level(event.user_id, auth_events)
historical_level = get_named_level(auth_events, "historical", 100)
if user_level < historical_level:
raise UnstableSpecAuthError(
403,
'You don\'t have permission to send historical related events ("insertion", "batch", and "marker")',
errcode=Codes.INSUFFICIENT_POWER,
)
def _check_power_levels(
room_version_obj: RoomVersion,
event: "EventBase",
@@ -876,7 +834,7 @@ def _check_power_levels(
# Reject events with stringy power levels if required by room version
if (
event.type == EventTypes.PowerLevels
and room_version_obj.msc3667_int_only_power_levels
and room_version_obj.enforce_int_power_levels
):
for k, v in event.content.items():
if k in {
@@ -1012,7 +970,7 @@ def get_user_power_level(user_id: str, auth_events: StateMap["EventBase"]) -> in
key = (EventTypes.Create, "")
create_event = auth_events.get(key)
if create_event is not None:
if create_event.room_version.msc2175_implicit_room_creator:
if create_event.room_version.implicit_room_creator:
creator = create_event.sender
else:
creator = create_event.content[EventContentFields.ROOM_CREATOR]
@@ -1150,7 +1108,7 @@ def auth_types_for_event(
)
auth_types.add(key)
if room_version.msc3083_join_rules and membership == Membership.JOIN:
if room_version.restricted_join_rule and membership == Membership.JOIN:
if EventContentFields.AUTHORISING_USER in event.content:
key = (
EventTypes.Member,

View File

@@ -198,7 +198,6 @@ class _EventInternalMetadata:
soft_failed: DictProperty[bool] = DictProperty("soft_failed")
proactively_send: DictProperty[bool] = DictProperty("proactively_send")
redacted: DictProperty[bool] = DictProperty("redacted")
historical: DictProperty[bool] = DictProperty("historical")
txn_id: DictProperty[str] = DictProperty("txn_id")
"""The transaction ID, if it was set when the event was created."""
@@ -288,14 +287,6 @@ class _EventInternalMetadata:
"""
return self._dict.get("redacted", False)
def is_historical(self) -> bool:
"""Whether this is a historical message.
This is used by the batch-send historical message endpoint and
is needed to mark the event as backfilled and to skip some checks,
like push notifications.
"""
return self._dict.get("historical", False)
def is_notifiable(self) -> bool:
"""Whether this event can trigger a push notification"""
return not self.is_outlier() or self.is_out_of_band_membership()
@@ -355,7 +346,7 @@ class EventBase(metaclass=abc.ABCMeta):
@property
def redacts(self) -> Optional[str]:
"""MSC2176 moved the redacts field into the content."""
if self.room_version.msc2176_redaction_rules:
if self.room_version.updated_redaction_rules:
return self.content.get("redacts")
return self.get("redacts")
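Concretely, the two wire formats this property bridges (event bodies sketched per MSC2174; values are placeholders):

# Room versions 1-10: the redaction target is a top-level property.
redaction_v10 = {"type": "m.room.redaction", "redacts": "$target", "content": {}}
# Room version 11: the target moves into content (and, per the
# forward-compatibility change noted earlier, is mirrored for old clients).
redaction_v11 = {"type": "m.room.redaction", "content": {"redacts": "$target"}}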

View File

@@ -175,7 +175,7 @@ class EventBuilder:
# MSC2174 moves the redacts property to the content, it is invalid to
# provide it as a top-level property.
if self._redacts is not None and not self.room_version.msc2176_redaction_rules:
if self._redacts is not None and not self.room_version.updated_redaction_rules:
event_dict["redacts"] = self._redacts
if self._origin_server_ts is not None:

View File

@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, List, Optional, Tuple
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple
import attr
from immutabledict import immutabledict
@@ -107,33 +107,32 @@ class EventContext(UnpersistedEventContextBase):
state_delta_due_to_event: If `state_group` and `state_group_before_event` are not None
then this is the delta of the state between the two groups.
prev_group: If it is known, ``state_group``'s prev_group. Note that this being
None does not necessarily mean that ``state_group`` does not have
a prev_group!
state_group_deltas: If not empty, this is a dict mapping pairs of state
groups to the state difference between them.
If the event is a state event, this is normally the same as
``state_group_before_event``.
The keys are a tuple of two integers: the initial group and final state group.
The corresponding value is a state map representing the state delta between
these state groups.
If ``state_group`` is None (ie, the event is an outlier), ``prev_group``
will always also be ``None``.
The dictionary is expected to have at most two entries with state groups of:
Note that this is *not* (necessarily) the state group associated with
``_prev_state_ids``.
1. The state group before the event and after the event.
2. The state group preceding the state group before the event and the
state group before the event.
delta_ids: If ``prev_group`` is not None, the state delta between ``prev_group``
and ``state_group``.
This information is collected and stored as part of an optimization for persisting
events.
partial_state: if True, we may be storing this event with a temporary,
incomplete state.
"""
_storage: "StorageControllers"
state_group_deltas: Dict[Tuple[int, int], StateMap[str]]
rejected: Optional[str] = None
_state_group: Optional[int] = None
state_group_before_event: Optional[int] = None
_state_delta_due_to_event: Optional[StateMap[str]] = None
prev_group: Optional[int] = None
delta_ids: Optional[StateMap[str]] = None
app_service: Optional[ApplicationService] = None
partial_state: bool = False
@@ -145,16 +144,14 @@ class EventContext(UnpersistedEventContextBase):
state_group_before_event: Optional[int],
state_delta_due_to_event: Optional[StateMap[str]],
partial_state: bool,
prev_group: Optional[int] = None,
delta_ids: Optional[StateMap[str]] = None,
state_group_deltas: Dict[Tuple[int, int], StateMap[str]],
) -> "EventContext":
return EventContext(
storage=storage,
state_group=state_group,
state_group_before_event=state_group_before_event,
state_delta_due_to_event=state_delta_due_to_event,
prev_group=prev_group,
delta_ids=delta_ids,
state_group_deltas=state_group_deltas,
partial_state=partial_state,
)
@@ -163,7 +160,7 @@ class EventContext(UnpersistedEventContextBase):
storage: "StorageControllers",
) -> "EventContext":
"""Return an EventContext instance suitable for persisting an outlier event"""
return EventContext(storage=storage)
return EventContext(storage=storage, state_group_deltas={})
async def persist(self, event: EventBase) -> "EventContext":
return self
@@ -183,13 +180,15 @@ class EventContext(UnpersistedEventContextBase):
"state_group": self._state_group,
"state_group_before_event": self.state_group_before_event,
"rejected": self.rejected,
"prev_group": self.prev_group,
"state_group_deltas": _encode_state_group_delta(self.state_group_deltas),
"state_delta_due_to_event": _encode_state_dict(
self._state_delta_due_to_event
),
"delta_ids": _encode_state_dict(self.delta_ids),
"app_service_id": self.app_service.id if self.app_service else None,
"partial_state": self.partial_state,
# add dummy delta_ids and prev_group for backwards compatibility
"delta_ids": None,
"prev_group": None,
}
@staticmethod
@@ -204,17 +203,24 @@ class EventContext(UnpersistedEventContextBase):
Returns:
The event context.
"""
# workaround for backwards/forwards compatibility: if the input doesn't have a value
# for "state_group_deltas" just assign an empty dict
state_group_deltas = input.get("state_group_deltas", None)
if state_group_deltas:
state_group_deltas = _decode_state_group_delta(state_group_deltas)
else:
state_group_deltas = {}
context = EventContext(
# We use the state_group and prev_state_id stuff to pull the
# current_state_ids out of the DB and construct prev_state_ids.
storage=storage,
state_group=input["state_group"],
state_group_before_event=input["state_group_before_event"],
prev_group=input["prev_group"],
state_group_deltas=state_group_deltas,
state_delta_due_to_event=_decode_state_dict(
input["state_delta_due_to_event"]
),
delta_ids=_decode_state_dict(input["delta_ids"]),
rejected=input["rejected"],
partial_state=input.get("partial_state", False),
)
@@ -349,7 +355,7 @@ class UnpersistedEventContext(UnpersistedEventContextBase):
_storage: "StorageControllers"
state_group_before_event: Optional[int]
state_group_after_event: Optional[int]
state_delta_due_to_event: Optional[dict]
state_delta_due_to_event: Optional[StateMap[str]]
prev_group_for_state_group_before_event: Optional[int]
delta_ids_to_state_group_before_event: Optional[StateMap[str]]
partial_state: bool
@@ -380,26 +386,16 @@ class UnpersistedEventContext(UnpersistedEventContextBase):
events_and_persisted_context = []
for event, unpersisted_context in amended_events_and_context:
if event.is_state():
context = EventContext(
storage=unpersisted_context._storage,
state_group=unpersisted_context.state_group_after_event,
state_group_before_event=unpersisted_context.state_group_before_event,
state_delta_due_to_event=unpersisted_context.state_delta_due_to_event,
partial_state=unpersisted_context.partial_state,
prev_group=unpersisted_context.state_group_before_event,
delta_ids=unpersisted_context.state_delta_due_to_event,
)
else:
context = EventContext(
storage=unpersisted_context._storage,
state_group=unpersisted_context.state_group_after_event,
state_group_before_event=unpersisted_context.state_group_before_event,
state_delta_due_to_event=unpersisted_context.state_delta_due_to_event,
partial_state=unpersisted_context.partial_state,
prev_group=unpersisted_context.prev_group_for_state_group_before_event,
delta_ids=unpersisted_context.delta_ids_to_state_group_before_event,
)
state_group_deltas = unpersisted_context._build_state_group_deltas()
context = EventContext(
storage=unpersisted_context._storage,
state_group=unpersisted_context.state_group_after_event,
state_group_before_event=unpersisted_context.state_group_before_event,
state_delta_due_to_event=unpersisted_context.state_delta_due_to_event,
partial_state=unpersisted_context.partial_state,
state_group_deltas=state_group_deltas,
)
events_and_persisted_context.append((event, context))
return events_and_persisted_context
@@ -452,11 +448,11 @@ class UnpersistedEventContext(UnpersistedEventContextBase):
# if the event isn't a state event the state group doesn't change
if not self.state_delta_due_to_event:
state_group_after_event = self.state_group_before_event
self.state_group_after_event = self.state_group_before_event
# otherwise if it is a state event we need to get a state group for it
else:
state_group_after_event = await self._storage.state.store_state_group(
self.state_group_after_event = await self._storage.state.store_state_group(
event.event_id,
event.room_id,
prev_group=self.state_group_before_event,
@@ -464,16 +460,81 @@ class UnpersistedEventContext(UnpersistedEventContextBase):
current_state_ids=None,
)
state_group_deltas = self._build_state_group_deltas()
return EventContext.with_state(
storage=self._storage,
state_group=state_group_after_event,
state_group=self.state_group_after_event,
state_group_before_event=self.state_group_before_event,
state_delta_due_to_event=self.state_delta_due_to_event,
state_group_deltas=state_group_deltas,
partial_state=self.partial_state,
prev_group=self.state_group_before_event,
delta_ids=self.state_delta_due_to_event,
)
def _build_state_group_deltas(self) -> Dict[Tuple[int, int], StateMap]:
"""
Collect deltas between the state groups associated with this context
"""
state_group_deltas = {}
# if we know the state group before the event and after the event, add them and the
# state delta between them to state_group_deltas
if self.state_group_before_event and self.state_group_after_event:
# if we have the state groups we should have the delta
assert self.state_delta_due_to_event is not None
state_group_deltas[
(
self.state_group_before_event,
self.state_group_after_event,
)
] = self.state_delta_due_to_event
# the state group before the event may itself have a preceding state group; if we
# have both that group and the state group before the event, add them and the
# state delta between them to state_group_deltas
if (
self.prev_group_for_state_group_before_event
and self.state_group_before_event
):
# if we have both state groups we should have the delta between them
assert self.delta_ids_to_state_group_before_event is not None
state_group_deltas[
(
self.prev_group_for_state_group_before_event,
self.state_group_before_event,
)
] = self.delta_ids_to_state_group_before_event
return state_group_deltas
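To make the shape of this mapping concrete, here is a hypothetical example (state group IDs and event IDs are invented) of what `_build_state_group_deltas` returns, together with the wire form produced by the `_encode_state_group_delta` helper below:

```python
# Keys are (prev_state_group, next_state_group) pairs; values map
# (event_type, state_key) to the event ID that changed that piece of state.
state_group_deltas = {
    (100, 101): {("m.room.topic", ""): "$topic_event"},
    (101, 102): {("m.room.member", "@alice:example.org"): "$join_event"},
}

# After encoding, each entry becomes a (prev_group, next_group, encoded_state)
# triple, with the state map flattened to (type, state_key, event_id) tuples:
encoded = [
    (100, 101, [("m.room.topic", "", "$topic_event")]),
    (101, 102, [("m.room.member", "@alice:example.org", "$join_event")]),
]
```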
def _encode_state_group_delta(
state_group_delta: Dict[Tuple[int, int], StateMap[str]]
) -> List[Tuple[int, int, Optional[List[Tuple[str, str, str]]]]]:
if not state_group_delta:
return []
state_group_delta_encoded = []
for key, value in state_group_delta.items():
state_group_delta_encoded.append((key[0], key[1], _encode_state_dict(value)))
return state_group_delta_encoded
def _decode_state_group_delta(
input: List[Tuple[int, int, List[Tuple[str, str, str]]]]
) -> Dict[Tuple[int, int], StateMap[str]]:
if not input:
return {}
state_group_deltas = {}
for state_group_1, state_group_2, state_dict in input:
state_map = _decode_state_dict(state_dict)
assert state_map is not None
state_group_deltas[(state_group_1, state_group_2)] = state_map
return state_group_deltas
def _encode_state_dict(
state_dict: Optional[StateMap[str]],


@@ -108,13 +108,9 @@ def prune_event_dict(room_version: RoomVersion, event_dict: JsonDict) -> JsonDic
"origin_server_ts",
]
# Room versions from before MSC2176 had additional allowed keys.
if not room_version.msc2176_redaction_rules:
allowed_keys.extend(["prev_state", "membership"])
# Room versions before MSC3989 kept the origin field.
if not room_version.msc3989_redaction_rules:
allowed_keys.append("origin")
# Earlier room versions had additional allowed keys.
if not room_version.updated_redaction_rules:
allowed_keys.extend(["prev_state", "membership", "origin"])
event_type = event_dict["type"]
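As a rough sketch of the allowlist approach used here (not the full Synapse implementation; `updated_redaction_rules` stands in for the room version flag, and the key list is abridged), the top-level filtering amounts to:

```python
from typing import Any, Dict, List

def prune_top_level_keys(
    event_dict: Dict[str, Any], updated_redaction_rules: bool
) -> Dict[str, Any]:
    # Keys that survive redaction in every room version (abridged).
    allowed_keys: List[str] = [
        "event_id", "sender", "room_id", "type", "state_key", "depth",
        "prev_events", "auth_events", "origin_server_ts",
    ]
    # Earlier room versions kept a few extra top-level keys.
    if not updated_redaction_rules:
        allowed_keys.extend(["prev_state", "membership", "origin"])
    # Everything else is stripped; `content` is then rebuilt per event type.
    return {k: v for k, v in event_dict.items() if k in allowed_keys}
```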
@@ -127,9 +123,9 @@ def prune_event_dict(room_version: RoomVersion, event_dict: JsonDict) -> JsonDic
if event_type == EventTypes.Member:
add_fields("membership")
if room_version.msc3375_redaction_rules:
if room_version.restricted_join_rule_fix:
add_fields(EventContentFields.AUTHORISING_USER)
if room_version.msc3821_redaction_rules:
if room_version.updated_redaction_rules:
# Preserve the signed field under third_party_invite.
third_party_invite = event_dict["content"].get("third_party_invite")
if isinstance(third_party_invite, collections.abc.Mapping):
@@ -140,14 +136,16 @@ def prune_event_dict(room_version: RoomVersion, event_dict: JsonDict) -> JsonDic
]
elif event_type == EventTypes.Create:
# MSC2176 rules state that create events cannot be redacted.
if room_version.msc2176_redaction_rules:
return event_dict
if room_version.updated_redaction_rules:
# MSC2176 rules state that create events cannot have their `content` redacted.
new_content = event_dict["content"]
elif not room_version.implicit_room_creator:
# Some room versions give meaning to `creator`
add_fields("creator")
add_fields("creator")
elif event_type == EventTypes.JoinRules:
add_fields("join_rule")
if room_version.msc3083_join_rules:
if room_version.restricted_join_rule:
add_fields("allow")
elif event_type == EventTypes.PowerLevels:
add_fields(
@@ -161,24 +159,15 @@ def prune_event_dict(room_version: RoomVersion, event_dict: JsonDict) -> JsonDic
"redact",
)
if room_version.msc2176_redaction_rules:
if room_version.updated_redaction_rules:
add_fields("invite")
if room_version.msc2716_historical:
add_fields("historical")
elif event_type == EventTypes.Aliases and room_version.special_case_aliases_auth:
add_fields("aliases")
elif event_type == EventTypes.RoomHistoryVisibility:
add_fields("history_visibility")
elif event_type == EventTypes.Redaction and room_version.msc2176_redaction_rules:
elif event_type == EventTypes.Redaction and room_version.updated_redaction_rules:
add_fields("redacts")
elif room_version.msc2716_redactions and event_type == EventTypes.MSC2716_INSERTION:
add_fields(EventContentFields.MSC2716_NEXT_BATCH_ID)
elif room_version.msc2716_redactions and event_type == EventTypes.MSC2716_BATCH:
add_fields(EventContentFields.MSC2716_BATCH_ID)
elif room_version.msc2716_redactions and event_type == EventTypes.MSC2716_MARKER:
add_fields(EventContentFields.MSC2716_INSERTION_EVENT_REFERENCE)
# Protect the rel_type and event_id fields under the m.relates_to field.
if room_version.msc3389_relation_redactions:
@@ -405,7 +394,6 @@ def serialize_event(
time_now_ms: int,
*,
config: SerializeEventConfig = _DEFAULT_SERIALIZE_EVENT_CONFIG,
msc3970_enabled: bool = False,
) -> JsonDict:
"""Serialize event for clients
@@ -413,8 +401,6 @@ def serialize_event(
e
time_now_ms
config: Event serialization config
msc3970_enabled: Whether MSC3970 is enabled. It changes whether we should
include the `transaction_id` in the event's `unsigned` section.
Returns:
The serialized event dictionary.
@@ -440,38 +426,46 @@ def serialize_event(
e.unsigned["redacted_because"],
time_now_ms,
config=config,
msc3970_enabled=msc3970_enabled,
)
# If we have a txn_id saved in the internal_metadata, we should include it in the
# unsigned section of the event if it was sent by the same session as the one
# requesting the event.
txn_id: Optional[str] = getattr(e.internal_metadata, "txn_id", None)
if txn_id is not None and config.requester is not None:
# For the MSC3970 rules to be applied, we *need* to have the device ID in the
# event internal metadata. Since we were not recording them before, if it hasn't
# been recorded, we fallback to the old behaviour.
if (
txn_id is not None
and config.requester is not None
and config.requester.user.to_string() == e.sender
):
# Some events do not have the device ID stored in the internal metadata;
# this includes old events as well as those created by appservices, guests,
# or with tokens minted via the admin API. For those events, fall back
# to using the access token instead.
event_device_id: Optional[str] = getattr(e.internal_metadata, "device_id", None)
if msc3970_enabled and event_device_id is not None:
if event_device_id is not None:
if event_device_id == config.requester.device_id:
d["unsigned"]["transaction_id"] = txn_id
else:
# The pre-MSC3970 behaviour is to only include the transaction ID if the
# event was sent from the same access token. For regular users, we can use
# the access token ID to determine this. For guests, we can't, but since
# each guest only has one access token, we can just check that the event was
# sent by the same user as the one requesting the event.
# Fallback behaviour: only include the transaction ID if the event
# was sent from the same access token.
#
# For regular users, the access token ID can be used to determine this.
# This includes access tokens minted with the admin API.
#
# For guests and appservice users, we can't check the access token ID
# so assume it is the same session.
event_token_id: Optional[int] = getattr(
e.internal_metadata, "token_id", None
)
if config.requester.user.to_string() == e.sender and (
if (
(
event_token_id is not None
and config.requester.access_token_id is not None
and event_token_id == config.requester.access_token_id
)
or config.requester.is_guest
or config.requester.app_service
):
d["unsigned"]["transaction_id"] = txn_id
@@ -486,6 +480,17 @@ def serialize_event(
if config.as_client_event:
d = config.event_format(d)
# If the event is a redaction, the field with the redacted event ID appears
# in a different location depending on the room version. e.redacts handles
# fetching from the proper location; copy it to the other location for forwards-
# and backwards-compatibility with clients.
if e.type == EventTypes.Redaction and e.redacts is not None:
if e.room_version.updated_redaction_rules:
d["redacts"] = e.redacts
else:
d["content"] = dict(d["content"])
d["content"]["redacts"] = e.redacts
only_event_fields = config.only_event_fields
if only_event_fields:
if not isinstance(only_event_fields, list) or not all(
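The compatibility copy of the `redacts` field above boils down to mirroring it into whichever location is non-canonical for the room version; a sketch (the helper name is invented):

```python
from typing import Any, Dict

def mirror_redacts(d: Dict[str, Any], redacts: str, updated_redaction_rules: bool) -> None:
    """After this runs, `redacts` is present both at the top level and under
    `content`, whichever location the room version treats as canonical."""
    if updated_redaction_rules:
        # Room v11+: content["redacts"] is canonical; mirror to the top level.
        d["redacts"] = redacts
    else:
        # Older versions: the top-level field is canonical; mirror into
        # content, copying the dict first to avoid mutating shared state.
        d["content"] = dict(d["content"])
        d["content"]["redacts"] = redacts
```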
@@ -504,9 +509,6 @@ class EventClientSerializer:
clients.
"""
def __init__(self, *, msc3970_enabled: bool = False):
self._msc3970_enabled = msc3970_enabled
def serialize_event(
self,
event: Union[JsonDict, EventBase],
@@ -531,9 +533,7 @@ class EventClientSerializer:
if not isinstance(event, EventBase):
return event
serialized_event = serialize_event(
event, time_now, config=config, msc3970_enabled=self._msc3970_enabled
)
serialized_event = serialize_event(event, time_now, config=config)
# Check if there are any bundled aggregations to include with the event.
if bundle_aggregations:


@@ -231,7 +231,7 @@ async def _check_sigs_on_pdu(
# If this is a join event for a restricted room it may have been authorised
# via a different server from the sending server. Check those signatures.
if (
room_version.msc3083_join_rules
room_version.restricted_join_rule
and pdu.type == EventTypes.Member
and pdu.membership == Membership.JOIN
and EventContentFields.AUTHORISING_USER in pdu.content


@@ -260,7 +260,9 @@ class FederationClient(FederationBase):
use_unstable = False
for user_id, one_time_keys in query.items():
for device_id, algorithms in one_time_keys.items():
if any(count > 1 for count in algorithms.values()):
# If more than one key is requested in total (across all algorithms),
# attempt to use the unstable endpoint.
if sum(algorithms.values()) > 1:
use_unstable = True
if algorithms:
# For the stable query, choose only the first algorithm.
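The switch from `any` to `sum` matters when a device requests one key each of two different algorithms (algorithm names here are invented); the old check missed that case:

```python
algorithms = {"signed_curve25519": 1, "some_other_algorithm": 1}

any(count > 1 for count in algorithms.values())  # False: no single count > 1
sum(algorithms.values()) > 1                     # True: two keys in total
```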
@@ -296,6 +298,7 @@ class FederationClient(FederationBase):
else:
logger.debug("Skipping unstable claim client keys API")
# TODO Potentially attempt multiple queries and combine the results?
return await self.transport_layer.claim_client_keys(
user, destination, content, timeout
)
@@ -980,7 +983,7 @@ class FederationClient(FederationBase):
if not room_version:
raise UnsupportedRoomVersionError()
if not room_version.msc2403_knocking and membership == Membership.KNOCK:
if not room_version.knock_join_rule and membership == Membership.KNOCK:
raise SynapseError(
400,
"This room version does not support knocking",
@@ -1066,7 +1069,7 @@ class FederationClient(FederationBase):
# * Ensure the signatures are good.
#
# Otherwise, fallback to the provided event.
if room_version.msc3083_join_rules and response.event:
if room_version.restricted_join_rule and response.event:
event = response.event
valid_pdu = await self._check_sigs_and_hash_and_fetch_one(
@@ -1192,7 +1195,7 @@ class FederationClient(FederationBase):
# MSC3083 defines additional error codes for room joins.
failover_errcodes = None
if room_version.msc3083_join_rules:
if room_version.restricted_join_rule:
failover_errcodes = (
Codes.UNABLE_AUTHORISE_JOIN,
Codes.UNABLE_TO_GRANT_JOIN,


@@ -63,6 +63,7 @@ from synapse.federation.federation_base import (
)
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction
from synapse.handlers.worker_lock import DELETE_ROOM_LOCK_NAME
from synapse.http.servlet import assert_params_in_dict
from synapse.logging.context import (
make_deferred_yieldable,
@@ -137,6 +138,7 @@ class FederationServer(FederationBase):
self._event_auth_handler = hs.get_event_auth_handler()
self._room_member_handler = hs.get_room_member_handler()
self._e2e_keys_handler = hs.get_e2e_keys_handler()
self._worker_lock_handler = hs.get_worker_locks_handler()
self._state_storage_controller = hs.get_storage_controllers().state
@@ -806,7 +808,7 @@ class FederationServer(FederationBase):
raise IncompatibleRoomVersionError(room_version=room_version.identifier)
# Check that this room supports knocking as defined by its room version
if not room_version.msc2403_knocking:
if not room_version.knock_join_rule:
raise SynapseError(
403,
"This room version does not support knocking",
@@ -909,7 +911,7 @@ class FederationServer(FederationBase):
errcode=Codes.NOT_FOUND,
)
if membership_type == Membership.KNOCK and not room_version.msc2403_knocking:
if membership_type == Membership.KNOCK and not room_version.knock_join_rule:
raise SynapseError(
403,
"This room version does not support knocking",
@@ -933,7 +935,7 @@ class FederationServer(FederationBase):
# the event is valid to be sent into the room. Currently this is only done
# if the user is being joined via restricted join rules.
if (
room_version.msc3083_join_rules
room_version.restricted_join_rule
and event.membership == Membership.JOIN
and EventContentFields.AUTHORISING_USER in event.content
):
@@ -1016,7 +1018,9 @@ class FederationServer(FederationBase):
for user_id, device_keys in result.items():
for device_id, keys in device_keys.items():
for key_id, key in keys.items():
json_result.setdefault(user_id, {})[device_id] = {key_id: key}
json_result.setdefault(user_id, {}).setdefault(device_id, {})[
key_id
] = key
logger.info(
"Claimed one-time-keys: %s",
@@ -1234,9 +1238,18 @@ class FederationServer(FederationBase):
logger.info("handling received PDU in room %s: %s", room_id, event)
try:
with nested_logging_context(event.event_id):
await self._federation_event_handler.on_receive_pdu(
origin, event
)
# We're taking out a lock within a lock, which could
# lead to deadlocks if we're not careful. However, it is
# safe on this occasion as we only ever take a write
# lock when deleting a room, which we would never do
# while holding the `_INBOUND_EVENT_HANDLING_LOCK_NAME`
# lock.
async with self._worker_lock_handler.acquire_read_write_lock(
DELETE_ROOM_LOCK_NAME, room_id, write=False
):
await self._federation_event_handler.on_receive_pdu(
origin, event
)
except FederationError as e:
# XXX: Ideally we'd inform the remote we failed to process
# the event, but we can't return an error in the transaction
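The pattern introduced here is a per-room read lock around PDU handling; a minimal sketch, assuming the `acquire_read_write_lock` API used above:

```python
from synapse.handlers.worker_lock import DELETE_ROOM_LOCK_NAME

async def handle_pdu_under_room_lock(
    worker_lock_handler, event_handler, origin, event, room_id
):
    # Taken as a *read* lock so many PDU handlers can proceed concurrently;
    # only room deletion takes the write side, and it never does so while
    # the inbound-event-handling lock is held, so no deadlock can occur.
    async with worker_lock_handler.acquire_read_write_lock(
        DELETE_ROOM_LOCK_NAME, room_id, write=False
    ):
        await event_handler.on_receive_pdu(origin, event)
```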


@@ -109,10 +109,8 @@ was enabled*, Catch-Up Mode is exited and we return to `_transaction_transmissio
If a remote server is unreachable over federation, we back off from that server,
with an exponentially-increasing retry interval.
Whilst we don't automatically retry after the interval, we prevent making new attempts
until such time as the back-off has cleared.
Once the back-off is cleared and a new PDU or EDU arrives for transmission, the transmission
loop resumes and empties the queue by making federation requests.
We automatically retry after the retry interval expires (roughly, the logic to do so
being triggered every minute).
If the backoff grows too large (> 1 hour), the in-memory queue is emptied (to prevent
unbounded growth) and Catch-Up Mode is entered.
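Under these rules the sender only needs a coarse periodic timer; a minimal sketch of the wake-up loop, assuming Synapse's millisecond-based `Clock.looping_call`:

```python
WAKEUP_RETRY_PERIOD_SEC = 60

def start_wakeup_loop(clock, wake_destinations_needing_catchup):
    # Every minute, try to wake destinations with outstanding catch-up.
    # Per-destination back-off still applies: a destination whose retry
    # interval has not yet expired is simply skipped on this pass.
    clock.looping_call(
        wake_destinations_needing_catchup,
        WAKEUP_RETRY_PERIOD_SEC * 1000.0,  # looping_call takes milliseconds
    )
```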
@@ -145,7 +143,6 @@ from prometheus_client import Counter
from typing_extensions import Literal
from twisted.internet import defer
from twisted.internet.interfaces import IDelayedCall
import synapse.metrics
from synapse.api.presence import UserPresenceState
@@ -184,14 +181,18 @@ sent_pdus_destination_dist_total = Counter(
"Total number of PDUs queued for sending across all destinations",
)
# Time (in s) after Synapse's startup that we will begin to wake up destinations
# that have catch-up outstanding.
CATCH_UP_STARTUP_DELAY_SEC = 15
# Time (in s) to wait before trying to wake up destinations that have
# catch-up outstanding. This will also be the delay applied at startup
# before trying the same.
# Please note that rate limiting still applies, so while the loop is
# executed every X seconds the destinations may not be woken up because
# they are being rate limited following previous failed attempts.
WAKEUP_RETRY_PERIOD_SEC = 60
# Time (in s) to wait in between waking up each destination, i.e. one destination
# will be woken up every <x> seconds after Synapse's startup until we have woken
# every destination has outstanding catch-up.
CATCH_UP_STARTUP_INTERVAL_SEC = 5
# will be woken up every <x> seconds until we have woken every destination
# that has outstanding catch-up.
WAKEUP_INTERVAL_BETWEEN_DESTINATIONS_SEC = 5
class AbstractFederationSender(metaclass=abc.ABCMeta):
@@ -415,12 +416,10 @@ class FederationSender(AbstractFederationSender):
/ hs.config.ratelimiting.federation_rr_transactions_per_room_per_second
)
# wake up destinations that have outstanding PDUs to be caught up
self._catchup_after_startup_timer: Optional[
IDelayedCall
] = self.clock.call_later(
CATCH_UP_STARTUP_DELAY_SEC,
# Regularly wake up destinations that have outstanding PDUs to be caught up
self.clock.looping_call(
run_as_background_process,
WAKEUP_RETRY_PERIOD_SEC * 1000.0,
"wake_destinations_needing_catchup",
self._wake_destinations_needing_catchup,
)
@@ -966,7 +965,6 @@ class FederationSender(AbstractFederationSender):
if not destinations_to_wake:
# finished waking all destinations!
self._catchup_after_startup_timer = None
break
last_processed = destinations_to_wake[-1]
@@ -983,4 +981,4 @@ class FederationSender(AbstractFederationSender):
last_processed,
)
self.wake_destination(destination)
await self.clock.sleep(CATCH_UP_STARTUP_INTERVAL_SEC)
await self.clock.sleep(WAKEUP_INTERVAL_BETWEEN_DESTINATIONS_SEC)


@@ -432,15 +432,6 @@ class FederationV2SendJoinServlet(BaseFederationServerServlet):
PREFIX = FEDERATION_V2_PREFIX
def __init__(
self,
hs: "HomeServer",
authenticator: Authenticator,
ratelimiter: FederationRateLimiter,
server_name: str,
):
super().__init__(hs, authenticator, ratelimiter, server_name)
async def on_PUT(
self,
origin: str,


@@ -653,6 +653,7 @@ class DeviceHandler(DeviceWorkerHandler):
async def store_dehydrated_device(
self,
user_id: str,
device_id: Optional[str],
device_data: JsonDict,
initial_device_display_name: Optional[str] = None,
) -> str:
@@ -661,6 +662,7 @@ class DeviceHandler(DeviceWorkerHandler):
Args:
user_id: the user that we are storing the device for
device_id: device id supplied by client
device_data: the dehydrated device information
initial_device_display_name: The display name to use for the device
Returns:
@@ -668,7 +670,7 @@ class DeviceHandler(DeviceWorkerHandler):
"""
device_id = await self.check_device_registered(
user_id,
None,
device_id,
initial_device_display_name,
)
old_device_id = await self.store.store_dehydrated_device(
@@ -1124,7 +1126,14 @@ class DeviceListUpdater(DeviceListWorkerUpdater):
)
if resync:
await self.multi_user_device_resync([user_id])
# We mark as stale up front in case we get restarted.
await self.store.mark_remote_users_device_caches_as_stale([user_id])
run_as_background_process(
"_maybe_retry_device_resync",
self.multi_user_device_resync,
[user_id],
False,
)
else:
# Simply update the single device, since we know that is the only
# change (because of the single prev_id matching the current cache)
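The resync is now kicked off without blocking the caller, with the stale marker persisted first so a restart cannot lose the work. A sketch of the pattern (the trailing `False` mirrors the argument in the diff, assumed to be `multi_user_device_resync`'s flag for re-marking caches stale on failure):

```python
from synapse.metrics.background_process_metrics import run_as_background_process

async def schedule_device_resync(store, updater, user_id: str) -> None:
    # Persist the stale marker first: if the process restarts before the
    # resync completes, the marker ensures the resync is retried later.
    await store.mark_remote_users_device_caches_as_stale([user_id])
    # Fire and forget; positional args are passed through to the target.
    run_as_background_process(
        "_maybe_retry_device_resync",
        updater.multi_user_device_resync,
        [user_id],
        False,  # assumed: don't re-mark as stale on failure (already marked)
    )
```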


@@ -13,10 +13,11 @@
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Any, Dict
from http import HTTPStatus
from typing import TYPE_CHECKING, Any, Dict, Optional
from synapse.api.constants import EduTypes, EventContentFields, ToDeviceEventTypes
from synapse.api.errors import SynapseError
from synapse.api.errors import Codes, SynapseError
from synapse.api.ratelimiting import Ratelimiter
from synapse.logging.context import run_in_background
from synapse.logging.opentracing import (
@@ -48,6 +49,9 @@ class DeviceMessageHandler:
self.store = hs.get_datastores().main
self.notifier = hs.get_notifier()
self.is_mine = hs.is_mine
if hs.config.experimental.msc3814_enabled:
self.event_sources = hs.get_event_sources()
self.device_handler = hs.get_device_handler()
# We only need to poke the federation sender explicitly if its on the
# same instance. Other federation sender instances will get notified by
@@ -303,3 +307,103 @@ class DeviceMessageHandler:
# Enqueue a new federation transaction to send the new
# device messages to each remote destination.
self.federation_sender.send_device_messages(destination)
async def get_events_for_dehydrated_device(
self,
requester: Requester,
device_id: str,
since_token: Optional[str],
limit: int,
) -> JsonDict:
"""Fetches up to `limit` events sent to `device_id` starting from `since_token`
and returns the new since token. If there are no more messages, returns an empty
array.
Args:
requester: the user requesting the messages
device_id: ID of the dehydrated device
since_token: stream id to start from when fetching messages
limit: the number of messages to fetch
Returns:
A dict containing the to-device messages, as well as a token that the client
can provide in the next call to fetch the next batch of messages
"""
user_id = requester.user.to_string()
# only allow fetching messages for the dehydrated device id currently associated
# with the user
dehydrated_device = await self.device_handler.get_dehydrated_device(user_id)
if dehydrated_device is None:
raise SynapseError(
HTTPStatus.FORBIDDEN,
"No dehydrated device exists",
Codes.FORBIDDEN,
)
dehydrated_device_id, _ = dehydrated_device
if device_id != dehydrated_device_id:
raise SynapseError(
HTTPStatus.FORBIDDEN,
"You may only fetch messages for your dehydrated device",
Codes.FORBIDDEN,
)
since_stream_id = 0
if since_token:
if not since_token.startswith("d"):
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"from parameter %r has an invalid format" % (since_token,),
errcode=Codes.INVALID_PARAM,
)
try:
since_stream_id = int(since_token[1:])
except Exception:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"from parameter %r has an invalid format" % (since_token,),
errcode=Codes.INVALID_PARAM,
)
# if we have a since token, delete any to-device messages before that token
# (since we now know that the device has received them)
deleted = await self.store.delete_messages_for_device(
user_id, device_id, since_stream_id
)
logger.debug(
"Deleted %d to-device messages up to %d for user_id %s device_id %s",
deleted,
since_stream_id,
user_id,
device_id,
)
to_token = self.event_sources.get_current_token().to_device_key
messages, stream_id = await self.store.get_messages_for_device(
user_id, device_id, since_stream_id, to_token, limit
)
for message in messages:
# Remove the message id before sending to client
message_id = message.pop("message_id", None)
if message_id:
set_tag(SynapseTags.TO_DEVICE_EDU_ID, message_id)
logger.debug(
"Returning %d to-device messages between %d and %d (current token: %d) for "
"dehydrated device %s, user_id %s",
len(messages),
since_stream_id,
stream_id,
to_token,
device_id,
user_id,
)
return {
"events": messages,
"next_batch": f"d{stream_id}",
}
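The `since_token` format is simply the letter `d` followed by a to-device stream ID; a round-trip sketch with a made-up stream position:

```python
def parse_since_token(since_token: str) -> int:
    # Mirrors the validation above: the token must look like "d<stream_id>".
    if not since_token.startswith("d"):
        raise ValueError(f"invalid since token: {since_token!r}")
    return int(since_token[1:])

next_batch = f"d{42}"  # what the handler returns: "d42"
assert parse_since_token(next_batch) == 42
```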


@@ -277,7 +277,9 @@ class DirectoryHandler:
except RequestSendFailed:
raise SynapseError(502, "Failed to fetch alias")
except CodeMessageException as e:
logging.warning("Error retrieving alias")
logging.warning(
"Error retrieving alias %s -> %s %s", room_alias, e.code, e.msg
)
if e.code == 404:
fed_result = None
else:


@@ -277,7 +277,7 @@ class EventAuthHandler:
True if the proper room version and join rules are set for restricted access.
"""
# This only applies to room versions which support the new join rule.
if not room_version.msc3083_join_rules:
if not room_version.restricted_join_rule:
return False
# If there's no join rule, then it defaults to invite (so this doesn't apply).
@@ -292,7 +292,7 @@ class EventAuthHandler:
return True
# also check for MSC3787 behaviour
if room_version.msc3787_knock_restricted_join_rule:
if room_version.knock_restricted_join_rule:
return content_join_rule == JoinRules.KNOCK_RESTRICTED
return False
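Taken together, the check reduces to roughly the following (a sketch; `room_version` stands for the RoomVersion capability flags, and the join-rule strings match `JoinRules.RESTRICTED` and `JoinRules.KNOCK_RESTRICTED`):

```python
def allows_restricted_access(room_version, content_join_rule: str) -> bool:
    # Restricted rooms require a room version that understands the rule.
    if not room_version.restricted_join_rule:
        return False
    if content_join_rule == "restricted":
        return True
    # MSC3787 added a combined knock+restricted join rule in newer versions.
    if room_version.knock_restricted_join_rule:
        return content_join_rule == "knock_restricted"
    return False
```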


@@ -105,14 +105,12 @@ backfill_processing_before_timer = Histogram(
)
# TODO: We can refactor this away now that there is only one backfill point again
class _BackfillPointType(Enum):
# a regular backwards extremity (ie, an event which we don't yet have, but which
# is referred to by other events in the DAG)
BACKWARDS_EXTREMITY = enum.auto()
# an MSC2716 "insertion event"
INSERTION_PONT = enum.auto()
@attr.s(slots=True, auto_attribs=True, frozen=True)
class _BackfillPoint:
@@ -273,32 +271,10 @@ class FederationHandler:
)
]
insertion_events_to_be_backfilled: List[_BackfillPoint] = []
if self.hs.config.experimental.msc2716_enabled:
insertion_events_to_be_backfilled = [
_BackfillPoint(event_id, depth, _BackfillPointType.INSERTION_PONT)
for event_id, depth in await self.store.get_insertion_event_backward_extremities_in_room(
room_id=room_id,
current_depth=current_depth,
# We only need to end up with 5 extremities combined with
# the backfill points to make the `/backfill` request ...
# (see the other comment above for more context).
limit=50,
)
]
logger.debug(
"_maybe_backfill_inner: backwards_extremities=%s insertion_events_to_be_backfilled=%s",
backwards_extremities,
insertion_events_to_be_backfilled,
)
# we now have a list of potential places to backpaginate from. We prefer to
# start with the most recent (ie, max depth), so let's sort the list.
sorted_backfill_points: List[_BackfillPoint] = sorted(
itertools.chain(
backwards_extremities,
insertion_events_to_be_backfilled,
),
backwards_extremities,
key=lambda e: -int(e.depth),
)
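Sorting by negated depth puts the deepest (most recent) backfill points first; for example, with hypothetical (event_id, depth) pairs:

```python
points = [("$a", 3), ("$b", 7), ("$c", 5)]
assert sorted(points, key=lambda p: -int(p[1])) == [("$b", 7), ("$c", 5), ("$a", 3)]
```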
@@ -411,10 +387,7 @@ class FederationHandler:
# event but not anything before it. This would require looking at the
# state *before* the event, ignoring the special casing certain event
# types have.
if bp.type == _BackfillPointType.INSERTION_PONT:
event_ids_to_check = [bp.event_id]
else:
event_ids_to_check = await self.store.get_successor_events(bp.event_id)
event_ids_to_check = await self.store.get_successor_events(bp.event_id)
events_to_check = await self.store.get_events_as_list(
event_ids_to_check,
@@ -984,7 +957,7 @@ class FederationHandler:
# Note that this requires the /send_join request to come back to the
# same server.
prev_event_ids = None
if room_version.msc3083_join_rules:
if room_version.restricted_join_rule:
# Note that the room's state can change out from under us and render our
# nice join rules-conformant event non-conformant by the time we build the
# event. When this happens, our validation at the end fails and we respond
@@ -1608,9 +1581,7 @@ class FederationHandler:
event.content["third_party_invite"]["signed"]["token"],
)
original_invite = None
prev_state_ids = await context.get_prev_state_ids(
StateFilter.from_types([(EventTypes.ThirdPartyInvite, None)])
)
prev_state_ids = await context.get_prev_state_ids(StateFilter.from_types([key]))
original_invite_id = prev_state_ids.get(key)
if original_invite_id:
original_invite = await self.store.get_event(
@@ -1663,7 +1634,7 @@ class FederationHandler:
token = signed["token"]
prev_state_ids = await context.get_prev_state_ids(
StateFilter.from_types([(EventTypes.ThirdPartyInvite, None)])
StateFilter.from_types([(EventTypes.ThirdPartyInvite, token)])
)
invite_event_id = prev_state_ids.get((EventTypes.ThirdPartyInvite, token))
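The narrowed filter fetches only the single third-party-invite state entry being validated rather than a wildcard over every token in the room; schematically (names as in the surrounding code, `token` being the signed invite token):

```python
broad = StateFilter.from_types([(EventTypes.ThirdPartyInvite, None)])     # any state key
precise = StateFilter.from_types([(EventTypes.ThirdPartyInvite, token)])  # just this token
```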


@@ -601,18 +601,6 @@ class FederationEventHandler:
room_id, [(event, context)]
)
# If we're joining the room again, check if there is new marker
# state indicating that there is new history imported somewhere in
# the DAG. Multiple markers can exist in the current state with
# unique state_keys.
#
# Do this after the state from the remote join was persisted (via
# `persist_events_and_notify`). Otherwise we can run into a
# situation where the create event doesn't exist yet in the
# `current_state_events`
for e in state:
await self._handle_marker_event(origin, e)
return stream_id_after_persist
async def update_state_for_partial_state_event(
@@ -915,13 +903,6 @@ class FederationEventHandler:
)
)
# We construct the event lists in source order from `/backfill` response because
# it's a) easiest, but also b) the order in which we process things matters for
# MSC2716 historical batches because many historical events are all at the same
# `depth` and we rely on the tenuous sort that the other server gave us and hope
# they're doing their best. The brittle nature of this ordering for historical
# messages over federation is one of the reasons why we don't want to continue
# on MSC2716 until we have online topological ordering.
events_with_failed_pull_attempts, fresh_events = partition(
new_events, lambda e: e.event_id in event_ids_with_failed_pull_attempts
)
@@ -1460,8 +1441,6 @@ class FederationEventHandler:
await self._run_push_actions_and_persist_event(event, context, backfilled)
await self._handle_marker_event(origin, event)
if backfilled or context.rejected:
return
@@ -1559,94 +1538,6 @@ class FederationEventHandler:
except Exception:
logger.exception("Failed to resync device for %s", sender)
@trace
async def _handle_marker_event(self, origin: str, marker_event: EventBase) -> None:
"""Handles backfilling the insertion event when we receive a marker
event that points to one.
Args:
origin: Origin of the event. Will be called to get the insertion event
marker_event: The event to process
"""
if marker_event.type != EventTypes.MSC2716_MARKER:
# Not a marker event
return
if marker_event.rejected_reason is not None:
# Rejected event
return
# Skip processing a marker event if the room version doesn't
# support it or the event is not from the room creator.
room_version = await self._store.get_room_version(marker_event.room_id)
create_event = await self._store.get_create_event_for_room(marker_event.room_id)
if not room_version.msc2175_implicit_room_creator:
room_creator = create_event.content.get(EventContentFields.ROOM_CREATOR)
else:
room_creator = create_event.sender
if not room_version.msc2716_historical and (
not self._config.experimental.msc2716_enabled
or marker_event.sender != room_creator
):
return
logger.debug("_handle_marker_event: received %s", marker_event)
insertion_event_id = marker_event.content.get(
EventContentFields.MSC2716_INSERTION_EVENT_REFERENCE
)
if insertion_event_id is None:
# Nothing to retrieve then (invalid marker)
return
already_seen_insertion_event = await self._store.have_seen_event(
marker_event.room_id, insertion_event_id
)
if already_seen_insertion_event:
# No need to process a marker again if we have already seen the
# insertion event that it was pointing to
return
logger.debug(
"_handle_marker_event: backfilling insertion event %s", insertion_event_id
)
await self._get_events_and_persist(
origin,
marker_event.room_id,
[insertion_event_id],
)
insertion_event = await self._store.get_event(
insertion_event_id, allow_none=True
)
if insertion_event is None:
logger.warning(
"_handle_marker_event: server %s didn't return insertion event %s for marker %s",
origin,
insertion_event_id,
marker_event.event_id,
)
return
logger.debug(
"_handle_marker_event: succesfully backfilled insertion event %s from marker event %s",
insertion_event,
marker_event,
)
await self._store.insert_insertion_extremity(
insertion_event_id, marker_event.room_id
)
logger.debug(
"_handle_marker_event: insertion extremity added for %s from marker event %s",
insertion_event,
marker_event,
)
async def backfill_event_id(
self, destinations: List[str], room_id: str, event_id: str
) -> PulledPduInfo:
