Compare commits


290 Commits

Author SHA1 Message Date
Will Hunt
7ea2cd1ce9 Set PYTHONMALLOC to malloc in start.py 2021-05-13 10:55:57 +01:00
Brendan Abolivier
451f25172a Incorporate changes from review 2021-05-12 17:10:42 +01:00
Brendan Abolivier
91143bb24e Refer and link to the upgrade notes rather than to the file name 2021-05-12 17:04:00 +01:00
Brendan Abolivier
47806b0869 1.34.0rc1 2021-05-12 16:59:46 +01:00
Brendan Abolivier
a683028d81 Correctly ratelimit invites when creating a room (#9968)
* Correctly ratelimit invites when creating a room

Also allow ratelimiting for more than one action at a time.
2021-05-12 16:05:28 +02:00
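
A hedged sketch of the "more than one action at a time" idea as a token bucket that can be charged for several actions in one call; names and numbers here are illustrative, not Synapse's actual `Ratelimiter` API:

```
import time

class Ratelimiter:
    """Minimal token-bucket sketch; illustrative only."""

    def __init__(self, rate_hz: float, burst_count: int):
        self.rate_hz = rate_hz          # tokens refilled per second
        self.burst_count = burst_count  # bucket capacity
        self._tokens = float(burst_count)
        self._last = time.monotonic()

    def can_do_action(self, n_actions: int = 1) -> bool:
        now = time.monotonic()
        # refill the bucket for the time elapsed since the last check
        self._tokens = min(
            self.burst_count, self._tokens + (now - self._last) * self.rate_hz
        )
        self._last = now
        if self._tokens >= n_actions:   # charge several actions at once
            self._tokens -= n_actions
            return True
        return False
```
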
Richard van der Hoff
7562d887e1 Change the format of access tokens away from macaroons (#5588) 2021-05-12 15:04:51 +01:00
Erik Johnston
affaffb0ab Run cache_joined_hosts_for_event in background (#9951) 2021-05-12 13:17:11 +01:00
Richard van der Hoff
63fb220e5f Tests for to-device messages (#9965) 2021-05-11 18:01:11 +01:00
Patrick Cloke
27c375f812 Sort child events according to MSC1772 for the spaces summary API. (#9954)
This should help ensure that equivalent results are achieved between
homeservers querying for the summary of a space.

This implements modified MSC1772 rules, according to MSC2946.
The difference is that the origin_server_ts of the m.room.create event
is not used as a tie-breaker since this might not be known if the
homeserver is not part of the room.
2021-05-11 12:57:39 -04:00
Patrick Cloke
f4833e0c06 Support fetching the spaces summary via GET over federation. (#9947)
Per changes in MSC2946, the C-S and S-S APIs for spaces summary
should use GET requests.

Until this is stable, the POST endpoints still exist.

This does not switch federation requests to use the GET version yet
since it is newly added and already deployed servers might not support
it. When switching to the stable endpoint we should switch to GET
requests.
2021-05-11 12:21:43 -04:00
Patrick Cloke
28c6841102 Send the m.room.create stripped event with invites (support MSC1772). (#9966)
MSC1772 specifies the m.room.create event should be sent as part
of the invite_state. This was done optionally behind an experimental
flag, but is now done by default due to MSC1772 being approved.
2021-05-11 10:58:58 -04:00
Brendan Abolivier
652a6b094d Merge branch 'master' into develop 2021-05-11 14:15:30 +01:00
Brendan Abolivier
d1473f7362 Use link to advisory rather than to the CVE repo 2021-05-11 14:09:46 +01:00
Aaron Raimist
dc6366a9bd Add config option to hide device names over federation (#9945)
Now that cross signing exists there is much less of a need for other people to look at devices and verify them individually. This PR adds a config option to allow you to prevent device display names from being shared with other servers.

Signed-off-by: Aaron Raimist <aaron@raim.ist>
2021-05-11 14:03:23 +01:00
Brendan Abolivier
86fb71431c 1.33.2 2021-05-11 14:01:32 +01:00
Richard van der Hoff
b378d98c8f Add debug logging for issue #9533 (#9959)
Hopefully this will help us track down where to-device messages are getting
lost/delayed.
2021-05-11 11:04:03 +01:00
Richard van der Hoff
7967b36efe Fix m.room_key_request to-device messages (#9961)
fixes #9960
2021-05-11 11:02:56 +01:00
Richard van der Hoff
03318a766c Merge pull request from GHSA-x345-32rc-8h85
* tests for push rule pattern matching

* tests for acl pattern matching

* factor out common `re.escape`

* Factor out common re.compile

* Factor out common anchoring code

* add word_boundary support to `glob_to_regex`

* Use `glob_to_regex` in push rule evaluator

NB that this drops support for character classes. I don't think anyone ever
used them.

* Improve efficiency of globs with multiple wildcards

The idea here is that we compress multiple `*` globs into a single `.*`. We
also need to consider `?`, since `*?*` is as hard to implement efficiently as
`**`.
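(A sketch of this idea appears after this entry.)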

* add assertion on regex pattern

* Fix mypy

* Simplify glob_to_regex

* Inline the glob_to_regex helper function

Signed-off-by: Dan Callahan <danc@element.io>

* Moar comments

Signed-off-by: Dan Callahan <danc@element.io>

Co-authored-by: Dan Callahan <danc@element.io>
2021-05-11 11:47:23 +02:00
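
The wildcard-compression idea from the `glob_to_regex` work above can be sketched as follows; this is an illustrative reimplementation, not the exact code from the PR. A run of wildcards containing q `?`s and at least one `*` matches "q or more characters" (`.{q,}`), while a run of q `?`s alone matches exactly q characters (`.{q}`), so patterns like `**` or `*?*` never expand into slow regexes such as `.*.*`:

```
import re

def glob_to_regex_pattern(glob: str) -> str:
    chunks = []
    for match in re.finditer(r"[*?]+|[^*?]+", glob):
        chunk = match.group()
        if chunk[0] in "*?":
            q = chunk.count("?")
            if "*" in chunk:
                # at least one '*' in the run => "q or more characters"
                chunks.append(".*" if q == 0 else f".{{{q},}}")
            else:
                # only '?'s => exactly q characters
                chunks.append(f".{{{q}}}")
        else:
            chunks.append(re.escape(chunk))
    return "".join(chunks)

assert glob_to_regex_pattern("*?*") == ".{1,}"   # not the slow ".*..*"
assert glob_to_regex_pattern("a*b") == "a.*b"
```
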
Erik Johnston
2b2985b5cf Improve performance of backfilling in large rooms. (#9935)
We were pulling the full auth chain for the room out of the DB each time
we backfilled, which can be *huge* for large rooms and is totally
unnecessary.
2021-05-10 13:29:02 +01:00
Richard van der Hoff
51065c44bb Fix port_db on empty db (#9930)
... and test it.
2021-05-10 13:02:55 +01:00
Erik Johnston
6c84778549 Always cache 'event_to_prev_state_group' (#9950)
Fixes regression in send PDU times introduced in #9905.
2021-05-07 14:54:09 +01:00
Richard van der Hoff
765473567c Fix make_full_schema to create the db with the right options and user (#9931) 2021-05-07 14:01:57 +01:00
Erik Johnston
b65ecaff9b Merge remote-tracking branch 'origin/release-v1.33.2' into develop 2021-05-07 13:27:19 +01:00
Erik Johnston
4df26abf28 Unpin attrs dep after new version has been released (#9946)
c.f. #9936
2021-05-07 12:57:21 +01:00
Richard van der Hoff
25f43faa70 Reorganise the database schema directories (#9932)
The hope here is that by moving all the schema files into synapse/storage/schema, it gets a bit easier for newcomers to navigate.

It certainly got easier for me to write a helpful README. There's more to do on that front, but I'll follow up with other PRs for that.
2021-05-07 10:22:05 +01:00
Erik Johnston
8771b1337d Export jemalloc stats to prometheus when used (#9882) 2021-05-06 15:54:07 +01:00
Erik Johnston
eba431c539 Revert "Leave out optional keys from /sync (#9919)" (#9940)
This reverts commit e9eb3549d3.
2021-05-06 15:06:35 +01:00
Erik Johnston
a8803e2b6e Merge remote-tracking branch 'origin/master' into develop 2021-05-06 14:36:59 +01:00
Erik Johnston
ac88aca7f7 1.33.1 2021-05-06 14:06:38 +01:00
Erik Johnston
24f07a83e6 Pin attrs to <21.1.0 (#9937)
Fixes #9936
2021-05-06 14:06:06 +01:00
Patrick Cloke
70f0ffd2fc Follow-up to #9915 to correct the identifier for room types. 2021-05-05 16:31:16 -04:00
Patrick Cloke
d783880083 Include the time of the create event in Spaces Summary. (#9928)
This is an update based on changes to MSC2946. The origin_server_ts
of the m.room.create event is copied into the creation_ts field for each
room returned from the spaces summary.
2021-05-05 13:33:05 -04:00
Erik Johnston
37623e3382 Increase perf of handling presence when joining large rooms. (#9916) 2021-05-05 17:27:05 +01:00
Patrick Cloke
e2a443550e Support stable MSC1772 spaces identifiers. (#9915)
Support both the unstable and stable identifiers. A future release
will disable the unstable identifiers.
2021-05-05 11:56:51 -04:00
Erik Johnston
ef889c98a6 Optionally track memory usage of each LruCache (#9881)
This will double count slightly in the presence of interned strings. It's off by default as it can consume a lot of resources.
2021-05-05 16:54:36 +01:00
Erik Johnston
1fb9a2d0bf Limit how often GC happens by time. (#9902)
Synapse can be quite memory intensive, and unless care is taken to tune
the GC thresholds it can end up thrashing, causing noticeable performance
problems for large servers. We fix this by limiting how often we GC a
given generation, regardless of current counts/thresholds.

This does not help with the reverse problem where the thresholds are set
too high, but that should only happen in situations where they've been
manually configured.

Adds a `gc_min_seconds_between` config option to override the defaults.

Fixes #9890.
2021-05-05 16:53:45 +01:00
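
A minimal sketch of the time-gating idea, assuming a per-generation list of minimum intervals; the real `gc_min_seconds_between` option and Synapse's bookkeeping may differ:

```
import gc
import time

MIN_SECONDS_BETWEEN = (1.0, 10.0, 30.0)  # assumed per-generation intervals
_last_gc = [0.0, 0.0, 0.0]

def maybe_collect(generation: int) -> None:
    """Collect `generation` only if enough time has passed since we last did."""
    now = time.monotonic()
    if now - _last_gc[generation] >= MIN_SECONDS_BETWEEN[generation]:
        gc.collect(generation)
        # collecting generation N also collects the younger generations
        for g in range(generation + 1):
            _last_gc[g] = now
```
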
Erik Johnston
de8f0a03a3 Don't set the external cache if its been done recently (#9905) 2021-05-05 16:53:22 +01:00
Erik Johnston
d0aee697ac Use get_current_users_in_room from store and not StateHandler (#9910) 2021-05-05 16:49:34 +01:00
Christopher May-Townsend
d5305000f1 Docker healthcheck timings - add startup delay and changed interval (#9913)
* Add a 5s healthcheck startup delay and reduce the check interval to 15s,
to cut the waiting time for Docker-aware edge routers bringing an
instance online
2021-05-05 17:33:04 +02:00
DeepBlueV7.X
e9eb3549d3 Leave out optional keys from /sync (#9919)
This leaves out all optional keys from /sync. This should be fine for all clients already tested against Conduit, but it may break some clients; as such, we should check that at least most of them don't break horribly, and maybe back out some of the individual changes. (We can probably always leave out groups, for example, while the others may cause more issues.)

Signed-off-by: Nicolas Werner <nicolas.werner@hotmail.de>
2021-05-05 14:37:56 +01:00
Brendan Abolivier
a61b13c0a1 Merge branch 'master' into develop 2021-05-05 14:25:49 +01:00
Brendan Abolivier
0644ac0989 1.33.0 2021-05-05 14:15:54 +01:00
Erik Johnston
e3bc4617fc Time external cache response time (#9904) 2021-05-04 15:14:22 +01:00
Andrew Morgan
b85821aca2 Add port parameter to the sample config for psycopg2 args (#9911)
Adds the `port` option with the default value to the sample config file.
2021-05-04 13:28:59 +01:00
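
For illustration, the database `args` in the sample config map more or less directly onto keyword arguments to `psycopg2.connect()`; a hedged Python equivalent, with placeholder credentials and 5432 as the conventional PostgreSQL default:

```
import psycopg2  # assumes the psycopg2 package is installed

conn = psycopg2.connect(
    user="synapse_user",        # placeholder values, as in the sample config
    password="secretpassword",
    dbname="synapse",
    host="localhost",
    port=5432,                  # the newly documented option and its default
)
```
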
Dan Callahan
56c4b47df3 Build Debian packages for Ubuntu 21.04 Hirsute (#9909)
Signed-off-by: Dan Callahan <danc@element.io>
2021-04-30 15:36:05 +01:00
Andrew Morgan
4d624f467a Merge tag 'v1.33.0rc2' into develop
Synapse 1.33.0rc2 (2021-04-29)
==============================

Bugfixes
--------

- Fix tight loop when handling presence replication when using workers. Introduced in v1.33.0rc1. ([\#9900](https://github.com/matrix-org/synapse/issues/9900))
2021-04-29 14:35:14 +01:00
Andrew Morgan
d11f2dfee5 typo in changelog 2021-04-29 14:31:14 +01:00
Patrick Cloke
bb4b11846f Add missing type hints to handlers and fix a Spam Checker type hint. (#9896)
The user_may_create_room_alias method on spam checkers
declared the room_alias parameter as a str when in reality it is
passed a RoomAlias object.
2021-04-29 07:17:28 -04:00
Andrew Morgan
e9444cc74d 1.33.0rc2 2021-04-29 11:45:37 +01:00
ThibF
0085dc5abc Delete room endpoint (#9889)
Support deleting a room through a DELETE request, and mark the
previous endpoint as deprecated in the documentation.

Signed-off-by: Thibault Ferrante <thibault.ferrante@pm.me>
2021-04-29 10:31:45 +01:00
Erik Johnston
802560211a Merge remote-tracking branch 'origin/release-v1.33.0' into develop 2021-04-28 14:43:10 +01:00
Erik Johnston
e4ab8676b4 Fix tight loop handling presence replication. (#9900)
Only affects workers. Introduced in #9819.

Fixes #9899.
2021-04-28 14:42:50 +01:00
Patrick Cloke
10a08ab88a Use the parent's logging context name for runWithConnection. (#9895)
This fixes a regression where the logging context for runWithConnection
was reported as runWithConnection instead of the connection name,
e.g. "POST-XYZ".
2021-04-28 07:44:52 -04:00
Andrew Morgan
fa6679e794 Merge tag 'v1.33.0rc1' into develop
Synapse 1.33.0rc1 (2021-04-28)
==============================

Features
--------

- Update experimental support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083): restricting room access via group membership. ([\#9800](https://github.com/matrix-org/synapse/issues/9800), [\#9814](https://github.com/matrix-org/synapse/issues/9814))
- Add experimental support for handling presence on a worker. ([\#9819](https://github.com/matrix-org/synapse/issues/9819), [\#9820](https://github.com/matrix-org/synapse/issues/9820), [\#9828](https://github.com/matrix-org/synapse/issues/9828), [\#9850](https://github.com/matrix-org/synapse/issues/9850))
- Return a new template when a user attempts to renew their account multiple times with the same token, stating that their account is set to expire. This replaces the invalid token template that would previously be shown in this case. This change concerns the optional account validity feature. ([\#9832](https://github.com/matrix-org/synapse/issues/9832))

Bugfixes
--------

- Fixes the OIDC SSO flow when using a `public_baseurl` value including a non-root URL path. ([\#9726](https://github.com/matrix-org/synapse/issues/9726))
- Fix thumbnail generation for some sites with non-standard content types. Contributed by @rkfg. ([\#9788](https://github.com/matrix-org/synapse/issues/9788))
- Add some sanity checks to identity server passed to 3PID bind/unbind endpoints. ([\#9802](https://github.com/matrix-org/synapse/issues/9802))
- Limit the size of HTTP responses read over federation. ([\#9833](https://github.com/matrix-org/synapse/issues/9833))
- Fix a bug which could cause Synapse to get stuck in a loop of resyncing device lists. ([\#9867](https://github.com/matrix-org/synapse/issues/9867))
- Fix a long-standing bug where errors from federation did not propagate to the client. ([\#9868](https://github.com/matrix-org/synapse/issues/9868))

Improved Documentation
----------------------

- Add a note to the docker docs mentioning that we mirror upstream's supported Docker platforms. ([\#9801](https://github.com/matrix-org/synapse/issues/9801))

Internal Changes
----------------

- Add a dockerfile for running Synapse in worker-mode under Complement. ([\#9162](https://github.com/matrix-org/synapse/issues/9162))
- Apply `pyupgrade` across the codebase. ([\#9786](https://github.com/matrix-org/synapse/issues/9786))
- Move some replication processing out of `generic_worker`. ([\#9796](https://github.com/matrix-org/synapse/issues/9796))
- Replace `HomeServer.get_config()` with inline references. ([\#9815](https://github.com/matrix-org/synapse/issues/9815))
- Rename some handlers and config modules to not duplicate the top-level module. ([\#9816](https://github.com/matrix-org/synapse/issues/9816))
- Fix a long-standing bug which caused `max_upload_size` to not be correctly enforced. ([\#9817](https://github.com/matrix-org/synapse/issues/9817))
- Reduce CPU usage of the user directory by reusing existing calculated room membership. ([\#9821](https://github.com/matrix-org/synapse/issues/9821))
- Small speed up for joining large remote rooms. ([\#9825](https://github.com/matrix-org/synapse/issues/9825))
- Introduce flake8-bugbear to the test suite and fix some of its lint violations. ([\#9838](https://github.com/matrix-org/synapse/issues/9838))
- Only store the raw data in the in-memory caches, rather than objects that include references to e.g. the data stores. ([\#9845](https://github.com/matrix-org/synapse/issues/9845))
- Limit length of accepted email addresses. ([\#9855](https://github.com/matrix-org/synapse/issues/9855))
- Remove redundant `synapse.types.Collection` type definition. ([\#9856](https://github.com/matrix-org/synapse/issues/9856))
- Handle recently added rate limits correctly when using `--no-rate-limit` with the demo scripts. ([\#9858](https://github.com/matrix-org/synapse/issues/9858))
- Disable invite rate-limiting by default when running the unit tests. ([\#9871](https://github.com/matrix-org/synapse/issues/9871))
- Pass a reactor into `SynapseSite` to make testing easier. ([\#9874](https://github.com/matrix-org/synapse/issues/9874))
- Make `DomainSpecificString` an `attrs` class. ([\#9875](https://github.com/matrix-org/synapse/issues/9875))
- Add type hints to `synapse.api.auth` and `synapse.api.auth_blocking` modules. ([\#9876](https://github.com/matrix-org/synapse/issues/9876))
- Remove redundant `_PushHTTPChannel` test class. ([\#9878](https://github.com/matrix-org/synapse/issues/9878))
- Remove backwards-compatibility code for Python versions < 3.6. ([\#9879](https://github.com/matrix-org/synapse/issues/9879))
- Small performance improvement around handling new local presence updates. ([\#9887](https://github.com/matrix-org/synapse/issues/9887))
2021-04-28 12:12:29 +01:00
Andrew Morgan
8ba086980d Reword account validity template change to sound less like a bugfix 2021-04-28 12:07:49 +01:00
Erik Johnston
391bfe9a7b Reduce memory footprint of caches (#9886) 2021-04-28 11:59:28 +01:00
Andrew Morgan
787de3190f 1.33.0rc1 2021-04-28 11:43:33 +01:00
Andrew Morgan
4e0fd35bc9 Revert "Experimental Federation Speedup (#9702)"
This reverts commit 05e8c70c05.
2021-04-28 11:38:33 +01:00
Erik Johnston
dd2d32dcdb Add type hints to presence handler (#9885) 2021-04-28 11:07:47 +01:00
Andrew Morgan
fe604a022a Remove various bits of compatibility code for Python <3.6 (#9879)
I went through and removed a bunch of cruft that was lying around for compatibility with old Python versions. This PR will now also prevent Synapse from starting unless you're running Python 3.6+.
2021-04-27 13:13:07 +01:00
Patrick Cloke
1350b053da Pass errors back to the client when trying multiple federation destinations. (#9868)
This ensures that something like an auth error (403) will be
returned to the requester instead of attempting to try more
servers, which will likely result in the same error, and then
passing back a generic 400 error.
2021-04-27 07:30:34 -04:00
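
The behaviour described above can be sketched as a destination loop that treats 4xx responses as terminal; `HttpError` here is a hypothetical stand-in, not Synapse's actual exception type:

```
class HttpError(Exception):
    def __init__(self, code: int):
        super().__init__(f"HTTP {code}")
        self.code = code

async def try_destinations(destinations, send):
    last_exc = None
    for dest in destinations:
        try:
            return await send(dest)
        except HttpError as e:
            if 400 <= e.code < 500:
                raise               # e.g. a 403 auth error: surface it as-is
            last_exc = e            # 5xx / timeout: the next server may work
    raise last_exc or RuntimeError("no destinations to try")
```
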
Erik Johnston
0ffa5fb935 Use current state table for presence.get_interested_remotes (#9887)
This should be a lot quicker than asking the state handler.
2021-04-27 10:09:41 +01:00
Richard van der Hoff
3ff2251754 Improved validation for received requests (#9817)
* Simplify `start_listening` callpath

* Correctly check the size of uploaded files
2021-04-23 19:20:44 +01:00
Richard van der Hoff
84936e2264 Kill off _PushHTTPChannel. (#9878)
First of all, a fixup to `FakeChannel` which is needed to make it work with the default HTTP channel implementation.

Secondly, it looks like we no longer need `_PushHTTPChannel`, because as of #8013, the producer that gets attached to the `HTTPChannel` is now an `IPushProducer`. This is good, because it means we can remove a whole load of test-specific boilerplate which causes variation between tests and production.
2021-04-23 18:40:57 +01:00
Andrew Morgan
695b73c861 Allow OIDC cookies to work on non-root public baseurls (#9726)
Applied a (slightly modified) patch from https://github.com/matrix-org/synapse/issues/9574.

As far as I understand this would allow the cookie set during the OIDC flow to work on deployments using public baseurls that do not sit at the URL path root.
2021-04-23 18:22:47 +01:00
Richard van der Hoff
59d24c5bef pass a reactor into SynapseSite (#9874) 2021-04-23 17:06:47 +01:00
Patrick Cloke
e83627926f Add type hints to auth and auth_blocking. (#9876) 2021-04-23 12:02:16 -04:00
Erik Johnston
a15c003e5b Make DomainSpecificString an attrs class (#9875) 2021-04-23 15:46:29 +01:00
Andrew Morgan
ceaa76970f Remove room and user invite ratelimits in default unit test config (#9871) 2021-04-23 13:37:48 +01:00
Erik Johnston
9d25a0ae65 Split presence out of master (#9820) 2021-04-23 12:21:55 +01:00
Patrick Cloke
d924827da1 Check for space membership during a remote join of a restricted room (#9814)
When receiving a /send_join request for a room with join rules set to 'restricted',
check if the user is a member of the spaces defined in the 'allow' key of the join rules.

This only applies to an experimental room version, as defined in MSC3083.
2021-04-23 07:05:51 -04:00
Erik Johnston
3853a7edfc Only store data in caches, not "smart" objects (#9845) 2021-04-23 11:47:07 +01:00
Richard van der Hoff
51a20914a8 Limit the size of HTTP responses read over federation. (#9833) 2021-04-23 11:08:41 +01:00
manuroe
c1ddbbde4f Handle all new rate limits in demo scripts (#9858) 2021-04-22 17:49:42 +01:00
Erik Johnston
177dae2704 Limit length of accepted email addresses (#9855) 2021-04-22 17:49:11 +01:00
Richard van der Hoff
69018acbd2 Clear the resync bit after resyncing device lists (#9867)
Fixes #9866.
2021-04-22 16:53:24 +01:00
Richard van der Hoff
294c675033 Remove synapse.types.Collection (#9856)
This is no longer required, since we have dropped support for Python 3.5.
2021-04-22 16:43:50 +01:00
Andrew Morgan
3186324260 Merge branch 'master' into develop 2021-04-22 11:23:56 +01:00
Andrew Morgan
0f2629ebc6 Merge tag 'v1.32.2'
Synapse 1.32.2 (2021-04-22)
===========================

This release includes a fix for a regression introduced in 1.32.0.

Bugfixes
--------

- Fix a regression in Synapse 1.32.0 and 1.32.1 which caused `LoggingContext` errors in plugins. ([\#9857](https://github.com/matrix-org/synapse/issues/9857))
2021-04-22 11:23:34 +01:00
Andrew Morgan
dac4445934 A regression can't be introduced twice 2021-04-22 11:09:31 +01:00
Andrew Morgan
79e6d9e4b1 Note regression was in 1.32.0 and 1.32.1 2021-04-22 11:04:51 +01:00
Andrew Morgan
ca380881b1 Update dates in changelogs 2021-04-21 18:47:31 +01:00
Andrew Morgan
55159c48e3 1.32.2 2021-04-21 18:45:39 +01:00
Andrew Morgan
ca6ecb8d67 Merge branch 'release-v1.32.1' of github.com:matrix-org/synapse into release-v1.32.2 2021-04-21 18:39:45 +01:00
Andrew Morgan
8798f2291c Merge branch 'master' of github.com:matrix-org/synapse into develop 2021-04-21 18:21:56 +01:00
Andrew Morgan
046175daba Merge branch 'release-v1.32.1' of github.com:matrix-org/synapse 2021-04-21 18:21:14 +01:00
Andrew Morgan
0c23aa393c Note LoggingContext signature change incompatibility in 1.32.0 (#9859)
1.32.0 also introduced an incompatibility with Synapse modules that make use of `synapse.logging.context.LoggingContext`, such as [synapse-s3-storage-provider](https://github.com/matrix-org/synapse-s3-storage-provider).

This PR adds a note to the 1.32.0 changelog and upgrade notes about it.
2021-04-21 18:16:58 +01:00
Richard van der Hoff
d9bd62f9d1 Make LoggingContext's name optional (#9857)
Fixes https://github.com/matrix-org/synapse-s3-storage-provider/issues/55
2021-04-21 16:39:34 +01:00
Andrew Morgan
4b2217ace2 Merge branch 'master' into develop 2021-04-21 14:55:06 +01:00
Andrew Morgan
a0972085ed Merge tag 'v1.32.1'
Synapse 1.32.1 (2021-04-21)
===========================

This release fixes [a regression](https://github.com/matrix-org/synapse/issues/9853) in Synapse 1.32.0 that caused connected Prometheus instances to become unstable. If you ran Synapse 1.32.0 with Prometheus metrics, first upgrade to Synapse 1.32.1 and follow [these instructions](https://github.com/matrix-org/synapse/pull/9854#issuecomment-823472183) to clean up any excess writeahead logs.

Bugfixes
--------

- Fix a regression in Synapse 1.32.0 which caused Synapse to report large numbers of Prometheus time series, potentially overwhelming Prometheus instances. ([\#9854](https://github.com/matrix-org/synapse/issues/9854))
2021-04-21 14:54:03 +01:00
Andrew Morgan
bdb4c20dc1 Clarify 1.32.0/1 changelog and upgrade notes 2021-04-21 14:44:04 +01:00
Andrew Morgan
acb8c81041 Add regression notes to CHANGES.md; fix link in 1.32.0 changelog 2021-04-21 14:24:16 +01:00
Andrew Morgan
98a1b84631 Add link to fixing prometheus to 1.32.0 upgrade notes; 1.32.1 has a fix 2021-04-21 14:19:11 +01:00
Andrew Morgan
026a66f2b3 Fix typo in link to regression in 1.32.0 upgrade notes 2021-04-21 14:04:44 +01:00
Andrew Morgan
a745531c10 1.32.1 2021-04-21 14:01:12 +01:00
Andrew Morgan
30c94862b4 Mention Prometheus metrics regression in v1.32.0 2021-04-21 14:00:31 +01:00
Richard van der Hoff
5d281c10dd Stop BackgroundProcessLoggingContext making new prometheus timeseries (#9854)
This undoes part of b076bc276e.
2021-04-21 10:03:31 +01:00
Patrick Cloke
683d6f75af Rename handler and config modules which end in handler/config. (#9816) 2021-04-20 14:55:20 -04:00
Andrew Morgan
eccacd72cb Merge branch 'master' into develop 2021-04-20 17:14:15 +01:00
Andrew Morgan
b8c5f6fddb Mention Prometheus metrics regression in v1.32.0 2021-04-20 17:11:36 +01:00
Andrew Morgan
272402c4d7 Merge branch 'master' into develop 2021-04-20 16:07:53 +01:00
Andrew Morgan
05fa06834d Further tweaking on gpg signing key notice 2021-04-20 15:52:06 +01:00
Andrew Morgan
913f790bb2 Add note about expired Debian gpg signing keys to CHANGES.md 2021-04-20 15:33:56 +01:00
Andrew Morgan
6982db9651 Merge branch 'master' into develop 2021-04-20 14:55:16 +01:00
Andrew Morgan
438a8594cb Update v1.32.0 changelog. It's m.login.application_service, not plural 2021-04-20 14:47:17 +01:00
Andrew Morgan
e031c7e0cc 1.32.0 2021-04-20 14:31:27 +01:00
Andrew Morgan
0a88ec0a87 Add Application Service registration type requirement + py35, pg95 deprecation notices to v1.32.0 upgrade notes (#9849)
Fixes https://github.com/matrix-org/synapse/issues/9846.

Adds important removal information from the top of https://github.com/matrix-org/synapse/releases/tag/v1.32.0rc1 into UPGRADE.rst.
2021-04-20 14:19:35 +01:00
Patrick Cloke
b076bc276e Always use the name as the log ID. (#9829)
As far as I can tell our logging contexts are meant to log the request ID, or sometimes the request ID followed by a suffix (this is generally stored in the name field of LoggingContext). There's also code to log the name@memory location, but I'm not sure this is ever used.

This simplifies the code paths to require every logging context to have a name and use that in logging. For sub-contexts (created via nested_logging_contexts, defer_to_threadpool, Measure) we use the current context's str (which becomes their name or the string "sentinel") and then potentially modify that (e.g. add a suffix).
2021-04-20 14:19:00 +01:00
Erik Johnston
de0d088adc Add presence federation stream (#9819) 2021-04-20 14:11:24 +01:00
Erik Johnston
db70435de7 Fix bug where we sent remote presence states to remote servers (#9850) 2021-04-20 13:37:54 +01:00
Jonathan de Jong
495b214f4f Fix (final) Bugbear violations (#9838) 2021-04-20 11:50:49 +01:00
Andrew Morgan
71f0623de9 Port "Allow users to click account renewal links multiple times without hitting an 'Invalid Token' page #74" from synapse-dinsic (#9832)
This attempts to be a direct port of https://github.com/matrix-org/synapse-dinsic/pull/74 to mainline. There was some fiddling required to deal with the changes that have been made to mainline since (mainly dealing with the split of `RegistrationWorkerStore` from `RegistrationStore`, and the changes made to `self.make_request` in test code).
2021-04-19 19:16:34 +01:00
Denis Kasak
e694a598f8 Sanity check identity server passed to bind/unbind. (#9802)
Signed-off-by: Denis Kasak <dkasak@termina.org.uk>
2021-04-19 17:21:46 +01:00
Erik Johnston
2b7dd21655 Don't send normal presence updates over federation replication stream (#9828) 2021-04-19 10:50:49 +01:00
Andrew Morgan
c571736c6c User directory: use calculated room membership state instead (#9821)
Fixes: #9797.

Should help reduce CPU usage on the user directory, especially when memberships change in rooms with lots of state history.
2021-04-16 18:17:18 +01:00
Erik Johnston
601b893352 Small speed up joining large remote rooms (#9825)
There are a couple of points in `persist_events` where we are doing a
query per event in series, which we can replace.
2021-04-16 14:44:55 +01:00
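
The "query per event in series" pattern and its batched replacement can be sketched with sqlite3; table and column names are assumptions for illustration:

```
import sqlite3

def fetch_events_batched(conn: sqlite3.Connection, event_ids: list):
    # one round-trip for the whole batch, instead of len(event_ids) queries
    placeholders = ",".join("?" * len(event_ids))
    sql = f"SELECT event_id, json FROM event_json WHERE event_id IN ({placeholders})"
    return conn.execute(sql, event_ids).fetchall()
```
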
Richard van der Hoff
5a153772c1 remove HomeServer.get_config (#9815)
Every single time I want to access the config object, I have to remember
whether or not we use `get_config`. Let's just get rid of it.
2021-04-14 19:09:08 +01:00
Patrick Cloke
936e69825a Separate creating an event context from persisting it in the federation handler (#9800)
This refactoring allows adding logic that uses the event context
before persisting it.
2021-04-14 12:35:28 -04:00
Patrick Cloke
e8816c6ace Revert "Check for space membership during a remote join of a restricted room. (#9763)"
This reverts commit cc51aaaa7a.

The PR was prematurely merged and not yet approved.
2021-04-14 12:33:37 -04:00
Patrick Cloke
cc51aaaa7a Check for space membership during a remote join of a restricted room. (#9763)
When receiving a /send_join request for a room with join rules set to 'restricted',
check if the user is a member of the spaces defined in the 'allow' key of the join
rules.

This only applies to an experimental room version, as defined in MSC3083.
2021-04-14 12:32:20 -04:00
Jonathan de Jong
05e8c70c05 Experimental Federation Speedup (#9702)
This basically speeds up federation by "squeezing" each individual dual database call (to destinations and destination_rooms), which previously happened for every event, into one call for an entire batch (100 max).

Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>
2021-04-14 17:19:02 +01:00
Erik Johnston
00a6db9676 Move some replication processing out of generic_worker (#9796)
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2021-04-14 17:06:06 +01:00
rkfg
c9a2b5d402 More robust handling of the Content-Type header for thumbnail generation (#9788)
Signed-off-by: Sergey Shpikin <rkfg@rkfg.me>
2021-04-14 16:30:59 +01:00
Jonathan de Jong
4b965c862d Remove redundant "coding: utf-8" lines (#9786)
Part of #9744

Removes all redundant `# -*- coding: utf-8 -*-` lines from files, as python 3 automatically reads source code as utf-8 now.

`Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>`
2021-04-14 15:34:27 +01:00
Andrew Morgan
7e460ec2a5 Add a dockerfile for running a set of Synapse worker processes (#9162)
This PR adds a Dockerfile and some supporting files to the `docker/` directory. The Dockerfile's intention is to spin up a container with:

* A Synapse main process.
* Any desired worker processes, defined by a `SYNAPSE_WORKERS` environment variable supplied at runtime.
* A Redis instance for worker communication.
* An nginx instance for routing traffic.
* A supervisord instance to start all worker processes and restart them if any go down.

Note that **this is not currently intended to be used in production**. If you'd like to use Synapse workers with Docker, instead make use of the official image, with one worker per container. The purpose of this dockerfile is currently to allow testing Synapse in worker mode with the [Complement](https://github.com/matrix-org/complement/) test suite.

`configure_workers_and_start.py` is where most of the magic happens in this PR. It reads from environment variables (documented in the file) and creates all necessary config files for the processes. It is the entrypoint of the Dockerfile, and thus is run any time the docker container is spun up, recreating all config files in case you want to use a different set of workers. One can specify which workers they'd like to use by setting the `SYNAPSE_WORKERS` environment variable (as a comma-separated list of arbitrary worker names) or by setting it to `*` for all worker processes. We will be using the latter in CI.

Huge thanks to @MatMaul for helping get this all working 🎉 This PR is paired with its equivalent on the Complement side: https://github.com/matrix-org/complement/pull/62.

Note, for the purpose of testing this PR before it's merged: You'll need to (re)build the base Synapse docker image for everything to work (`matrixdotorg/synapse:latest`). Then build the worker-based docker image on top (`matrixdotorg/synapse:workers`).
2021-04-14 13:54:49 +01:00
Andrew Morgan
f16c6cf59a Add note to docker docs explaining platform support (#9801)
Context is in https://github.com/matrix-org/synapse/issues/9764#issuecomment-818615894.

I struggled to find a more official link for this. The problem occurs when using WSL1 instead of WSL2, which some Windows platforms (at least Server 2019) still don't have. Docker have updated their documentation to paint a much happier picture now given WSL2's support.

The last sentence here can probably be removed once WSL1 is no longer around... though that will likely not be for a very long time.
2021-04-14 12:06:19 +01:00
Andrew Morgan
d9bd181a3f Update changelog for v1.32.0 2021-04-13 14:39:06 +01:00
Andrew Morgan
3efd98aa1c 1.32.0rc1 2021-04-13 14:23:43 +01:00
Erik Johnston
c1dbe84c3d Add release helper script (#9713)
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2021-04-13 11:51:10 +01:00
Dan Callahan
1d5f0e3529 Bump black configuration to target py36 (#9781)
Signed-off-by: Dan Callahan <danc@element.io>
2021-04-13 10:41:34 +01:00
Dirk Klimpel
1fc97ee876 Add an admin API to manage ratelimit for a specific user (#9648) 2021-04-13 10:26:37 +01:00
Dan Callahan
a7044e5c0f Drop Python 3.5 from Trove classifier metadata. (#9782)
* Drop Python 3.5 from Trove classifier metadata.

Signed-off-by: Dan Callahan <danc@element.io>
2021-04-12 16:00:28 +01:00
Dan Callahan
3efde8b69a Add option to skip unit tests when building debs (#9793)
Signed-off-by: Dan Callahan <danc@element.io>
2021-04-12 15:27:05 +01:00
Will Hunt
e300ef64b1 Require AppserviceRegistrationType (#9548)
This change ensures that the appservice registration behaviour follows the spec. We decided to do this for Dendrite, so it made sense to also make a PR for synapse to correct the behaviour.
2021-04-12 15:13:55 +01:00
Patrick Cloke
0b3112123d Use mock from the stdlib. (#9772) 2021-04-09 13:44:38 -04:00
Richard van der Hoff
f946450184 Fix duplicate logging of exceptions in transaction processing (#9780)
There's no point logging this twice.
2021-04-09 18:12:15 +01:00
Patrick Cloke
abc814dcbf Enable complement tests for MSC2946. (#9771)
By providing the additional build tag for `msc2946`.
2021-04-09 08:11:51 -04:00
Dan Callahan
0277b8f3e6 Proof of concept for GitHub Actions (#9661)
Signed-off-by: Dan Callahan <danc@element.io>
2021-04-09 10:54:30 +01:00
Dirk Klimpel
48a1f4db31 Remove old admin API GET /_synapse/admin/v1/users/<user_id> (#9401)
Related: #8334
Deprecated in: #9429 - Synapse 1.28.0 (2021-02-25)

`GET /_synapse/admin/v1/users/<user_id>` has no
- unit tests
- documentation

A v2 API has been available since Synapse 1.7.0 (#5925, December 2019).
The v1 API is misleading: it expects a `user_id` but returns a list of all users.

Signed-off-by: Dirk Klimpel dirk@klimpel.org
2021-04-09 09:44:40 +01:00
Jonathan de Jong
2ca4e349e9 Bugbear: Add Mutable Parameter fixes (#9682)
Part of #9366

Adds in fixes for B006 and B008, both relating to mutable parameter lint errors.

Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>
2021-04-08 22:38:54 +01:00
Richard van der Hoff
64f4f506c5 Merge pull request #9766 from matrix-org/rav/drop_py35
Require py36, Postgres 9.6, and sqlite 3.22
2021-04-08 19:29:03 +01:00
Richard van der Hoff
9e167d9c53 Merge remote-tracking branch 'origin/develop' into rav/drop_py35 2021-04-08 18:30:38 +01:00
Richard van der Hoff
24c58ebfc9 remove unused param on make_tuple_comparison_clause 2021-04-08 18:29:57 +01:00
Richard van der Hoff
88b9414e32 Merge pull request #9769 from matrix-org/rav/fix_bionic
Fix incompatibility with bionic
2021-04-08 18:26:51 +01:00
Erik Johnston
be0e722fe1 Merge branch 'erikj/fix_stalled_catchup' into develop 2021-04-08 18:05:35 +01:00
Erik Johnston
3a569fb200 Fix sharded federation sender sometimes using 100% CPU.
We pull all destinations requiring catchup from the DB in batches.
However, if all those destinations get filtered out (due to the
federation sender being sharded), then the `last_processed` destination
doesn't get updated, and we keep requesting the same set repeatedly.
2021-04-08 17:34:07 +01:00
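
The bug and its fix are easiest to see as a cursor-advancing loop; a sketch with the fetching, sharding, and sending steps abstracted as callables (hypothetical names):

```
def catch_up(fetch_batch, shard_owns, send_catchup, limit=25):
    """fetch_batch(after, limit) returns destinations in ascending order."""
    last_processed = None
    while True:
        batch = fetch_batch(last_processed, limit)
        if not batch:
            break
        # the fix: advance the cursor unconditionally, even when the whole
        # batch belongs to other sender shards and gets filtered out below
        last_processed = batch[-1]
        for destination in batch:
            if shard_owns(destination):
                send_catchup(destination)
```
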
Richard van der Hoff
77e56deffc update test_old_deps script 2021-04-08 16:44:07 +01:00
Richard van der Hoff
04ff88139a Update tox.ini to remove py35 2021-04-08 16:43:04 +01:00
Richard van der Hoff
9278eb701e drop support for stretch and xenial 2021-04-08 16:42:32 +01:00
Richard van der Hoff
3ada9b4264 Drop support for sqlite<3.22 as well 2021-04-08 16:42:32 +01:00
Richard van der Hoff
abade34633 Require py36 and Postgres 9.6 2021-04-08 16:42:32 +01:00
Richard van der Hoff
906065c75b unpin olddeps build from py36 2021-04-08 16:41:35 +01:00
Richard van der Hoff
5edd91caec Fix incompatibility with tox 2.5
Apparently on tox 2.5, `usedevelop` overrides `skip_install`, so we end up
trying to install the full dependencies even for the `-old` environment.
2021-04-08 16:22:45 +01:00
Johannes Wienke
cb657eb2f8 Put opencontainers labels to the final image (#9765)
They don't make any sense on the intermediate builder image. The final
image needs them to be of use to anyone.

Signed-off-by: Johannes Wienke <languitar@semipol.de>
2021-04-08 13:49:14 +01:00
Patrick Cloke
452991527a MSC3083: Check for space membership during a local join of restricted rooms. (#9735)
When joining a room with join rules set to 'restricted', check if the
user is a member of the spaces defined in the 'allow' key of the join rules.

This only applies to an experimental room version, as defined in MSC3083.
2021-04-08 08:28:32 -04:00
Patrick Cloke
48d44ab142 Record more information into structured logs. (#9654)
Records additional request information into the structured logs,
e.g. the requester, IP address, etc.
2021-04-08 08:01:14 -04:00
Andrew Morgan
0d87c6bd12 Don't report anything from GaugeBucketCollector metrics until data is present (#8926)
This PR modifies `GaugeBucketCollector` to only report data once it has been updated, rather than initially reporting a value of 0. Fixes zero values being reported for some metrics on startup until a background job to update the metric's value runs later.
2021-04-06 16:32:04 +01:00
Andrew Morgan
04819239ba Add a Synapse Module for configuring presence update routing (#9491)
At the moment, if you'd like to share presence between local or remote users, those users must be sharing a room together. This isn't always the most convenient or useful situation though.

This PR adds a module to Synapse that will allow deployments to set up extra logic on where presence updates should be routed. The module must implement two methods, `get_users_for_states` and `get_interested_users`. These methods are given presence updates or user IDs and must return information that Synapse will use to decide where presence updates are passed around.

A method is additionally added to `ModuleApi` which allows triggering a set of users to receive the current, online presence information for all users they are considered interested in. This is the equivalent of that user receiving presence information during an initial sync. 

The goal of this module is to be fairly generic and useful for a variety of applications, with hard requirements being:

* Sending state for a specific set or all known users to a defined set of local and remote users.
* The ability to trigger an initial sync for specific users, so they receive all current state.
2021-04-06 14:38:30 +01:00
Patrick Cloke
44bb881096 Add type hints to expiring cache. (#9730) 2021-04-06 08:58:18 -04:00
Andrew Morgan
024f121b74 Fix reported bugbear: too broad exception assertion (#9753) 2021-04-06 13:48:22 +01:00
Richard van der Hoff
0ef321ff3b Remove outdated constraint on remote_media_cache_thumbnails (#9725)
The `remote_media_cache_thumbnails_media_origin_media_id_thumbna_key`
constraint is superseded by
`remote_media_repository_thumbn_media_origin_id_width_height_met` (which adds
`thumbnail_method` to the unique key).

PR #7124 made an attempt to remove the old constraint, but got the name wrong,
so it didn't work. Here we update the bg update and rerun it.

Fixes #8649.
2021-04-06 13:36:05 +01:00
Erik Johnston
5688a74cf3 Merge branch 'master' into develop 2021-04-06 13:29:29 +01:00
Erik Johnston
1d8863c67d 1.31.0 2021-04-06 13:09:56 +01:00
Erik Johnston
a888cbdd31 Add deprecation policy doc (#9723)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2021-04-06 13:07:36 +01:00
Jonathan de Jong
fc8695d621 Fix version for bugbear (#9734) 2021-04-06 12:48:37 +01:00
Patrick Cloke
d959d28730 Add type hints to the federation handler and server. (#9743) 2021-04-06 07:21:57 -04:00
Patrick Cloke
e7b769aea1 Convert storage test cases to HomeserverTestCase. (#9736) 2021-04-06 07:21:02 -04:00
Jonathan de Jong
e2b8a90897 Update mypy configuration: no_implicit_optional = True (#9742) 2021-04-05 09:10:18 -04:00
Jonathan de Jong
4609e58970 Fix version for bugbear (#9734) 2021-04-02 11:22:21 +01:00
Erik Johnston
33548f37aa Improve tracing for to device messages (#9686) 2021-04-01 17:08:21 +01:00
Dirk Klimpel
bb0fe02a52 Add order_by to list user admin API (#9691) 2021-04-01 11:28:53 +01:00
Patrick Cloke
35c5ef2d24 Add an experimental room version to support restricted join rules. (#9717)
Per MSC3083.
2021-03-31 16:39:08 -04:00
Patrick Cloke
e32294f54b Merge branch 'release-v1.31.0' into develop 2021-03-31 14:19:14 -04:00
Patrick Cloke
5fe38e07e7 Revert "Use 'dmypy run' in lint.sh instead of 'mypy' (#9701)" (#9720) 2021-03-31 14:17:52 -04:00
Denis Kasak
5ff8eb97c6 Make sample config allowed_local_3pids regex stricter. (#9719)
The regex should be terminated so that subdomain matches of another
domain are not accepted. Just ensuring that someone doesn't shoot
themselves in the foot by copying our example.

Signed-off-by: Denis Kasak <dkasak@termina.org.uk>
2021-03-31 12:27:20 +00:00
Cristina
670564446c Deprecate imp (#9718)
Fixes #9642.

Signed-off-by: Cristina Muñoz <hi@xmunoz.com>
2021-03-31 12:04:27 +01:00
Andrew Morgan
ac99774dac Rewrite complement.sh (#9685)
This PR rewrites the original complement.sh script with a number of improvements:

* We can now use a local checkout of Complement (configurable with `COMPLEMENT_DIR`), though the default behaviour still downloads the master branch.
* You can now specify a regex of test names to run, or just run all tests.
* We now use the Synapse test blacklist tag (so all tests will pass).
2021-03-31 11:58:12 +01:00
Richard van der Hoff
4dabcf026e Include m.room.create in invite_room_state for Spaces (#9710) 2021-03-30 14:03:17 +01:00
Richard van der Hoff
f02663c4dd Replace room_invite_state_types with room_prejoin_state (#9700)
`room_invite_state_types` was inconvenient as a configuration setting, because
anyone that ever set it would not receive any new types that were added to the
defaults. Here, we deprecate the old setting, and replace it with a couple of
new settings under `room_prejoin_state`.
2021-03-30 12:12:44 +01:00
Erik Johnston
963f4309fe Make RateLimiter class check for ratelimit overrides (#9711)
This should fix a class of bug where we forget to check if e.g. the appservice shouldn't be ratelimited.

We also consult the `ratelimit_override` table to check if the user has ratelimiting disabled. That table is really only meant to override the event sender ratelimiting, so we don't use any values from it (as they might not make sense for different rate limits), but we do infer that if ratelimiting is disabled for the user we should disable all ratelimits.

Fixes #9663
2021-03-30 12:06:09 +01:00
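
A sketch of the override check described above; the store method and field name are assumptions based on the description, not a confirmed API:

```
async def ratelimits_apply(store, user_id: str) -> bool:
    override = await store.get_ratelimit_for_user(user_id)  # assumed helper
    if override is not None and override.messages_per_second == 0:
        # the table only overrides event-sender limits, so we don't reuse its
        # values; we just infer "ratelimiting disabled" and skip all limits
        return False
    return True
```
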
Erik Johnston
3a446c21f8 Update changelog 2021-03-30 11:29:21 +01:00
Erik Johnston
78e48f61bf 1.31.0rc1 2021-03-30 11:19:21 +01:00
Andrew Morgan
f380bb77d1 Use 'dmypy run' in lint.sh instead of 'mypy' (#9701)
For its obvious performance benefits. `dmypy` support landed in #9692.
2021-03-30 10:30:43 +01:00
Patrick Cloke
01dd90b0f0 Add type hints to DictionaryCache and TTLCache. (#9442) 2021-03-29 12:15:33 -04:00
blakehawkins
7dcf3fd221 Clarify that register_new_matrix_user is present also when installed via non-pip package (#9074)
Signed-off-by: blakehawkins blake.hawkins.11@gmail.com
2021-03-29 17:05:06 +01:00
Patrick Cloke
da75d2ea1f Add type hints for the federation sender. (#9681)
Includes an abstract base class which both the FederationSender
and the FederationRemoteSendQueue must implement.
2021-03-29 11:43:20 -04:00
Richard van der Hoff
4bbd535450 Update the OIDC sample config (#9695)
I've reiterated the advice about using `oidc` to migrate, since I've seen a few
people caught by this.

I've also removed a couple of the examples as they are duplicating the OIDC
documentation, and I think they might be leading people astray.
2021-03-29 15:40:11 +01:00
Andrew Morgan
5fdff97719 Fix CI by ignore type for None module import (#9709) 2021-03-29 14:42:38 +01:00
Jonathan de Jong
fc53a606e4 Fix re.Pattern mypy error on 3.6 (#9703) 2021-03-29 09:40:45 -04:00
Richard van der Hoff
ad8690a26c Fix the suggested pip incantation for cryptography (#9699)
If you have the wrong version of `cryptography` installed, synapse suggests:

```
To install run:
    pip install --upgrade --force 'cryptography>=3.4.7;python_version>='3.6''
```

However, the use of ' inside '...' doesn't work, so when you run this, you get
an error.
2021-03-29 11:55:33 +01:00
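
One robust way to build such a hint is to let `shlex.quote` handle the embedded single quotes; a sketch of the idea (the actual fix in the PR may differ):

```
import shlex

requirement = "cryptography>=3.4.7;python_version>='3.6'"
# shlex.quote escapes the inner single quotes so the command survives a shell
print("pip install --upgrade --force " + shlex.quote(requirement))
```
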
Eric Eastwood
0a778c135f Make pip install faster in Docker build for Complement testing (#9610)
Make pip install faster in Docker build for [Complement](https://github.com/matrix-org/complement) testing.

If files have changed in a `COPY` command, Docker will invalidate all of the layers below. So I changed the order of operations to install all dependencies before we `COPY synapse /synapse/synapse/`. This allows Docker to use our cached layer of dependencies even when we change the source of Synapse and speed up builds dramatically! `53.5s` -> `3.7s` builds 🤘

As an alternative, I did try using BuildKit caches but this still took 30 seconds overall on that step. 15 seconds to gather the dependencies from the cache and another 15 seconds to `Installing collected packages`.

Fix https://github.com/matrix-org/synapse/issues/9364
2021-03-26 18:42:58 +00:00
Richard van der Hoff
7c8402ddb8 Suppress CryptographyDeprecationWarning (#9698)
This warning is somewhat confusing to users, so let's suppress it
2021-03-26 17:33:55 +00:00
Erik Johnston
b5efcb577e Make it possible to use dmypy (#9692)
Running `dmypy run` will do a `mypy` check while spinning up a daemon
that makes rerunning `dmypy run` a lot faster.

`dmypy` doesn't support `follow_imports = silent` and has
`local_partial_types` enabled, so this PR enables those options and
fixes the issues that were newly raised. Note that `local_partial_types`
will be enabled by default in upcoming mypy releases.
2021-03-26 16:49:46 +00:00
Erik Johnston
019010964d Merge branch 'master' into develop 2021-03-26 12:26:58 +00:00
Erik Johnston
262ed05f5b Update changelog 2021-03-26 12:21:04 +00:00
Erik Johnston
548c4a6587 Update changelog 2021-03-26 12:17:37 +00:00
Erik Johnston
c6f8e8086c 1.30.1 2021-03-26 12:03:29 +00:00
Erik Johnston
12d6184713 Explicitly upgrade openssl in docker file and enforce new version of cryptography (#9697) 2021-03-26 12:00:25 +00:00
Paul Tötterman
d7d4232a2d Preserve host in example apache config (#9696)
Fixes redirect loop

Signed-off-by: Paul Tötterman <paul.totterman@iki.fi>
2021-03-26 10:38:31 +00:00
Quentin Gliech
d4c4798a25 Use interpreter from $PATH instead of absolute paths in various scripts using /usr/bin/env (#9689)
On NixOS, `bash` isn't under `/bin/bash` but rather in some directory in `$PATH`. Locally, I've been patching those scripts to make them work.

`/usr/bin/env` seems to be the only [portable way](https://unix.stackexchange.com/questions/29608/why-is-it-better-to-use-usr-bin-env-name-instead-of-path-to-name-as-my) to use binaries from the PATH as interpreters.

Signed-off-by: Quentin Gliech <quentingliech@gmail.com>
2021-03-25 16:53:54 +00:00
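
For reference, the portable form looks like this (a trivial Python example; the same applies to the repository's bash scripts):

```
#!/usr/bin/env python3
# the interpreter is resolved from $PATH, so this works on systems such as
# NixOS where python3 does not live under /usr/bin
print("hello from a portable script")
```
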
Serban Constantin
e5801db830 platform specific prerequisites in source install (#9667)
Make it clearer in the source install step that the platform-specific
prerequisites must be installed first.

Signed-off-by: Serban Constantin <serban.constantin@gmail.com>
2021-03-25 15:31:26 +00:00
Andrew Morgan
fae81f2f68 Add a storage method for returning all current presence from all users (#9650)
Split off from https://github.com/matrix-org/synapse/pull/9491

Adds a storage method for getting the current presence of all local users, optionally excluding those that are offline. This will be used by the code in #9491 when a PresenceRouter module informs Synapse that a given user should have `"ALL"` user presence updates routed to them. Specifically, it is used here: b588f16e39/synapse/handlers/presence.py (L1131-L1133)

Note that there is a `get_all_presence_updates` function just above. That function is intended to walk up the table through stream IDs, and is primarily used by the presence replication stream. I could possibly make use of it in the PresenceRouter-related code, but it would be a bit of a bodge.
2021-03-25 10:34:23 +00:00
Erik Johnston
c602ba8336 Fixed undefined variable error in catchup (#9664)
Broke in #9640

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2021-03-24 16:12:47 +00:00
Patrick Cloke
c2d4bd62a2 Fix typo in changelog. 2021-03-24 11:32:42 -04:00
Jonathan de Jong
4c3827f2c1 Enable addtional flake8-bugbear linting checks. (#9659) 2021-03-24 09:34:30 -04:00
Richard van der Hoff
c73cc2c2ad Spaces summary: call out to other servers (#9653)
When we hit an unknown room in the space tree, see if there are other servers that we might be able to poll to get the data.

Fixes: #9447
2021-03-24 12:45:39 +00:00
Ben Banfield-Zanin
4655d2221e docs: fallback/web endpoint does not appear to be mounted on workers (#9679) 2021-03-24 11:43:04 +00:00
Patrick Cloke
83de0be4b0 Bump mypy-zope to 0.2.13. (#9678)
This fixes an error ("Cannot determine consistent method resolution order (MRO)")
when running mypy with a cache.
2021-03-24 07:35:43 -04:00
Patrick Cloke
af387cf52a Add type hints to misc. files. (#9676) 2021-03-24 06:49:01 -04:00
Patrick Cloke
7e8dc9934e Add a type hints for service notices to the HomeServer object. (#9675) 2021-03-24 06:48:46 -04:00
Erik Johnston
e550ab17ad Increase default join burst ratelimiting (#9674)
It's legitimate behaviour to try and join a bunch of rooms at once.
2021-03-23 14:52:20 +00:00
Jonathan de Jong
0caf2a338e Fix federation stall on concurrent access errors (#9639) 2021-03-23 13:52:30 +00:00
Richard van der Hoff
4ecba9bd5c Federation API for Space summary (#9652)
Builds on the work done in #9643 to add a federation API for space summaries.

There's a bit of refactoring of the existing client-server code first, to avoid too much duplication.
2021-03-23 11:51:12 +00:00
Patrick Cloke
b7748d3c00 Import HomeServer from the proper module. (#9665) 2021-03-23 07:12:48 -04:00
Andrew Morgan
5b268997bd Allow providing credentials to HTTPS_PROXY (#9657)
Addresses https://github.com/matrix-org/synapse-dinsic/issues/70

This PR causes `ProxyAgent` to attempt to extract credentials from an `HTTPS_PROXY` env var. If credentials are found, a `Proxy-Authorization` header ([details](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Proxy-Authorization)) is sent to the proxy server to authenticate against it. The headers are *not* passed to the remote server.

Also added some type hints.
2021-03-22 17:20:47 +00:00
Johannes Wienke
4612302399 Include opencontainers labels in Docker image (#9612)
Cf. https://github.com/opencontainers/image-spec/blob/master/annotations.md#pre-defined-annotation-keys

Signed-off-by: Johannes Wienke <languitar@semipol.de>
2021-03-22 15:31:00 +00:00
Ankit Dobhal
d66f9070cd Fixed code misc. quality issues (#9649)
- Merge 'isinstance' calls.
- Remove unnecessary dict call outside of comprehension.
- Use 'sys.exit()' calls.
2021-03-22 11:18:13 -04:00
Erik Johnston
d600d4506b Merge branch 'master' into develop 2021-03-22 13:36:36 +00:00
Brendan Abolivier
e09838c78f Merge pull request #9644 from matrix-org/babolivier/msc3026
Implement MSC3026: busy presence state
2021-03-22 14:28:19 +01:00
Erik Johnston
e2904f720d 1.30.0 2021-03-22 13:15:55 +00:00
Brendan Abolivier
b6ed4f55ac Incorporate review 2021-03-19 18:19:50 +01:00
Brendan Abolivier
592d6305fd Merge branch 'develop' into babolivier/msc3026 2021-03-19 16:12:40 +01:00
Brendan Abolivier
0b56481caa Fix lint 2021-03-19 16:11:08 +01:00
Richard van der Hoff
066068f034 fix mypy 2021-03-19 12:20:11 +00:00
Richard van der Hoff
0e35584734 federation_client: handle inline signing_keys in hs.yaml (#9647) 2021-03-18 21:12:07 +00:00
Richard van der Hoff
201178db1a federation_client: stop adding URL prefix (#9645) 2021-03-18 20:31:47 +00:00
Patrick Cloke
9b0e3009fa Fix type-hints from bad merge. 2021-03-18 14:40:56 -04:00
Richard van der Hoff
004234f03a Initial spaces summary API (#9643)
This is very bare-bones for now: federation will come soon, while pagination is descoped for now but will come later.
2021-03-18 18:24:16 +00:00
Brendan Abolivier
066c703729 Move support for MSC3026 behind an experimental flag 2021-03-18 18:37:19 +01:00
Dirk Klimpel
8dd2ea65a9 Consistently check whether a password may be set for a user. (#9636) 2021-03-18 12:54:08 -04:00
Erik Johnston
dd71eb0f8a Make federation catchup send last event from any server. (#9640)
Currently federation catchup will send the last *local* event that we
failed to send to the remote. This can cause issues for large rooms
where lots of servers have sent events while the remote server was down,
as when it comes back up again it'll be flooded with events from various
points in the DAG.

Instead, let's make it so that all the servers send the most recent
events, even if its not theirs. The remote should deduplicate the
events, so there shouldn't be much overhead in doing this.
Alternatively, the servers could only send local events if they were
also extremities and hope that the other server will send the event
over, but that is a bit risky.
2021-03-18 15:52:26 +00:00
Brendan Abolivier
405aeb0b2c Implement MSC3026: busy presence state 2021-03-18 16:34:47 +01:00
Andrew Morgan
7b06f85c0e Ensure we use a copy of the event content dict before modifying it in serialize_event (#9585)
This bug was discovered by DINUM. We were modifying `serialized_event["content"]`, which - if you've got `USE_FROZEN_DICTS` turned on or are [using a third party rules module](17cd48fe51/synapse/events/third_party_rules.py (L73-L76)) - will raise a 500 if you try to edit a reply to a message.

`serialized_event["content"]` could be set to the edit event's content, instead of a copy of it, which is bad as we attempt to modify it. As a result, we also end up modifying the original event's content. DINUM uses a third party rules module, which meant the event's content got frozen and thus an exception was raised.

To be clear, the problem is not that the event's content was frozen. In fact doing so helped us uncover the fact we weren't copying event content correctly.
2021-03-17 16:51:55 +00:00
Patrick Cloke
cc324d53fe Fix up types for the typing handler. (#9638)
By splitting this into two separate methods, the callers know
what methods they can expect on the handler.
2021-03-17 11:30:21 -04:00
Hubert Chathi
73dbce5523 only save remote cross-signing keys if they're different from the current ones (#9634)
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2021-03-17 11:04:57 -04:00
Erik Johnston
ad721fc559 Fix bad naming of storage function (#9637)
We had two functions named `get_forward_extremities_for_room` and
`get_forward_extremeties_for_room` that took different parameters. We
rename one of them to avoid confusion.
2021-03-17 13:20:08 +00:00
Richard van der Hoff
567f88f835 Prep work for removing outlier from internal_metadata (#9411)
* Populate `internal_metadata.outlier` based on `events` table

Rather than relying on `outlier` being in the `internal_metadata` column,
populate it based on the `events.outlier` column.

* Move `outlier` out of InternalMetadata._dict

Ultimately, this will allow us to stop writing it to the database. For now, we
have to grandfather it back in so as to maintain compatibility with older
versions of Synapse.
2021-03-17 12:33:18 +00:00
Patrick Cloke
b449af0379 Add type hints to the room member handler. (#9631) 2021-03-17 07:14:39 -04:00
Jonathan de Jong
27d2820c33 Enable flake8-bugbear, but disable most checks. (#9499)
* Adds B00 to ignored checks.
* Fixes remaining issues.
2021-03-16 14:19:27 -04:00
Hubbe
dd5e5dc1d6 Add SSO attribute requirements for OIDC providers (#9609)
Allows limiting who can log in using OIDC via the claims
made by the IdP.
2021-03-16 11:46:07 -04:00
Dirk Klimpel
8000cf1315 Return m.change_password.enabled=false if local database is disabled (#9588)
Instead of basing this on whether the user has a password hash. This allows an SSO
user to add a password to their account, but only if the local password
database is configured.
2021-03-16 11:44:25 -04:00
Andrew Morgan
45ef73fd4f Fix jemalloc changelog entry wording 2021-03-16 14:46:40 +00:00
Andrew Morgan
e3bc0e6f7c Changelog typo 2021-03-16 14:33:23 +00:00
Andrew Morgan
ad5d2e7ec0 Pull up appservice login deprecation notice 2021-03-16 13:51:24 +00:00
Andrew Morgan
d315e96443 1.30.0rc1 2021-03-16 13:45:46 +00:00
Andrew Morgan
847ecdd8fa Pass SSO IdP information to spam checker's registration function (#9626)
Fixes https://github.com/matrix-org/synapse/issues/9572

When an SSO user logs in for the first time, we create a local Matrix user for them. This goes through the register_user flow, which ends up triggering the spam checker. Spam checker modules don't currently have any way to differentiate between a user trying to sign up initially, versus an SSO user (who has presumably already been approved elsewhere) trying to log in for the first time.

This PR passes `auth_provider_id` as an argument to the `check_registration_for_spam` function. This argument will contain an ID of an SSO provider (`"saml"`, `"cas"`, etc.) if one was used, else `None`.
2021-03-16 12:41:41 +00:00
Mathieu Velten
ccf1dc51d7 Install jemalloc in docker image (#8553)
Co-authored-by: Will Hunt <willh@matrix.org>
Co-authored-by: Erik Johnston <erik@matrix.org>
2021-03-16 11:32:18 +00:00
Patrick Cloke
1383508f29 Handle an empty cookie as an invalid macaroon. (#9620)
* Handle an empty cookie as an invalid macaroon.

* Newsfragment
2021-03-16 11:29:35 +00:00
Richard van der Hoff
dd69110d95 Add support for stable MSC2858 API (#9617)
The stable format uses different brand identifiers, so we need to support two
identifiers for each IdP.
2021-03-16 11:21:26 +00:00
Richard van der Hoff
5b5bc188cf Clean up config settings for stats (#9604)
... and complain if people try to turn it off.
2021-03-16 10:57:54 +00:00
Andrew Morgan
1b0eaed21f Prevent bundling aggregations for state events (#9619)
There's no need to do aggregation bundling for state events. Doing so can cause performance issues.
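The change amounts to an early-out guard, roughly (a sketch, not the literal patch):

```python
# Hedged sketch: relations/aggregations never apply to state events,
# so skip the bundling work entirely for them.
if event.is_state():
    return serialized_event
```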
2021-03-16 10:27:51 +00:00
Richard van der Hoff
1c8a2541da Fix Internal Server Error on GET /saml2/authn_response (#9623)
* Fix Internal Server Error on `GET /saml2/authn_response`

Seems to have been introduced in #8765 (Synapse 1.24.0)

* Fix newsfile
2021-03-16 10:20:20 +00:00
Patrick Cloke
f87dfb9403 Revert requiring a specific version of Twisted for mypy checks. (#9618) 2021-03-15 12:18:35 -04:00
Patrick Cloke
d29b71aa50 Fix remaining mypy issues due to Twisted upgrade. (#9608) 2021-03-15 11:14:39 -04:00
Erik Johnston
026503fa3b Don't go into federation catch up mode so easily (#9561)
Federation catch up mode is very inefficient if the number of events
that the remote server has missed is small, since handling gaps can be
very expensive, c.f. #9492.

Instead of going into catch up mode whenever we see an error, we instead
do so only if we've backed off from trying the remote for more than an
hour (the assumption being that in such a case it is more than a
transient failure).
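In sketch form (the constant and function names here are illustrative):

```python
CATCH_UP_STARTUP_THRESHOLD_MS = 60 * 60 * 1000  # one hour

def should_enter_catch_up(retry_interval_ms: int) -> bool:
    # Hedged sketch: small, transient gaps are cheaper to handle via the
    # normal send path, so only fall back to (expensive) catch-up mode
    # once the destination has been backed off for over an hour.
    return retry_interval_ms > CATCH_UP_STARTUP_THRESHOLD_MS
```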
2021-03-15 14:42:40 +00:00
Richard van der Hoff
af2248f8bf Optimise missing prev_event handling (#9601)
Background: When we receive incoming federation traffic, and notice that we are missing prev_events from 
the incoming traffic, first we do a `/get_missing_events` request, and then if we still have missing prev_events,
we set up new backwards-extremities. To do that, we need to make a `/state_ids` request to ask the remote
server for the state at those prev_events, and then we may need to then ask the remote server for any events
in that state which we don't already have, as well as the auth events for those missing state events, so that we
can auth them.

This PR attempts to optimise the processing of that state request. The `state_ids` API returns a list of the state
events, as well as a list of all the auth events for *all* of those state events. The optimisation comes from the
observation that we are currently loading all of those auth events into memory at the start of the operation, but
we almost certainly aren't going to need *all* of the auth events. Rather, we can check that we have them, and
leave the actual load into memory for later. (Ideally the federation API would tell us which auth events we're
actually going to need, but it doesn't.)

The effect of this is to reduce the number of events that I need to load for an event in Matrix HQ from about
60000 to about 22000, which means it can stay in my in-memory cache, whereas previously the sheer number
of events meant that all 60K events had to be loaded from db for each request, due to the amount of cache
churn. (NB I've already tripled the size of the cache from its default of 10K).

Unfortunately I've ended up basically C&Ping `_get_state_for_room` and `_get_events_from_store_or_dest` into
a new method, because `_get_state_for_room` is also called during backfill, which expects the auth events to be
returned, so the same tricks don't work. That said, I don't really know why that codepath is completely different
(ultimately we're doing the same thing in setting up a new backwards extremity) so I've left a TODO suggesting
that we clean it up.
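The core trick, as a hedged sketch: check which auth events we already have without loading them, and only fetch the rest. Something like Synapse's `have_seen_events` storage method does the membership check; `fetch_events_from_remote` stands in for the actual fetching logic.

```python
# Check which of the auth events are already known, without pulling the
# ones we do have into memory; only fetch the genuinely missing ones.
seen = await store.have_seen_events(auth_event_ids)
missing = [e for e in auth_event_ids if e not in seen]
await fetch_events_from_remote(destination, missing)
```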
2021-03-15 13:51:02 +00:00
Patrick Cloke
55da8df078 Fix additional type hints from Twisted 21.2.0. (#9591) 2021-03-12 11:37:57 -05:00
Richard van der Hoff
1e67bff833 Reject concurrent transactions (#9597)
If more transactions arrive from an origin while we're still processing the
first one, reject them.

Hopefully a quick fix to https://github.com/matrix-org/synapse/issues/9489
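A minimal sketch of the idea (not the actual implementation; `_handle_transaction` is a stand-in for the real processing):

```python
# Track origins with a transaction currently in flight; concurrent
# transactions from the same origin are rejected rather than queued.
_active_origins: set = set()

async def on_incoming_transaction(origin: str, transaction) -> tuple:
    if origin in _active_origins:
        # e.g. respond 429 so the origin retries later
        return 429, {"error": "too many concurrent transactions"}
    _active_origins.add(origin)
    try:
        return await _handle_transaction(origin, transaction)
    finally:
        _active_origins.discard(origin)
```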
2021-03-12 15:14:55 +00:00
Richard van der Hoff
2b328d7e02 Improve logging when processing incoming transactions (#9596)
Put the room id in the logcontext, to make it easier to understand what's going on.
2021-03-12 15:08:03 +00:00
Richard van der Hoff
464e5da7b2 Add logging for redis connection setup (#9590) 2021-03-11 18:35:09 +00:00
Patrick Cloke
e55bd0e110 Add tests for blacklisting reactor/agent. (#9563) 2021-03-11 09:15:22 -05:00
Dirk Klimpel
70d1b6abff Re-Activating account when local passwords are disabled (#9587)
Fixes: #8393
2021-03-11 13:52:32 +00:00
Richard van der Hoff
a7a3790066 Convert Requester to attrs (#9586)
... because namedtuples suck

Fix up a couple of other annotations to keep mypy happy.
2021-03-10 18:15:56 +00:00
Richard van der Hoff
1107214a1d Fix the auth provider on the logins metric (#9573)
We either need to pass the auth provider over the replication api, or make sure
we report the auth provider on the worker that received the request. I've gone
with the latter.
2021-03-10 18:15:03 +00:00
Jason Robinson
17cd48fe51 Fix spam checker modules documentation example (#9580)
Mention that parse_config must exist and note the
check_media_file_for_spam method.
2021-03-10 10:42:51 -05:00
Patrick Cloke
2a99cc6524 Use the chain cover index in get_auth_chain_ids. (#9576)
This uses a simplified version of get_chain_cover_difference to calculate
auth chain of events.
2021-03-10 09:57:59 -05:00
Patrick Cloke
918f6ed827 Fix a bug in the background task for purging chain cover. (#9583) 2021-03-10 08:55:52 -05:00
Patrick Cloke
67b979bfa1 Do not ignore the unpaddedbase64 module when type checking. (#9568) 2021-03-09 14:41:02 -05:00
Patrick Cloke
dc51d8ffaf Add a background task to purge unused chain IDs. (#9542)
This is a companion change to apply the fix in #9498 /
922788c604 to previously
purged rooms.
2021-03-09 11:22:25 -05:00
Andrew Morgan
e9df3f496b Link to the List user's media admin API from media Admin API docs (#9571)
Earlier [I was convinced](https://github.com/matrix-org/synapse/issues/9565) that we didn't have an Admin API for listing media uploaded by a user. Foolishly I was looking under the Media Admin API documentation, instead of the User Admin API documentation.

I thought it'd be helpful to link to the latter so others don't hit the same dead end :)
2021-03-09 15:15:52 +00:00
Richard van der Hoff
eaada74075 JWT OIDC secrets for Sign in with Apple (#9549)
Apple had to be special. They want a client secret which is generated from an EC key.

Fixes #9220. Also fixes #9212 while I'm here.
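A hedged sketch of what the resulting provider config can look like, assuming the `client_secret_jwt_key` option this PR adds (field names should be double-checked against the OIDC documentation for your version):

```yaml
oidc_providers:
  - idp_id: apple
    idp_name: Apple
    issuer: "https://appleid.apple.com"
    client_id: "your.service.identifier"
    # Instead of a static client_secret, mint a short-lived JWT signed
    # with the EC key downloaded from the Apple developer console.
    client_secret_jwt_key:
      key_file: /path/to/AuthKey_XXXXXXXXXX.p8
      jwt_header:
        alg: ES256
        kid: "XXXXXXXXXX"   # Key ID from the developer console
      jwt_payload:
        iss: "XXXXXXXXXX"   # your Apple Team ID
```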
2021-03-09 15:03:37 +00:00
Erik Johnston
9cd18cc588 Retry 5xx errors in federation client (#9567)
Fixes #8915
2021-03-09 13:15:12 +00:00
Patrick Cloke
7fdc6cefb3 Fix additional type hints. (#9543)
Type hint fixes due to Twisted 21.2.0 adding type hints.
2021-03-09 07:41:32 -05:00
Patrick Cloke
075c16b410 Handle image transparency better when thumbnailing. (#9473)
Properly uses RGBA mode for 1- and 8-bit images with transparency
(instead of RGB mode).
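In PIL terms the fix is roughly this (a sketch, not the exact patch):

```python
from PIL import Image

def open_image_for_thumbnailing(path: str) -> Image.Image:
    image = Image.open(path)
    # 1-bit ("1") and 8-bit ("L", "P") images can carry transparency that
    # a plain RGB conversion would mangle; convert those to RGBA instead.
    if image.mode in ("1", "L", "P") and image.info.get("transparency") is not None:
        return image.convert("RGBA")
    return image.convert("RGB")
```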
2021-03-09 07:37:09 -05:00
Patrick Cloke
3ce650057d Add a list of hashes to ignore during git blame. (#9560)
The hashes are from commits due to auto-formatting, e.g. running black.

git can be configured to use this automatically by running the following:

    git config blame.ignoreRevsFile .git-blame-ignore-revs
2021-03-09 07:34:55 -05:00
Erik Johnston
576c91c7c1 Fixup sample config
After 0764d0c6e5
2021-03-09 11:40:45 +00:00
Andrew Morgan
22db45bd4d Prevent the config-lint script erroring out on any sample_config changes (#9562)
I noticed that I'd occasionally have `scripts-dev/lint.sh` fail when messing about with config options in my PR. The script calls `scripts-dev/config-lint.sh`, which attempts some validation on the sample config.

 It does this by using `sed` to edit the sample_config, and then seeing if the file changed using `git diff`.

The problem is: if you changed the sample_config as part of your commit, this script will error regardless.

This PR attempts to change the check so that existing, unstaged changes to the sample_config will not cause the script to report an invalid file.
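A hedged sketch of the approach (the real script differs in detail; the sed line is an illustrative normalisation step):

```sh
#!/usr/bin/env bash
# Apply the normalising sed to a *copy* of the sample config and compare
# the two files directly, instead of editing in place and asking
# `git diff` (which also reports your own unstaged edits).
tmpfile=$(mktemp /tmp/sample_config.XXXXXX)
cp docs/sample_config.yaml "$tmpfile"
sed -i -e '/^# vim:/d' "$tmpfile"
if ! diff -q docs/sample_config.yaml "$tmpfile" >/dev/null; then
    echo "docs/sample_config.yaml failed the lint check" >&2
    exit 1
fi
rm -f "$tmpfile"
```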
2021-03-09 11:11:42 +00:00
Jonathan de Jong
9898470e7d Add logging to ObservableDeferred callbacks (#9523) 2021-03-09 11:09:31 +00:00
Matthew Hodgson
0764d0c6e5 quick config comment tweak to clarify allow_profile_lookup_over_federation 2021-03-08 21:52:04 +00:00
Jonathan de Jong
d6196efafc Add ResponseCache tests. (#9458) 2021-03-08 14:00:07 -05:00
Will Hunt
b2c4d3d721 Warn that /register will soon require a type when called with an access token (#9559)
This notice is giving a heads up to the planned spec compliance fix https://github.com/matrix-org/synapse/pull/9548.
2021-03-08 16:35:04 +00:00
Dirk Klimpel
7076eee4b9 Add type hints to purge room and server notice admin API. (#9520) 2021-03-08 10:34:38 -05:00
Patrick Cloke
cb7fc7523e Add a basic test for purging rooms. (#9541)
Unfortunately this doesn't test re-joining the room since
that requires having another homeserver to query over
federation, which isn't easily doable in unit tests.
2021-03-08 09:21:36 -05:00
Erik Johnston
b988b07bb0 Merge branch 'master' into develop 2021-03-08 14:06:35 +00:00
Patrick Cloke
58114f8a17 Create a SynapseReactor type which incorporates the necessary reactor interfaces. (#9528)
This helps fix some type hints when running with Twisted 21.2.0.
2021-03-08 08:25:43 -05:00
Leo Bärring
0fc4eb103a Update reverse proxy to add OpenBSD relayd example configuration. (#9508)

Signed-off-by: Leo Bärring <leo.barring@protonmail.com>
2021-03-06 11:49:19 +00:00
Ben Banfield-Zanin
e5da770cce Add additional SAML2 upgrade notes (#9550) 2021-03-05 12:07:50 +00:00
Richard van der Hoff
8a4b3738f3 Replace last_*_pdu_age metrics with timestamps (#9540)
Following the advice at
https://prometheus.io/docs/practices/instrumentation/#timestamps-not-time-since,
it's preferable to export unix timestamps, not ages.

There doesn't seem to be any particular naming convention for timestamp
metrics.
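For instance, with `prometheus_client` (a sketch; the metric name matches this change's changelog entry, while the label name is illustrative):

```python
import time

from prometheus_client import Gauge

# Export the unix timestamp of the last successfully-sent PDU, rather
# than an ever-increasing "age" that must be reset on every send.
last_sent_pdu_time = Gauge(
    "synapse_federation_last_sent_pdu_time",
    "Timestamp of the last PDU successfully sent to the remote server",
    ["server_name"],
)

def on_pdu_sent(server_name: str) -> None:
    last_sent_pdu_time.labels(server_name=server_name).set(time.time())
```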
2021-03-04 16:40:18 +00:00
Richard van der Hoff
df425c2c63 Prometheus metrics for logins and registrations (#9511)
Add prom metrics for number of users successfully registering and logging in, by SSO provider.
2021-03-04 16:39:27 +00:00
Richard van der Hoff
7eb6e39a8f Record the SSO Auth Provider in the login token (#9510)
This great big stack of commits is a whole load of hoop-jumping to make it easier to store additional values in login tokens, and then to actually store the SSO Identity Provider in the login token. (Making use of that data will follow in a subsequent PR.)
2021-03-04 14:44:22 +00:00
1028 changed files with 16538 additions and 6483 deletions


@@ -1,37 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging

from synapse.storage.engines import create_engine

logger = logging.getLogger("create_postgres_db")

if __name__ == "__main__":
    # Create a PostgresEngine.
    db_engine = create_engine({"name": "psycopg2", "args": {}})

    # Connect to postgres to create the base database.
    # We use "postgres" as a database because it's bound to exist and the "synapse" one
    # doesn't exist yet.
    db_conn = db_engine.module.connect(
        user="postgres", host="postgres", password="postgres", dbname="postgres"
    )
    db_conn.autocommit = True
    cur = db_conn.cursor()
    cur.execute("CREATE DATABASE synapse;")
    cur.close()
    db_conn.close()


@@ -0,0 +1,31 @@
#!/usr/bin/env python
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys

import psycopg2

# a very simple replacement for `psql`, to make up for the lack of the postgres client
# libraries in the synapse docker image.
# We use "postgres" as a database because it's bound to exist and the "synapse" one
# doesn't exist yet.
db_conn = psycopg2.connect(
    user="postgres", host="postgres", password="postgres", dbname="postgres"
)
db_conn.autocommit = True
cur = db_conn.cursor()

for c in sys.argv[1:]:
    cur.execute(c)


@@ -1,16 +1,16 @@
-#!/bin/bash
+#!/usr/bin/env bash

-# this script is run by buildkite in a plain `xenial` container; it installs the
-# minimal requirements for tox and hands over to the py35-old tox environment.
+# this script is run by buildkite in a plain `bionic` container; it installs the
+# minimal requirements for tox and hands over to the py3-old tox environment.

 set -ex

 apt-get update
-apt-get install -y python3.5 python3.5-dev python3-pip libxml2-dev libxslt-dev xmlsec1 zlib1g-dev tox
+apt-get install -y python3 python3-dev python3-pip libxml2-dev libxslt-dev xmlsec1 zlib1g-dev tox

 export LANG="C.UTF-8"

 # Prevent virtualenv from auto-updating pip to an incompatible version
 export VIRTUALENV_NO_DOWNLOAD=1

-exec tox -e py35-old,combine
+exec tox -e py3-old,combine


@@ -1,10 +1,10 @@
-#!/bin/bash
+#!/usr/bin/env bash
 #
-# Test script for 'synapse_port_db', which creates a virtualenv, installs Synapse along
-# with additional dependencies needed for the test (such as coverage or the PostgreSQL
-# driver), update the schema of the test SQLite database and run background updates on it,
-# create an empty test database in PostgreSQL, then run the 'synapse_port_db' script to
-# test porting the SQLite database to the PostgreSQL database (with coverage).
+# Test script for 'synapse_port_db'.
+#   - sets up synapse and deps
+#   - runs the port script on a prepopulated test sqlite db
+#   - also runs it against a new sqlite db

 set -xe
 cd `dirname $0`/../..

@@ -22,15 +22,32 @@ echo "--- Generate the signing key"
 # Generate the server's signing key.
 python -m synapse.app.homeserver --generate-keys -c .buildkite/sqlite-config.yaml

-echo "--- Prepare the databases"
+echo "--- Prepare test database"

 # Make sure the SQLite3 database is using the latest schema and has no pending background update.
 scripts-dev/update_database --database-config .buildkite/sqlite-config.yaml

-# Create the PostgreSQL database.
-./.buildkite/scripts/create_postgres_db.py
+./.buildkite/scripts/postgres_exec.py "CREATE DATABASE synapse"

-echo "+++ Run synapse_port_db"
-# Run the script
+echo "+++ Run synapse_port_db against test database"
 coverage run scripts/synapse_port_db --sqlite-database .buildkite/test_db.db --postgres-config .buildkite/postgres-config.yaml
+
+#####
+
+# Now do the same again, on an empty database.
+
+echo "--- Prepare empty SQLite database"
+
+# we do this by deleting the sqlite db, and then doing the same again.
+rm .buildkite/test_db.db
+
+scripts-dev/update_database --database-config .buildkite/sqlite-config.yaml
+
+# re-create the PostgreSQL database.
+./.buildkite/scripts/postgres_exec.py \
+    "DROP DATABASE synapse" \
+    "CREATE DATABASE synapse"
+
+echo "+++ Run synapse_port_db against empty database"
+coverage run scripts/synapse_port_db --sqlite-database .buildkite/test_db.db --postgres-config .buildkite/postgres-config.yaml

.git-blame-ignore-revs (new file)

@@ -0,0 +1,8 @@
# Black reformatting (#5482).
32e7c9e7f20b57dd081023ac42d6931a8da9b3a3
# Target Python 3.5 with black (#8664).
aff1eb7c671b0a3813407321d2702ec46c71fa56
# Update black to 20.8b1 (#9381).
0a00b7ff14890987f09112a2ae696c61001e6cf1

.github/workflows/tests.yml (new file)

@@ -0,0 +1,322 @@
name: Tests

on:
  push:
    branches: ["develop", "release-*"]
  pull_request:

jobs:
  lint:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        toxenv:
          - "check-sampleconfig"
          - "check_codestyle"
          - "check_isort"
          - "mypy"
          - "packaging"
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - run: pip install tox
      - run: tox -e ${{ matrix.toxenv }}

  lint-crlf:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Check line endings
        run: scripts-dev/check_line_terminators.sh

  lint-newsfile:
    if: ${{ github.base_ref == 'develop' || contains(github.base_ref, 'release-') }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - run: pip install tox
      - name: Patch Buildkite-specific test script
        run: |
          sed -i -e 's/\$BUILDKITE_PULL_REQUEST/${{ github.event.number }}/' \
            scripts-dev/check-newsfragment
      - run: scripts-dev/check-newsfragment

  lint-sdist:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.x"
      - run: pip install wheel
      - run: python setup.py sdist bdist_wheel
      - uses: actions/upload-artifact@v2
        with:
          name: Python Distributions
          path: dist/*

  # Dummy step to gate other tests on without repeating the whole list
  linting-done:
    if: ${{ always() }} # Run this even if prior jobs were skipped
    needs: [lint, lint-crlf, lint-newsfile, lint-sdist]
    runs-on: ubuntu-latest
    steps:
      - run: "true"

  trial:
    if: ${{ !failure() }} # Allow previous steps to be skipped, but not fail
    needs: linting-done
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.6", "3.7", "3.8", "3.9"]
        database: ["sqlite"]
        include:
          # Newest Python without optional deps
          - python-version: "3.9"
            toxenv: "py-noextras,combine"

          # Oldest Python with PostgreSQL
          - python-version: "3.6"
            database: "postgres"
            postgres-version: "9.6"

          # Newest Python with PostgreSQL
          - python-version: "3.9"
            database: "postgres"
            postgres-version: "13"

    steps:
      - uses: actions/checkout@v2
      - run: sudo apt-get -qq install xmlsec1
      - name: Set up PostgreSQL ${{ matrix.postgres-version }}
        if: ${{ matrix.postgres-version }}
        run: |
          docker run -d -p 5432:5432 \
            -e POSTGRES_PASSWORD=postgres \
            -e POSTGRES_INITDB_ARGS="--lc-collate C --lc-ctype C --encoding UTF8" \
            postgres:${{ matrix.postgres-version }}
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install tox
      - name: Await PostgreSQL
        if: ${{ matrix.postgres-version }}
        timeout-minutes: 2
        run: until pg_isready -h localhost; do sleep 1; done
      - run: tox -e py,combine
        env:
          TRIAL_FLAGS: "--jobs=2"
          SYNAPSE_POSTGRES: ${{ matrix.database == 'postgres' || '' }}
          SYNAPSE_POSTGRES_HOST: localhost
          SYNAPSE_POSTGRES_USER: postgres
          SYNAPSE_POSTGRES_PASSWORD: postgres
      - name: Dump logs
        # Note: Dumps to workflow logs instead of using actions/upload-artifact
        #       This keeps logs colocated with failing jobs
        #       It also ignores find's exit code; this is a best effort affair
        run: >-
          find _trial_temp -name '*.log'
          -exec echo "::group::{}" \;
          -exec cat {} \;
          -exec echo "::endgroup::" \;
          || true

  trial-olddeps:
    if: ${{ !failure() }} # Allow previous steps to be skipped, but not fail
    needs: linting-done
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Test with old deps
        uses: docker://ubuntu:bionic # For old python and sqlite
        with:
          workdir: /github/workspace
          entrypoint: .buildkite/scripts/test_old_deps.sh
        env:
          TRIAL_FLAGS: "--jobs=2"
      - name: Dump logs
        # Note: Dumps to workflow logs instead of using actions/upload-artifact
        #       This keeps logs colocated with failing jobs
        #       It also ignores find's exit code; this is a best effort affair
        run: >-
          find _trial_temp -name '*.log'
          -exec echo "::group::{}" \;
          -exec cat {} \;
          -exec echo "::endgroup::" \;
          || true

  trial-pypy:
    # Very slow; only run if the branch name includes 'pypy'
    if: ${{ contains(github.ref, 'pypy') && !failure() }}
    needs: linting-done
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["pypy-3.6"]
    steps:
      - uses: actions/checkout@v2
      - run: sudo apt-get -qq install xmlsec1 libxml2-dev libxslt-dev
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install tox
      - run: tox -e py,combine
        env:
          TRIAL_FLAGS: "--jobs=2"
      - name: Dump logs
        # Note: Dumps to workflow logs instead of using actions/upload-artifact
        #       This keeps logs colocated with failing jobs
        #       It also ignores find's exit code; this is a best effort affair
        run: >-
          find _trial_temp -name '*.log'
          -exec echo "::group::{}" \;
          -exec cat {} \;
          -exec echo "::endgroup::" \;
          || true

  sytest:
    if: ${{ !failure() }}
    needs: linting-done
    runs-on: ubuntu-latest
    container:
      image: matrixdotorg/sytest-synapse:${{ matrix.sytest-tag }}
      volumes:
        - ${{ github.workspace }}:/src
      env:
        BUILDKITE_BRANCH: ${{ github.head_ref }}
        POSTGRES: ${{ matrix.postgres && 1 }}
        MULTI_POSTGRES: ${{ (matrix.postgres == 'multi-postgres') && 1 }}
        WORKERS: ${{ matrix.workers && 1 }}
        REDIS: ${{ matrix.redis && 1 }}
        BLACKLIST: ${{ matrix.workers && 'synapse-blacklist-with-workers' }}
    strategy:
      fail-fast: false
      matrix:
        include:
          - sytest-tag: bionic

          - sytest-tag: bionic
            postgres: postgres

          - sytest-tag: testing
            postgres: postgres

          - sytest-tag: bionic
            postgres: multi-postgres
            workers: workers

          - sytest-tag: buster
            postgres: multi-postgres
            workers: workers

          - sytest-tag: buster
            postgres: postgres
            workers: workers
            redis: redis

    steps:
      - uses: actions/checkout@v2
      - name: Prepare test blacklist
        run: cat sytest-blacklist .buildkite/worker-blacklist > synapse-blacklist-with-workers
      - name: Run SyTest
        run: /bootstrap.sh synapse
        working-directory: /src
      - name: Dump results.tap
        if: ${{ always() }}
        run: cat /logs/results.tap
      - name: Upload SyTest logs
        uses: actions/upload-artifact@v2
        if: ${{ always() }}
        with:
          name: Sytest Logs - ${{ job.status }} - (${{ join(matrix.*, ', ') }})
          path: |
            /logs/results.tap
            /logs/**/*.log*

  portdb:
    if: ${{ !failure() }} # Allow previous steps to be skipped, but not fail
    needs: linting-done
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - python-version: "3.6"
            postgres-version: "9.6"

          - python-version: "3.9"
            postgres-version: "13"

    services:
      postgres:
        image: postgres:${{ matrix.postgres-version }}
        ports:
          - 5432:5432
        env:
          POSTGRES_PASSWORD: "postgres"
          POSTGRES_INITDB_ARGS: "--lc-collate C --lc-ctype C --encoding UTF8"
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - uses: actions/checkout@v2
      - run: sudo apt-get -qq install xmlsec1
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Patch Buildkite-specific test scripts
        run: |
          sed -i -e 's/host="postgres"/host="localhost"/' .buildkite/scripts/postgres_exec.py
          sed -i -e 's/host: postgres/host: localhost/' .buildkite/postgres-config.yaml
          sed -i -e 's|/src/||' .buildkite/{sqlite,postgres}-config.yaml
          sed -i -e 's/\$TOP/\$GITHUB_WORKSPACE/' .coveragerc
      - run: .buildkite/scripts/test_synapse_port_db.sh

  complement:
    if: ${{ !failure() }}
    needs: linting-done
    runs-on: ubuntu-latest
    container:
      # https://github.com/matrix-org/complement/blob/master/dockerfiles/ComplementCIBuildkite.Dockerfile
      image: matrixdotorg/complement:latest
      env:
        CI: true
      ports:
        - 8448:8448
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock

    steps:
      - name: Run actions/checkout@v2 for synapse
        uses: actions/checkout@v2
        with:
          path: synapse

      - name: Run actions/checkout@v2 for complement
        uses: actions/checkout@v2
        with:
          repository: "matrix-org/complement"
          path: complement

      # Build initial Synapse image
      - run: docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile .
        working-directory: synapse

      # Build a ready-to-run Synapse image based on the initial image above.
      # This new image includes a config file, keys for signing and TLS, and
      # other settings to make it suitable for testing under Complement.
      - run: docker build -t complement-synapse -f Synapse.Dockerfile .
        working-directory: complement/dockerfiles

      # Run Complement
      - run: go test -v -tags synapse_blacklist ./tests
        env:
          COMPLEMENT_BASE_IMAGE: complement-synapse:latest
        working-directory: complement


@@ -1,3 +1,490 @@
Synapse 1.34.0rc1 (2021-05-12)
==============================
This release deprecates the `room_invite_state_types` configuration setting. See the [upgrade notes](https://github.com/matrix-org/synapse/blob/release-v1.34.0/UPGRADE.rst#upgrading-to-v1340) for instructions on updating your configuration file to use the new `room_prejoin_state` setting.
This release also deprecates the `POST /_synapse/admin/v1/rooms/<room_id>/delete` admin API route. Server administrators are encouraged to update their scripts to use the new `DELETE /_synapse/admin/v1/rooms/<room_id>` route instead.
Features
--------
- Add experimental option to track memory usage of the caches. ([\#9881](https://github.com/matrix-org/synapse/issues/9881))
- Add support for `DELETE /_synapse/admin/v1/rooms/<room_id>`. ([\#9889](https://github.com/matrix-org/synapse/issues/9889))
- Add limits to how often Synapse will GC, ensuring that large servers do not end up GC thrashing if `gc_thresholds` has not been correctly set. ([\#9902](https://github.com/matrix-org/synapse/issues/9902))
- Improve performance of sending events for worker-based deployments using Redis. ([\#9905](https://github.com/matrix-org/synapse/issues/9905), [\#9950](https://github.com/matrix-org/synapse/issues/9950), [\#9951](https://github.com/matrix-org/synapse/issues/9951))
- Improve performance after joining a large room when presence is enabled. ([\#9910](https://github.com/matrix-org/synapse/issues/9910), [\#9916](https://github.com/matrix-org/synapse/issues/9916))
- Support stable identifiers for [MSC1772](https://github.com/matrix-org/matrix-doc/pull/1772) Spaces. `m.space.child` events will now be taken into account when populating the experimental spaces summary response. Please see [the upgrade notes](https://github.com/matrix-org/synapse/blob/release-v1.34.0/UPGRADE.rst#upgrading-to-v1340) if you have customised `room_invite_state_types` in your configuration. ([\#9915](https://github.com/matrix-org/synapse/issues/9915), [\#9966](https://github.com/matrix-org/synapse/issues/9966))
- Improve performance of backfilling in large rooms. ([\#9935](https://github.com/matrix-org/synapse/issues/9935))
- Add a config option to allow you to prevent device display names from being shared over federation. Contributed by @aaronraimist. ([\#9945](https://github.com/matrix-org/synapse/issues/9945))
- Update support for [MSC2946](https://github.com/matrix-org/matrix-doc/pull/2946): Spaces Summary. ([\#9947](https://github.com/matrix-org/synapse/issues/9947), [\#9954](https://github.com/matrix-org/synapse/issues/9954))
Bugfixes
--------
- Fix a bug introduced in v1.32.0 where the associated connection was improperly logged for SQL logging statements. ([\#9895](https://github.com/matrix-org/synapse/issues/9895))
- Correct the type hint for the `user_may_create_room_alias` method of spam checkers. It is provided a `RoomAlias`, not a `str`. ([\#9896](https://github.com/matrix-org/synapse/issues/9896))
- Fix bug where user directory could get out of sync if room visibility and membership changed in quick succession. ([\#9910](https://github.com/matrix-org/synapse/issues/9910))
- Include the `origin_server_ts` property in the experimental [MSC2946](https://github.com/matrix-org/matrix-doc/pull/2946) support to allow clients to properly sort rooms. ([\#9928](https://github.com/matrix-org/synapse/issues/9928))
- Fix bugs introduced in v1.23.0 which made the PostgreSQL port script fail when run with a newly-created SQLite database. ([\#9930](https://github.com/matrix-org/synapse/issues/9930))
- Fix a bug introduced in Synapse 1.29.0 which caused `m.room_key_request` to-device messages sent from one user to another to be dropped. ([\#9961](https://github.com/matrix-org/synapse/issues/9961), [\#9965](https://github.com/matrix-org/synapse/issues/9965))
- Fix a bug introduced in v1.27.0 preventing users and appservices exempt from ratelimiting from creating rooms with many invitees. ([\#9968](https://github.com/matrix-org/synapse/issues/9968))
Updates to the Docker image
---------------------------
- Add `startup_delay` to docker healthcheck to reduce waiting time for coming online and update the documentation with extra options. Contributed by @Maquis196. ([\#9913](https://github.com/matrix-org/synapse/issues/9913))
Improved Documentation
----------------------
- Add `port` argument to the Postgres database sample config section. ([\#9911](https://github.com/matrix-org/synapse/issues/9911))
Deprecations and Removals
-------------------------
- Mark as deprecated `POST /_synapse/admin/v1/rooms/<room_id>/delete`. ([\#9889](https://github.com/matrix-org/synapse/issues/9889))
Internal Changes
----------------
- Reduce the length of Synapse's access tokens. ([\#5588](https://github.com/matrix-org/synapse/issues/5588))
- Export jemalloc stats to Prometheus if it is being used. ([\#9882](https://github.com/matrix-org/synapse/issues/9882))
- Add type hints to presence handler. ([\#9885](https://github.com/matrix-org/synapse/issues/9885))
- Reduce memory usage of the LRU caches. ([\#9886](https://github.com/matrix-org/synapse/issues/9886))
- Add type hints to the `synapse.handlers` module. ([\#9896](https://github.com/matrix-org/synapse/issues/9896))
- Time response time for external cache requests. ([\#9904](https://github.com/matrix-org/synapse/issues/9904))
- Minor fixes to the `make_full_schema.sh` script. ([\#9931](https://github.com/matrix-org/synapse/issues/9931))
- Move database schema files into a common directory. ([\#9932](https://github.com/matrix-org/synapse/issues/9932))
- Add debug logging for lost/delayed to-device messages. ([\#9959](https://github.com/matrix-org/synapse/issues/9959))
Synapse 1.33.2 (2021-05-11)
===========================
Due to the security issue highlighted below, server administrators are encouraged to update Synapse. We are not aware of these vulnerabilities being exploited in the wild.
Security advisory
-----------------
This release fixes a denial of service attack ([CVE-2021-29471](https://github.com/matrix-org/synapse/security/advisories/GHSA-x345-32rc-8h85)) against Synapse's push rules implementation. Server admins are encouraged to upgrade.
Internal Changes
----------------
- Unpin attrs dependency. ([\#9946](https://github.com/matrix-org/synapse/issues/9946))
Synapse 1.33.1 (2021-05-06)
===========================
Bugfixes
--------
- Fix bug where `/sync` would break if using the latest version of `attrs` dependency, by pinning to a previous version. ([\#9937](https://github.com/matrix-org/synapse/issues/9937))
Synapse 1.33.0 (2021-05-05)
===========================
Features
--------
- Build Debian packages for Ubuntu 21.04 (Hirsute Hippo). ([\#9909](https://github.com/matrix-org/synapse/issues/9909))
Synapse 1.33.0rc2 (2021-04-29)
==============================
Bugfixes
--------
- Fix tight loop when handling presence replication when using workers. Introduced in v1.33.0rc1. ([\#9900](https://github.com/matrix-org/synapse/issues/9900))
Synapse 1.33.0rc1 (2021-04-28)
==============================
Features
--------
- Update experimental support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083): restricting room access via group membership. ([\#9800](https://github.com/matrix-org/synapse/issues/9800), [\#9814](https://github.com/matrix-org/synapse/issues/9814))
- Add experimental support for handling presence on a worker. ([\#9819](https://github.com/matrix-org/synapse/issues/9819), [\#9820](https://github.com/matrix-org/synapse/issues/9820), [\#9828](https://github.com/matrix-org/synapse/issues/9828), [\#9850](https://github.com/matrix-org/synapse/issues/9850))
- Return a new template when a user attempts to renew their account multiple times with the same token, stating that their account is set to expire. This replaces the invalid token template that would previously be shown in this case. This change concerns the optional account validity feature. ([\#9832](https://github.com/matrix-org/synapse/issues/9832))
Bugfixes
--------
- Fixes the OIDC SSO flow when using a `public_baseurl` value including a non-root URL path. ([\#9726](https://github.com/matrix-org/synapse/issues/9726))
- Fix thumbnail generation for some sites with non-standard content types. Contributed by @rkfg. ([\#9788](https://github.com/matrix-org/synapse/issues/9788))
- Add some sanity checks to identity server passed to 3PID bind/unbind endpoints. ([\#9802](https://github.com/matrix-org/synapse/issues/9802))
- Limit the size of HTTP responses read over federation. ([\#9833](https://github.com/matrix-org/synapse/issues/9833))
- Fix a bug which could cause Synapse to get stuck in a loop of resyncing device lists. ([\#9867](https://github.com/matrix-org/synapse/issues/9867))
- Fix a long-standing bug where errors from federation did not propagate to the client. ([\#9868](https://github.com/matrix-org/synapse/issues/9868))
Improved Documentation
----------------------
- Add a note to the docker docs mentioning that we mirror upstream's supported Docker platforms. ([\#9801](https://github.com/matrix-org/synapse/issues/9801))
Internal Changes
----------------
- Add a dockerfile for running Synapse in worker-mode under Complement. ([\#9162](https://github.com/matrix-org/synapse/issues/9162))
- Apply `pyupgrade` across the codebase. ([\#9786](https://github.com/matrix-org/synapse/issues/9786))
- Move some replication processing out of `generic_worker`. ([\#9796](https://github.com/matrix-org/synapse/issues/9796))
- Replace `HomeServer.get_config()` with inline references. ([\#9815](https://github.com/matrix-org/synapse/issues/9815))
- Rename some handlers and config modules to not duplicate the top-level module. ([\#9816](https://github.com/matrix-org/synapse/issues/9816))
- Fix a long-standing bug which caused `max_upload_size` to not be correctly enforced. ([\#9817](https://github.com/matrix-org/synapse/issues/9817))
- Reduce CPU usage of the user directory by reusing existing calculated room membership. ([\#9821](https://github.com/matrix-org/synapse/issues/9821))
- Small speed up for joining large remote rooms. ([\#9825](https://github.com/matrix-org/synapse/issues/9825))
- Introduce flake8-bugbear to the test suite and fix some of its lint violations. ([\#9838](https://github.com/matrix-org/synapse/issues/9838))
- Only store the raw data in the in-memory caches, rather than objects that include references to e.g. the data stores. ([\#9845](https://github.com/matrix-org/synapse/issues/9845))
- Limit length of accepted email addresses. ([\#9855](https://github.com/matrix-org/synapse/issues/9855))
- Remove redundant `synapse.types.Collection` type definition. ([\#9856](https://github.com/matrix-org/synapse/issues/9856))
- Handle recently added rate limits correctly when using `--no-rate-limit` with the demo scripts. ([\#9858](https://github.com/matrix-org/synapse/issues/9858))
- Disable invite rate-limiting by default when running the unit tests. ([\#9871](https://github.com/matrix-org/synapse/issues/9871))
- Pass a reactor into `SynapseSite` to make testing easier. ([\#9874](https://github.com/matrix-org/synapse/issues/9874))
- Make `DomainSpecificString` an `attrs` class. ([\#9875](https://github.com/matrix-org/synapse/issues/9875))
- Add type hints to `synapse.api.auth` and `synapse.api.auth_blocking` modules. ([\#9876](https://github.com/matrix-org/synapse/issues/9876))
- Remove redundant `_PushHTTPChannel` test class. ([\#9878](https://github.com/matrix-org/synapse/issues/9878))
- Remove backwards-compatibility code for Python versions < 3.6. ([\#9879](https://github.com/matrix-org/synapse/issues/9879))
- Small performance improvement around handling new local presence updates. ([\#9887](https://github.com/matrix-org/synapse/issues/9887))
Synapse 1.32.2 (2021-04-22)
===========================
This release includes a fix for a regression introduced in 1.32.0.
Bugfixes
--------
- Fix a regression in Synapse 1.32.0 and 1.32.1 which caused `LoggingContext` errors in plugins. ([\#9857](https://github.com/matrix-org/synapse/issues/9857))
Synapse 1.32.1 (2021-04-21)
===========================
This release fixes [a regression](https://github.com/matrix-org/synapse/issues/9853)
in Synapse 1.32.0 that caused connected Prometheus instances to become unstable.
However, as this release is still subject to the `LoggingContext` change in 1.32.0,
it is recommended to remain on or downgrade to 1.31.0.
Bugfixes
--------
- Fix a regression in Synapse 1.32.0 which caused Synapse to report large numbers of Prometheus time series, potentially overwhelming Prometheus instances. ([\#9854](https://github.com/matrix-org/synapse/issues/9854))
Synapse 1.32.0 (2021-04-20)
===========================
**Note:** This release introduces [a regression](https://github.com/matrix-org/synapse/issues/9853)
that can overwhelm connected Prometheus instances. This issue was not present in
1.32.0rc1. If affected, it is recommended to downgrade to 1.31.0 in the meantime, and
follow [these instructions](https://github.com/matrix-org/synapse/pull/9854#issuecomment-823472183)
to clean up any excess writeahead logs.
**Note:** This release also mistakenly included a change that may affect Synapse
modules that import `synapse.logging.context.LoggingContext`, such as
[synapse-s3-storage-provider](https://github.com/matrix-org/synapse-s3-storage-provider).
This will be fixed in a later Synapse version.
**Note:** This release requires Python 3.6+ and Postgres 9.6+ or SQLite 3.22+.
This release removes the deprecated `GET /_synapse/admin/v1/users/<user_id>` admin API. Please use the [v2 API](https://github.com/matrix-org/synapse/blob/develop/docs/admin_api/user_admin_api.rst#query-user-account) instead, which has improved capabilities.
This release requires Application Services to use type `m.login.application_service` when registering users via the `/_matrix/client/r0/register` endpoint to comply with the spec. Please ensure your Application Services are up to date.
If you are using the `packages.matrix.org` Debian repository for Synapse packages,
note that we have recently updated the expiry date on the gpg signing key. If you see an
error similar to `The following signatures were invalid: EXPKEYSIG F473DD4473365DE1`, you
will need to get a fresh copy of the keys. You can do so with:
```sh
sudo wget -O /usr/share/keyrings/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
```
Bugfixes
--------
- Fix the log lines of nested logging contexts. Broke in 1.32.0rc1. ([\#9829](https://github.com/matrix-org/synapse/issues/9829))
Synapse 1.32.0rc1 (2021-04-13)
==============================
Features
--------
- Add a Synapse module for routing presence updates between users. ([\#9491](https://github.com/matrix-org/synapse/issues/9491))
- Add an admin API to manage ratelimit for a specific user. ([\#9648](https://github.com/matrix-org/synapse/issues/9648))
- Include request information in structured logging output. ([\#9654](https://github.com/matrix-org/synapse/issues/9654))
- Add `order_by` to the admin API `GET /_synapse/admin/v2/users`. Contributed by @dklimpel. ([\#9691](https://github.com/matrix-org/synapse/issues/9691))
- Replace the `room_invite_state_types` configuration setting with `room_prejoin_state`. ([\#9700](https://github.com/matrix-org/synapse/issues/9700))
- Add experimental support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083): restricting room access via group membership. ([\#9717](https://github.com/matrix-org/synapse/issues/9717), [\#9735](https://github.com/matrix-org/synapse/issues/9735))
- Update experimental support for Spaces: include `m.room.create` in the room state sent with room-invites. ([\#9710](https://github.com/matrix-org/synapse/issues/9710))
- Synapse now requires Python 3.6 or later. It also requires Postgres 9.6 or later or SQLite 3.22 or later. ([\#9766](https://github.com/matrix-org/synapse/issues/9766))
Bugfixes
--------
- Prevent `synapse_forward_extremities` and `synapse_excess_extremity_events` Prometheus metrics from initially reporting zero-values after startup. ([\#8926](https://github.com/matrix-org/synapse/issues/8926))
- Fix recently added ratelimits to correctly honour the application service `rate_limited` flag. ([\#9711](https://github.com/matrix-org/synapse/issues/9711))
- Fix longstanding bug which caused `duplicate key value violates unique constraint "remote_media_cache_thumbnails_media_origin_media_id_thumbna_key"` errors. ([\#9725](https://github.com/matrix-org/synapse/issues/9725))
- Fix bug where sharded federation senders could get stuck repeatedly querying the DB in a loop, using lots of CPU. ([\#9770](https://github.com/matrix-org/synapse/issues/9770))
- Fix duplicate logging of exceptions thrown during federation transaction processing. ([\#9780](https://github.com/matrix-org/synapse/issues/9780))
Updates to the Docker image
---------------------------
- Move opencontainers labels to the final Docker image such that users can inspect them. ([\#9765](https://github.com/matrix-org/synapse/issues/9765))
Improved Documentation
----------------------
- Make the `allowed_local_3pids` regex example in the sample config stricter. ([\#9719](https://github.com/matrix-org/synapse/issues/9719))
Deprecations and Removals
-------------------------
- Remove old admin API `GET /_synapse/admin/v1/users/<user_id>`. ([\#9401](https://github.com/matrix-org/synapse/issues/9401))
- Make `/_matrix/client/r0/register` expect a type of `m.login.application_service` when an Application Service registers a user, to align with [the relevant spec](https://spec.matrix.org/unstable/application-service-api/#server-admin-style-permissions). ([\#9548](https://github.com/matrix-org/synapse/issues/9548))
Internal Changes
----------------
- Replace deprecated `imp` module with successor `importlib`. Contributed by Cristina Muñoz. ([\#9718](https://github.com/matrix-org/synapse/issues/9718))
- Experiment with GitHub Actions for CI. ([\#9661](https://github.com/matrix-org/synapse/issues/9661))
- Introduce flake8-bugbear to the test suite and fix some of its lint violations. ([\#9682](https://github.com/matrix-org/synapse/issues/9682))
- Update `scripts-dev/complement.sh` to use a local checkout of Complement, allow running a subset of tests and have it use Synapse's Complement test blacklist. ([\#9685](https://github.com/matrix-org/synapse/issues/9685))
- Improve Jaeger tracing for `to_device` messages. ([\#9686](https://github.com/matrix-org/synapse/issues/9686))
- Add release helper script for automating part of the Synapse release process. ([\#9713](https://github.com/matrix-org/synapse/issues/9713))
- Add type hints to expiring cache. ([\#9730](https://github.com/matrix-org/synapse/issues/9730))
- Convert various testcases to `HomeserverTestCase`. ([\#9736](https://github.com/matrix-org/synapse/issues/9736))
- Start linting mypy with `no_implicit_optional`. ([\#9742](https://github.com/matrix-org/synapse/issues/9742))
- Add missing type hints to federation handler and server. ([\#9743](https://github.com/matrix-org/synapse/issues/9743))
- Check that a `ConfigError` is raised, rather than simply `Exception`, when appropriate in homeserver config file generation tests. ([\#9753](https://github.com/matrix-org/synapse/issues/9753))
- Fix incompatibility with `tox` 2.5. ([\#9769](https://github.com/matrix-org/synapse/issues/9769))
- Enable Complement tests for [MSC2946](https://github.com/matrix-org/matrix-doc/pull/2946): Spaces Summary API. ([\#9771](https://github.com/matrix-org/synapse/issues/9771))
- Use mock from the standard library instead of a separate package. ([\#9772](https://github.com/matrix-org/synapse/issues/9772))
- Update Black configuration to target Python 3.6. ([\#9781](https://github.com/matrix-org/synapse/issues/9781))
- Add option to skip unit tests when building Debian packages. ([\#9793](https://github.com/matrix-org/synapse/issues/9793))
Synapse 1.31.0 (2021-04-06)
===========================
**Note:** As announced in v1.25.0, and in line with the deprecation policy for platform dependencies, this is the last release to support Python 3.5 and PostgreSQL 9.5. Future versions of Synapse will require Python 3.6+ and PostgreSQL 9.6+, as per our [deprecation policy](docs/deprecation_policy.md).
This is also the last release that the Synapse team will be publishing packages for Debian Stretch and Ubuntu Xenial.
Improved Documentation
----------------------
- Add a document describing the deprecation policy for platform dependencies. ([\#9723](https://github.com/matrix-org/synapse/issues/9723))
Internal Changes
----------------
- Revert using `dmypy run` in lint script. ([\#9720](https://github.com/matrix-org/synapse/issues/9720))
- Pin flake8-bugbear's version. ([\#9734](https://github.com/matrix-org/synapse/issues/9734))
Synapse 1.31.0rc1 (2021-03-30)
==============================
Features
--------
- Add support to OpenID Connect login for requiring attributes on the `userinfo` response. Contributed by Hubbe King. ([\#9609](https://github.com/matrix-org/synapse/issues/9609))
- Add initial experimental support for a "space summary" API. ([\#9643](https://github.com/matrix-org/synapse/issues/9643), [\#9652](https://github.com/matrix-org/synapse/issues/9652), [\#9653](https://github.com/matrix-org/synapse/issues/9653))
- Add support for the busy presence state as described in [MSC3026](https://github.com/matrix-org/matrix-doc/pull/3026). ([\#9644](https://github.com/matrix-org/synapse/issues/9644))
- Add support for credentials for proxy authentication in the `HTTPS_PROXY` environment variable. ([\#9657](https://github.com/matrix-org/synapse/issues/9657))
Bugfixes
--------
- Fix a longstanding bug that could cause issues when editing a reply to a message. ([\#9585](https://github.com/matrix-org/synapse/issues/9585))
- Fix the `/capabilities` endpoint to return `m.change_password` as disabled if the local password database is not used for authentication. Contributed by @dklimpel. ([\#9588](https://github.com/matrix-org/synapse/issues/9588))
- Check if local passwords are enabled before setting them for the user. ([\#9636](https://github.com/matrix-org/synapse/issues/9636))
- Fix a bug where federation sending can stall due to `concurrent access` database exceptions when it falls behind. ([\#9639](https://github.com/matrix-org/synapse/issues/9639))
- Fix a bug introduced in Synapse 1.30.1 which meant the suggested `pip` incantation to install an updated `cryptography` was incorrect. ([\#9699](https://github.com/matrix-org/synapse/issues/9699))
Updates to the Docker image
---------------------------
- Speed up Docker builds and make it nicer to test against Complement while developing (install all dependencies before copying the project). ([\#9610](https://github.com/matrix-org/synapse/issues/9610))
- Include [opencontainers labels](https://github.com/opencontainers/image-spec/blob/master/annotations.md#pre-defined-annotation-keys) in the Docker image. ([\#9612](https://github.com/matrix-org/synapse/issues/9612))
Improved Documentation
----------------------
- Clarify that `register_new_matrix_user` is present also when installed via non-pip package. ([\#9074](https://github.com/matrix-org/synapse/issues/9074))
- Update source install documentation to mention platform prerequisites before the source install steps. ([\#9667](https://github.com/matrix-org/synapse/issues/9667))
- Improve worker documentation for fallback/web auth endpoints. ([\#9679](https://github.com/matrix-org/synapse/issues/9679))
- Update the sample configuration for OIDC authentication. ([\#9695](https://github.com/matrix-org/synapse/issues/9695))
Internal Changes
----------------
- Preparatory steps for removing redundant `outlier` data from `event_json.internal_metadata` column. ([\#9411](https://github.com/matrix-org/synapse/issues/9411))
- Add type hints to the caching module. ([\#9442](https://github.com/matrix-org/synapse/issues/9442))
- Introduce flake8-bugbear to the test suite and fix some of its lint violations. ([\#9499](https://github.com/matrix-org/synapse/issues/9499), [\#9659](https://github.com/matrix-org/synapse/issues/9659))
- Add additional type hints to the Homeserver object. ([\#9631](https://github.com/matrix-org/synapse/issues/9631), [\#9638](https://github.com/matrix-org/synapse/issues/9638), [\#9675](https://github.com/matrix-org/synapse/issues/9675), [\#9681](https://github.com/matrix-org/synapse/issues/9681))
- Only save remote cross-signing and device keys if they're different from the current ones. ([\#9634](https://github.com/matrix-org/synapse/issues/9634))
- Rename storage function to fix spelling and not conflict with another function's name. ([\#9637](https://github.com/matrix-org/synapse/issues/9637))
- Improve performance of federation catch up by sending the latest events in the room to the remote, rather than just the last event sent by the local server. ([\#9640](https://github.com/matrix-org/synapse/issues/9640), [\#9664](https://github.com/matrix-org/synapse/issues/9664))
- In the `federation_client` commandline client, stop automatically adding the URL prefix, so that servlets on other prefixes can be tested. ([\#9645](https://github.com/matrix-org/synapse/issues/9645))
- In the `federation_client` commandline client, handle inline `signing_key`s in `homeserver.yaml`. ([\#9647](https://github.com/matrix-org/synapse/issues/9647))
- Fixed some antipattern issues to improve code quality. ([\#9649](https://github.com/matrix-org/synapse/issues/9649))
- Add a storage method for pulling all current user presence state from the database. ([\#9650](https://github.com/matrix-org/synapse/issues/9650))
- Import `HomeServer` from the proper module. ([\#9665](https://github.com/matrix-org/synapse/issues/9665))
- Increase default join ratelimiting burst rate. ([\#9674](https://github.com/matrix-org/synapse/issues/9674))
- Add type hints to third party event rules and visibility modules. ([\#9676](https://github.com/matrix-org/synapse/issues/9676))
- Bump mypy-zope to 0.2.13 to fix "Cannot determine consistent method resolution order (MRO)" errors when running mypy a second time. ([\#9678](https://github.com/matrix-org/synapse/issues/9678))
- Use interpreter from `$PATH` via `/usr/bin/env` instead of absolute paths in various scripts. ([\#9689](https://github.com/matrix-org/synapse/issues/9689))
- Make it possible to use `dmypy`. ([\#9692](https://github.com/matrix-org/synapse/issues/9692))
- Suppress "CryptographyDeprecationWarning: int_from_bytes is deprecated". ([\#9698](https://github.com/matrix-org/synapse/issues/9698))
- Use `dmypy run` in lint script for improved performance in type-checking while developing. ([\#9701](https://github.com/matrix-org/synapse/issues/9701))
- Fix undetected mypy error when using Python 3.6. ([\#9703](https://github.com/matrix-org/synapse/issues/9703))
- Fix type-checking CI on develop. ([\#9709](https://github.com/matrix-org/synapse/issues/9709))
Synapse 1.30.1 (2021-03-26)
===========================
This release is identical to Synapse 1.30.0, with the exception of explicitly
setting a minimum version of Python's Cryptography library to ensure that users
of Synapse are protected from the recent [OpenSSL security advisories](https://mta.openssl.org/pipermail/openssl-announce/2021-March/000198.html),
especially CVE-2021-3449.
Note that Cryptography defaults to bundling its own statically linked copy of
OpenSSL, which means that you may not be protected by your operating system's
security updates.
It's also worth noting that Cryptography no longer supports Python 3.5, so
admins deploying to older environments may not be protected against this or
future vulnerabilities. Synapse will be dropping support for Python 3.5 at the
end of March.
Updates to the Docker image
---------------------------
- Ensure that the docker container has up to date versions of openssl. ([\#9697](https://github.com/matrix-org/synapse/issues/9697))
Internal Changes
----------------
- Enforce that `cryptography` dependency is up to date to ensure it has the most recent openssl patches. ([\#9697](https://github.com/matrix-org/synapse/issues/9697))
Synapse 1.30.0 (2021-03-22)
===========================
Note that this release deprecates the ability for appservices to
call `POST /_matrix/client/r0/register` without the body parameter `type`. Appservice
developers should use a `type` value of `m.login.application_service` as
per [the spec](https://matrix.org/docs/spec/application_service/r0.1.2#server-admin-style-permissions).
In future releases, calling this endpoint with an access token - but without an `m.login.application_service`
type - will fail.
No significant changes.
Synapse 1.30.0rc1 (2021-03-16)
==============================
Features
--------
- Add prometheus metrics for number of users successfully registering and logging in. ([\#9510](https://github.com/matrix-org/synapse/issues/9510), [\#9511](https://github.com/matrix-org/synapse/issues/9511), [\#9573](https://github.com/matrix-org/synapse/issues/9573))
- Add `synapse_federation_last_sent_pdu_time` and `synapse_federation_last_received_pdu_time` prometheus metrics, which monitor federation delays by reporting the timestamps of messages sent and received to a set of remote servers. ([\#9540](https://github.com/matrix-org/synapse/issues/9540))
- Add support for generating JSON Web Tokens dynamically for use as OIDC client secrets. ([\#9549](https://github.com/matrix-org/synapse/issues/9549))
- Optimise handling of incomplete room history for incoming federation. ([\#9601](https://github.com/matrix-org/synapse/issues/9601))
- Finalise support for allowing clients to pick an SSO Identity Provider ([MSC2858](https://github.com/matrix-org/matrix-doc/pull/2858)). ([\#9617](https://github.com/matrix-org/synapse/issues/9617))
- Tell spam checker modules about the SSO IdP a user registered through if one was used. ([\#9626](https://github.com/matrix-org/synapse/issues/9626))
Bugfixes
--------
- Fix long-standing bug when generating thumbnails for some images with transparency: `TypeError: cannot unpack non-iterable int object`. ([\#9473](https://github.com/matrix-org/synapse/issues/9473))
- Purge chain cover indexes for events that were purged prior to Synapse v1.29.0. ([\#9542](https://github.com/matrix-org/synapse/issues/9542), [\#9583](https://github.com/matrix-org/synapse/issues/9583))
- Fix bug where federation requests were not correctly retried on 5xx responses. ([\#9567](https://github.com/matrix-org/synapse/issues/9567))
- Fix re-activating an account via the admin API when local passwords are disabled. ([\#9587](https://github.com/matrix-org/synapse/issues/9587))
- Fix a bug introduced in Synapse 1.20 which caused incoming federation transactions to stack up, causing slow recovery from outages. ([\#9597](https://github.com/matrix-org/synapse/issues/9597))
- Fix a bug introduced in v1.28.0 where the OpenID Connect callback endpoint could error with a `MacaroonInitException`. ([\#9620](https://github.com/matrix-org/synapse/issues/9620))
- Fix Internal Server Error on `GET /_synapse/client/saml2/authn_response` request. ([\#9623](https://github.com/matrix-org/synapse/issues/9623))
Updates to the Docker image
---------------------------
- Make use of an improved malloc implementation (`jemalloc`) in the docker image. ([\#8553](https://github.com/matrix-org/synapse/issues/8553))
Improved Documentation
----------------------
- Add relayd entry to reverse proxy example configurations. ([\#9508](https://github.com/matrix-org/synapse/issues/9508))
- Improve the SAML2 upgrade notes for 1.27.0. ([\#9550](https://github.com/matrix-org/synapse/issues/9550))
- Link to the "List user's media" admin API from the media admin API docs. ([\#9571](https://github.com/matrix-org/synapse/issues/9571))
- Clarify the spam checker modules documentation example to mention that `parse_config` is a required method. ([\#9580](https://github.com/matrix-org/synapse/issues/9580))
- Clarify the sample configuration for `stats` settings. ([\#9604](https://github.com/matrix-org/synapse/issues/9604))
Deprecations and Removals
-------------------------
- The `synapse_federation_last_sent_pdu_age` and `synapse_federation_last_received_pdu_age` prometheus metrics have been removed. They are replaced by `synapse_federation_last_sent_pdu_time` and `synapse_federation_last_received_pdu_time`. ([\#9540](https://github.com/matrix-org/synapse/issues/9540))
- Registering an Application Service user without using the `m.login.application_service` login type will be unsupported in an upcoming Synapse release. ([\#9559](https://github.com/matrix-org/synapse/issues/9559))
Internal Changes
----------------
- Add tests to ResponseCache. ([\#9458](https://github.com/matrix-org/synapse/issues/9458))
- Add type hints to purge room and server notice admin API. ([\#9520](https://github.com/matrix-org/synapse/issues/9520))
- Add extra logging to ObservableDeferred when callbacks throw exceptions. ([\#9523](https://github.com/matrix-org/synapse/issues/9523))
- Fix incorrect type hints. ([\#9528](https://github.com/matrix-org/synapse/issues/9528), [\#9543](https://github.com/matrix-org/synapse/issues/9543), [\#9591](https://github.com/matrix-org/synapse/issues/9591), [\#9608](https://github.com/matrix-org/synapse/issues/9608), [\#9618](https://github.com/matrix-org/synapse/issues/9618))
- Add an additional test for purging a room. ([\#9541](https://github.com/matrix-org/synapse/issues/9541))
- Add a `.git-blame-ignore-revs` file with the hashes of auto-formatting. ([\#9560](https://github.com/matrix-org/synapse/issues/9560))
- Increase the threshold before which outbound federation to a server goes into "catch up" mode, which is expensive for the remote server to handle. ([\#9561](https://github.com/matrix-org/synapse/issues/9561))
- Fix spurious errors reported by the `config-lint.sh` script. ([\#9562](https://github.com/matrix-org/synapse/issues/9562))
- Fix type hints and tests for BlacklistingAgentWrapper and BlacklistingReactorWrapper. ([\#9563](https://github.com/matrix-org/synapse/issues/9563))
- Do not have mypy ignore type hints from unpaddedbase64. ([\#9568](https://github.com/matrix-org/synapse/issues/9568))
- Improve efficiency of calculating the auth chain in large rooms. ([\#9576](https://github.com/matrix-org/synapse/issues/9576))
- Convert `synapse.types.Requester` to an `attrs` class. ([\#9586](https://github.com/matrix-org/synapse/issues/9586))
- Add logging for redis connection setup. ([\#9590](https://github.com/matrix-org/synapse/issues/9590))
- Improve logging when processing incoming transactions. ([\#9596](https://github.com/matrix-org/synapse/issues/9596))
- Remove unused `stats.retention` setting, and emit a warning if stats are disabled. ([\#9604](https://github.com/matrix-org/synapse/issues/9604))
- Prevent attempting to bundle aggregations for state events in /context APIs. ([\#9619](https://github.com/matrix-org/synapse/issues/9619))
Synapse 1.29.0 (2021-03-08)
===========================

View File

@@ -6,7 +6,7 @@ There are 3 steps to follow under **Installation Instructions**.
- [Choosing your server name](#choosing-your-server-name)
- [Installing Synapse](#installing-synapse)
- [Installing from source](#installing-from-source)
- [Platform-Specific Instructions](#platform-specific-instructions)
- [Platform-specific prerequisites](#platform-specific-prerequisites)
- [Debian/Ubuntu/Raspbian](#debianubunturaspbian)
- [ArchLinux](#archlinux)
- [CentOS/Fedora](#centosfedora)
@@ -38,6 +38,7 @@ There are 3 steps to follow under **Installation Instructions**.
- [URL previews](#url-previews)
- [Troubleshooting Installation](#troubleshooting-installation)
## Choosing your server name
It is important to choose the name for your server before you install Synapse,
@@ -60,17 +61,14 @@ that your email address is probably `user@example.com` rather than
(Prebuilt packages are available for some platforms - see [Prebuilt packages](#prebuilt-packages).)
When installing from source please make sure that the [Platform-specific prerequisites](#platform-specific-prerequisites) are already installed.
System requirements:
- POSIX-compliant system (tested on Linux & OS X)
- Python 3.5.2 or later, up to Python 3.9.
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
Synapse is written in Python but some of the libraries it uses are written in
C. So before we can install Synapse itself we need a working C compiler and the
header files for Python C extensions. See [Platform-Specific
Instructions](#platform-specific-instructions) for information on installing
these on various platforms.
To install the Synapse homeserver run:
@@ -128,7 +126,11 @@ source env/bin/activate
synctl start
```
#### Platform-Specific Instructions
#### Platform-specific prerequisites
Synapse is written in Python but some of the libraries it uses are written in
C. So before we can install Synapse itself we need a working C compiler and the
header files for Python C extensions.
##### Debian/Ubuntu/Raspbian
@@ -526,14 +528,24 @@ email will be disabled.
The easiest way to create a new user is to do so from a client like [Element](https://element.io/).
Alternatively you can do so from the command line if you have installed via pip.
Alternatively, you can do so from the command line. This can be done as follows:
This can be done as follows:
1. If synapse was installed via pip, activate the virtualenv as follows (if Synapse was
installed via a prebuilt package, `register_new_matrix_user` should already be
on the search path):
```sh
cd ~/synapse
source env/bin/activate
synctl start # if not already running
```
2. Run the following command:
```sh
register_new_matrix_user -c homeserver.yaml http://localhost:8008
```
```sh
$ source ~/synapse/env/bin/activate
$ synctl start # if not already running
$ register_new_matrix_user -c homeserver.yaml http://localhost:8008
This will prompt you to add details for the new user, and will then connect to
the running Synapse to create the new user. For example:
```
New user localpart: erikj
Password:
Confirm password:

View File

@@ -20,9 +20,10 @@ recursive-include scripts *
recursive-include scripts-dev *
recursive-include synapse *.pyi
recursive-include tests *.py
include tests/http/ca.crt
include tests/http/ca.key
include tests/http/server.key
recursive-include tests *.pem
recursive-include tests *.p8
recursive-include tests *.crt
recursive-include tests *.key
recursive-include synapse/res *
recursive-include synapse/static *.css

View File

@@ -183,8 +183,9 @@ Using a reverse proxy with Synapse
It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_,
`Caddy <https://caddyserver.com/docs/quick-starts/reverse-proxy>`_ or
`HAProxy <https://www.haproxy.org/>`_ in front of Synapse. One advantage of
`Caddy <https://caddyserver.com/docs/quick-starts/reverse-proxy>`_,
`HAProxy <https://www.haproxy.org/>`_ or
`relayd <https://man.openbsd.org/relayd.8>`_ in front of Synapse. One advantage of
doing so is that it means that you can expose the default https port (443) to
Matrix clients without needing to run Synapse with root privileges.
@@ -313,6 +314,15 @@ Testing with SyTest is recommended for verifying that changes related to the
Client-Server API are functioning correctly. See the `installation instructions
<https://github.com/matrix-org/sytest#installing>`_ for details.
Platform dependencies
=====================
Synapse uses a number of platform dependencies such as Python and PostgreSQL,
and aims to follow supported upstream versions. See the
`<docs/deprecation_policy.md>`_ document for more details.
Troubleshooting
===============
@@ -383,12 +393,17 @@ massive excess of outgoing federation requests (see `discussion
indicate that your server is also issuing far more outgoing federation
requests than can be accounted for by your users' activity, this is a
likely cause. The misbehavior can be worked around by setting
``use_presence: false`` in the Synapse config file.
the following in the Synapse config file:
.. code-block:: yaml
presence:
enabled: false
People can't accept room invitations from me
--------------------------------------------
The typical failure mode here is that you send an invitation to someone
to join a room or direct chat, but when they go to accept it, they get an
error (typically along the lines of "Invalid signature"). They might see
something like the following in their logs::

View File

@@ -85,6 +85,101 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.34.0
====================
`room_invite_state_types` configuration setting
-----------------------------------------------
The ``room_invite_state_types`` configuration setting has been deprecated and
replaced with ``room_prejoin_state``. See the `sample configuration file <https://github.com/matrix-org/synapse/blob/v1.34.0/docs/sample_config.yaml#L1515>`_.
If you have set ``room_invite_state_types`` to the default value you should simply
remove it from your configuration file. The default value used to be:
.. code:: yaml
room_invite_state_types:
- "m.room.join_rules"
- "m.room.canonical_alias"
- "m.room.avatar"
- "m.room.encryption"
- "m.room.name"
If you have customised this value by adding additional state types, you should
remove ``room_invite_state_types`` and configure ``additional_event_types`` with
your customisations.
If you have customised this value by removing state types, you should rename
``room_invite_state_types`` to ``additional_event_types``, and set
``disable_default_event_types`` to ``true``.
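For the second case, the migration could look like the following sketch (the
exact event types are illustrative; use whatever remained of your old list):

.. code:: yaml

   # Before:
   # room_invite_state_types:
   #   - "m.room.join_rules"
   #   - "m.room.canonical_alias"
   #   - "m.room.encryption"

   # After:
   room_prejoin_state:
      disable_default_event_types: true
      additional_event_types:
        - "m.room.join_rules"
        - "m.room.canonical_alias"
        - "m.room.encryption"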
Upgrading to v1.33.0
====================
Account Validity HTML templates can now display a user's expiration date
------------------------------------------------------------------------
This may affect you if you have enabled the account validity feature, and have made use of a
custom HTML template specified by the ``account_validity.template_dir`` or ``account_validity.account_renewed_html_path``
Synapse config options.
The template can now accept an ``expiration_ts`` variable, which represents the unix timestamp in milliseconds of
the future date until which their account has been renewed. See the
`default template <https://github.com/matrix-org/synapse/blob/release-v1.33.0/synapse/res/templates/account_renewed.html>`_
for an example of usage.
Also note that a new HTML template, ``account_previously_renewed.html``, has been added. This is shown to users
when they attempt to renew their account with a valid renewal token that has already been used before. The default
template contents can be found
`here <https://github.com/matrix-org/synapse/blob/release-v1.33.0/synapse/res/templates/account_previously_renewed.html>`_,
and can also accept an ``expiration_ts`` variable. This template replaces the error message users would previously see
upon attempting to use a valid renewal token more than once.
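For instance, a custom template could surface the value like this (a minimal
sketch, not the default template's markup; ``expiration_ts`` is the only
variable assumed):

.. code:: html

   <p>Your account has been renewed, and is now valid until
   {{ expiration_ts }} (a unix timestamp in milliseconds).</p>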
Upgrading to v1.32.0
====================
Regression causing connected Prometheus instances to become overwhelmed
-----------------------------------------------------------------------
This release introduces `a regression <https://github.com/matrix-org/synapse/issues/9853>`_
that can overwhelm connected Prometheus instances. This issue is not present in
Synapse v1.32.0rc1.
If you have been affected, please downgrade to 1.31.0. You then may need to
remove excess writeahead logs in order for Prometheus to recover. Instructions
for doing so are provided
`here <https://github.com/matrix-org/synapse/pull/9854#issuecomment-823472183>`_.
Dropping support for old Python, Postgres and SQLite versions
-------------------------------------------------------------
In line with our `deprecation policy <https://github.com/matrix-org/synapse/blob/release-v1.32.0/docs/deprecation_policy.md>`_,
we've dropped support for Python 3.5 and PostgreSQL 9.5, as they are no longer supported upstream.
This release of Synapse requires Python 3.6+ and PostgreSQL 9.6+ or SQLite 3.22+.
Removal of old List Accounts Admin API
--------------------------------------
The deprecated v1 "list accounts" admin API (``GET /_synapse/admin/v1/users/<user_id>``) has been removed in this version.
The `v2 list accounts API <https://github.com/matrix-org/synapse/blob/master/docs/admin_api/user_admin_api.rst#list-accounts>`_
has been available since Synapse 1.7.0 (2019-12-13), and is accessible under ``GET /_synapse/admin/v2/users``.
The deprecation of the old endpoint was announced with Synapse 1.28.0 (released on 2021-02-25).
Application Services must use type ``m.login.application_service`` when registering users
-----------------------------------------------------------------------------------------
In compliance with the
`Application Service spec <https://matrix.org/docs/spec/application_service/r0.1.2#server-admin-style-permissions>`_,
Application Services are now required to use the ``m.login.application_service`` type when registering users via the
``/_matrix/client/r0/register`` endpoint. This behaviour was deprecated in Synapse v1.30.0.
Please ensure your Application Services are up to date.
Upgrading to v1.29.0
====================
@@ -98,9 +193,12 @@ will log a warning on each received request.
To avoid the warning, administrators using a reverse proxy should ensure that
the reverse proxy sets `X-Forwarded-Proto` header to `https` or `http` to
indicate the protocol used by the client. See the `reverse proxy documentation
<docs/reverse_proxy.md>`_, where the example configurations have been updated to
show how to set this header.
indicate the protocol used by the client.
Synapse also requires the `Host` header to be preserved.
See the `reverse proxy documentation <docs/reverse_proxy.md>`_, where the
example configurations have been updated to show how to set these headers.
(Users of `Caddy <https://caddyserver.com/>`_ are unaffected, since we believe it
sets `X-Forwarded-Proto` by default.)
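For instance, with nginx the relevant directives might look like the following
minimal sketch (see the reverse proxy documentation for complete, supported
examples):

.. code-block:: nginx

   location /_matrix {
       proxy_pass http://localhost:8008;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_set_header Host $host;
   }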
@@ -124,6 +222,13 @@ This version changes the URI used for callbacks from OAuth2 and SAML2 identity p
need to add ``[synapse public baseurl]/_synapse/client/saml2/authn_response`` as a permitted
"ACS location" (also known as "allowed callback URLs") at the identity provider.
The "Issuer" in the "AuthnRequest" to the SAML2 identity provider is also updated to
``[synapse public baseurl]/_synapse/client/saml2/metadata.xml``. If your SAML2 identity
provider uses this property to validate or otherwise identify Synapse, its configuration
will need to be updated to use the new URL. Alternatively you could create a new, separate
"EntityDescriptor" in your SAML2 identity provider with the new URLs and leave the URLs in
the existing "EntityDescriptor" as they were.
Changes to HTML templates
-------------------------

1
changelog.d/xxxx.misc Normal file
View File

@@ -0,0 +1 @@
Ensure python uses `malloc` when running Synapse in Docker.

View File

@@ -24,6 +24,7 @@ import sys
import time
import urllib
from http import TwistedHttpClient
from typing import Optional
import nacl.encoding
import nacl.signing
@@ -718,7 +719,7 @@ class SynapseCmd(cmd.Cmd):
method,
path,
data=None,
query_params={"access_token": None},
query_params: Optional[dict] = None,
alt_text=None,
):
"""Runs an HTTP request and pretty prints the output.
@@ -729,6 +730,8 @@ class SynapseCmd(cmd.Cmd):
data: Raw JSON data if any
query_params: dict of query parameters to add to the url
"""
query_params = query_params or {"access_token": None}
url = self._url() + path
if "access_token" in query_params:
query_params["access_token"] = self._tok()
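The refactor above avoids Python's mutable-default-argument pitfall: a default such as `{"access_token": None}` is created once, when the function is defined, and shared across every call that relies on the default. A minimal illustration (not code from this repository):

```python
def append_bad(item, items=[]):
    # The default list is created once at definition time and shared.
    items.append(item)
    return items


def append_good(item, items=None):
    # Create a fresh list on each call instead.
    items = items or []
    items.append(item)
    return items


print(append_bad(1))   # [1]
print(append_bad(2))   # [1, 2] - state leaked from the previous call
print(append_good(1))  # [1]
print(append_good(2))  # [2]
```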

View File

@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -16,6 +15,7 @@
import json
import urllib
from pprint import pformat
from typing import Optional
from twisted.internet import defer, reactor
from twisted.web.client import Agent, readBody
@@ -85,8 +85,9 @@ class TwistedHttpClient(HttpClient):
body = yield readBody(response)
defer.returnValue(json.loads(body))
def _create_put_request(self, url, json_data, headers_dict={}):
def _create_put_request(self, url, json_data, headers_dict: Optional[dict] = None):
"""Wrapper of _create_request to issue a PUT request"""
headers_dict = headers_dict or {}
if "Content-Type" not in headers_dict:
raise defer.error(RuntimeError("Must include Content-Type header for PUTs"))
@@ -95,14 +96,22 @@ class TwistedHttpClient(HttpClient):
"PUT", url, producer=_JsonProducer(json_data), headers_dict=headers_dict
)
def _create_get_request(self, url, headers_dict={}):
def _create_get_request(self, url, headers_dict: Optional[dict] = None):
"""Wrapper of _create_request to issue a GET request"""
return self._create_request("GET", url, headers_dict=headers_dict)
return self._create_request("GET", url, headers_dict=headers_dict or {})
@defer.inlineCallbacks
def do_request(
self, method, url, data=None, qparams=None, jsonreq=True, headers={}
self,
method,
url,
data=None,
qparams=None,
jsonreq=True,
headers: Optional[dict] = None,
):
headers = headers or {}
if qparams:
url = "%s?%s" % (url, urllib.urlencode(qparams, True))
@@ -123,8 +132,12 @@ class TwistedHttpClient(HttpClient):
defer.returnValue(json.loads(body))
@defer.inlineCallbacks
def _create_request(self, method, url, producer=None, headers_dict={}):
def _create_request(
self, method, url, producer=None, headers_dict: Optional[dict] = None
):
"""Creates and sends a request to the given url"""
headers_dict = headers_dict or {}
headers_dict["User-Agent"] = ["Synapse Cmd Client"]
retries_left = 5

View File

@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# this script will use the api:
# https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.rst

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
DOMAIN=yourserver.tld
# add this user as admin in your home server:

View File

@@ -50,15 +50,24 @@ PACKAGE_BUILD_DIR="debian/matrix-synapse-py3"
VIRTUALENV_DIR="${PACKAGE_BUILD_DIR}${DH_VIRTUALENV_INSTALL_ROOT}/matrix-synapse"
TARGET_PYTHON="${VIRTUALENV_DIR}/bin/python"
# we copy the tests to a temporary directory so that we can put them on the
# PYTHONPATH without putting the uninstalled synapse on the pythonpath.
tmpdir=`mktemp -d`
trap "rm -r $tmpdir" EXIT
case "$DEB_BUILD_OPTIONS" in
*nocheck*)
# Skip running tests if "nocheck" present in $DEB_BUILD_OPTIONS
;;
cp -r tests "$tmpdir"
*)
# Copy tests to a temporary directory so that we can put them on the
# PYTHONPATH without putting the uninstalled synapse on the pythonpath.
tmpdir=`mktemp -d`
trap "rm -r $tmpdir" EXIT
PYTHONPATH="$tmpdir" \
"${TARGET_PYTHON}" -m twisted.trial --reporter=text -j2 tests
cp -r tests "$tmpdir"
PYTHONPATH="$tmpdir" \
"${TARGET_PYTHON}" -m twisted.trial --reporter=text -j2 tests
;;
esac
# build the config file
"${TARGET_PYTHON}" "${VIRTUALENV_DIR}/bin/generate_config" \

58
debian/changelog vendored
View File

@@ -1,3 +1,61 @@
matrix-synapse-py3 (1.33.2) stable; urgency=medium
* New synapse release 1.33.2.
-- Synapse Packaging team <packages@matrix.org> Tue, 11 May 2021 11:17:59 +0100
matrix-synapse-py3 (1.33.1) stable; urgency=medium
* New synapse release 1.33.1.
-- Synapse Packaging team <packages@matrix.org> Thu, 06 May 2021 14:06:33 +0100
matrix-synapse-py3 (1.33.0) stable; urgency=medium
* New synapse release 1.33.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 05 May 2021 14:15:27 +0100
matrix-synapse-py3 (1.32.2) stable; urgency=medium
* New synapse release 1.32.2.
-- Synapse Packaging team <packages@matrix.org> Wed, 22 Apr 2021 12:43:52 +0100
matrix-synapse-py3 (1.32.1) stable; urgency=medium
* New synapse release 1.32.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 21 Apr 2021 14:00:55 +0100
matrix-synapse-py3 (1.32.0) stable; urgency=medium
[ Dan Callahan ]
* Skip tests when DEB_BUILD_OPTIONS contains "nocheck".
[ Synapse Packaging team ]
* New synapse release 1.32.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 20 Apr 2021 14:28:39 +0100
matrix-synapse-py3 (1.31.0) stable; urgency=medium
* New synapse release 1.31.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 06 Apr 2021 13:08:29 +0100
matrix-synapse-py3 (1.30.1) stable; urgency=medium
* New synapse release 1.30.1.
-- Synapse Packaging team <packages@matrix.org> Fri, 26 Mar 2021 12:01:28 +0000
matrix-synapse-py3 (1.30.0) stable; urgency=medium
* New synapse release 1.30.0.
-- Synapse Packaging team <packages@matrix.org> Mon, 22 Mar 2021 13:15:34 +0000
matrix-synapse-py3 (1.29.0) stable; urgency=medium
[ Jonathan de Jong ]

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -e

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
DIR="$( cd "$( dirname "$0" )" && pwd )"
@@ -96,18 +96,48 @@ for port in 8080 8081 8082; do
# Check script parameters
if [ $# -eq 1 ]; then
if [ $1 = "--no-rate-limit" ]; then
# messages rate limit
echo 'rc_messages_per_second: 1000' >> $DIR/etc/$port.config
echo 'rc_message_burst_count: 1000' >> $DIR/etc/$port.config
# registration rate limit
printf 'rc_registration:\n per_second: 1000\n burst_count: 1000\n' >> $DIR/etc/$port.config
# login rate limit
echo 'rc_login:' >> $DIR/etc/$port.config
printf ' address:\n per_second: 1000\n burst_count: 1000\n' >> $DIR/etc/$port.config
printf ' account:\n per_second: 1000\n burst_count: 1000\n' >> $DIR/etc/$port.config
printf ' failed_attempts:\n per_second: 1000\n burst_count: 1000\n' >> $DIR/etc/$port.config
# Disable any rate limiting
ratelimiting=$(cat <<-RC
rc_message:
per_second: 1000
burst_count: 1000
rc_registration:
per_second: 1000
burst_count: 1000
rc_login:
address:
per_second: 1000
burst_count: 1000
account:
per_second: 1000
burst_count: 1000
failed_attempts:
per_second: 1000
burst_count: 1000
rc_admin_redaction:
per_second: 1000
burst_count: 1000
rc_joins:
local:
per_second: 1000
burst_count: 1000
remote:
per_second: 1000
burst_count: 1000
rc_3pid_validation:
per_second: 1000
burst_count: 1000
rc_invites:
per_room:
per_second: 1000
burst_count: 1000
per_user:
per_second: 1000
burst_count: 1000
RC
)
echo "${ratelimiting}" >> $DIR/etc/$port.config
fi
fi

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
DIR="$( cd "$( dirname "$0" )" && pwd )"

View File

@@ -28,33 +28,32 @@ RUN apt-get update && apt-get install -y \
libwebp-dev \
libxml++2.6-dev \
libxslt1-dev \
openssl \
rustc \
zlib1g-dev \
&& rm -rf /var/lib/apt/lists/*
&& rm -rf /var/lib/apt/lists/*
# Build dependencies that are not available as wheels, to speed up rebuilds
RUN pip install --prefix="/install" --no-warn-script-location \
cryptography \
frozendict \
jaeger-client \
opentracing \
# Match the version constraints of Synapse
"prometheus_client>=0.4.0" \
psycopg2 \
pycparser \
pyrsistent \
pyyaml \
simplejson \
threadloop \
thrift
# now install synapse and all of the python deps to /install.
COPY synapse /synapse/synapse/
# Copy just what we need to pip install
COPY scripts /synapse/scripts/
COPY MANIFEST.in README.rst setup.py synctl /synapse/
COPY synapse/__init__.py /synapse/synapse/__init__.py
COPY synapse/python_dependencies.py /synapse/synapse/python_dependencies.py
# To speed up rebuilds, install all of the dependencies before we copy over
# the whole synapse project, so that this layer in the Docker cache can be
# used while you develop on the source
#
# This is aiming at installing the `install_requires` and `extras_require` from `setup.py`
RUN pip install --prefix="/install" --no-warn-script-location \
/synapse[all]
/synapse[all]
# Copy over the rest of the project
COPY synapse /synapse/synapse/
# Install the synapse package itself and all of its children packages.
#
# This is aiming at installing only the `packages=find_packages(...)` from `setup.py`
RUN pip install --prefix="/install" --no-deps --no-warn-script-location /synapse
###
### Stage 1: runtime
@@ -62,6 +61,11 @@ RUN pip install --prefix="/install" --no-warn-script-location \
FROM docker.io/python:${PYTHON_VERSION}-slim
LABEL org.opencontainers.image.url='https://matrix.org/docs/projects/server/synapse'
LABEL org.opencontainers.image.documentation='https://github.com/matrix-org/synapse/blob/master/docker/README.md'
LABEL org.opencontainers.image.source='https://github.com/matrix-org/synapse.git'
LABEL org.opencontainers.image.licenses='Apache-2.0'
RUN apt-get update && apt-get install -y \
curl \
gosu \
@@ -69,7 +73,10 @@ RUN apt-get update && apt-get install -y \
libpq5 \
libwebp6 \
xmlsec1 \
&& rm -rf /var/lib/apt/lists/*
libjemalloc2 \
libssl-dev \
openssl \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /install /usr/local
COPY ./docker/start.py /start.py
@@ -81,5 +88,5 @@ EXPOSE 8008/tcp 8009/tcp 8448/tcp
ENTRYPOINT ["/start.py"]
HEALTHCHECK --interval=1m --timeout=5s \
CMD curl -fSs http://localhost:8008/health || exit 1
HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \
CMD curl -fSs http://localhost:8008/health || exit 1

23
docker/Dockerfile-workers Normal file
View File

@@ -0,0 +1,23 @@
# Inherit from the official Synapse docker image
FROM matrixdotorg/synapse
# Install deps
RUN apt-get update
RUN apt-get install -y supervisor redis nginx
# Remove the default nginx sites
RUN rm /etc/nginx/sites-enabled/default
# Copy Synapse worker, nginx and supervisord configuration template files
COPY ./docker/conf-workers/* /conf/
# Expose nginx listener port
EXPOSE 8080/tcp
# Volume for user-editable config files, logs etc.
VOLUME ["/data"]
# A script to read environment variables and create the necessary
# files to run the desired worker configuration. Will start supervisord.
COPY ./docker/configure_workers_and_start.py /configure_workers_and_start.py
ENTRYPOINT ["/configure_workers_and_start.py"]

140
docker/README-testing.md Normal file
View File

@@ -0,0 +1,140 @@
# Running tests against a dockerised Synapse
It's possible to run integration tests against Synapse
using [Complement](https://github.com/matrix-org/complement). Complement is a Matrix Spec
compliance test suite for homeservers, and supports any homeserver docker image configured
to listen on ports 8008/8448. This document contains instructions for building Synapse
docker images that can be run inside Complement for testing purposes.
Note that running Synapse's unit tests from within the docker image is not supported.
## Testing with SQLite and single-process Synapse
> Note that `scripts-dev/complement.sh` is a script that will automatically build
> and run an SQLite-based, single-process Synapse against Complement.
The instructions below will set up Complement testing for a single-process,
SQLite-based Synapse deployment.
Start by building the base Synapse docker image. If you wish to run tests with the latest
release of Synapse, instead of your current checkout, you can skip this step. From the
root of the repository:
```sh
docker build -t matrixdotorg/synapse -f docker/Dockerfile .
```
This will build an image with the tag `matrixdotorg/synapse`.
Next, build the Synapse image for Complement. You will need a local checkout
of Complement. Change to the root of your Complement checkout and run:
```sh
docker build -t complement-synapse -f "dockerfiles/Synapse.Dockerfile" dockerfiles
```
This will build an image with the tag `complement-synapse`, which can be handed to
Complement for testing via the `COMPLEMENT_BASE_IMAGE` environment variable. Refer to
[Complement's documentation](https://github.com/matrix-org/complement/#running) for
how to run the tests, as well as the various available command line flags.
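For reference, invoking the tests from the Complement checkout typically looks
something like the sketch below; treat the exact flags as an assumption and
defer to Complement's documentation:

```sh
COMPLEMENT_BASE_IMAGE=complement-synapse go test -v ./tests/...
```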
## Testing with PostgreSQL and single or multi-process Synapse
The above docker image only supports running Synapse with SQLite and in a
single-process topology. The following instructions are used to build a Synapse image for
Complement that supports either single or multi-process topology with a PostgreSQL
database backend.
As with the single-process image, build the base Synapse docker image. If you wish to run
tests with the latest release of Synapse, instead of your current checkout, you can skip
this step. From the root of the repository:
```sh
docker build -t matrixdotorg/synapse -f docker/Dockerfile .
```
This will build an image with the tag `matrixdotorg/synapse`.
Next, we build a new image with worker support based on `matrixdotorg/synapse:latest`.
Again, from the root of the repository:
```sh
docker build -t matrixdotorg/synapse-workers -f docker/Dockerfile-workers .
```
This will build an image with the tag `matrixdotorg/synapse-workers`.
It's worth noting at this point that this image is fully functional, and
can be used for testing locally. See instructions for using the container
under
[Running the Dockerfile-worker image standalone](#running-the-dockerfile-worker-image-standalone)
below.
Finally, build the Synapse image for Complement, which is based on
`matrixdotorg/synapse-workers`. You will need a local checkout of Complement. Change to
the root of your Complement checkout and run:
```sh
docker build -t matrixdotorg/complement-synapse-workers -f dockerfiles/SynapseWorkers.Dockerfile dockerfiles
```
This will build an image with the tag `matrixdotorg/complement-synapse-workers`, which can be handed to
Complement for testing via the `COMPLEMENT_BASE_IMAGE` environment variable. Refer to
[Complement's documentation](https://github.com/matrix-org/complement/#running) for
how to run the tests, as well as the various available command line flags.
## Running the Dockerfile-worker image standalone
For manual testing of a multi-process Synapse instance in Docker,
[Dockerfile-workers](Dockerfile-workers) is a Dockerfile that will produce an image
bundling all necessary components together for a workerised homeserver instance.
This includes any desired Synapse worker processes, an nginx to route traffic accordingly,
a redis for worker communication and a supervisord instance to start up and monitor all
processes. You will need to provide your own postgres container to connect to, and TLS
is not handled by the container.
Once you've built the image using the above instructions, you can run it. Be sure
you've set up a volume according to the [usual Synapse docker instructions](README.md).
Then run something along the lines of:
```
docker run -d --name synapse \
--mount type=volume,src=synapse-data,dst=/data \
-p 8008:8008 \
-e SYNAPSE_SERVER_NAME=my.matrix.host \
-e SYNAPSE_REPORT_STATS=no \
-e POSTGRES_HOST=postgres \
-e POSTGRES_USER=postgres \
-e POSTGRES_PASSWORD=somesecret \
-e SYNAPSE_WORKER_TYPES=synchrotron,media_repository,user_dir \
-e SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK=1 \
matrixdotorg/synapse-workers
```
...substituting `POSTGRES*` variables for those that match a postgres host you have
available (usually a running postgres docker container).
The `SYNAPSE_WORKER_TYPES` environment variable is a comma-separated list of workers to
use when running the container. All possible worker names are defined by the keys of the
`WORKERS_CONFIG` variable in [this script](configure_workers_and_start.py), which the
Dockerfile makes use of to generate appropriate worker, nginx and supervisord config
files.
Sharding is supported for a subset of workers, in line with the
[worker documentation](../docs/workers.md). To run multiple instances of a given worker
type, simply specify the type multiple times in `SYNAPSE_WORKER_TYPES`
(e.g. `SYNAPSE_WORKER_TYPES=event_creator,event_creator...`).
Otherwise, `SYNAPSE_WORKER_TYPES` can either be left empty or unset to spawn no workers
(leaving only the main process). The container is configured to use redis-based worker
mode.
Logs for workers and the main process are logged to stdout and can be viewed with
standard `docker logs` tooling. Worker logs contain their worker name
after the timestamp.
Setting `SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK=1` will cause worker logs to be written to
`<data_dir>/logs/<worker_name>.log`. Logs are kept for 1 week and rotate every day at 00:00,
according to the container's clock. Logging for the main process must still be
configured by modifying the homeserver's log config in your Synapse data volume.

View File

@@ -2,13 +2,16 @@
This Docker image will run Synapse as a single process. By default it uses a
sqlite database; for production use you should connect it to a separate
postgres database.
postgres database. The image also does *not* provide a TURN server.
The image also does *not* provide a TURN server.
This image should work on all platforms that are supported by Docker upstream.
Note that Docker's WS1-backend Linux Containers on Windows
platform is [experimental](https://github.com/docker/for-win/issues/6470) and
is not supported by this image.
## Volumes
By default, the image expects a single volume, located at ``/data``, that will hold:
By default, the image expects a single volume, located at `/data`, that will hold:
* configuration files;
* uploaded media and thumbnails;
@@ -16,11 +19,11 @@ By default, the image expects a single volume, located at ``/data``, that will h
* the appservices configuration.
You are free to use separate volumes depending on storage endpoints at your
disposal. For instance, ``/data/media`` could be stored on a large but low
disposal. For instance, `/data/media` could be stored on a large but low
performance hdd storage while other files could be stored on high performance
endpoints.
In order to setup an application service, simply create an ``appservices``
In order to setup an application service, simply create an `appservices`
directory in the data volume and write the application service Yaml
configuration file there. Multiple application services are supported.
@@ -53,6 +56,8 @@ The following environment variables are supported in `generate` mode:
* `SYNAPSE_SERVER_NAME` (mandatory): the server public hostname.
* `SYNAPSE_REPORT_STATS` (mandatory, `yes` or `no`): whether to enable
anonymous statistics reporting.
* `SYNAPSE_HTTP_PORT`: the port Synapse should listen on for http traffic.
Defaults to `8008`.
* `SYNAPSE_CONFIG_DIR`: where additional config files (such as the log config
and event signing key) will be stored. Defaults to `/data`.
* `SYNAPSE_CONFIG_PATH`: path to the file to be generated. Defaults to
@@ -73,6 +78,8 @@ docker run -d --name synapse \
matrixdotorg/synapse:latest
```
(assuming 8008 is the port Synapse is configured to listen on for http traffic.)
You can then check that it has started correctly with:
```
@@ -184,6 +191,16 @@ whilst running the above `docker run` commands.
```
--no-healthcheck
```
## Disabling the healthcheck in docker-compose file
If you wish to disable the healthcheck via docker-compose, append the following to your service configuration.
```
healthcheck:
disable: true
```
## Setting custom healthcheck on docker run
If you wish to point the healthcheck at a different port with docker command, add the following
@@ -195,12 +212,18 @@ If you wish to point the healthcheck at a different port with docker command, ad
## Setting the healthcheck in docker-compose file
You can add the following to set a custom healthcheck in a docker compose file.
You will need version >2.1 for this to work.
You will need docker-compose version >2.1 for this to work.
```
healthcheck:
test: ["CMD", "curl", "-fSs", "http://localhost:8008/health"]
interval: 1m
timeout: 10s
interval: 15s
timeout: 5s
retries: 3
start_period: 5s
```
## Using jemalloc
Jemalloc is embedded in the image and will be used instead of the default allocator.
For more information about jemalloc, see the Synapse [README](../README.md).

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# The script to build the Debian package, as ran inside the Docker image.

View File

@@ -0,0 +1,27 @@
# This file contains the base config for the reverse proxy, as part of ../Dockerfile-workers.
# configure_workers_and_start.py uses and amends to this file depending on the workers
# that have been selected.
{{ upstream_directives }}
server {
# Listen on an unoccupied port number
listen 8008;
listen [::]:8008;
server_name localhost;
# Nginx by default only allows file uploads up to 1M in size
# Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
client_max_body_size 100M;
{{ worker_locations }}
# Send all other traffic to the main process
location ~* ^(\\/_matrix|\\/_synapse) {
proxy_pass http://localhost:8080;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
}
}

View File

@@ -0,0 +1,9 @@
# This file contains the base for the shared homeserver config file between Synapse workers,
# as part of ./Dockerfile-workers.
# configure_workers_and_start.py uses and appends to this file depending on the workers
# that have been selected.
redis:
enabled: true
{{ shared_worker_config }}

View File

@@ -0,0 +1,41 @@
# This file contains the base config for supervisord, as part of ../Dockerfile-workers.
# configure_workers_and_start.py uses and appends to this file depending on the workers
# that have been selected.
[supervisord]
nodaemon=true
user=root
[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
priority=500
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
username=www-data
autorestart=true
[program:redis]
command=/usr/bin/redis-server /etc/redis/redis.conf --daemonize no
priority=1
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
username=redis
autorestart=true
[program:synapse_main]
command=/usr/local/bin/python -m synapse.app.homeserver --config-path="{{ main_config_path }}" --config-path=/conf/workers/shared.yaml
priority=10
# Log startup failures to supervisord's stdout/err
# Regular synapse logs will still go in the configured data directory
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=unexpected
exitcodes=0
# Additional process blocks
{{ worker_config }}

View File

@@ -0,0 +1,26 @@
# This is a configuration template for a single worker instance, and is
# used by Dockerfile-workers.
# Values will change depending on which workers are selected when
# running that image.
worker_app: "{{ app }}"
worker_name: "{{ name }}"
# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_listeners:
- type: http
port: {{ port }}
{% if listener_resources %}
resources:
- names:
{%- for resource in listener_resources %}
- {{ resource }}
{%- endfor %}
{% endif %}
worker_log_config: {{ worker_log_config_filepath }}
{{ worker_extra_conf }}

View File

@@ -40,7 +40,9 @@ listeners:
compress: false
{% endif %}
- port: 8008
# Allow configuring in case we want to reverse proxy 8008
# using another process in the same container
- port: {{ SYNAPSE_HTTP_PORT or 8008 }}
tls: false
bind_addresses: ['::']
type: http
@@ -173,18 +175,10 @@ report_stats: False
## API Configuration ##
room_invite_state_types:
- "m.room.join_rules"
- "m.room.canonical_alias"
- "m.room.avatar"
- "m.room.name"
{% if SYNAPSE_APPSERVICES %}
app_service_config_files:
{% for appservice in SYNAPSE_APPSERVICES %} - "{{ appservice }}"
{% endfor %}
{% else %}
app_service_config_files: []
{% endif %}
macaroon_secret_key: "{{ SYNAPSE_MACAROON_SECRET_KEY }}"

View File

@@ -2,9 +2,34 @@ version: 1
formatters:
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
{% if worker_name %}
format: '%(asctime)s - worker:{{ worker_name }} - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
{% else %}
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
{% endif %}
handlers:
file:
class: logging.handlers.TimedRotatingFileHandler
formatter: precise
filename: {{ LOG_FILE_PATH or "homeserver.log" }}
when: "midnight"
backupCount: 6 # Does not include the current log file.
encoding: utf8
# Default to buffering writes to log file for efficiency. This means that
# there will be a delay for INFO/DEBUG logs to get written, but WARNING/ERROR
# logs will still be flushed immediately.
buffer:
class: logging.handlers.MemoryHandler
target: file
# The capacity is the number of log lines that are buffered before
# being written to disk. Increasing this will lead to better
# performance, at the expense of it taking longer for log lines to
# be written to disk.
capacity: 10
flushLevel: 30 # Flush for WARNING logs as well
console:
class: logging.StreamHandler
formatter: precise
@@ -17,6 +42,11 @@ loggers:
root:
level: {{ SYNAPSE_LOG_LEVEL or "INFO" }}
{% if LOG_FILE_PATH %}
handlers: [console, buffer]
{% else %}
handlers: [console]
{% endif %}
disable_existing_loggers: false

View File

@@ -0,0 +1,558 @@
#!/usr/bin/env python
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script reads environment variables and generates a shared Synapse worker,
# nginx and supervisord configs depending on the workers requested.
#
# The environment variables it reads are:
# * SYNAPSE_SERVER_NAME: The desired server_name of the homeserver.
# * SYNAPSE_REPORT_STATS: Whether to report stats.
# * SYNAPSE_WORKER_TYPES: A comma separated list of worker names as specified in WORKERS_CONFIG
# below. Leave empty for no workers, or set to '*' for all possible workers.
#
# NOTE: According to Complement's ENTRYPOINT expectations for a homeserver image (as defined
# in the project's README), this script may be run multiple times, and functionality should
# continue to work if so.
import os
import subprocess
import sys
import jinja2
import yaml
MAIN_PROCESS_HTTP_LISTENER_PORT = 8080
WORKERS_CONFIG = {
"pusher": {
"app": "synapse.app.pusher",
"listener_resources": [],
"endpoint_patterns": [],
"shared_extra_conf": {"start_pushers": False},
"worker_extra_conf": "",
},
"user_dir": {
"app": "synapse.app.user_dir",
"listener_resources": ["client"],
"endpoint_patterns": [
"^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$"
],
"shared_extra_conf": {"update_user_directory": False},
"worker_extra_conf": "",
},
"media_repository": {
"app": "synapse.app.media_repository",
"listener_resources": ["media"],
"endpoint_patterns": [
"^/_matrix/media/",
"^/_synapse/admin/v1/purge_media_cache$",
"^/_synapse/admin/v1/room/.*/media.*$",
"^/_synapse/admin/v1/user/.*/media.*$",
"^/_synapse/admin/v1/media/.*$",
"^/_synapse/admin/v1/quarantine_media/.*$",
],
"shared_extra_conf": {"enable_media_repo": False},
"worker_extra_conf": "enable_media_repo: true",
},
"appservice": {
"app": "synapse.app.appservice",
"listener_resources": [],
"endpoint_patterns": [],
"shared_extra_conf": {"notify_appservices": False},
"worker_extra_conf": "",
},
"federation_sender": {
"app": "synapse.app.federation_sender",
"listener_resources": [],
"endpoint_patterns": [],
"shared_extra_conf": {"send_federation": False},
"worker_extra_conf": "",
},
"synchrotron": {
"app": "synapse.app.generic_worker",
"listener_resources": ["client"],
"endpoint_patterns": [
"^/_matrix/client/(v2_alpha|r0)/sync$",
"^/_matrix/client/(api/v1|v2_alpha|r0)/events$",
"^/_matrix/client/(api/v1|r0)/initialSync$",
"^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$",
],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"federation_reader": {
"app": "synapse.app.generic_worker",
"listener_resources": ["federation"],
"endpoint_patterns": [
"^/_matrix/federation/(v1|v2)/event/",
"^/_matrix/federation/(v1|v2)/state/",
"^/_matrix/federation/(v1|v2)/state_ids/",
"^/_matrix/federation/(v1|v2)/backfill/",
"^/_matrix/federation/(v1|v2)/get_missing_events/",
"^/_matrix/federation/(v1|v2)/publicRooms",
"^/_matrix/federation/(v1|v2)/query/",
"^/_matrix/federation/(v1|v2)/make_join/",
"^/_matrix/federation/(v1|v2)/make_leave/",
"^/_matrix/federation/(v1|v2)/send_join/",
"^/_matrix/federation/(v1|v2)/send_leave/",
"^/_matrix/federation/(v1|v2)/invite/",
"^/_matrix/federation/(v1|v2)/query_auth/",
"^/_matrix/federation/(v1|v2)/event_auth/",
"^/_matrix/federation/(v1|v2)/exchange_third_party_invite/",
"^/_matrix/federation/(v1|v2)/user/devices/",
"^/_matrix/federation/(v1|v2)/get_groups_publicised$",
"^/_matrix/key/v2/query",
],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"federation_inbound": {
"app": "synapse.app.generic_worker",
"listener_resources": ["federation"],
"endpoint_patterns": ["/_matrix/federation/(v1|v2)/send/"],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"event_persister": {
"app": "synapse.app.generic_worker",
"listener_resources": ["replication"],
"endpoint_patterns": [],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"background_worker": {
"app": "synapse.app.generic_worker",
"listener_resources": [],
"endpoint_patterns": [],
# This worker cannot be sharded. Therefore there should only ever be one background
# worker, and it should be named background_worker1
"shared_extra_conf": {"run_background_tasks_on": "background_worker1"},
"worker_extra_conf": "",
},
"event_creator": {
"app": "synapse.app.generic_worker",
"listener_resources": ["client"],
"endpoint_patterns": [
"^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact",
"^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send",
"^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$",
"^/_matrix/client/(api/v1|r0|unstable)/join/",
"^/_matrix/client/(api/v1|r0|unstable)/profile/",
],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"frontend_proxy": {
"app": "synapse.app.frontend_proxy",
"listener_resources": ["client", "replication"],
"endpoint_patterns": ["^/_matrix/client/(api/v1|r0|unstable)/keys/upload"],
"shared_extra_conf": {},
"worker_extra_conf": (
"worker_main_http_uri: http://127.0.0.1:%d"
% (MAIN_PROCESS_HTTP_LISTENER_PORT,),
),
},
}
# Templates for sections that may be inserted multiple times in config files
SUPERVISORD_PROCESS_CONFIG_BLOCK = """
[program:synapse_{name}]
command=/usr/local/bin/python -m {app} \
--config-path="{config_path}" \
--config-path=/conf/workers/shared.yaml \
--config-path=/conf/workers/{name}.yaml
autorestart=unexpected
priority=500
exitcodes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
"""
NGINX_LOCATION_CONFIG_BLOCK = """
location ~* {endpoint} {{
proxy_pass {upstream};
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
}}
"""
NGINX_UPSTREAM_CONFIG_BLOCK = """
upstream {upstream_worker_type} {{
{body}
}}
"""
# Utility functions
def log(txt: str):
"""Log something to the stdout.
Args:
txt: The text to log.
"""
print(txt)
def error(txt: str):
"""Log something and exit with an error code.
Args:
txt: The text to log in error.
"""
log(txt)
sys.exit(2)
def convert(src: str, dst: str, **template_vars):
"""Generate a file from a template
Args:
src: Path to the input file.
dst: Path to write to.
template_vars: The arguments to replace placeholder variables in the template with.
"""
# Read the template file
with open(src) as infile:
template = infile.read()
# Generate a string from the template. We disable autoescape to prevent template
# variables from being escaped.
rendered = jinja2.Template(template, autoescape=False).render(**template_vars)
# Write the generated contents to a file
#
# We use append mode in case the files have already been written to by something else
# (for instance, as part of the instructions in a dockerfile).
with open(dst, "a") as outfile:
# In case the existing file doesn't end with a newline
outfile.write("\n")
outfile.write(rendered)
def add_sharding_to_shared_config(
shared_config: dict,
worker_type: str,
worker_name: str,
worker_port: int,
) -> None:
"""Given a dictionary representing a config file shared across all workers,
append sharded worker information to it for the current worker_type instance.
Args:
shared_config: The config dict that all worker instances share (after being converted to YAML)
worker_type: The type of worker (one of those defined in WORKERS_CONFIG).
worker_name: The name of the worker instance.
worker_port: The HTTP replication port that the worker instance is listening on.
"""
# The instance_map config field marks the workers that write to various replication streams
instance_map = shared_config.setdefault("instance_map", {})
# Worker-type specific sharding config
if worker_type == "pusher":
shared_config.setdefault("pusher_instances", []).append(worker_name)
elif worker_type == "federation_sender":
shared_config.setdefault("federation_sender_instances", []).append(worker_name)
elif worker_type == "event_persister":
# Event persisters write to the events stream, so we need to update
# the list of event stream writers
shared_config.setdefault("stream_writers", {}).setdefault("events", []).append(
worker_name
)
# Map of stream writer instance names to host/ports combos
instance_map[worker_name] = {
"host": "localhost",
"port": worker_port,
}
elif worker_type == "media_repository":
# The first configured media worker will run the media background jobs
shared_config.setdefault("media_instance_running_background_jobs", worker_name)
def generate_base_homeserver_config():
"""Starts Synapse and generates a basic homeserver config, which will later be
modified for worker support.
Raises: CalledProcessError if calling start.py returned a non-zero exit code.
"""
# start.py already does this for us, so just call that.
# note that this script is copied in by the official, monolith dockerfile
os.environ["SYNAPSE_HTTP_PORT"] = str(MAIN_PROCESS_HTTP_LISTENER_PORT)
subprocess.check_output(["/usr/local/bin/python", "/start.py", "migrate_config"])
def generate_worker_files(environ, config_path: str, data_dir: str):
"""Read the desired list of workers from environment variables and generate
shared homeserver, nginx and supervisord configs.
Args:
environ: _Environ[str]
config_path: Where to output the generated Synapse main worker config file.
data_dir: The location of the synapse data directory. Where log and
user-facing config files live.
"""
# Note that yaml cares about indentation, so care should be taken to insert lines
# into files at the correct indentation below.
# shared_config is the contents of a Synapse config file that will be shared amongst
# the main Synapse process as well as all workers.
# It is intended mainly for disabling functionality when certain workers are spun up,
# and adding a replication listener.
# First read the original config file and extract the listeners block. Then we'll add
# another listener for replication. Later we'll write out the result.
listeners = [
{
"port": 9093,
"bind_address": "127.0.0.1",
"type": "http",
"resources": [{"names": ["replication"]}],
}
]
with open(config_path) as file_stream:
original_config = yaml.safe_load(file_stream)
original_listeners = original_config.get("listeners")
if original_listeners:
listeners += original_listeners
# The shared homeserver config. The contents of which will be inserted into the
# base shared worker jinja2 template.
#
# This config file will be passed to all workers, including Synapse's main process.
shared_config = {"listeners": listeners}
# The supervisord config. The contents of which will be inserted into the
# base supervisord jinja2 template.
#
# Supervisord will be in charge of running everything, from redis to nginx to Synapse
# and all of its worker processes. Load the config template, which defines a few
# services that are necessary to run.
supervisord_config = ""
# Upstreams for load-balancing purposes. This dict takes the form of a worker type to the
# ports of each worker. For example:
# {
# worker_type: {1234, 1235, ...}
# }
# and will be used to construct 'upstream' nginx directives.
nginx_upstreams = {}
# A map of: {"endpoint": "upstream"}, where "upstream" is a str representing what will be
# placed after the proxy_pass directive. The main benefit to representing this data as a
# dict over a str is that we can easily deduplicate endpoints across multiple instances
# of the same worker.
#
# An nginx site config that will be amended to depending on the workers that are
# spun up. To be placed in /etc/nginx/conf.d.
nginx_locations = {}
# Read the desired worker configuration from the environment
worker_types = environ.get("SYNAPSE_WORKER_TYPES")
if worker_types is None:
# No workers, just the main process
worker_types = []
else:
# Split type names by comma
    worker_types = worker_types.split(",")

    # Create the worker configuration directory if it doesn't already exist
    os.makedirs("/conf/workers", exist_ok=True)

    # Start worker ports from this arbitrary port
    worker_port = 18009

    # A counter of worker_type -> int. Used for determining the name for a given
    # worker type when generating its config file, as each worker's name is just
    # worker_type + instance #
    worker_type_counter = {}

    # For each worker type specified by the user, create config values
    for worker_type in worker_types:
        worker_type = worker_type.strip()

        worker_config = WORKERS_CONFIG.get(worker_type)
        if worker_config:
            worker_config = worker_config.copy()
        else:
            log(worker_type + " is an unknown worker type! It will be ignored")
            continue

        new_worker_count = worker_type_counter.setdefault(worker_type, 0) + 1
        worker_type_counter[worker_type] = new_worker_count

        # Name workers by their type concatenated with an incrementing number
        # e.g. federation_reader1
        worker_name = worker_type + str(new_worker_count)
        worker_config.update(
            {"name": worker_name, "port": worker_port, "config_path": config_path}
        )

        # Update the shared config with any worker-type specific options
        shared_config.update(worker_config["shared_extra_conf"])

        # Check if more than one instance of this worker type has been specified
        worker_type_total_count = worker_types.count(worker_type)
        if worker_type_total_count > 1:
            # Update the shared config with sharding-related options if necessary
            add_sharding_to_shared_config(
                shared_config, worker_type, worker_name, worker_port
            )

        # Enable the worker in supervisord
        supervisord_config += SUPERVISORD_PROCESS_CONFIG_BLOCK.format_map(worker_config)

        # Add nginx location blocks for this worker's endpoints (if any are defined)
        for pattern in worker_config["endpoint_patterns"]:
            # Determine whether we need to load-balance this worker
            if worker_type_total_count > 1:
                # Create or add to a load-balanced upstream for this worker
                nginx_upstreams.setdefault(worker_type, set()).add(worker_port)

                # Upstreams are named after the worker_type
                upstream = "http://" + worker_type
            else:
                upstream = "http://localhost:%d" % (worker_port,)

            # Note that this endpoint should proxy to this upstream
            nginx_locations[pattern] = upstream

        # Write out the worker's logging config file

        # Check whether we should write worker logs to disk, in addition to the console
        extra_log_template_args = {}
        if environ.get("SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK"):
            extra_log_template_args["LOG_FILE_PATH"] = "{dir}/logs/{name}.log".format(
                dir=data_dir, name=worker_name
            )

        # Render and write the file
        log_config_filepath = "/conf/workers/{name}.log.config".format(name=worker_name)
        convert(
            "/conf/log.config",
            log_config_filepath,
            worker_name=worker_name,
            **extra_log_template_args,
        )

        # Then a worker config file
        convert(
            "/conf/worker.yaml.j2",
            "/conf/workers/{name}.yaml".format(name=worker_name),
            **worker_config,
            worker_log_config_filepath=log_config_filepath,
        )

        worker_port += 1

    # Build the nginx location config blocks
    nginx_location_config = ""
    for endpoint, upstream in nginx_locations.items():
        nginx_location_config += NGINX_LOCATION_CONFIG_BLOCK.format(
            endpoint=endpoint,
            upstream=upstream,
        )

    # Determine the load-balancing upstreams to configure
    nginx_upstream_config = ""
    for upstream_worker_type, upstream_worker_ports in nginx_upstreams.items():
        body = ""
        for port in upstream_worker_ports:
            body += " server localhost:%d;\n" % (port,)

        # Add to the list of configured upstreams
        nginx_upstream_config += NGINX_UPSTREAM_CONFIG_BLOCK.format(
            upstream_worker_type=upstream_worker_type,
            body=body,
        )

    # Finally, we'll write out the config files.

    # Shared homeserver config
    convert(
        "/conf/shared.yaml.j2",
        "/conf/workers/shared.yaml",
        shared_worker_config=yaml.dump(shared_config),
    )

    # Nginx config
    convert(
        "/conf/nginx.conf.j2",
        "/etc/nginx/conf.d/matrix-synapse.conf",
        worker_locations=nginx_location_config,
        upstream_directives=nginx_upstream_config,
    )

    # Supervisord config
    convert(
        "/conf/supervisord.conf.j2",
        "/etc/supervisor/conf.d/supervisord.conf",
        main_config_path=config_path,
        worker_config=supervisord_config,
    )

    # Ensure the logging directory exists
    log_dir = data_dir + "/logs"
    if not os.path.exists(log_dir):
        os.mkdir(log_dir)
def start_supervisord():
    """Starts up supervisord, which then starts and monitors all other necessary processes

    Raises: CalledProcessError if calling start.py returns a non-zero exit code.
    """
    subprocess.run(["/usr/bin/supervisord"], stdin=subprocess.PIPE)
def main(args, environ):
    config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
    config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml")
    data_dir = environ.get("SYNAPSE_DATA_DIR", "/data")

    # override SYNAPSE_NO_TLS, we don't support TLS in worker mode,
    # this needs to be handled by a frontend proxy
    environ["SYNAPSE_NO_TLS"] = "yes"

    # Generate the base homeserver config if one does not yet exist
    if not os.path.exists(config_path):
        log("Generating base homeserver config")
        generate_base_homeserver_config()

    # This script may be run multiple times (mostly by Complement, see note at top of file).
    # Don't re-configure workers in this instance.
    mark_filepath = "/conf/workers_have_been_configured"
    if not os.path.exists(mark_filepath):
        # Always regenerate all other config files
        generate_worker_files(environ, config_path, data_dir)

        # Mark workers as being configured
        with open(mark_filepath, "w") as f:
            f.write("")

    # Start supervisord, which will start Synapse, all of the configured worker
    # processes, redis, nginx etc. according to the config we created above.
    start_supervisord()


if __name__ == "__main__":
    main(sys.argv, os.environ)

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# This script runs the PostgreSQL tests inside a Docker container. It expects
# the relevant source files to be mounted into /src (done automatically by the

View File

@@ -3,6 +3,7 @@
import codecs
import glob
import os
import platform
import subprocess
import sys
@@ -213,6 +214,14 @@ def main(args, environ):
if "-m" not in args:
args = ["-m", synapse_worker] + args
jemallocpath = "/usr/lib/%s-linux-gnu/libjemalloc.so.2" % (platform.machine(),)
if os.path.isfile(jemallocpath):
environ["LD_PRELOAD"] = jemallocpath
environ["PYTHONMALLOC"] = "malloc"
else:
log("Could not find %s, will not use" % (jemallocpath,))
# if there are no config files passed to synapse, try adding the default file
if not any(p.startswith("--config-path") or p.startswith("-c") for p in args):
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
@@ -248,9 +257,9 @@ running with 'migrate_config'. See the README for more details.
args = ["python"] + args
if ownership is not None:
args = ["gosu", ownership] + args
os.execv("/usr/sbin/gosu", args)
os.execve("/usr/sbin/gosu", args, environ)
else:
os.execv("/usr/local/bin/python", args)
os.execve("/usr/local/bin/python", args, environ)
if __name__ == "__main__":

View File

@@ -1,5 +1,7 @@
# Contents
- [List all media in a room](#list-all-media-in-a-room)
- [Querying media](#querying-media)
* [List all media in a room](#list-all-media-in-a-room)
* [List all media uploaded by a user](#list-all-media-uploaded-by-a-user)
- [Quarantine media](#quarantine-media)
* [Quarantining media by ID](#quarantining-media-by-id)
* [Quarantining media in a room](#quarantining-media-in-a-room)
@@ -10,7 +12,11 @@
* [Delete local media by date or size](#delete-local-media-by-date-or-size)
- [Purge Remote Media API](#purge-remote-media-api)
# List all media in a room
# Querying media
These APIs allow extracting media information from the homeserver.
## List all media in a room
This API gets a list of known media in a room.
However, it only shows media from unencrypted events or rooms.
@@ -36,6 +42,12 @@ The API returns a JSON body like the following:
}
```
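For illustration, a minimal Python sketch of calling this API with the `requests` library. All values are placeholders, and the endpoint path `GET /_synapse/admin/v1/room/<room_id>/media` is assumed from the full documentation of this API (it is elided from the diff above):

```python
import requests

base_url = "https://matrix.example.com"  # placeholder homeserver URL
room_id = "!someroom:example.com"        # placeholder room ID
admin_token = "<server admin access token>"

# List the known media in the room via the admin API
resp = requests.get(
    f"{base_url}/_synapse/admin/v1/room/{room_id}/media",
    headers={"Authorization": f"Bearer {admin_token}"},
)
resp.raise_for_status()
media = resp.json()
print(len(media["local"]), "local and", len(media["remote"]), "remote media items")
```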
## List all media uploaded by a user
Listing all media that has been uploaded by a local user can be achieved through
the use of the [List media of a user](user_admin_api.rst#list-media-of-a-user)
Admin API.
# Quarantine media
Quarantining media means that it is marked as inaccessible by users. It applies

View File

@@ -427,7 +427,7 @@ the new room. Users on other servers will be unaffected.
The API is:
```
POST /_synapse/admin/v1/rooms/<room_id>/delete
DELETE /_synapse/admin/v1/rooms/<room_id>
```
with a body of:
@@ -528,6 +528,15 @@ You will have to manually handle, if you so choose, the following:
* Users that would have been booted from the room (and will have been force-joined to the Content Violation room).
* Removal of the Content Violation room if desired.
## Deprecated endpoint
The previous, deprecated API will be removed in a future release; it was:
```
POST /_synapse/admin/v1/rooms/<room_id>/delete
```
It behaves the same way as the current endpoint, except for the path and the method.
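For illustration, the new endpoint can be exercised with a sketch like the following (Python with `requests`; all values are placeholders, and an empty JSON body is assumed to fall back to the API's documented defaults):

```python
import requests

base_url = "https://matrix.example.com"  # placeholder homeserver URL
room_id = "!badroom:example.com"         # placeholder room ID
admin_token = "<server admin access token>"

# Delete the room using the new method and path; the body options are those
# documented for this API (an empty body uses the defaults).
resp = requests.delete(
    f"{base_url}/_synapse/admin/v1/rooms/{room_id}",
    headers={"Authorization": f"Bearer {admin_token}"},
    json={},
)
resp.raise_for_status()
```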
# Make Room Admin API

View File

@@ -111,35 +111,16 @@ List Accounts
=============
This API returns all local user accounts.
By default, the response is ordered by ascending user ID.
The api is::
The API is::
GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
The parameter ``from`` is optional but used for pagination, denoting the
offset in the returned results. This should be treated as an opaque value and
not explicitly set to anything other than the return value of ``next_token``
from a previous call.
The parameter ``limit`` is optional but is used for pagination, denoting the
maximum number of items to return in this call. Defaults to ``100``.
The parameter ``user_id`` is optional and filters to only return users with user IDs
that contain this value. This parameter is ignored when using the ``name`` parameter.
The parameter ``name`` is optional and filters to only return users with user ID localparts
**or** displaynames that contain this value.
The parameter ``guests`` is optional and if ``false`` will **exclude** guest users.
Defaults to ``true`` to include guest users.
The parameter ``deactivated`` is optional and if ``true`` will **include** deactivated users.
Defaults to ``false`` to exclude deactivated users.
A JSON body is returned with the following shape:
A response body like the following is returned:
.. code:: json
@@ -175,6 +156,66 @@ with ``from`` set to the value of ``next_token``. This will return a new page.
If the endpoint does not return a ``next_token`` then there are no more users
to paginate through.
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - Is optional and filters to only return users with user IDs
that contain this value. This parameter is ignored when using the ``name`` parameter.
- ``name`` - Is optional and filters to only return users with user ID localparts
**or** displaynames that contain this value.
- ``guests`` - string representing a bool - Is optional and if ``false`` will **exclude** guest users.
Defaults to ``true`` to include guest users.
- ``deactivated`` - string representing a bool - Is optional and if ``true`` will **include** deactivated users.
Defaults to ``false`` to exclude deactivated users.
- ``limit`` - string representing a positive integer - Is optional but is used for pagination,
denoting the maximum number of items to return in this call. Defaults to ``100``.
- ``from`` - string representing a positive integer - Is optional but used for pagination,
denoting the offset in the returned results. This should be treated as an opaque value and
not explicitly set to anything other than the return value of ``next_token`` from a previous call.
Defaults to ``0``.
- ``order_by`` - The method by which to sort the returned list of users.
If the ordered field has duplicates, the second order is always by ascending ``name``,
which guarantees a stable ordering. Valid values are:
- ``name`` - Users are ordered alphabetically by ``name``. This is the default.
- ``is_guest`` - Users are ordered by ``is_guest`` status.
- ``admin`` - Users are ordered by ``admin`` status.
- ``user_type`` - Users are ordered alphabetically by ``user_type``.
- ``deactivated`` - Users are ordered by ``deactivated`` status.
- ``shadow_banned`` - Users are ordered by ``shadow_banned`` status.
- ``displayname`` - Users are ordered alphabetically by ``displayname``.
- ``avatar_url`` - Users are ordered alphabetically by avatar URL.
- ``dir`` - Direction of the sort order. Either ``f`` for forwards or ``b`` for backwards.
Setting this value to ``b`` will reverse the above sort order. Defaults to ``f``.
Caution: the database only has indexes on the columns ``name`` and ``created_ts``.
Using any other sort order (``is_guest``, ``admin``, ``user_type``, ``deactivated``,
``shadow_banned``, ``avatar_url`` or ``displayname``) can therefore cause a large
load on the database, especially in large environments.
**Response**
The following fields are returned in the JSON response body:
- ``users`` - An array of objects, each containing information about a user.
User objects contain the following fields:
- ``name`` - string - Fully-qualified user ID (for example ``@user:server.com``).
- ``is_guest`` - bool - Whether the user is a guest account.
- ``admin`` - bool - Whether the user is a server administrator.
- ``user_type`` - string - Type of the user. Normal users are type ``None``.
This allows user-type-specific behaviour. There are also types ``support`` and ``bot``.
- ``deactivated`` - bool - Whether the user has been marked as deactivated.
- ``shadow_banned`` - bool - Whether the user has been marked as shadow-banned.
- ``displayname`` - string - The user's display name, if they have set one.
- ``avatar_url`` - string - The user's avatar URL, if they have set one.
- ``next_token`` - string representing a positive integer - Indication for pagination. See above.
- ``total`` - integer - Total number of users.
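To make the pagination behaviour concrete, here is a Python sketch (using the
``requests`` library; the homeserver URL and token are placeholders) that pages
through all users by following ``next_token``:

.. code:: python

   import requests

   base_url = "https://matrix.example.com"  # placeholder
   headers = {"Authorization": "Bearer <server admin access token>"}

   users = []
   from_token = "0"
   while from_token is not None:
       resp = requests.get(
           base_url + "/_synapse/admin/v2/users",
           headers=headers,
           params={"from": from_token, "limit": "100", "guests": "false"},
       )
       resp.raise_for_status()
       page = resp.json()
       users.extend(page["users"])
       # next_token is absent from the final page
       from_token = page.get("next_token")

   print("Fetched %d users" % len(users))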
Query current sessions for a user
=================================
@@ -823,3 +864,118 @@ The following parameters should be set in the URL:
- ``user_id`` - The fully qualified MXID: for example, ``@user:server.com``. The user must
be local.
Override ratelimiting for users
===============================
This API allows you to override or disable ratelimiting for a specific user.
There are specific APIs to set, get and delete a ratelimit.
Get status of ratelimit
-----------------------
The API is::
GET /_synapse/admin/v1/users/<user_id>/override_ratelimit
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
A response body like the following is returned:
.. code:: json
{
    "messages_per_second": 0,
    "burst_count": 0
}
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - The fully qualified MXID: for example, ``@user:server.com``. The user must
be local.
**Response**
The following fields are returned in the JSON response body:
- ``messages_per_second`` - integer - The number of actions that can
be performed in a second. ``0`` means that ratelimiting is disabled for this user.
- ``burst_count`` - integer - How many actions can be performed before
being limited.
If **no** custom ratelimit is set, an empty JSON dict is returned.
.. code:: json
{}
Set ratelimit
-------------
The API is::
POST /_synapse/admin/v1/users/<user_id>/override_ratelimit
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
A response body like the following is returned:
.. code:: json
{
    "messages_per_second": 0,
    "burst_count": 0
}
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - The fully qualified MXID: for example, ``@user:server.com``. The user must
be local.
Body parameters:
- ``messages_per_second`` - positive integer, optional. The number of actions that can
be performed in a second. Defaults to ``0``.
- ``burst_count`` - positive integer, optional. How many actions can be performed
before being limited. Defaults to ``0``.
To disable ratelimiting for a user, set both values to ``0``.
**Response**
The following fields are returned in the JSON response body:
- ``messages_per_second`` - integer - The number of actions that can
be performed in a second.
- ``burst_count`` - integer - How many actions can be performed before
being limited.
Delete ratelimit
----------------
The API is::
DELETE /_synapse/admin/v1/users/<user_id>/override_ratelimit
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
An empty JSON dict is returned.
.. code:: json
{}
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - The fully qualified MXID: for example, ``@user:server.com``. The user must
be local.
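As a worked example, the three endpoints can be driven with a Python sketch like
the following (``requests`` library; the URL, user ID and token are placeholders):

.. code:: python

   import requests

   base_url = "https://matrix.example.com"  # placeholder
   headers = {"Authorization": "Bearer <server admin access token>"}
   url = base_url + "/_synapse/admin/v1/users/@user:example.com/override_ratelimit"

   # Disable ratelimiting for the user entirely (0 means "no limit")
   resp = requests.post(
       url, headers=headers, json={"messages_per_second": 0, "burst_count": 0}
   )
   resp.raise_for_status()

   # Inspect the current override; an empty dict means no override is set
   print(requests.get(url, headers=headers).json())

   # Remove the override, restoring the server-wide defaults
   requests.delete(url, headers=headers).raise_for_status()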

View File

@@ -128,6 +128,9 @@ Some guidelines follow:
will be if no sub-options are enabled).
- Lines should be wrapped at 80 characters.
- Use two-space indents.
- `true` and `false` are spelt thus (as opposed to `True`, etc.)
- Use single quotes (`'`) rather than double-quotes (`"`) or backticks
(`` ` ``) to refer to configuration options.
Example:

View File

@@ -0,0 +1,33 @@
Deprecation Policy for Platform Dependencies
============================================
Synapse has a number of platform dependencies, including Python and PostgreSQL.
This document outlines the policy towards which versions we support, and when we
drop support for versions in the future.
Policy
------
Synapse follows the upstream support life cycles for Python and PostgreSQL:
when a version reaches End of Life, Synapse will withdraw support for that
version in future releases.
Details on the upstream support life cycles for Python and PostgreSQL are
documented at https://endoflife.date/python and
https://endoflife.date/postgresql.
Context
-------
It is important for system admins to have a clear understanding of the platform
requirements of Synapse and its deprecation policies so that they can
effectively plan upgrading their infrastructure ahead of time. This is
especially important in contexts where upgrading the infrastructure requires
auditing and approval from a security team, or where otherwise upgrading is a
long process.
By following the upstream support life cycles Synapse can ensure that its
dependencies continue to get security patches, while not requiring system admins
to constantly update their platform dependencies to the latest versions.

View File

@@ -226,7 +226,7 @@ Synapse config:
oidc_providers:
- idp_id: github
idp_name: Github
idp_brand: "org.matrix.github" # optional: styling hint for clients
idp_brand: "github" # optional: styling hint for clients
discover: false
issuer: "https://github.com/"
client_id: "your-client-id" # TO BE FILLED
@@ -252,7 +252,7 @@ oidc_providers:
oidc_providers:
- idp_id: google
idp_name: Google
idp_brand: "org.matrix.google" # optional: styling hint for clients
idp_brand: "google" # optional: styling hint for clients
issuer: "https://accounts.google.com/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
@@ -299,7 +299,7 @@ Synapse config:
oidc_providers:
- idp_id: gitlab
idp_name: Gitlab
idp_brand: "org.matrix.gitlab" # optional: styling hint for clients
idp_brand: "gitlab" # optional: styling hint for clients
issuer: "https://gitlab.com/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
@@ -334,7 +334,7 @@ Synapse config:
```yaml
- idp_id: facebook
idp_name: Facebook
idp_brand: "org.matrix.facebook" # optional: styling hint for clients
idp_brand: "facebook" # optional: styling hint for clients
discover: false
issuer: "https://facebook.com"
client_id: "your-client-id" # TO BE FILLED
@@ -386,7 +386,7 @@ oidc_providers:
config:
subject_claim: "id"
localpart_template: "{{ user.login }}"
display_name_template: "{{ user.full_name }}"
```
### XWiki
@@ -401,8 +401,7 @@ oidc_providers:
idp_name: "XWiki"
issuer: "https://myxwikihost/xwiki/oidc/"
client_id: "your-client-id" # TO BE FILLED
# Needed until https://github.com/matrix-org/synapse/issues/9212 is fixed
client_secret: "dontcare"
client_auth_method: none
scopes: ["openid", "profile"]
user_profile_method: "userinfo_endpoint"
user_mapping_provider:
@@ -410,3 +409,40 @@ oidc_providers:
localpart_template: "{{ user.preferred_username }}"
display_name_template: "{{ user.name }}"
```
## Apple
Configuring "Sign in with Apple" (SiWA) requires an Apple Developer account.
You will need to create a new "Services ID" for SiWA, and create and download a
private key with "SiWA" enabled.
As well as the private key file, you will need:
* Client ID: the "identifier" you gave the "Services ID"
* Team ID: a 10-character ID associated with your developer account.
* Key ID: the 10-character identifier for the key.
https://help.apple.com/developer-account/?lang=en#/dev77c875b7e has more
documentation on setting up SiWA.
The synapse config will look like this:
```yaml
- idp_id: apple
  idp_name: Apple
  issuer: "https://appleid.apple.com"
  client_id: "your-client-id" # Set to the "identifier" for your "ServicesID"
  client_auth_method: "client_secret_post"
  client_secret_jwt_key:
    key_file: "/path/to/AuthKey_KEYIDCODE.p8" # point to your key file
    jwt_header:
      alg: ES256
      kid: "KEYIDCODE" # Set to the 10-char Key ID
    jwt_payload:
      iss: TEAMIDCODE # Set to the 10-char Team ID
  scopes: ["name", "email", "openid"]
  authorization_endpoint: https://appleid.apple.com/auth/authorize?response_mode=form_post
  user_mapping_provider:
    config:
      email_template: "{{ user.email }}"
```

View File

@@ -0,0 +1,235 @@
# Presence Router Module
Synapse supports configuring a module that can specify additional users
(local or remote) that should receive certain presence updates from local
users.
Note that routing presence via Application Service transactions is not
currently supported.
The presence routing module is implemented as a Python class, which will
be imported by the running Synapse.
## Python Presence Router Class
The Python class is instantiated with two objects:
* A configuration object of some type (see below).
* An instance of `synapse.module_api.ModuleApi`.
It then implements methods related to presence routing.
Note that one method of `ModuleApi` that may be useful is:
```python
async def ModuleApi.send_local_online_presence_to(users: Iterable[str]) -> None
```
which can be given a list of local or remote MXIDs to broadcast known, online user
presence to (for those users that the receiving user is considered interested in).
It does not include state for users who are currently offline, and it can only be
called on workers that support sending federation.
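For example, a module holding a reference to the `ModuleApi` (as `self._module_api`, mirroring the example class later in this document) might broadcast presence with a sketch like:

```python
# Send known online presence of local users to these two users (one local,
# one remote). Offline users are skipped, and this must run on a worker
# that supports sending federation.
await self._module_api.send_local_online_presence_to(
    ["@alice:example.org", "@bob:matrix.example.com"]
)
```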
### Module structure
Below is a list of possible methods that can be implemented, and whether they are
required.
#### `parse_config`
```python
def parse_config(config_dict: dict) -> Any
```
**Required.** A static method that is passed a dictionary of config options, and
should return a validated config object. This method is described further in
[Configuration](#configuration).
#### `get_users_for_states`
```python
async def get_users_for_states(
    self,
    state_updates: Iterable[UserPresenceState],
) -> Dict[str, Set[UserPresenceState]]:
```
**Required.** An asynchronous method that is passed an iterable of user presence
state. This method can determine whether a given presence update should be sent to certain
users. It does this by returning a dictionary with keys representing local or remote
Matrix User IDs, and values being a python set
of `synapse.handlers.presence.UserPresenceState` instances.
Synapse will then attempt to send the specified presence updates to each user when
possible.
#### `get_interested_users`
```python
async def get_interested_users(self, user_id: str) -> Union[Set[str], str]
```
**Required.** An asynchronous method that is passed a single Matrix User ID. This
method is expected to return the users whose presence the given user may be
interested in. Returned users may be local or remote. The presence routed as a result of
what this method returns is sent in addition to the updates already sent between users
that share a room together. Presence updates are deduplicated.
This method should return a python set of Matrix User IDs, or the object
`synapse.events.presence_router.PresenceRouter.ALL_USERS` to indicate that the passed
user should receive presence information for *all* known users.
For clarity, if the user `@alice:example.org` is passed to this method, and the Set
`{"@bob:example.com", "@charlie:somewhere.org"}` is returned, this signifies that Alice
should receive presence updates sent by Bob and Charlie, regardless of whether these
users share a room.
### Example
Below is an example implementation of a presence router class.
```python
from typing import Dict, Iterable, List, Set, Union

from synapse.events.presence_router import PresenceRouter
from synapse.handlers.presence import UserPresenceState
from synapse.module_api import ModuleApi

class PresenceRouterConfig:
    def __init__(self):
        # Config options with their defaults
        # A list of users to always send all user presence updates to
        self.always_send_to_users = []  # type: List[str]

        # A list of users to ignore presence updates for. Does not affect
        # shared-room presence relationships
        self.blacklisted_users = []  # type: List[str]

class ExamplePresenceRouter:
    """An example implementation of synapse.presence_router.PresenceRouter.
    Supports routing all presence to a configured set of users, or a subset
    of presence from certain users to members of certain rooms.

    Args:
        config: A configuration object.
        module_api: An instance of Synapse's ModuleApi.
    """
    def __init__(self, config: PresenceRouterConfig, module_api: ModuleApi):
        self._config = config
        self._module_api = module_api

    @staticmethod
    def parse_config(config_dict: dict) -> PresenceRouterConfig:
        """Parse a configuration dictionary from the homeserver config, do
        some validation and return a typed PresenceRouterConfig.

        Args:
            config_dict: The configuration dictionary.

        Returns:
            A validated config object.
        """
        # Initialise a typed config object
        config = PresenceRouterConfig()
        always_send_to_users = config_dict.get("always_send_to_users")
        blacklisted_users = config_dict.get("blacklisted_users")

        # Do some validation of config options... otherwise raise a
        # synapse.config.ConfigError.
        config.always_send_to_users = always_send_to_users
        config.blacklisted_users = blacklisted_users

        return config

    async def get_users_for_states(
        self,
        state_updates: Iterable[UserPresenceState],
    ) -> Dict[str, Set[UserPresenceState]]:
        """Given an iterable of user presence updates, determine where each one
        needs to go. Returned results will not affect presence updates that are
        sent between users who share a room.

        Args:
            state_updates: An iterable of user presence state updates.

        Returns:
            A dictionary of user_id -> set of UserPresenceState that the user should
            receive.
        """
        destination_users = {}  # type: Dict[str, Set[UserPresenceState]]

        # Ignore any updates for blacklisted users
        desired_updates = set()
        for update in state_updates:
            if update.state_key not in self._config.blacklisted_users:
                desired_updates.add(update)

        # Send all presence updates to specific users
        for user_id in self._config.always_send_to_users:
            destination_users[user_id] = desired_updates

        return destination_users

    async def get_interested_users(
        self,
        user_id: str,
    ) -> Union[Set[str], PresenceRouter.ALL_USERS]:
        """
        Retrieve a list of users that `user_id` is interested in receiving the
        presence of. This will be in addition to those they share a room with.

        Optionally, the object PresenceRouter.ALL_USERS can be returned to indicate
        that this user should receive all incoming local and remote presence updates.

        Note that this method will only be called for local users.

        Args:
            user_id: A user requesting presence updates.

        Returns:
            A set of user IDs to return additional presence updates for, or
            PresenceRouter.ALL_USERS to return presence updates for all other users.
        """
        if user_id in self._config.always_send_to_users:
            return PresenceRouter.ALL_USERS

        return set()
```
#### A note on `get_users_for_states` and `get_interested_users`
Both of these methods are effectively two different sides of the same coin. The logic
regarding which users should receive updates for other users should be the same
between them.
`get_users_for_states` is called when presence updates come in from either federation
or local users, and is used to either direct local presence to remote users, or to
wake up the sync streams of local users to collect remote presence.
In contrast, `get_interested_users` is used to determine the users that presence should
be fetched for when a local user is syncing. This presence is then retrieved, before
being fed through `get_users_for_states` once again, with only the syncing user's
routing information pulled from the resulting dictionary.
Their routing logic should thus line up, else you may run into unintended behaviour.
## Configuration
Once you've crafted your module and installed it into the same Python environment as
Synapse, amend your homeserver config file with the following.
```yaml
presence:
  routing_module:
    module: my_module.ExamplePresenceRouter
    config:
      # Any configuration options for your module. The below is an example
      # of setting options for ExamplePresenceRouter.
      always_send_to_users: ["@presence_gobbler:example.org"]
      blacklisted_users:
        - "@alice:example.com"
        - "@bob:example.com"
      ...
```
The contents of `config` will be passed as a Python dictionary to the static
`parse_config` method of your class. The object returned by this method will
then be passed to the `__init__` method of your module as `config`.
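Conceptually, the wiring boils down to something like this sketch (illustrative only, not Synapse's actual loading code):

```python
# config_dict is the dictionary parsed from the YAML 'config' section above
config = ExamplePresenceRouter.parse_config(config_dict)

# Synapse then instantiates the module with the validated config object
presence_router = ExamplePresenceRouter(config, module_api)
```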

View File

@@ -3,8 +3,9 @@
It is recommended to put a reverse proxy such as
[nginx](https://nginx.org/en/docs/http/ngx_http_proxy_module.html),
[Apache](https://httpd.apache.org/docs/current/mod/mod_proxy_http.html),
[Caddy](https://caddyserver.com/docs/quick-starts/reverse-proxy) or
[HAProxy](https://www.haproxy.org/) in front of Synapse. One advantage
[Caddy](https://caddyserver.com/docs/quick-starts/reverse-proxy),
[HAProxy](https://www.haproxy.org/) or
[relayd](https://man.openbsd.org/relayd.8) in front of Synapse. One advantage
of doing so is that it means that you can expose the default https port
(443) to Matrix clients without needing to run Synapse with root
privileges.
@@ -103,10 +104,11 @@ example.com:8448 {
```
<VirtualHost *:443>
SSLEngine on
ServerName matrix.example.com;
ServerName matrix.example.com
RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
AllowEncodedSlashes NoDecode
ProxyPreserveHost on
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
ProxyPass /_synapse/client http://127.0.0.1:8008/_synapse/client nocanon
@@ -115,7 +117,7 @@ example.com:8448 {
<VirtualHost *:8448>
SSLEngine on
ServerName example.com;
ServerName example.com
RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
AllowEncodedSlashes NoDecode
@@ -134,6 +136,8 @@ example.com:8448 {
</IfModule>
```
**NOTE 3**: Missing `ProxyPreserveHost on` can lead to a redirect loop.
### HAProxy
```
@@ -162,6 +166,52 @@ backend matrix
server matrix 127.0.0.1:8008
```
### Relayd
```
table <webserver> { 127.0.0.1 }
table <matrixserver> { 127.0.0.1 }

http protocol "https" {
    tls { no tlsv1.0, ciphers "HIGH" }
    tls keypair "example.com"
    match header set "X-Forwarded-For" value "$REMOTE_ADDR"
    match header set "X-Forwarded-Proto" value "https"

    # set CORS header for .well-known/matrix/server, .well-known/matrix/client
    # httpd does not support setting headers, so do it here
    match request path "/.well-known/matrix/*" tag "matrix-cors"
    match response tagged "matrix-cors" header set "Access-Control-Allow-Origin" value "*"

    pass quick path "/_matrix/*" forward to <matrixserver>
    pass quick path "/_synapse/client/*" forward to <matrixserver>

    # pass on non-matrix traffic to webserver
    pass forward to <webserver>
}

relay "https_traffic" {
    listen on egress port 443 tls
    protocol "https"
    forward to <matrixserver> port 8008 check tcp
    forward to <webserver> port 8080 check tcp
}

http protocol "matrix" {
    tls { no tlsv1.0, ciphers "HIGH" }
    tls keypair "example.com"
    block
    pass quick path "/_matrix/*" forward to <matrixserver>
    pass quick path "/_synapse/client/*" forward to <matrixserver>
}

relay "matrix_federation" {
    listen on egress port 8448 tls
    protocol "matrix"
    forward to <matrixserver> port 8008 check tcp
}
```
## Homeserver Configuration
You will also want to set `bind_addresses: ['127.0.0.1']` and

View File

@@ -82,15 +82,33 @@ pid_file: DATADIR/homeserver.pid
#
#soft_file_limit: 0
# Set to false to disable presence tracking on this homeserver.
# Presence tracking allows users to see the state (e.g online/offline)
# of other local and remote users.
#
#use_presence: false
presence:
# Uncomment to disable presence tracking on this homeserver. This option
# replaces the previous top-level 'use_presence' option.
#
#enabled: false
# Presence routers are third-party modules that can specify additional logic
# to where presence updates from users are routed.
#
presence_router:
# The custom module's class. Uncomment to use a custom presence router module.
#
#module: "my_custom_router.PresenceRouter"
# Configuration options of the custom module. Refer to your module's
# documentation for available options.
#
#config:
# example_option: 'something'
# Whether to require authentication to retrieve profile data (avatars,
# display names) of other users through the client API. Defaults to
# 'false'. Note that profile data is also available via the federation
# API, so this setting is of limited value if federation is enabled on
# the server.
# API, unless allow_profile_lookup_over_federation is set to false.
#
#require_auth_for_profile_requests: true
@@ -134,6 +152,16 @@ pid_file: DATADIR/homeserver.pid
#
#gc_thresholds: [700, 10, 10]
# The minimum time in seconds between each GC for a generation, regardless of
# the GC thresholds. This ensures that we don't do GC too frequently.
#
# A value of `[1s, 10s, 30s]` indicates that a second must pass between consecutive
# generation 0 GCs, etc.
#
# Defaults to `[1s, 10s, 30s]`.
#
#gc_min_interval: [0.5s, 30s, 1m]
# Set the limit on the returned events in the timeline in the get
# and sync operations. The default value is 100. -1 means no upper limit.
#
@@ -713,6 +741,12 @@ acme:
#
#allow_profile_lookup_over_federation: false
# Uncomment to disable device display name lookup over federation. By default, the
# Federation API allows other homeservers to obtain device display names of any user
# on this homeserver. Defaults to 'true'.
#
#allow_device_name_lookup_over_federation: false
## Caching ##
@@ -792,6 +826,7 @@ caches:
# password: secretpassword
# database: synapse
# host: localhost
# port: 5432
# cp_min: 5
# cp_max: 10
#
@@ -870,10 +905,10 @@ log_config: "CONFDIR/SERVERNAME.log.config"
#rc_joins:
# local:
# per_second: 0.1
# burst_count: 3
# burst_count: 10
# remote:
# per_second: 0.01
# burst_count: 3
# burst_count: 10
#
#rc_3pid_validation:
# per_second: 0.003
@@ -1157,69 +1192,6 @@ url_preview_accept_language:
#
#enable_registration: false
# Optional account validity configuration. This allows for accounts to be denied
# any request after a given period.
#
# Once this feature is enabled, Synapse will look for registered users without an
# expiration date at startup and will add one to every account it found using the
# current settings at that time.
# This means that, if a validity period is set, and Synapse is restarted (it will
# then derive an expiration date from the current validity period), and some time
# after that the validity period changes and Synapse is restarted, the users'
# expiration dates won't be updated unless their account is manually renewed. This
# date will be randomly selected within a range [now + period - d ; now + period],
# where d is equal to 10% of the validity period.
#
account_validity:
# The account validity feature is disabled by default. Uncomment the
# following line to enable it.
#
#enabled: true
# The period after which an account is valid after its registration. When
# renewing the account, its validity period will be extended by this amount
# of time. This parameter is required when using the account validity
# feature.
#
#period: 6w
# The amount of time before an account's expiry date at which Synapse will
# send an email to the account's email address with a renewal link. By
# default, no such emails are sent.
#
# If you enable this setting, you will also need to fill out the 'email' and
# 'public_baseurl' configuration sections.
#
#renew_at: 1w
# The subject of the email sent out with the renewal link. '%(app)s' can be
# used as a placeholder for the 'app_name' parameter from the 'email'
# section.
#
# Note that the placeholder must be written '%(app)s', including the
# trailing 's'.
#
# If this is not set, a default value is used.
#
#renew_email_subject: "Renew your %(app)s account"
# Directory in which Synapse will try to find templates for the HTML files to
# serve to the user when trying to renew an account. If not set, default
# templates from within the Synapse package will be used.
#
#template_dir: "res/templates"
# File within 'template_dir' giving the HTML to be displayed to the user after
# they successfully renewed their account. If not set, default text is used.
#
#account_renewed_html_path: "account_renewed.html"
# File within 'template_dir' giving the HTML to be displayed when the user
# tries to renew an account with an invalid renewal token. If not set,
# default text is used.
#
#invalid_token_html_path: "invalid_token.html"
# Time that a user's session remains valid for, after they log in.
#
# Note that this is not currently compatible with guest logins.
@@ -1247,9 +1219,9 @@ account_validity:
#
#allowed_local_3pids:
# - medium: email
# pattern: '.*@matrix\.org'
# pattern: '^[^@]+@matrix\.org$'
# - medium: email
# pattern: '.*@vector\.im'
# pattern: '^[^@]+@vector\.im$'
# - medium: msisdn
# pattern: '\+44'
@@ -1414,6 +1386,91 @@ account_threepid_delegates:
#auto_join_rooms_for_guests: false
## Account Validity ##
# Optional account validity configuration. This allows for accounts to be denied
# any request after a given period.
#
# Once this feature is enabled, Synapse will look for registered users without an
# expiration date at startup and will add one to every account it found using the
# current settings at that time.
# This means that, if a validity period is set, and Synapse is restarted (it will
# then derive an expiration date from the current validity period), and some time
# after that the validity period changes and Synapse is restarted, the users'
# expiration dates won't be updated unless their account is manually renewed. This
# date will be randomly selected within a range [now + period - d ; now + period],
# where d is equal to 10% of the validity period.
#
account_validity:
# The account validity feature is disabled by default. Uncomment the
# following line to enable it.
#
#enabled: true
# The period after which an account is valid after its registration. When
# renewing the account, its validity period will be extended by this amount
# of time. This parameter is required when using the account validity
# feature.
#
#period: 6w
# The amount of time before an account's expiry date at which Synapse will
# send an email to the account's email address with a renewal link. By
# default, no such emails are sent.
#
# If you enable this setting, you will also need to fill out the 'email' and
# 'public_baseurl' configuration sections.
#
#renew_at: 1w
# The subject of the email sent out with the renewal link. '%(app)s' can be
# used as a placeholder for the 'app_name' parameter from the 'email'
# section.
#
# Note that the placeholder must be written '%(app)s', including the
# trailing 's'.
#
# If this is not set, a default value is used.
#
#renew_email_subject: "Renew your %(app)s account"
# Directory in which Synapse will try to find templates for the HTML files to
# serve to the user when trying to renew an account. If not set, default
# templates from within the Synapse package will be used.
#
# The currently available templates are:
#
# * account_renewed.html: Displayed to the user after they have successfully
# renewed their account.
#
# * account_previously_renewed.html: Displayed to the user if they attempt to
# renew their account with a token that is valid, but that has already
# been used. In this case the account is not renewed again.
#
# * invalid_token.html: Displayed to the user when they try to renew an account
# with an unknown or invalid renewal token.
#
# See https://github.com/matrix-org/synapse/tree/master/synapse/res/templates for
# default template contents.
#
# The file name of some of these templates can be configured below for legacy
# reasons.
#
#template_dir: "res/templates"
# A custom file name for the 'account_renewed.html' template.
#
# If not set, the file is assumed to be named "account_renewed.html".
#
#account_renewed_html_path: "account_renewed.html"
# A custom file name for the 'invalid_token.html' template.
#
# If not set, the file is assumed to be named "invalid_token.html".
#
#invalid_token_html_path: "invalid_token.html"
## Metrics ###
# Enable collection and rendering of performance metrics
@@ -1452,14 +1509,32 @@ metrics_flags:
## API Configuration ##
# A list of event types that will be included in the room_invite_state
# Controls for the state that is shared with users who receive an invite
# to a room
#
#room_invite_state_types:
# - "m.room.join_rules"
# - "m.room.canonical_alias"
# - "m.room.avatar"
# - "m.room.encryption"
# - "m.room.name"
room_prejoin_state:
# By default, the following state event types are shared with users who
# receive invites to the room:
#
# - m.room.join_rules
# - m.room.canonical_alias
# - m.room.avatar
# - m.room.encryption
# - m.room.name
# - m.room.create
#
# Uncomment the following to disable these defaults (so that only the event
# types listed in 'additional_event_types' are shared). Defaults to 'false'.
#
#disable_default_event_types: true
# Additional state event types to share with users when they are invited
# to a room.
#
# By default, this list is empty (so only the default event types are shared).
#
#additional_event_types:
# - org.example.custom.event.type
# A list of application service config files to use
@@ -1759,6 +1834,9 @@ saml2_config:
# Note that, if this is changed, users authenticating via that provider
# will no longer be recognised as the same user!
#
# (Use "oidc" here if you are migrating from an old "oidc_config"
# configuration.)
#
# idp_name: A user-facing name for this identity provider, which is used to
# offer the user a choice of login mechanisms.
#
@@ -1780,7 +1858,26 @@ saml2_config:
#
# client_id: Required. oauth2 client id to use.
#
# client_secret: Required. oauth2 client secret to use.
# client_secret: oauth2 client secret to use. May be omitted if
# client_secret_jwt_key is given, or if client_auth_method is 'none'.
#
# client_secret_jwt_key: Alternative to client_secret: details of a key used
# to create a JSON Web Token to be used as an OAuth2 client secret. If
# given, must be a dictionary with the following properties:
#
# key: a pem-encoded signing key. Must be a suitable key for the
# algorithm specified. Required unless 'key_file' is given.
#
# key_file: the path to file containing a pem-encoded signing key file.
# Required unless 'key' is given.
#
# jwt_header: a dictionary giving properties to include in the JWT
# header. Must include the key 'alg', giving the algorithm used to
# sign the JWT, such as "ES256", using the JWA identifiers in
# RFC7518.
#
# jwt_payload: an optional dictionary giving properties to include in
# the JWT payload. Normally this should include an 'iss' key.
#
# client_auth_method: auth method to use when exchanging the token. Valid
# values are 'client_secret_basic' (default), 'client_secret_post' and
@@ -1821,7 +1918,7 @@ saml2_config:
# sub-properties:
#
# module: The class name of a custom mapping module. Default is
# 'synapse.handlers.oidc_handler.JinjaOidcMappingProvider'.
# 'synapse.handlers.oidc.JinjaOidcMappingProvider'.
# See https://github.com/matrix-org/synapse/blob/master/docs/sso_mapping_providers.md#openid-mapping-providers
# for information on implementing a custom mapping provider.
#
@@ -1855,6 +1952,24 @@ saml2_config:
# which is set to the claims returned by the UserInfo Endpoint and/or
# in the ID Token.
#
# It is possible to configure Synapse to only allow logins if certain attributes
# match particular values in the OIDC userinfo. The requirements can be listed under
# `attribute_requirements` as shown below. All of the listed attributes must
# match for the login to be permitted. Additional attributes can be added to
# userinfo by expanding the `scopes` section of the OIDC config to retrieve
# additional information from the OIDC provider.
#
# If the OIDC claim is a list, then the attribute must match any value in the list.
# Otherwise, it must exactly match the value of the claim. Using the example
# below, the `family_name` claim MUST be "Stephensson", but the `groups`
# claim MUST contain "admin".
#
# attribute_requirements:
# - attribute: family_name
# value: "Stephensson"
# - attribute: groups
# value: "admin"
#
# See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
# for information on how to configure these options.
#
@@ -1887,34 +2002,9 @@ oidc_providers:
# localpart_template: "{{ user.login }}"
# display_name_template: "{{ user.name }}"
# email_template: "{{ user.email }}"
# For use with Keycloak
#
#- idp_id: keycloak
# idp_name: Keycloak
# issuer: "https://127.0.0.1:8443/auth/realms/my_realm_name"
# client_id: "synapse"
# client_secret: "copy secret generated in Keycloak UI"
# scopes: ["openid", "profile"]
# For use with Github
#
#- idp_id: github
# idp_name: Github
# idp_brand: org.matrix.github
# discover: false
# issuer: "https://github.com/"
# client_id: "your-client-id" # TO BE FILLED
# client_secret: "your-client-secret" # TO BE FILLED
# authorization_endpoint: "https://github.com/login/oauth/authorize"
# token_endpoint: "https://github.com/login/oauth/access_token"
# userinfo_endpoint: "https://api.github.com/user"
# scopes: ["read:user"]
# user_mapping_provider:
# config:
# subject_claim: "id"
# localpart_template: "{{ user.login }}"
# display_name_template: "{{ user.name }}"
# attribute_requirements:
# - attribute: userGroup
# value: "synapseUsers"
# Enable Central Authentication Service (CAS) for registration and login.
@@ -2627,19 +2717,20 @@ user_directory:
# Local statistics collection. Used in populating the room directory.
# Settings for local room and user statistics collection. See
# docs/room_and_user_statistics.md.
#
# 'bucket_size' controls how large each statistics timeslice is. It can
# be defined in a human readable short form -- e.g. "1d", "1y".
#
# 'retention' controls how long historical statistics will be kept for.
# It can be defined in a human readable short form -- e.g. "1d", "1y".
#
#
#stats:
# enabled: true
# bucket_size: 1d
# retention: 1y
stats:
# Uncomment the following to disable room and user statistics. Note that doing
# so may cause certain features (such as the room directory) not to work
# correctly.
#
#enabled: false
# The size of each timeslice in the room_stats_historical and
# user_stats_historical tables, as a time period. Defaults to "1d".
#
#bucket_size: 1h
# Server Notices room configuration

View File

@@ -14,6 +14,7 @@ The Python class is instantiated with two objects:
* An instance of `synapse.module_api.ModuleApi`.
It then implements methods which return a boolean to alter behavior in Synapse.
All the methods must be defined.
There's a generic method for checking every event (`check_event_for_spam`), as
well as some specific methods:
@@ -24,6 +25,7 @@ well as some specific methods:
* `user_may_publish_room`
* `check_username_for_spam`
* `check_registration_for_spam`
* `check_media_file_for_spam`
The details of each of these methods (as well as their inputs and outputs)
are documented in the `synapse.events.spamcheck.SpamChecker` class.
@@ -31,6 +33,10 @@ are documented in the `synapse.events.spamcheck.SpamChecker` class.
The `ModuleApi` class provides a way for the custom spam checker class to
call back into the homeserver internals.
Additionally, a `parse_config` method is mandatory and receives the plugin config
dictionary. After parsing, it must return an object which will be
passed to `__init__` later.
### Example
```python
@@ -41,6 +47,10 @@ class ExampleSpamChecker:
self.config = config
self.api = api
@staticmethod
def parse_config(config):
return config
async def check_event_for_spam(self, foo):
return False # allow all events
@@ -59,7 +69,13 @@ class ExampleSpamChecker:
async def check_username_for_spam(self, user_profile):
return False # allow all usernames
async def check_registration_for_spam(self, email_threepid, username, request_info):
async def check_registration_for_spam(
self,
email_threepid,
username,
request_info,
auth_provider_id,
):
return RegistrationBehaviour.ALLOW # allow all registrations
async def check_media_file_for_spam(self, file_wrapper, file_info):

View File

@@ -106,7 +106,7 @@ A custom mapping provider must specify the following methods:
Synapse has a built-in OpenID mapping provider if a custom provider isn't
specified in the config. It is located at
[`synapse.handlers.oidc_handler.JinjaOidcMappingProvider`](../synapse/handlers/oidc_handler.py).
[`synapse.handlers.oidc.JinjaOidcMappingProvider`](../synapse/handlers/oidc.py).
## SAML Mapping Providers
@@ -190,4 +190,4 @@ A custom mapping provider must specify the following methods:
Synapse has a built-in SAML mapping provider if a custom provider isn't
specified in the config. It is located at
[`synapse.handlers.saml_handler.DefaultSamlMappingProvider`](../synapse/handlers/saml_handler.py).
[`synapse.handlers.saml.DefaultSamlMappingProvider`](../synapse/handlers/saml.py).

View File

@@ -232,7 +232,6 @@ expressions:
# Registration/login requests
^/_matrix/client/(api/v1|r0|unstable)/login$
^/_matrix/client/(r0|unstable)/register$
^/_matrix/client/(r0|unstable)/auth/.*/fallback/web$
# Event sending requests
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact
@@ -276,7 +275,7 @@ using):
Ensure that all SSO logins go to a single process.
For multiple workers not handling the SSO endpoints properly, see
[#7530](https://github.com/matrix-org/synapse/issues/7530) and
[#9427](https://github.com/matrix-org/synapse/issues/9427).
Note that a HTTP listener with `client` and `federation` resources must be

View File

@@ -1,12 +1,14 @@
[mypy]
namespace_packages = True
plugins = mypy_zope:plugin, scripts-dev/mypy_synapse_plugin.py
follow_imports = silent
follow_imports = normal
check_untyped_defs = True
show_error_codes = True
show_traceback = True
mypy_path = stubs
warn_unreachable = True
local_partial_types = True
no_implicit_optional = True
# To find all folders that pass mypy you run:
#
@@ -20,8 +22,9 @@ files =
synapse/crypto,
synapse/event_auth.py,
synapse/events/builder.py,
synapse/events/validator.py,
synapse/events/spamcheck.py,
synapse/events/third_party_rules.py,
synapse/events/validator.py,
synapse/federation,
synapse/groups,
synapse/handlers,
@@ -69,7 +72,9 @@ files =
synapse/util/async_helpers.py,
synapse/util/caches,
synapse/util/metrics.py,
synapse/util/macaroons.py,
synapse/util/stringutils.py,
synapse/visibility.py,
tests/replication,
tests/test_utils,
tests/handlers/test_password_providers.py,
@@ -116,9 +121,6 @@ ignore_missing_imports = True
[mypy-saml2.*]
ignore_missing_imports = True
[mypy-unpaddedbase64]
ignore_missing_imports = True
[mypy-canonicaljson]
ignore_missing_imports = True
@@ -169,3 +171,6 @@ ignore_missing_imports = True
[mypy-txacme.*]
ignore_missing_imports = True
[mypy-pympler.*]
ignore_missing_imports = True

View File

@@ -35,7 +35,7 @@
showcontent = true
[tool.black]
target-version = ['py35']
target-version = ['py36']
exclude = '''
(

View File

@@ -18,14 +18,13 @@ import threading
from concurrent.futures import ThreadPoolExecutor
DISTS = (
"debian:stretch",
"debian:buster",
"debian:bullseye",
"debian:sid",
"ubuntu:xenial",
"ubuntu:bionic",
"ubuntu:focal",
"ubuntu:groovy",
"ubuntu:bionic", # 18.04 LTS (our EOL forced by Py36 on 2021-12-23)
"ubuntu:focal", # 20.04 LTS (our EOL forced by Py38 on 2024-10-14)
"ubuntu:groovy", # 20.10 (EOL 2021-07-07)
"ubuntu:hirsute", # 21.04 (EOL 2022-01-05)
)
DESC = '''\
@@ -43,7 +42,7 @@ class Builder(object):
self._lock = threading.Lock()
self._failed = False
def run_build(self, dist):
def run_build(self, dist, skip_tests=False):
"""Build deb for a single distribution"""
if self._failed:
@@ -51,13 +50,13 @@ class Builder(object):
raise Exception("failed")
try:
self._inner_build(dist)
self._inner_build(dist, skip_tests)
except Exception as e:
print("build of %s failed: %s" % (dist, e), file=sys.stderr)
self._failed = True
raise
def _inner_build(self, dist):
def _inner_build(self, dist, skip_tests=False):
projdir = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
os.chdir(projdir)
@@ -101,6 +100,7 @@ class Builder(object):
"--volume=" + debsdir + ":/debs",
"-e", "TARGET_USERID=%i" % (os.getuid(), ),
"-e", "TARGET_GROUPID=%i" % (os.getgid(), ),
"-e", "DEB_BUILD_OPTIONS=%s" % ("nocheck" if skip_tests else ""),
"dh-venv-builder:" + tag,
], stdout=stdout, stderr=subprocess.STDOUT)
@@ -124,7 +124,7 @@ class Builder(object):
self.active_containers.remove(c)
def run_builds(dists, jobs=1):
def run_builds(dists, jobs=1, skip_tests=False):
builder = Builder(redirect_stdout=(jobs > 1))
def sig(signum, _frame):
@@ -133,7 +133,7 @@ def run_builds(dists, jobs=1):
signal.signal(signal.SIGINT, sig)
with ThreadPoolExecutor(max_workers=jobs) as e:
res = e.map(builder.run_build, dists)
res = e.map(lambda dist: builder.run_build(dist, skip_tests), dists)
# make sure we consume the iterable so that exceptions are raised.
for r in res:
@@ -148,9 +148,13 @@ if __name__ == '__main__':
'-j', '--jobs', type=int, default=1,
help='specify the number of builds to run in parallel',
)
parser.add_argument(
'--no-check', action='store_true',
help='skip running tests after building',
)
parser.add_argument(
'dist', nargs='*', default=DISTS,
help='a list of distributions to build for. Default: %(default)s',
)
args = parser.parse_args()
run_builds(dists=args.dist, jobs=args.jobs)
run_builds(dists=args.dist, jobs=args.jobs, skip_tests=args.no_check)

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
#
# A script which checks that an appropriate news file has been added on this
# branch.

View File

@@ -1,22 +1,49 @@
#! /bin/bash -eu
#!/usr/bin/env bash
# This script is designed for developers who want to test their code
# against Complement.
#
# It makes a Synapse image which represents the current checkout,
# then downloads Complement and runs it with that image.
# builds a synapse-complement image on top, then runs tests with it.
#
# By default the script will fetch the latest Complement master branch and
# run tests with that. This can be overridden to use a custom Complement
# checkout by setting the COMPLEMENT_DIR environment variable to the
# filepath of a local Complement checkout.
#
# A regular expression of test method names can be supplied as the first
# argument to the script. Complement will then only run those tests. If
# no regex is supplied, all tests are run. For example:
#
# ./complement.sh "TestOutboundFederation(Profile|Send)"
#
# Exit if a line returns a non-zero exit code
set -e
# Change to the repository root
cd "$(dirname $0)/.."
# Check for a user-specified Complement checkout
if [[ -z "$COMPLEMENT_DIR" ]]; then
echo "COMPLEMENT_DIR not set. Fetching the latest Complement checkout..."
wget -Nq https://github.com/matrix-org/complement/archive/master.tar.gz
tar -xzf master.tar.gz
COMPLEMENT_DIR=complement-master
echo "Checkout available at 'complement-master'"
fi
# Build the base Synapse image from the local checkout
docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile .
docker build -t matrixdotorg/synapse -f docker/Dockerfile .
# Build the Synapse monolith image from Complement, based on the above image we just built
docker build -t complement-synapse -f "$COMPLEMENT_DIR/dockerfiles/Synapse.Dockerfile" "$COMPLEMENT_DIR/dockerfiles"
# Download Complement
wget -N https://github.com/matrix-org/complement/archive/master.tar.gz
tar -xzf master.tar.gz
cd complement-master
cd "$COMPLEMENT_DIR"
# Build the Synapse image from Complement, based on the above image we just built
docker build -t complement-synapse -f dockerfiles/Synapse.Dockerfile ./dockerfiles
EXTRA_COMPLEMENT_ARGS=""
if [[ -n "$1" ]]; then
# A test name regex has been set, supply it to Complement
EXTRA_COMPLEMENT_ARGS+="-run $1 "
fi
# Run the tests on the resulting image!
COMPLEMENT_BASE_IMAGE=complement-synapse go test -v -count=1 ./tests
# Run the tests!
COMPLEMENT_BASE_IMAGE=complement-synapse go test -v -tags synapse_blacklist,msc2946,msc3083 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests

View File

@@ -1,10 +1,15 @@
#!/bin/bash
#!/usr/bin/env bash
# Find linting errors in Synapse's default config file.
# Exits with 0 if there are no problems, or another code otherwise.
# cd to the root of the repository
cd `dirname $0`/..
# Restore backup of sample config upon script exit
trap "mv docs/sample_config.yaml.bak docs/sample_config.yaml" EXIT
# Fix non-lowercase true/false values
sed -i.bak -E "s/: +True/: true/g; s/: +False/: false/g;" docs/sample_config.yaml
rm docs/sample_config.yaml.bak
# Check if anything changed
git diff --exit-code docs/sample_config.yaml
diff docs/sample_config.yaml docs/sample_config.yaml.bak

View File

@@ -140,7 +140,7 @@ if __name__ == "__main__":
definitions = {}
for directory in args.directories:
for root, dirs, files in os.walk(directory):
for root, _, files in os.walk(directory):
for filename in files:
if filename.endswith(".py"):
filepath = os.path.join(root, filename)

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env python2
#!/usr/bin/env python
import sys

View File

@@ -22,8 +22,8 @@ import sys
from typing import Any, Optional
from urllib import parse as urlparse
import nacl.signing
import requests
import signedjson.key
import signedjson.types
import srvlookup
import yaml
@@ -44,18 +44,6 @@ def encode_base64(input_bytes):
return output_string
def decode_base64(input_string):
"""Decode a base64 string to bytes inferring padding from the length of the
string."""
input_bytes = input_string.encode("ascii")
input_len = len(input_bytes)
padding = b"=" * (3 - ((input_len + 3) % 4))
output_len = 3 * ((input_len + 2) // 4) + (input_len + 2) % 4 - 2
output_bytes = base64.b64decode(input_bytes + padding)
return output_bytes[:output_len]
def encode_canonical_json(value):
return json.dumps(
value,
@@ -88,42 +76,6 @@ def sign_json(
return json_object
NACL_ED25519 = "ed25519"
def decode_signing_key_base64(algorithm, version, key_base64):
"""Decode a base64 encoded signing key
Args:
algorithm (str): The algorithm the key is for (currently "ed25519").
version (str): Identifies this key out of the keys for this entity.
key_base64 (str): Base64 encoded bytes of the key.
Returns:
A SigningKey object.
"""
if algorithm == NACL_ED25519:
key_bytes = decode_base64(key_base64)
key = nacl.signing.SigningKey(key_bytes)
key.version = version
key.alg = NACL_ED25519
return key
else:
raise ValueError("Unsupported algorithm %s" % (algorithm,))
def read_signing_keys(stream):
"""Reads a list of keys from a stream
Args:
stream : A stream to iterate for keys.
Returns:
list of SigningKey objects.
"""
keys = []
for line in stream:
algorithm, version, key_base64 = line.split()
keys.append(decode_signing_key_base64(algorithm, version, key_base64))
return keys
def request(
method: Optional[str],
origin_name: str,
@@ -223,23 +175,28 @@ def main():
parser.add_argument("--body", help="Data to send as the body of the HTTP request")
parser.add_argument(
"path", help="request path. We will add '/_matrix/federation/v1/' to this."
"path", help="request path, including the '/_matrix/federation/...' prefix."
)
args = parser.parse_args()
if not args.server_name or not args.signing_key_path:
args.signing_key = None
if args.signing_key_path:
with open(args.signing_key_path) as f:
args.signing_key = f.readline()
if not args.server_name or not args.signing_key:
read_args_from_config(args)
with open(args.signing_key_path) as f:
key = read_signing_keys(f)[0]
algorithm, version, key_base64 = args.signing_key.split()
key = signedjson.key.decode_signing_key_base64(algorithm, version, key_base64)
result = request(
args.method,
args.server_name,
key,
args.destination,
"/_matrix/federation/v1/" + args.path,
args.path,
content=args.body,
)
@@ -255,10 +212,16 @@ def main():
def read_args_from_config(args):
with open(args.config, "r") as fh:
config = yaml.safe_load(fh)
if not args.server_name:
args.server_name = config["server_name"]
if not args.signing_key_path:
args.signing_key_path = config["signing_key_path"]
if not args.signing_key:
if "signing_key" in config:
args.signing_key = config["signing_key"]
else:
with open(config["signing_key_path"]) as f:
args.signing_key = f.readline()
class MatrixConnectionAdapter(HTTPAdapter):
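The refactor above replaces the script's hand-rolled key handling with signedjson; a minimal sketch of the library round-trip it now relies on (the key version "key1" is a placeholder):

import signedjson.key

key = signedjson.key.generate_signing_key("key1")  # fresh ed25519 signing key
key_base64 = signedjson.key.encode_signing_key_base64(key)
decoded = signedjson.key.decode_signing_key_base64("ed25519", "key1", key_base64)
assert decoded.alg == "ed25519" and decoded.version == "key1"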


@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
#
# Update/check the docs/sample_config.yaml


@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
#
# Runs linting scripts over the local Synapse checkout
# isort - sorts import statements


@@ -48,7 +48,7 @@ args = parser.parse_args()
for directory in args.directories:
for root, dirs, files in os.walk(directory):
for root, _, files in os.walk(directory):
for filename in files:
if filename.endswith(".py"):
filepath = os.path.join(root, filename)


@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
#
# This script generates SQL files for creating a brand new Synapse DB with the latest
# schema, on both SQLite3 and Postgres.
@@ -6,7 +6,7 @@
# It does so by having Synapse generate an up-to-date SQLite DB, then running
# synapse_port_db to convert it to Postgres. It then dumps the contents of both.
POSTGRES_HOST="localhost"
export PGHOST="localhost"
POSTGRES_DB_NAME="synapse_full_schema.$$"
SQLITE_FULL_SCHEMA_OUTPUT_FILE="full.sql.sqlite"
@@ -32,7 +32,7 @@ usage() {
while getopts "p:co:h" opt; do
case $opt in
p)
POSTGRES_USERNAME=$OPTARG
export PGUSER=$OPTARG
;;
c)
# Print all commands that are being executed
@@ -69,7 +69,7 @@ if [ ${#unsatisfied_requirements} -ne 0 ]; then
exit 1
fi
if [ -z "$POSTGRES_USERNAME" ]; then
if [ -z "$PGUSER" ]; then
echo "No postgres username supplied"
usage
exit 1
@@ -84,8 +84,9 @@ fi
# Create the output directory if it doesn't exist
mkdir -p "$OUTPUT_DIR"
read -rsp "Postgres password for '$POSTGRES_USERNAME': " POSTGRES_PASSWORD
read -rsp "Postgres password for '$PGUSER': " PGPASSWORD
echo ""
export PGPASSWORD
# Exit immediately if a command fails
set -e
@@ -131,9 +132,9 @@ report_stats: false
database:
name: "psycopg2"
args:
user: "$POSTGRES_USERNAME"
host: "$POSTGRES_HOST"
password: "$POSTGRES_PASSWORD"
user: "$PGUSER"
host: "$PGHOST"
password: "$PGPASSWORD"
database: "$POSTGRES_DB_NAME"
# Suppress the key server warning.
@@ -150,7 +151,7 @@ scripts-dev/update_database --database-config "$SQLITE_CONFIG"
# Create the PostgreSQL database.
echo "Creating postgres database..."
createdb $POSTGRES_DB_NAME
createdb --lc-collate=C --lc-ctype=C --template=template0 "$POSTGRES_DB_NAME"
echo "Copying data from SQLite3 to Postgres with synapse_port_db..."
if [ -z "$COVERAGE" ]; then
@@ -181,7 +182,7 @@ DROP TABLE user_directory_search_docsize;
DROP TABLE user_directory_search_stat;
"
sqlite3 "$SQLITE_DB" <<< "$SQL"
psql $POSTGRES_DB_NAME -U "$POSTGRES_USERNAME" -w <<< "$SQL"
psql "$POSTGRES_DB_NAME" -w <<< "$SQL"
echo "Dumping SQLite3 schema to '$OUTPUT_DIR/$SQLITE_FULL_SCHEMA_OUTPUT_FILE'..."
sqlite3 "$SQLITE_DB" ".dump" > "$OUTPUT_DIR/$SQLITE_FULL_SCHEMA_OUTPUT_FILE"


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -e
@@ -6,4 +6,4 @@ set -e
# next PR number.
CURRENT_NUMBER=`curl -s "https://api.github.com/repos/matrix-org/synapse/issues?state=all&per_page=1" | jq -r ".[0].number"`
CURRENT_NUMBER=$((CURRENT_NUMBER+1))
echo $CURRENT_NUMBER
echo $CURRENT_NUMBER

scripts-dev/release.py (new executable file, 244 lines added)

@@ -0,0 +1,244 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""An interactive script for doing a release. See `run()` below.
"""
import subprocess
import sys
from typing import Optional
import click
import git
from packaging import version
from redbaron import RedBaron
@click.command()
def run():
"""An interactive script to walk through the initial stages of creating a
release, including creating the release branch, updating the changelog and pushing to
GitHub.
Requires the dev dependencies to be installed, which can be done via:
pip install -e .[dev]
"""
# Make sure we're in a git repo.
try:
repo = git.Repo()
except git.InvalidGitRepositoryError:
raise click.ClickException("Not in Synapse repo.")
if repo.is_dirty():
raise click.ClickException("Uncommitted changes exist.")
click.secho("Updating git repo...")
repo.remote().fetch()
# Parse the AST and load the `__version__` node so that we can edit it
# later.
with open("synapse/__init__.py") as f:
red = RedBaron(f.read())
version_node = None
for node in red:
if node.type != "assignment":
continue
if node.target.type != "name":
continue
if node.target.value != "__version__":
continue
version_node = node
break
if not version_node:
print("Failed to find '__version__' definition in synapse/__init__.py")
sys.exit(1)
# Parse the current version.
current_version = version.parse(version_node.value.value.strip('"'))
assert isinstance(current_version, version.Version)
# Figure out what sort of release we're doing and calculate the new version.
rc = click.confirm("RC", default=True)
if current_version.pre:
# If the current version is an RC we don't need to bump any of the
# version numbers (other than the RC number).
base_version = "{}.{}.{}".format(
current_version.major,
current_version.minor,
current_version.micro,
)
if rc:
new_version = "{}.{}.{}rc{}".format(
current_version.major,
current_version.minor,
current_version.micro,
current_version.pre[1] + 1,
)
else:
new_version = base_version
else:
# If this is a new release cycle then we need to know if it's a major
# version bump or a hotfix.
release_type = click.prompt(
"Release type",
type=click.Choice(("major", "hotfix")),
show_choices=True,
default="major",
)
if release_type == "major":
base_version = new_version = "{}.{}.{}".format(
current_version.major,
current_version.minor + 1,
0,
)
if rc:
new_version = "{}.{}.{}rc1".format(
current_version.major,
current_version.minor + 1,
0,
)
else:
base_version = new_version = "{}.{}.{}".format(
current_version.major,
current_version.minor,
current_version.micro + 1,
)
if rc:
new_version = "{}.{}.{}rc1".format(
current_version.major,
current_version.minor,
current_version.micro + 1,
)
# Confirm the calculated version is OK.
if not click.confirm(f"Create new version: {new_version}?", default=True):
click.get_current_context().abort()
# Switch to the release branch.
release_branch_name = f"release-v{base_version}"
release_branch = find_ref(repo, release_branch_name)
if release_branch:
if release_branch.is_remote():
# If the release branch only exists on the remote we check it out
# locally.
repo.git.checkout(release_branch_name)
release_branch = repo.active_branch
else:
# If a branch doesn't exist we create one. We ask which branch it
# should be based off, defaulting to sensible values depending on the
# release type.
if current_version.is_prerelease:
default = release_branch_name
elif release_type == "major":
default = "develop"
else:
default = "master"
branch_name = click.prompt(
"Which branch should the release be based on?", default=default
)
base_branch = find_ref(repo, branch_name)
if not base_branch:
print(f"Could not find base branch {branch_name}!")
click.get_current_context().abort()
# Check out the base branch and ensure it's up to date
repo.head.reference = base_branch
repo.head.reset(index=True, working_tree=True)
if not base_branch.is_remote():
update_branch(repo)
# Create the new release branch
release_branch = repo.create_head(release_branch_name, commit=base_branch)
# Switch to the release branch and ensure it's up to date.
repo.git.checkout(release_branch_name)
update_branch(repo)
# Update the `__version__` variable and write it back to the file.
version_node.value = '"' + new_version + '"'
with open("synapse/__init__.py", "w") as f:
f.write(red.dumps())
# Generate changelogs
subprocess.run("python3 -m towncrier", shell=True)
# Generate Debian changelogs if it's not an RC.
if not rc:
subprocess.run(
f'dch -M -v {new_version} "New synapse release {new_version}."', shell=True
)
subprocess.run('dch -M -r -D stable ""', shell=True)
# Show the user the changes and ask if they want to edit the change log.
repo.git.add("-u")
subprocess.run("git diff --cached", shell=True)
if click.confirm("Edit changelog?", default=False):
click.edit(filename="CHANGES.md")
# Commit the changes.
repo.git.add("-u")
repo.git.commit(f"-m {new_version}")
# We give the option to bail here in case the user wants to make sure things
# are OK before pushing.
if not click.confirm("Push branch to github?", default=True):
print("")
print("Run when ready to push:")
print("")
print(f"\tgit push -u {repo.remote().name} {repo.active_branch.name}")
print("")
sys.exit(0)
# Otherwise, push and open the changelog in the browser.
repo.git.push("-u", repo.remote().name, repo.active_branch.name)
click.launch(
f"https://github.com/matrix-org/synapse/blob/{repo.active_branch.name}/CHANGES.md"
)
def find_ref(repo: git.Repo, ref_name: str) -> Optional[git.HEAD]:
"""Find the branch/ref, looking first locally then in the remote."""
if ref_name in repo.refs:
return repo.refs[ref_name]
elif ref_name in repo.remote().refs:
return repo.remote().refs[ref_name]
else:
return None
def update_branch(repo: git.Repo):
"""Ensure branch is up to date if it has a remote"""
if repo.active_branch.tracking_branch():
repo.git.merge(repo.active_branch.tracking_branch().name)
if __name__ == "__main__":
run()
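Two details of the script above are worth illustrating: the version arithmetic leans on packaging.version, whose pre attribute carries the RC number, and RedBaron round-trips source text losslessly, so only the assignment's right-hand side changes. A minimal sketch of both (version strings hypothetical):

from packaging import version
from redbaron import RedBaron

# RC bump: "1.34.0rc1" -> "1.34.0rc2"; pre is the tuple ("rc", 1).
current = version.parse("1.34.0rc1")
next_rc = "{}.{}.{}rc{}".format(
    current.major, current.minor, current.micro, current.pre[1] + 1
)
assert next_rc == "1.34.0rc2"

# Version write-back: rewrite only the right-hand side of the assignment.
red = RedBaron('__version__ = "1.34.0rc1"\n')
red[0].value = '"{}"'.format(next_rc)
print(red.dumps())  # __version__ = "1.34.0rc2"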


@@ -1,6 +1,5 @@
#!/usr/bin/env python
#
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,5 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,5 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,6 +1,5 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,5 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,5 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -51,7 +50,7 @@ def main(src_repo, dest_repo):
parts = line.split("|")
if len(parts) != 2:
print("Unable to parse input line %s" % line, file=sys.stderr)
exit(1)
sys.exit(1)
move_media(parts[0], parts[1], src_paths, dest_paths)


@@ -1,5 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,5 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
@@ -635,8 +634,11 @@ class Porter(object):
"device_inbox_sequence", ("device_inbox", "device_federation_outbox")
)
await self._setup_sequence(
"account_data_sequence", ("room_account_data", "room_tags_revisions", "account_data"))
await self._setup_sequence("receipts_sequence", ("receipts_linearized", ))
"account_data_sequence",
("room_account_data", "room_tags_revisions", "account_data"),
)
await self._setup_sequence("receipts_sequence", ("receipts_linearized",))
await self._setup_sequence("presence_stream_sequence", ("presence_stream",))
await self._setup_auth_chain_sequence()
# Step 3. Get tables.
@@ -911,10 +913,11 @@ class Porter(object):
(curr_forward_id + 1,),
)
txn.execute(
"ALTER SEQUENCE events_backfill_stream_seq RESTART WITH %s",
(curr_backward_id + 1,),
)
if curr_backward_id:
txn.execute(
"ALTER SEQUENCE events_backfill_stream_seq RESTART WITH %s",
(curr_backward_id + 1,),
)
await self.postgres_store.db_pool.runInteraction(
"_setup_events_stream_seqs", _setup_events_stream_seqs_set_pos,
@@ -952,10 +955,11 @@ class Porter(object):
(curr_chain_id,),
)
await self.postgres_store.db_pool.runInteraction(
"_setup_event_auth_chain_id", r,
)
if curr_chain_id is not None:
await self.postgres_store.db_pool.runInteraction(
"_setup_event_auth_chain_id",
r,
)
##############################################


@@ -3,6 +3,7 @@ test_suite = tests
[check-manifest]
ignore =
.git-blame-ignore-revs
contrib
contrib/*
docs/*
@@ -21,11 +22,10 @@ ignore=W503,W504,E203,E731,E501
[isort]
line_length = 88
sections=FUTURE,STDLIB,COMPAT,THIRDPARTY,TWISTED,FIRSTPARTY,TESTS,LOCALFOLDER
sections=FUTURE,STDLIB,THIRDPARTY,TWISTED,FIRSTPARTY,TESTS,LOCALFOLDER
default_section=THIRDPARTY
known_first_party = synapse
known_tests=tests
known_compat = mock
known_twisted=twisted,OpenSSL
multi_line_output=3
include_trailing_comma=true


@@ -99,17 +99,25 @@ CONDITIONAL_REQUIREMENTS["lint"] = [
"isort==5.7.0",
"black==20.8b1",
"flake8-comprehensions",
"flake8-bugbear==21.3.2",
"flake8",
]
CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.812", "mypy-zope==0.2.11"]
CONDITIONAL_REQUIREMENTS["dev"] = CONDITIONAL_REQUIREMENTS["lint"] + [
# The following are used by the release script
"click==7.1.2",
"redbaron==0.9.2",
"GitPython==3.1.14",
]
CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.812", "mypy-zope==0.2.13"]
# Dependencies which are exclusively required by unit test code. This is
# NOT a list of all modules that are necessary to run the unit tests.
# Tests assume that all optional dependencies are installed.
#
# parameterized_class decorator was introduced in parameterized 0.7.0
CONDITIONAL_REQUIREMENTS["test"] = ["mock>=2.0", "parameterized>=0.7.0"]
CONDITIONAL_REQUIREMENTS["test"] = ["parameterized>=0.7.0"]
setup(
name="matrix-synapse",
@@ -122,13 +130,12 @@ setup(
zip_safe=False,
long_description=long_description,
long_description_content_type="text/x-rst",
python_requires="~=3.5",
python_requires="~=3.6",
classifiers=[
"Development Status :: 5 - Production/Stable",
"Topic :: Communications :: Chat",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -17,7 +16,9 @@
"""
from typing import Any, List, Optional, Type, Union
class RedisProtocol:
from twisted.internet import protocol
class RedisProtocol(protocol.Protocol):
def publish(self, channel: str, message: bytes): ...
async def ping(self) -> None: ...
async def set(
@@ -52,7 +53,7 @@ def lazyConnection(
class ConnectionHandler: ...
class RedisFactory:
class RedisFactory(protocol.ReconnectingClientFactory):
continueTrying: bool
handler: RedisProtocol
pool: List[RedisProtocol]


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018-9 New Vector Ltd
#
@@ -22,8 +21,8 @@ import os
import sys
# Check that we're not running on an unsupported Python version.
if sys.version_info < (3, 5):
print("Synapse requires Python 3.5 or above.")
if sys.version_info < (3, 6):
print("Synapse requires Python 3.6 or above.")
sys.exit(1)
# Twisted and canonicaljson will fail to import when this file is executed to
@@ -48,7 +47,7 @@ try:
except ImportError:
pass
__version__ = "1.29.0"
__version__ = "1.34.0rc1"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector
#


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2014 - 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -13,14 +12,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import List, Optional, Tuple
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple
import pymacaroons
from netaddr import IPAddress
from twisted.web.server import Request
import synapse.types
from synapse import event_auth
from synapse.api.auth_blocking import AuthBlocking
from synapse.api.constants import EventTypes, HistoryVisibility, Membership
@@ -37,10 +35,14 @@ from synapse.http import get_request_user_agent
from synapse.http.site import SynapseRequest
from synapse.logging import opentracing as opentracing
from synapse.storage.databases.main.registration import TokenLookupResult
from synapse.types import StateMap, UserID
from synapse.types import Requester, StateMap, UserID, create_requester
from synapse.util.caches.lrucache import LruCache
from synapse.util.macaroons import get_value_from_macaroon, satisfy_expiry
from synapse.util.metrics import Measure
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@@ -65,9 +67,10 @@ class Auth:
"""
FIXME: This class contains a mix of functions for authenticating users
of our client-server API and authenticating events added to room graphs.
The latter should be moved to synapse.handlers.event_auth.EventAuthHandler.
"""
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.clock = hs.get_clock()
self.store = hs.get_datastore()
@@ -79,19 +82,21 @@ class Auth:
self._auth_blocking = AuthBlocking(self.hs)
self._account_validity = hs.config.account_validity
self._account_validity_enabled = (
hs.config.account_validity.account_validity_enabled
)
self._track_appservice_user_ips = hs.config.track_appservice_user_ips
self._macaroon_secret_key = hs.config.macaroon_secret_key
async def check_from_context(
self, room_version: str, event, context, do_sig_check=True
):
) -> None:
prev_state_ids = await context.get_prev_state_ids()
auth_events_ids = self.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events = await self.store.get_events(auth_events_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events.values()}
auth_events_by_id = await self.store.get_events(auth_events_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events_by_id.values()}
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
event_auth.check(
@@ -148,26 +153,20 @@ class Auth:
raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
async def check_host_in_room(self, room_id, host):
async def check_host_in_room(self, room_id: str, host: str) -> bool:
with Measure(self.clock, "check_host_in_room"):
latest_event_ids = await self.store.is_host_joined(room_id, host)
return latest_event_ids
return await self.store.is_host_joined(room_id, host)
def can_federate(self, event, auth_events):
creation_event = auth_events.get((EventTypes.Create, ""))
return creation_event.content.get("m.federate", True) is True
def get_public_keys(self, invite_event):
def get_public_keys(self, invite_event: EventBase) -> List[Dict[str, Any]]:
return event_auth.get_public_keys(invite_event)
async def get_user_by_req(
self,
request: Request,
request: SynapseRequest,
allow_guest: bool = False,
rights: str = "access",
allow_expired: bool = False,
) -> synapse.types.Requester:
) -> Requester:
"""Get a registered user's ID.
Args:
@@ -193,7 +192,7 @@ class Auth:
access_token = self.get_access_token_from_request(request)
user_id, app_service = await self._get_appservice_user_id(request)
if user_id:
if user_id and app_service:
if ip_addr and self._track_appservice_user_ips:
await self.store.insert_client_ip(
user_id=user_id,
@@ -203,9 +202,7 @@ class Auth:
device_id="dummy-device", # stubbed
)
requester = synapse.types.create_requester(
user_id, app_service=app_service
)
requester = create_requester(user_id, app_service=app_service)
request.requester = user_id
opentracing.set_tag("authenticated_entity", user_id)
@@ -222,7 +219,7 @@ class Auth:
shadow_banned = user_info.shadow_banned
# Deny the request if the user account has expired.
if self._account_validity.enabled and not allow_expired:
if self._account_validity_enabled and not allow_expired:
if await self.store.is_account_expired(
user_info.user_id, self.clock.time_msec()
):
@@ -248,7 +245,7 @@ class Auth:
errcode=Codes.GUEST_ACCESS_FORBIDDEN,
)
requester = synapse.types.create_requester(
requester = create_requester(
user_info.user_id,
token_id,
is_guest,
@@ -268,7 +265,9 @@ class Auth:
except KeyError:
raise MissingClientTokenError()
async def _get_appservice_user_id(self, request):
async def _get_appservice_user_id(
self, request: Request
) -> Tuple[Optional[str], Optional[ApplicationService]]:
app_service = self.store.get_app_service_by_token(
self.get_access_token_from_request(request)
)
@@ -280,6 +279,9 @@ class Auth:
if ip_address not in app_service.ip_range_whitelist:
return None, None
# This will always be set by the time Twisted calls us.
assert request.args is not None
if b"user_id" not in request.args:
return app_service.sender, app_service
@@ -384,7 +386,9 @@ class Auth:
logger.warning("Invalid macaroon in auth: %s %s", type(e), e)
raise InvalidClientTokenError("Invalid macaroon passed.")
def _parse_and_validate_macaroon(self, token, rights="access"):
def _parse_and_validate_macaroon(
self, token: str, rights: str = "access"
) -> Tuple[str, bool]:
"""Takes a macaroon and tries to parse and validate it. This is cached
if and only if rights == access and there isn't an expiry.
@@ -408,7 +412,7 @@ class Auth:
raise _InvalidMacaroonException()
try:
user_id = self.get_user_id_from_macaroon(macaroon)
user_id = get_value_from_macaroon(macaroon, "user_id")
guest = False
for caveat in macaroon.caveats:
@@ -416,7 +420,12 @@ class Auth:
guest = True
self.validate_macaroon(macaroon, rights, user_id=user_id)
except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
except (
pymacaroons.exceptions.MacaroonException,
KeyError,
TypeError,
ValueError,
):
raise InvalidClientTokenError("Invalid macaroon passed.")
if rights == "access":
@@ -424,36 +433,16 @@ class Auth:
return user_id, guest
def get_user_id_from_macaroon(self, macaroon):
"""Retrieve the user_id given by the caveats on the macaroon.
Does *not* validate the macaroon.
Args:
macaroon (pymacaroons.Macaroon): The macaroon to validate
Returns:
(str) user id
Raises:
InvalidClientCredentialsError if there is no user_id caveat in the
macaroon
"""
user_prefix = "user_id = "
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith(user_prefix):
return caveat.caveat_id[len(user_prefix) :]
raise InvalidClientTokenError("No user caveat in macaroon")
def validate_macaroon(self, macaroon, type_string, user_id):
def validate_macaroon(
self, macaroon: pymacaroons.Macaroon, type_string: str, user_id: str
) -> None:
"""
validate that a Macaroon is understood by and was signed by this server.
Args:
macaroon(pymacaroons.Macaroon): The macaroon to validate
type_string(str): The kind of token required (e.g. "access",
"delete_pusher")
user_id (str): The user_id required
macaroon: The macaroon to validate
type_string: The kind of token required (e.g. "access", "delete_pusher")
user_id: The user_id required
"""
v = pymacaroons.Verifier()
@@ -465,30 +454,20 @@ class Auth:
v.satisfy_exact("type = " + type_string)
v.satisfy_exact("user_id = %s" % user_id)
v.satisfy_exact("guest = true")
v.satisfy_general(self._verify_expiry)
satisfy_expiry(v, self.clock.time_msec)
# access_tokens include a nonce for uniqueness: any value is acceptable
v.satisfy_general(lambda c: c.startswith("nonce = "))
v.verify(macaroon, self._macaroon_secret_key)
def _verify_expiry(self, caveat):
prefix = "time < "
if not caveat.startswith(prefix):
return False
expiry = int(caveat[len(prefix) :])
now = self.hs.get_clock().time_msec()
return now < expiry
def get_appservice_by_req(self, request: SynapseRequest) -> ApplicationService:
token = self.get_access_token_from_request(request)
service = self.store.get_app_service_by_token(token)
if not service:
logger.warning("Unrecognised appservice access token.")
raise InvalidClientTokenError()
request.requester = synapse.types.create_requester(
service.sender, app_service=service
)
request.requester = create_requester(service.sender, app_service=service)
return service
async def is_server_admin(self, user: UserID) -> bool:
@@ -540,7 +519,7 @@ class Auth:
return auth_ids
async def check_can_change_room_list(self, room_id: str, user: UserID):
async def check_can_change_room_list(self, room_id: str, user: UserID) -> bool:
"""Determine whether the user is allowed to edit the room's entry in the
published room list.
@@ -575,28 +554,33 @@ class Auth:
return user_level >= send_level
@staticmethod
def has_access_token(request: Request):
def has_access_token(request: Request) -> bool:
"""Checks if the request has an access_token.
Returns:
bool: False if no access_token was given, True otherwise.
False if no access_token was given, True otherwise.
"""
# This will always be set by the time Twisted calls us.
assert request.args is not None
query_params = request.args.get(b"access_token")
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
return bool(query_params) or bool(auth_headers)
@staticmethod
def get_access_token_from_request(request: Request):
def get_access_token_from_request(request: Request) -> str:
"""Extracts the access_token from the request.
Args:
request: The http request.
Returns:
unicode: The access_token
The access_token
Raises:
MissingClientTokenError: If there isn't a single access_token in the
request
"""
# This will always be set by the time Twisted calls us.
assert request.args is not None
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
query_params = request.args.get(b"access_token")
@@ -665,5 +649,5 @@ class Auth:
% (user_id, room_id),
)
def check_auth_blocking(self, *args, **kwargs):
return self._auth_blocking.check_auth_blocking(*args, **kwargs)
async def check_auth_blocking(self, *args, **kwargs) -> None:
await self._auth_blocking.check_auth_blocking(*args, **kwargs)
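Since much of the hunk above reworks macaroon validation, a standalone sketch of the pymacaroons verifier pattern used by validate_macaroon() may help (location, secret and user ID are placeholders):

import pymacaroons

# Build a macaroon caveated the way Synapse access tokens are.
m = pymacaroons.Macaroon(location="example.com", identifier="key1", key="macaroon-secret")
m.add_first_party_caveat("gen = 1")
m.add_first_party_caveat("type = access")
m.add_first_party_caveat("user_id = @alice:example.com")

# Verification succeeds only if every caveat is satisfied and the signature checks out.
v = pymacaroons.Verifier()
v.satisfy_exact("gen = 1")
v.satisfy_exact("type = access")
v.satisfy_general(lambda caveat: caveat.startswith("user_id = "))
assert v.verify(m, "macaroon-secret")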


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -14,18 +13,21 @@
# limitations under the License.
import logging
from typing import Optional
from typing import TYPE_CHECKING, Optional
from synapse.api.constants import LimitBlockingTypes, UserTypes
from synapse.api.errors import Codes, ResourceLimitError
from synapse.config.server import is_threepid_reserved
from synapse.types import Requester
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class AuthBlocking:
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self._server_notices_mxid = hs.config.server_notices_mxid
@@ -44,7 +46,7 @@ class AuthBlocking:
threepid: Optional[dict] = None,
user_type: Optional[str] = None,
requester: Optional[Requester] = None,
):
) -> None:
"""Checks if the user should be rejected for some external reason,
such as monthly active user limiting or the global disable flag


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017 Vector Creations Ltd
# Copyright 2018-2019 New Vector Ltd
@@ -18,6 +17,9 @@
"""Contains constants from the specification."""
# the max size of a (canonical-json-encoded) event
MAX_PDU_SIZE = 65536
# the "depth" field on events is limited to 2**63 - 1
MAX_DEPTH = 2 ** 63 - 1
@@ -51,6 +53,7 @@ class PresenceState:
OFFLINE = "offline"
UNAVAILABLE = "unavailable"
ONLINE = "online"
BUSY = "org.matrix.msc3026.busy"
class JoinRules:
@@ -58,6 +61,8 @@ class JoinRules:
KNOCK = "knock"
INVITE = "invite"
PRIVATE = "private"
# As defined for MSC3083.
MSC3083_RESTRICTED = "restricted"
class LoginType:
@@ -70,6 +75,11 @@ class LoginType:
DUMMY = "m.login.dummy"
# This is used in the `type` parameter for /register when called by
# an appservice to register a new user.
APP_SERVICE_REGISTRATION_TYPE = "m.login.application_service"
class EventTypes:
Member = "m.room.member"
Create = "m.room.create"
@@ -100,10 +110,18 @@ class EventTypes:
Dummy = "org.matrix.dummy_event"
SpaceChild = "m.space.child"
SpaceParent = "m.space.parent"
MSC1772_SPACE_CHILD = "org.matrix.msc1772.space.child"
MSC1772_SPACE_PARENT = "org.matrix.msc1772.space.parent"
class ToDeviceEventTypes:
RoomKeyRequest = "m.room_key_request"
class EduTypes:
Presence = "m.presence"
RoomKeyRequest = "m.room_key_request"
class RejectedReason:
@@ -160,6 +178,10 @@ class EventContentFields:
# cf https://github.com/matrix-org/matrix-doc/pull/2228
SELF_DESTRUCT_AFTER = "org.matrix.self_destruct_after"
# cf https://github.com/matrix-org/matrix-doc/pull/1772
ROOM_TYPE = "type"
MSC1772_ROOM_TYPE = "org.matrix.msc1772.type"
class RoomEncryptionAlgorithms:
MEGOLM_V1_AES_SHA2 = "m.megolm.v1.aes-sha2"


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2017 Vector Creations Ltd
# Copyright 2018-2019 New Vector Ltd


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -17,6 +17,7 @@ from collections import OrderedDict
from typing import Hashable, Optional, Tuple
from synapse.api.errors import LimitExceededError
from synapse.storage.databases.main import DataStore
from synapse.types import Requester
from synapse.util import Clock
@@ -31,10 +32,13 @@ class Ratelimiter:
burst_count: How many actions can be performed before being limited.
"""
def __init__(self, clock: Clock, rate_hz: float, burst_count: int):
def __init__(
self, store: DataStore, clock: Clock, rate_hz: float, burst_count: int
):
self.clock = clock
self.rate_hz = rate_hz
self.burst_count = burst_count
self.store = store
# An ordered dictionary keeping track of actions, when they were last
# performed and how often. Each entry is a mapping from a key of arbitrary type
@@ -46,60 +50,36 @@ class Ratelimiter:
OrderedDict()
) # type: OrderedDict[Hashable, Tuple[float, int, float]]
def can_requester_do_action(
async def can_do_action(
self,
requester: Requester,
rate_hz: Optional[float] = None,
burst_count: Optional[int] = None,
update: bool = True,
_time_now_s: Optional[int] = None,
) -> Tuple[bool, float]:
"""Can the requester perform the action?
Args:
requester: The requester to key off when rate limiting. The user property
will be used.
rate_hz: The long term number of actions that can be performed in a second.
Overrides the value set during instantiation if set.
burst_count: How many actions can be performed before being limited.
Overrides the value set during instantiation if set.
update: Whether to count this check as performing the action
_time_now_s: The current time. Optional, defaults to the current time according
to self.clock. Only used by tests.
Returns:
A tuple containing:
* A bool indicating if they can perform the action now
* The reactor timestamp for when the action can be performed next.
-1 if rate_hz is less than or equal to zero
"""
# Disable rate limiting of users belonging to any AS that is configured
# not to be rate limited in its registration file (rate_limited: true|false).
if requester.app_service and not requester.app_service.is_rate_limited():
return True, -1.0
return self.can_do_action(
requester.user.to_string(), rate_hz, burst_count, update, _time_now_s
)
def can_do_action(
self,
key: Hashable,
requester: Optional[Requester],
key: Optional[Hashable] = None,
rate_hz: Optional[float] = None,
burst_count: Optional[int] = None,
update: bool = True,
n_actions: int = 1,
_time_now_s: Optional[int] = None,
) -> Tuple[bool, float]:
"""Can the entity (e.g. user or IP address) perform the action?
Checks if the user has ratelimiting disabled in the database by looking
for null/zero values in the `ratelimit_override` table. (Non-zero
values aren't honoured, as they're specific to the event sending
ratelimiter, rather than all ratelimiters)
Args:
key: The key we should use when rate limiting. Can be a user ID
(when sending events), an IP address, etc.
requester: The requester that is doing the action, if any. Used to check
if the user has ratelimits disabled in the database.
key: An arbitrary key used to classify an action. Defaults to the
requester's user ID.
rate_hz: The long term number of actions that can be performed in a second.
Overrides the value set during instantiation if set.
burst_count: How many actions can be performed before being limited.
Overrides the value set during instantiation if set.
update: Whether to count this check as performing the action
n_actions: The number of times the user wants to do this action. If the user
cannot do all of the actions, the user's action count is not incremented
at all.
_time_now_s: The current time. Optional, defaults to the current time according
to self.clock. Only used by tests.
@@ -109,6 +89,30 @@ class Ratelimiter:
* The reactor timestamp for when the action can be performed next.
-1 if rate_hz is less than or equal to zero
"""
if key is None:
if not requester:
raise ValueError("Must supply at least one of `requester` or `key`")
key = requester.user.to_string()
if requester:
# Disable rate limiting of users belonging to any AS that is configured
# not to be rate limited in its registration file (rate_limited: true|false).
if requester.app_service and not requester.app_service.is_rate_limited():
return True, -1.0
# Check if ratelimiting has been disabled for the user.
#
# Note that we don't use the returned rate/burst count, as the table
# is specifically for the event sending ratelimiter. Instead, we
# only use it to (somewhat cheekily) infer whether the user should
# be subject to any rate limiting or not.
override = await self.store.get_ratelimit_for_user(
requester.authenticated_entity
)
if override and not override.messages_per_second:
return True, -1.0
# Override default values if set
time_now_s = _time_now_s if _time_now_s is not None else self.clock.time()
rate_hz = rate_hz if rate_hz is not None else self.rate_hz
@@ -124,17 +128,20 @@ class Ratelimiter:
time_delta = time_now_s - time_start
performed_count = action_count - time_delta * rate_hz
if performed_count < 0:
# Allow, reset back to count 1
allowed = True
performed_count = 0
time_start = time_now_s
action_count = 1.0
elif performed_count > burst_count - 1.0:
# This check would read more naturally as performed_count + n_actions > burst_count,
# but performed_count can be a float with a long fractional part, in which case
# adding n_actions to it might round the result up. Writing the check this way
# ensures that doesn't happen.
if performed_count > burst_count - n_actions:
# Deny, we have exceeded our burst count
allowed = False
else:
# We haven't reached our limit yet
allowed = True
action_count += 1.0
action_count = performed_count + n_actions
if update:
self.actions[key] = (action_count, time_start, rate_hz)
@@ -175,23 +182,36 @@ class Ratelimiter:
else:
del self.actions[key]
def ratelimit(
async def ratelimit(
self,
key: Hashable,
requester: Optional[Requester],
key: Optional[Hashable] = None,
rate_hz: Optional[float] = None,
burst_count: Optional[int] = None,
update: bool = True,
n_actions: int = 1,
_time_now_s: Optional[int] = None,
):
"""Checks if an action can be performed. If not, raises a LimitExceededError
Checks if the user has ratelimiting disabled in the database by looking
for null/zero values in the `ratelimit_override` table. (Non-zero
values aren't honoured, as they're specific to the event sending
ratelimiter, rather than all ratelimiters)
Args:
key: An arbitrary key used to classify an action
requester: The requester that is doing the action, if any. Used to check
if the user has ratelimits disabled.
key: An arbitrary key used to classify an action. Defaults to the
requester's user ID.
rate_hz: The long term number of actions that can be performed in a second.
Overrides the value set during instantiation if set.
burst_count: How many actions can be performed before being limited.
Overrides the value set during instantiation if set.
update: Whether to count this check as performing the action
n_actions: The number of times the user wants to do this action. If the user
cannot do all of the actions, the user's action count is not incremented
at all.
_time_now_s: The current time. Optional, defaults to the current time according
to self.clock. Only used by tests.
@@ -201,11 +221,13 @@ class Ratelimiter:
"""
time_now_s = _time_now_s if _time_now_s is not None else self.clock.time()
allowed, time_allowed = self.can_do_action(
allowed, time_allowed = await self.can_do_action(
requester,
key,
rate_hz=rate_hz,
burst_count=burst_count,
update=update,
n_actions=n_actions,
_time_now_s=time_now_s,
)
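To make the leaky-bucket arithmetic above concrete, here is a small standalone walk-through of the allow/deny decision (all numbers chosen for illustration):

# rate_hz=0.17 and burst_count=3, with three actions recorded at t=100s.
rate_hz, burst_count, n_actions = 0.17, 3.0, 1
action_count, time_start, time_now_s = 3.0, 100.0, 110.0

# Recorded actions "leak away" at rate_hz per second before the burst check.
performed_count = action_count - (time_now_s - time_start) * rate_hz  # 3.0 - 1.7 = 1.3
allowed = not (performed_count > burst_count - n_actions)  # 1.3 > 2.0 is False
assert allowed
# On success the stored count becomes performed_count + n_actions, i.e. 2.3.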


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2019 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -57,7 +56,7 @@ class RoomVersion:
state_res = attr.ib(type=int) # one of the StateResolutionVersions
enforce_key_validity = attr.ib(type=bool)
# bool: before MSC2261/MSC2432, m.room.aliases had special auth rules and redaction rules
# Before MSC2261/MSC2432, m.room.aliases had special auth rules and redaction rules
special_case_aliases_auth = attr.ib(type=bool)
# Strictly enforce canonicaljson, do not allow:
# * Integers outside the range of [-2 ^ 53 + 1, 2 ^ 53 - 1]
@@ -69,6 +68,8 @@ class RoomVersion:
limit_notifications_power_levels = attr.ib(type=bool)
# MSC2174/MSC2176: Apply updated redaction rules algorithm.
msc2176_redaction_rules = attr.ib(type=bool)
# MSC3083: Support the 'restricted' join_rule.
msc3083_join_rules = attr.ib(type=bool)
class RoomVersions:
@@ -82,6 +83,7 @@ class RoomVersions:
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
)
V2 = RoomVersion(
"2",
@@ -93,6 +95,7 @@ class RoomVersions:
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
)
V3 = RoomVersion(
"3",
@@ -104,6 +107,7 @@ class RoomVersions:
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
)
V4 = RoomVersion(
"4",
@@ -115,6 +119,7 @@ class RoomVersions:
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
)
V5 = RoomVersion(
"5",
@@ -126,6 +131,7 @@ class RoomVersions:
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
)
V6 = RoomVersion(
"6",
@@ -137,6 +143,7 @@ class RoomVersions:
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
)
MSC2176 = RoomVersion(
"org.matrix.msc2176",
@@ -148,6 +155,19 @@ class RoomVersions:
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=True,
msc3083_join_rules=False,
)
MSC3083 = RoomVersion(
"org.matrix.msc3083",
RoomDisposition.UNSTABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
msc3083_join_rules=True,
)
@@ -162,4 +182,5 @@ KNOWN_ROOM_VERSIONS = {
RoomVersions.V6,
RoomVersions.MSC2176,
)
# Note that we do not include MSC3083 here unless it is enabled in the config.
} # type: Dict[str, RoomVersion]
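Callers reach these definitions through the map above; a minimal sketch against the real module (assuming a synapse checkout on the path):

from synapse.api.room_versions import KNOWN_ROOM_VERSIONS

# Room version "6" predates MSC3083, so its join-rules flag is off.
assert KNOWN_ROOM_VERSIONS["6"].msc3083_join_rules is False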


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -22,7 +21,9 @@ logger = logging.getLogger(__name__)
try:
python_dependencies.check_requirements()
except python_dependencies.DependencyException as e:
sys.stderr.writelines(e.message)
sys.stderr.writelines(
e.message # noqa: B306, DependencyException.message is a property
)
sys.exit(1)


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
# Copyright 2019-2021 The Matrix.org Foundation C.I.C
#
@@ -21,20 +20,24 @@ import signal
import socket
import sys
import traceback
import warnings
from typing import Awaitable, Callable, Iterable
from cryptography.utils import CryptographyDeprecationWarning
from typing_extensions import NoReturn
from twisted.internet import defer, error, reactor
from twisted.protocols.tls import TLSMemoryBIOFactory
import synapse
from synapse.api.constants import MAX_PDU_SIZE
from synapse.app import check_bind_error
from synapse.app.phone_stats_home import start_phone_stats_home
from synapse.config.server import ListenerConfig
from synapse.config.homeserver import HomeServerConfig
from synapse.crypto import context_factory
from synapse.logging.context import PreserveLoggingContext
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.metrics.jemalloc import setup_jemalloc_stats
from synapse.util.async_helpers import Linearizer
from synapse.util.daemonize import daemonize_process
from synapse.util.rlimit import change_resource_limit
@@ -113,6 +116,7 @@ def start_reactor(
def run():
logger.info("Running")
setup_jemalloc_stats()
change_resource_limit(soft_file_limit)
if gc_thresholds:
gc.set_threshold(*gc_thresholds)
@@ -195,6 +199,25 @@ def listen_metrics(bind_addresses, port):
start_http_server(port, addr=host, registry=RegistryProxy)
def listen_manhole(bind_addresses: Iterable[str], port: int, manhole_globals: dict):
# twisted.conch.manhole 21.1.0 uses "int_from_bytes", which produces a confusing
# warning. It's fixed by https://github.com/twisted/twisted/pull/1522, so
# suppress the warning for now.
warnings.filterwarnings(
action="ignore",
category=CryptographyDeprecationWarning,
message="int_from_bytes is deprecated",
)
from synapse.util.manhole import manhole
listen_tcp(
bind_addresses,
port,
manhole(username="matrix", password="rabbithole", globals=manhole_globals),
)
def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50):
"""
Create a TCP socket for a port and several addresses
@@ -268,7 +291,7 @@ def refresh_certificate(hs):
logger.info("Context factories updated.")
async def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerConfig]):
async def start(hs: "synapse.server.HomeServer"):
"""
Start a Synapse server or worker.
@@ -280,7 +303,6 @@ async def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerCon
Args:
hs: homeserver instance
listeners: Listener configuration ('listeners' in homeserver.yaml)
"""
# Set up the SIGHUP machinery.
if hasattr(signal, "SIGHUP"):
@@ -316,7 +338,7 @@ async def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerCon
synapse.logging.opentracing.init_tracer(hs) # type: ignore[attr-defined] # noqa
# It is now safe to start your Synapse.
hs.start_listening(listeners)
hs.start_listening()
hs.get_datastore().db_pool.start_profiling()
hs.get_pusherpool().start()
@@ -510,3 +532,25 @@ def sdnotify(state):
# this is a bit surprising, since we don't expect to have a NOTIFY_SOCKET
# unless systemd is expecting us to notify it.
logger.warning("Unable to send notification to systemd: %s", e)
def max_request_body_size(config: HomeServerConfig) -> int:
"""Get a suitable maximum size for incoming HTTP requests"""
# Other than media uploads, the biggest request we expect to see is a fully-loaded
# /federation/v1/send request.
#
# The main thing in such a request is up to 50 PDUs, and up to 100 EDUs. PDUs are
# limited to 65536 bytes (possibly slightly more if the sender didn't use canonical
# json encoding); there is no specced limit to EDUs (see
# https://github.com/matrix-org/matrix-doc/issues/3121).
#
# In short, we somewhat arbitrarily limit requests to 200 * 64K (about 12.5M).
#
max_request_size = 200 * MAX_PDU_SIZE
# if we have a media repo enabled, we may need to allow larger uploads than that
if config.media.can_load_media_repo:
max_request_size = max(max_request_size, config.media.max_upload_size)
return max_request_size
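A quick check of the arithmetic in the comment above:

MAX_PDU_SIZE = 65536  # the 64 KiB canonical-JSON cap defined in synapse.api.constants
assert 200 * MAX_PDU_SIZE == 13_107_200  # bytes
assert 200 * MAX_PDU_SIZE / (1024 * 1024) == 12.5  # MiB, i.e. "about 12.5M"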


@@ -1,5 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2019 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -71,12 +70,6 @@ class AdminCmdSlavedStore(
class AdminCmdServer(HomeServer):
DATASTORE_CLASS = AdminCmdSlavedStore
def _listen_http(self, listener_config):
pass
def start_listening(self, listeners):
pass
async def export_data_command(hs, args):
"""Export data for a user.
@@ -233,7 +226,7 @@ def start(config_options):
async def run():
with LoggingContext("command"):
_base.start(ss, [])
_base.start(ss)
await args.func(ss, args)
_base.start_worker_reactor(


@@ -1,5 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,5 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,5 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,5 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,5 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,5 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,5 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
@@ -14,12 +13,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
import logging
import sys
from typing import Dict, Iterable, Optional, Set
from typing_extensions import ContextManager
from typing import Dict, Optional
from twisted.internet import address
from twisted.web.resource import IResource
@@ -36,29 +32,18 @@ from synapse.api.urls import (
SERVER_KEY_V2_PREFIX,
)
from synapse.app import _base
from synapse.app._base import register_start
from synapse.app._base import max_request_body_size, register_start
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.config.server import ListenerConfig
from synapse.federation import send_queue
from synapse.federation.transport.server import TransportLayerServer
from synapse.handlers.presence import (
BasePresenceHandler,
PresenceState,
get_interested_parties,
)
from synapse.http.server import JsonResource, OptionsResource
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.replication.http.presence import (
ReplicationBumpPresenceActiveTime,
ReplicationPresenceSetState,
)
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -70,7 +55,6 @@ from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.profile import SlavedProfileStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
@@ -78,21 +62,8 @@ from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationDataHandler
from synapse.replication.tcp.commands import ClearUserSyncsCommand
from synapse.replication.tcp.streams import (
AccountDataStream,
DeviceListsStream,
GroupServerStream,
PresenceStream,
PushersStream,
PushRulesStream,
ReceiptsStream,
TagAccountDataStream,
ToDeviceStream,
)
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.rest.client.v1 import events, login, room
from synapse.rest.client.v1 import events, login, presence, room
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
from synapse.rest.client.v1.profile import (
ProfileAvatarURLRestServlet,
@@ -129,7 +100,7 @@ from synapse.rest.client.versions import VersionsRestServlet
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.server import HomeServer, cache_in_self
from synapse.server import HomeServer
from synapse.storage.databases.main.censor_events import CensorEventsStore
from synapse.storage.databases.main.client_ips import ClientIpWorkerStore
from synapse.storage.databases.main.e2e_room_keys import EndToEndRoomKeyStore
@@ -138,41 +109,18 @@ from synapse.storage.databases.main.metrics import ServerMetricsStore
from synapse.storage.databases.main.monthly_active_users import (
MonthlyActiveUsersWorkerStore,
)
from synapse.storage.databases.main.presence import UserPresenceState
from synapse.storage.databases.main.presence import PresenceStore
from synapse.storage.databases.main.search import SearchWorkerStore
from synapse.storage.databases.main.stats import StatsStore
from synapse.storage.databases.main.transactions import TransactionWorkerStore
from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore
from synapse.storage.databases.main.user_directory import UserDirectoryStore
from synapse.types import ReadReceipt
from synapse.util.async_helpers import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.generic_worker")
class PresenceStatusStubServlet(RestServlet):
"""If presence is disabled this servlet can be used to stub out setting
presence status.
"""
PATTERNS = client_patterns("/presence/(?P<user_id>[^/]*)/status")
def __init__(self, hs):
super().__init__()
self.auth = hs.get_auth()
async def on_GET(self, request, user_id):
await self.auth.get_user_by_req(request)
return 200, {"presence": "offline"}
async def on_PUT(self, request, user_id):
await self.auth.get_user_by_req(request)
return 200, {}
class KeyUploadServlet(RestServlet):
"""An implementation of the `KeyUploadServlet` that responds to read only
requests, but otherwise proxies through to the master instance.
@@ -266,207 +214,6 @@ class KeyUploadServlet(RestServlet):
return 200, {"one_time_key_counts": result}
class _NullContextManager(ContextManager[None]):
"""A context manager which does nothing."""
def __exit__(self, exc_type, exc_val, exc_tb):
pass
UPDATE_SYNCING_USERS_MS = 10 * 1000
class GenericWorkerPresence(BasePresenceHandler):
def __init__(self, hs):
super().__init__(hs)
self.hs = hs
self.is_mine_id = hs.is_mine_id
self._presence_enabled = hs.config.use_presence
# The number of ongoing syncs on this process, by user id.
# Empty if _presence_enabled is false.
self._user_to_num_current_syncs = {} # type: Dict[str, int]
self.notifier = hs.get_notifier()
self.instance_id = hs.get_instance_id()
# user_id -> last_sync_ms. Lists the users that have stopped syncing
# but we haven't notified the master of that yet
self.users_going_offline = {}
self._bump_active_client = ReplicationBumpPresenceActiveTime.make_client(hs)
self._set_state_client = ReplicationPresenceSetState.make_client(hs)
self._send_stop_syncing_loop = self.clock.looping_call(
self.send_stop_syncing, UPDATE_SYNCING_USERS_MS
)
hs.get_reactor().addSystemEventTrigger(
"before",
"shutdown",
run_as_background_process,
"generic_presence.on_shutdown",
self._on_shutdown,
)
def _on_shutdown(self):
if self._presence_enabled:
self.hs.get_tcp_replication().send_command(
ClearUserSyncsCommand(self.instance_id)
)
def send_user_sync(self, user_id, is_syncing, last_sync_ms):
if self._presence_enabled:
self.hs.get_tcp_replication().send_user_sync(
self.instance_id, user_id, is_syncing, last_sync_ms
)
def mark_as_coming_online(self, user_id):
"""A user has started syncing. Send a UserSync to the master, unless they
had recently stopped syncing.
Args:
user_id (str)
"""
going_offline = self.users_going_offline.pop(user_id, None)
if not going_offline:
# Safe to skip because we haven't yet told the master they were offline
self.send_user_sync(user_id, True, self.clock.time_msec())
def mark_as_going_offline(self, user_id):
"""A user has stopped syncing. We wait before notifying the master as
its likely they'll come back soon. This allows us to avoid sending
a stopped syncing immediately followed by a started syncing notification
to the master
Args:
user_id (str)
"""
self.users_going_offline[user_id] = self.clock.time_msec()
def send_stop_syncing(self):
"""Check if there are any users who have stopped syncing a while ago
and haven't come back yet. If there are, poke the master about them.
"""
now = self.clock.time_msec()
for user_id, last_sync_ms in list(self.users_going_offline.items()):
if now - last_sync_ms > UPDATE_SYNCING_USERS_MS:
self.users_going_offline.pop(user_id, None)
self.send_user_sync(user_id, False, last_sync_ms)
async def user_syncing(
self, user_id: str, affect_presence: bool
) -> ContextManager[None]:
"""Record that a user is syncing.
Called by the sync and events servlets to record that a user has connected to
this worker and is waiting for some events.
"""
if not affect_presence or not self._presence_enabled:
return _NullContextManager()
curr_sync = self._user_to_num_current_syncs.get(user_id, 0)
self._user_to_num_current_syncs[user_id] = curr_sync + 1
# If we went from no in-flight syncs to some, notify replication
if self._user_to_num_current_syncs[user_id] == 1:
self.mark_as_coming_online(user_id)
def _end():
# We check that the user_id is in user_to_num_current_syncs because
# user_to_num_current_syncs may have been cleared if we are
# shutting down.
if user_id in self._user_to_num_current_syncs:
self._user_to_num_current_syncs[user_id] -= 1
# If we went from one in-flight sync to none, notify replication
if self._user_to_num_current_syncs[user_id] == 0:
self.mark_as_going_offline(user_id)
@contextlib.contextmanager
def _user_syncing():
try:
yield
finally:
_end()
return _user_syncing()
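# Hypothetical call site (handler and servlet names assumed for
# illustration): a sync servlet holds the returned context manager for the
# lifetime of one request, so the per-user counter stays balanced.
async def _demo_sync_request(presence_handler, user_id: str) -> None:
    context = await presence_handler.user_syncing(user_id, affect_presence=True)
    with context:
        pass  # wait for events on behalf of the user
    # Exiting the block decrements the count and may mark the user as
    # going offline.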
async def notify_from_replication(self, states, stream_id):
parties = await get_interested_parties(self.store, states)
room_ids_to_states, users_to_states = parties
self.notifier.on_new_event(
"presence_key",
stream_id,
rooms=room_ids_to_states.keys(),
users=users_to_states.keys(),
)
async def process_replication_rows(self, token, rows):
states = [
UserPresenceState(
row.user_id,
row.state,
row.last_active_ts,
row.last_federation_update_ts,
row.last_user_sync_ts,
row.status_msg,
row.currently_active,
)
for row in rows
]
for state in states:
self.user_to_current_state[state.user_id] = state
stream_id = token
await self.notify_from_replication(states, stream_id)
def get_currently_syncing_users_for_replication(self) -> Iterable[str]:
return [
user_id
for user_id, count in self._user_to_num_current_syncs.items()
if count > 0
]
async def set_state(self, target_user, state, ignore_status_msg=False):
"""Set the presence state of the user."""
presence = state["presence"]
valid_presence = (
PresenceState.ONLINE,
PresenceState.UNAVAILABLE,
PresenceState.OFFLINE,
)
if presence not in valid_presence:
raise SynapseError(400, "Invalid presence state")
user_id = target_user.to_string()
# If presence is disabled, no-op
if not self.hs.config.use_presence:
return
# Proxy request to master
await self._set_state_client(
user_id=user_id, state=state, ignore_status_msg=ignore_status_msg
)
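# Minimal illustration of the guard above: presence values are the wire
# strings behind the PresenceState constants; anything else is rejected
# before the request is proxied to the master.
def _demo_check_presence_value(state: dict) -> None:
    valid = ("online", "unavailable", "offline")
    if state.get("presence") not in valid:
        raise ValueError("Invalid presence state: %r" % (state.get("presence"),))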
async def bump_presence_active_time(self, user):
"""We've seen the user do something that indicates they're interacting
with the app.
"""
# If presence is disabled, no-op
if not self.hs.config.use_presence:
return
# Proxy request to master
user_id = user.to_string()
await self._bump_active_client(user_id=user_id)
class GenericWorkerSlavedStore(
# FIXME(#3714): We need to add UserDirectoryStore as we write directly
# rather than going via the correct worker.
@@ -474,6 +221,7 @@ class GenericWorkerSlavedStore(
StatsStore,
UIAuthWorkerStore,
EndToEndRoomKeyStore,
PresenceStore,
SlavedDeviceInboxStore,
SlavedDeviceStore,
SlavedReceiptsStore,
@@ -492,7 +240,6 @@ class GenericWorkerSlavedStore(
SlavedTransactionStore,
SlavedProfileStore,
SlavedClientIpStore,
SlavedPresenceStore,
SlavedFilteringStore,
MonthlyActiveUsersWorkerStore,
MediaRepositoryStore,
@@ -560,10 +307,7 @@ class GenericWorkerServer(HomeServer):
user_directory.register_servlets(self, resource)
# If presence is disabled, use the stub servlet that does
# not allow sending presence
if not self.config.use_presence:
PresenceStatusStubServlet(self).register(resource)
presence.register_servlets(self, resource)
groups.register_servlets(self, resource)
@@ -623,26 +367,24 @@ class GenericWorkerServer(HomeServer):
listener_config,
root_resource,
self.version_string,
max_request_body_size=max_request_body_size(self.config),
reactor=self.get_reactor(),
),
reactor=self.get_reactor(),
)
logger.info("Synapse worker now listening on port %d", port)
def start_listening(self, listeners: Iterable[ListenerConfig]):
for listener in listeners:
def start_listening(self):
for listener in self.config.worker_listeners:
if listener.type == "http":
self._listen_http(listener)
elif listener.type == "manhole":
_base.listen_tcp(
listener.bind_addresses,
listener.port,
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
_base.listen_manhole(
listener.bind_addresses, listener.port, manhole_globals={"hs": self}
)
elif listener.type == "metrics":
if not self.get_config().enable_metrics:
if not self.config.enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
@@ -656,241 +398,6 @@ class GenericWorkerServer(HomeServer):
self.get_tcp_replication().start_replication(self)
@cache_in_self
def get_replication_data_handler(self):
return GenericWorkerReplicationHandler(self)
@cache_in_self
def get_presence_handler(self):
return GenericWorkerPresence(self)
class GenericWorkerReplicationHandler(ReplicationDataHandler):
def __init__(self, hs):
super().__init__(hs)
self.store = hs.get_datastore()
self.presence_handler = hs.get_presence_handler() # type: GenericWorkerPresence
self.notifier = hs.get_notifier()
self.notify_pushers = hs.config.start_pushers
self.pusher_pool = hs.get_pusherpool()
self.send_handler = None # type: Optional[FederationSenderHandler]
if hs.config.send_federation:
self.send_handler = FederationSenderHandler(hs)
async def on_rdata(self, stream_name, instance_name, token, rows):
await super().on_rdata(stream_name, instance_name, token, rows)
await self._process_and_notify(stream_name, instance_name, token, rows)
async def _process_and_notify(self, stream_name, instance_name, token, rows):
try:
if self.send_handler:
await self.send_handler.process_replication_rows(
stream_name, token, rows
)
if stream_name == PushRulesStream.NAME:
self.notifier.on_new_event(
"push_rules_key", token, users=[row.user_id for row in rows]
)
elif stream_name in (AccountDataStream.NAME, TagAccountDataStream.NAME):
self.notifier.on_new_event(
"account_data_key", token, users=[row.user_id for row in rows]
)
elif stream_name == ReceiptsStream.NAME:
self.notifier.on_new_event(
"receipt_key", token, rooms=[row.room_id for row in rows]
)
await self.pusher_pool.on_new_receipts(
token, token, {row.room_id for row in rows}
)
elif stream_name == ToDeviceStream.NAME:
entities = [row.entity for row in rows if row.entity.startswith("@")]
if entities:
self.notifier.on_new_event("to_device_key", token, users=entities)
elif stream_name == DeviceListsStream.NAME:
all_room_ids = set() # type: Set[str]
for row in rows:
if row.entity.startswith("@"):
room_ids = await self.store.get_rooms_for_user(row.entity)
all_room_ids.update(room_ids)
self.notifier.on_new_event("device_list_key", token, rooms=all_room_ids)
elif stream_name == PresenceStream.NAME:
await self.presence_handler.process_replication_rows(token, rows)
elif stream_name == GroupServerStream.NAME:
self.notifier.on_new_event(
"groups_key", token, users=[row.user_id for row in rows]
)
elif stream_name == PushersStream.NAME:
for row in rows:
if row.deleted:
self.stop_pusher(row.user_id, row.app_id, row.pushkey)
else:
await self.start_pusher(row.user_id, row.app_id, row.pushkey)
except Exception:
logger.exception("Error processing replication")
async def on_position(self, stream_name: str, instance_name: str, token: int):
await super().on_position(stream_name, instance_name, token)
# Also call on_rdata to ensure that stream positions are properly reset.
await self.on_rdata(stream_name, instance_name, token, [])
def stop_pusher(self, user_id, app_id, pushkey):
if not self.notify_pushers:
return
key = "%s:%s" % (app_id, pushkey)
pushers_for_user = self.pusher_pool.pushers.get(user_id, {})
pusher = pushers_for_user.pop(key, None)
if pusher is None:
return
logger.info("Stopping pusher %r / %r", user_id, key)
pusher.on_stop()
async def start_pusher(self, user_id, app_id, pushkey):
if not self.notify_pushers:
return
key = "%s:%s" % (app_id, pushkey)
logger.info("Starting pusher %r / %r", user_id, key)
return await self.pusher_pool.start_pusher_by_id(app_id, pushkey, user_id)
def on_remote_server_up(self, server: str):
"""Called when get a new REMOTE_SERVER_UP command."""
# Let's wake up the transaction queue for the server in case we have
# pending stuff to send to it.
if self.send_handler:
self.send_handler.wake_destination(server)
class FederationSenderHandler:
"""Processes the fedration replication stream
This class is only instantiate on the worker responsible for sending outbound
federation transactions. It receives rows from the replication stream and forwards
the appropriate entries to the FederationSender class.
"""
def __init__(self, hs: GenericWorkerServer):
self.store = hs.get_datastore()
self._is_mine_id = hs.is_mine_id
self.federation_sender = hs.get_federation_sender()
self._hs = hs
# Stores the latest position in the federation stream we've gotten up
# to. This is always set before we use it.
self.federation_position = None
self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer")
def on_start(self):
# There may be some events that are persisted but haven't been sent,
# so send them now.
self.federation_sender.notify_new_events(
self.store.get_room_max_stream_ordering()
)
def wake_destination(self, server: str):
self.federation_sender.wake_destination(server)
async def process_replication_rows(self, stream_name, token, rows):
# The federation stream contains things that we want to send out, e.g.
# presence, typing, etc.
if stream_name == "federation":
send_queue.process_rows_for_federation(self.federation_sender, rows)
await self.update_token(token)
# ... and when new receipts happen
elif stream_name == ReceiptsStream.NAME:
await self._on_new_receipts(rows)
# ... as well as device updates and messages
elif stream_name == DeviceListsStream.NAME:
# The entities are either user IDs (starting with '@') whose devices
# have changed, or remote servers that we need to tell about
# changes.
hosts = {row.entity for row in rows if not row.entity.startswith("@")}
for host in hosts:
self.federation_sender.send_device_messages(host)
elif stream_name == ToDeviceStream.NAME:
# The to_device stream includes stuff to be pushed to both local
# clients and remote servers, so we ignore entities that start with
# '@' (since they'll be local users rather than destinations).
hosts = {row.entity for row in rows if not row.entity.startswith("@")}
for host in hosts:
self.federation_sender.send_device_messages(host)
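# Illustration of the entity filtering above (rows are hypothetical):
# Matrix user IDs start with '@', so anything else is treated as a remote
# destination to wake up.
from collections import namedtuple

_DemoRow = namedtuple("_DemoRow", ["entity"])
_demo_rows = [_DemoRow("@alice:example.com"), _DemoRow("matrix.org")]
assert {r.entity for r in _demo_rows if not r.entity.startswith("@")} == {"matrix.org"}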
async def _on_new_receipts(self, rows):
"""
Args:
rows (Iterable[synapse.replication.tcp.streams.ReceiptsStream.ReceiptsStreamRow]):
new receipts to be processed
"""
for receipt in rows:
# we only want to send on receipts for our own users
if not self._is_mine_id(receipt.user_id):
continue
receipt_info = ReadReceipt(
receipt.room_id,
receipt.receipt_type,
receipt.user_id,
[receipt.event_id],
receipt.data,
)
await self.federation_sender.send_read_receipt(receipt_info)
async def update_token(self, token):
"""Update the record of where we have processed to in the federation stream.
Called after we have processed an update received over replication. Sends
a FEDERATION_ACK back to the master, and stores the token that we have processed
in `federation_stream_position` so that we can restart where we left off.
"""
self.federation_position = token
# We save and send the ACK to master asynchronously, so we don't block
# processing on persistence. We don't need to do this operation for
# every single RDATA we receive; doing it periodically is enough.
if self._fed_position_linearizer.is_queued(None):
# There is already a task queued up to save and send the token, so
# no need to queue up another task.
return
run_as_background_process("_save_and_send_ack", self._save_and_send_ack)
async def _save_and_send_ack(self):
"""Save the current federation position in the database and send an ACK
to master with where we're up to.
"""
try:
# We linearize here to ensure we don't have races updating the token
#
# XXX this appears to be redundant, since the ReplicationCommandHandler
# has a linearizer which ensures that we only process one line of
# replication data at a time. Should we remove it, or is it doing useful
# service for robustness? Or could we replace it with an assertion that
# we're not being re-entered?
with (await self._fed_position_linearizer.queue(None)):
# We persist and ack the same position, so we take a copy of it
# here as otherwise it can get modified from underneath us.
current_position = self.federation_position
await self.store.update_federation_out_pos(
"federation", current_position
)
# We ACK this token over replication so that the master can drop
# its in memory queues
self._hs.get_tcp_replication().send_federation_ack(current_position)
except Exception:
logger.exception("Error updating federation stream position")
def start(config_options):
try:
@@ -947,6 +454,10 @@ def start(config_options):
config.server.update_user_directory = False
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage
if config.server.gc_seconds:
synapse.metrics.MIN_TIME_BETWEEN_GCS = config.server.gc_seconds
hs = GenericWorkerServer(
config.server_name,
@@ -962,7 +473,7 @@ def start(config_options):
# streams. Will no-op if no streams can be written to by this worker.
hs.get_replication_streamer()
register_start(_base.start, hs, config.worker_listeners)
register_start(_base.start, hs)
_base.start_worker_reactor("synapse-generic-worker", config)

@@ -1,5 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2019 New Vector Ltd
#
@@ -18,7 +17,7 @@
import logging
import os
import sys
from typing import Iterable, Iterator
from typing import Iterator
from twisted.internet import reactor
from twisted.web.resource import EncodingResourceWrapper, IResource
@@ -37,7 +36,13 @@ from synapse.api.urls import (
WEB_CLIENT_PREFIX,
)
from synapse.app import _base
from synapse.app._base import listen_ssl, listen_tcp, quit_with_error, register_start
from synapse.app._base import (
listen_ssl,
listen_tcp,
max_request_body_size,
quit_with_error,
register_start,
)
from synapse.config._base import ConfigError
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.config.homeserver import HomeServerConfig
@@ -67,7 +72,6 @@ from synapse.storage import DataStore
from synapse.storage.engines import IncorrectDatabaseSetup
from synapse.storage.prepare_database import UpgradeDatabaseException
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.module_loader import load_module
from synapse.util.versionstring import get_version_string
@@ -128,19 +132,21 @@ class SynapseHomeServer(HomeServer):
else:
root_resource = OptionsResource()
root_resource = create_resource_tree(resources, root_resource)
site = SynapseSite(
"synapse.access.%s.%s" % ("https" if tls else "http", site_tag),
site_tag,
listener_config,
create_resource_tree(resources, root_resource),
self.version_string,
max_request_body_size=max_request_body_size(self.config),
reactor=self.get_reactor(),
)
if tls:
ports = listen_ssl(
bind_addresses,
port,
SynapseSite(
"synapse.access.https.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
self.version_string,
),
site,
self.tls_server_context_factory,
reactor=self.get_reactor(),
)
@@ -150,13 +156,7 @@ class SynapseHomeServer(HomeServer):
ports = listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
self.version_string,
),
site,
reactor=self.get_reactor(),
)
logger.info("Synapse now listening on TCP port %d", port)
@@ -193,7 +193,7 @@ class SynapseHomeServer(HomeServer):
}
)
if self.get_config().threepid_behaviour_email == ThreepidBehaviour.LOCAL:
if self.config.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
from synapse.rest.synapse.client.password_reset import (
PasswordResetSubmitTokenResource,
)
@@ -232,7 +232,7 @@ class SynapseHomeServer(HomeServer):
)
if name in ["media", "federation", "client"]:
if self.get_config().enable_media_repo:
if self.config.enable_media_repo:
media_repo = self.get_media_repository_resource()
resources.update(
{MEDIA_PREFIX: media_repo, LEGACY_MEDIA_PREFIX: media_repo}
@@ -246,7 +246,7 @@ class SynapseHomeServer(HomeServer):
resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)
if name == "webclient":
webclient_loc = self.get_config().web_client_location
webclient_loc = self.config.web_client_location
if webclient_loc is None:
logger.warning(
@@ -267,7 +267,7 @@ class SynapseHomeServer(HomeServer):
# https://twistedmatrix.com/trac/ticket/7678
resources[WEB_CLIENT_PREFIX] = File(webclient_loc)
if name == "metrics" and self.get_config().enable_metrics:
if name == "metrics" and self.config.enable_metrics:
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
if name == "replication":
@@ -275,25 +275,21 @@ class SynapseHomeServer(HomeServer):
return resources
def start_listening(self, listeners: Iterable[ListenerConfig]):
config = self.get_config()
if config.redis_enabled:
def start_listening(self):
if self.config.redis_enabled:
# If redis is enabled we connect via the replication command handler
# in the same way as the workers (since we're effectively a client
# rather than a server).
self.get_tcp_replication().start_replication(self)
for listener in listeners:
for listener in self.config.server.listeners:
if listener.type == "http":
self._listening_services.extend(self._listener_http(config, listener))
self._listening_services.extend(
self._listener_http(self.config, listener)
)
elif listener.type == "manhole":
listen_tcp(
listener.bind_addresses,
listener.port,
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
_base.listen_manhole(
listener.bind_addresses, listener.port, manhole_globals={"hs": self}
)
elif listener.type == "replication":
services = listen_tcp(
@@ -304,7 +300,7 @@ class SynapseHomeServer(HomeServer):
for s in services:
reactor.addSystemEventTrigger("before", "shutdown", s.stopListening)
elif listener.type == "metrics":
if not self.get_config().enable_metrics:
if not self.config.enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
@@ -345,6 +341,10 @@ def setup(config_options):
sys.exit(0)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage
if config.server.gc_seconds:
synapse.metrics.MIN_TIME_BETWEEN_GCS = config.server.gc_seconds
hs = SynapseHomeServer(
config.server_name,
@@ -418,7 +418,7 @@ def setup(config_options):
# Loading the provider metadata also ensures the provider config is valid.
await oidc.load_metadata()
await _base.start(hs, config.listeners)
await _base.start(hs)
hs.get_datastore().db_pool.updates.start_doing_background_updates()

@@ -1,5 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
