Compare commits


235 Commits

Author SHA1 Message Date
Erik Johnston
dcd574b89c Stuff 2023-01-03 09:42:37 +00:00
Erik Johnston
c03785e121 Implement {get,pop}_node 2022-12-24 13:44:17 +00:00
Erik Johnston
b9cdf3d85e String cache 2022-12-24 12:36:54 +00:00
Erik Johnston
18ac015ecd bindings 2022-12-05 14:03:28 +00:00
Erik Johnston
4874d6320a Add tree cache 2022-12-05 09:57:38 +00:00
dependabot[bot]
e863a99d8d Bump JasonEtco/create-an-issue from 2.5.0 to 2.8.1 (#14607)
* Bump JasonEtco/create-an-issue from 2.5.0 to 2.8.1

Bumps [JasonEtco/create-an-issue](https://github.com/JasonEtco/create-an-issue) from 2.5.0 to 2.8.1.
- [Release notes](https://github.com/JasonEtco/create-an-issue/releases)
- [Commits](5d9504915f...77399b6110)

---
updated-dependencies:
- dependency-name: JasonEtco/create-an-issue
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-12-02 18:36:12 +00:00
Patrick Cloke
f685318c2a Use ClientRestResource on both the main process and workers. (#14528)
Add logic to ClientRestResource to decide whether to mount servlets
or not based on whether the current process is a worker.

This makes it clearer what a worker runs than the completely separate,
copy-and-pasted list of servlets previously mounted for workers.
2022-12-02 13:10:05 -05:00
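
A minimal, self-contained sketch of the pattern described above (the servlet names and registration API here are illustrative, not Synapse's actual ones): the resource itself decides what to mount based on whether the process is a worker.

```python
class ClientRestResource:
    def __init__(self, is_worker: bool) -> None:
        self.servlets: list[str] = []
        self.register_servlets(is_worker)

    def register_servlets(self, is_worker: bool) -> None:
        # Endpoints that any process (main or worker) can serve.
        self.servlets += ["sync", "events", "room_send"]
        # Endpoints that only the main process should mount.
        if not is_worker:
            self.servlets += ["logout", "pushers", "account_data"]

# The main process mounts everything; a worker mounts the safe subset.
worker = ClientRestResource(is_worker=True)
assert "logout" not in worker.servlets
```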
Erik Johnston
890e5f610e Fix Rust lint CI (#14602) 2022-12-02 18:04:28 +00:00
Patrick Cloke
acea4d7a2f Add missing types to tests.util. (#14597)
Removes files under tests.util from the list of ignored files, then
fully types all tests/util/*.py files.
2022-12-02 17:58:56 +00:00
Patrick Cloke
fac8a38525 Properly handle unknown results for the stream change cache. (#14592)
StreamChangeCache.get_all_changed_entities can return None to signify
it does not have information at the given stream position. Two callers (related
to device lists and presence) were treating this response the same as an empty
list (i.e. there being no updates).
2022-12-02 10:28:41 -05:00
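
A self-contained sketch of the bug class this fixes (simplified data model, not Synapse's actual StreamChangeCache code): a return value of None means "the cache cannot answer at this position" and must not be conflated with an empty list meaning "no changes".

```python
from typing import List, Optional

def get_all_changed_entities(
    cache: dict, stream_pos: int
) -> Optional[List[str]]:
    """Return entities changed since stream_pos, or None if the cache
    does not cover that position."""
    if stream_pos < cache["earliest_pos"]:
        return None  # cache can't answer: caller must hit the database
    return [e for e, pos in cache["changes"].items() if pos > stream_pos]

cache = {"earliest_pos": 10, "changes": {"@a:x": 12, "@b:x": 11}}

changed = get_all_changed_entities(cache, stream_pos=5)
if changed is None:
    changed = ["@a:x", "@b:x"]  # stand-in for the database fallback
# The buggy callers treated None like [], skipping the fallback and
# wrongly concluding that nothing had changed.
assert changed == ["@a:x", "@b:x"]
```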
realtyem
6acb6d772a Update worker docs to update preferred settings for pusher and federation_sender (#14493)
* Fix one typo on line 3700 (and apparently do something to other lines, no idea)

* Update config_documentation.md with more information about how federation_senders and pushers settings can be handled.

Specifically, that the instance map style of config does not require the other special variables that enable and disable functionality, and that a single worker CAN be added to the map, not just two or more.

* Extra line here for consistency and appearance.

* Add link to sygnal repo.

* Add deprecation notice to workers.md and point to the newer alternative method of defining this functionality.

* Changelog

* Correct version number of Synapse the deprecation is happening in.

* Update quiet deprecation with simple notice and suggestion.
2022-12-02 11:38:01 +00:00
dependabot[bot]
656dce4baf Bump jsonschema from 4.17.0 to 4.17.3 (#14591)
* Bump jsonschema from 4.17.0 to 4.17.3

Bumps [jsonschema](https://github.com/python-jsonschema/jsonschema) from 4.17.0 to 4.17.3.
- [Release notes](https://github.com/python-jsonschema/jsonschema/releases)
- [Changelog](https://github.com/python-jsonschema/jsonschema/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/python-jsonschema/jsonschema/compare/v4.17.0...v4.17.3)

---
updated-dependencies:
- dependency-name: jsonschema
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-12-01 19:05:08 +00:00
dependabot[bot]
058789bada Bump pyopenssl from 22.0.0 to 22.1.0 (#14561)
Bumps [pyopenssl](https://github.com/pyca/pyopenssl) from 22.0.0 to 22.1.0.
- [Release notes](https://github.com/pyca/pyopenssl/releases)
- [Changelog](https://github.com/pyca/pyopenssl/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/pyopenssl/compare/22.0.0...22.1.0)

---
updated-dependencies:
- dependency-name: pyopenssl
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-01 19:02:51 +00:00
dependabot[bot]
d32820c7be Bump sentry-sdk from 1.11.0 to 1.11.1 (#14562)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 1.11.0 to 1.11.1.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/1.11.0...1.11.1)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-01 18:54:41 +00:00
dependabot[bot]
6ac35667af Bump types-bleach from 5.0.3 to 5.0.3.1 (#14564)
Bumps [types-bleach](https://github.com/python/typeshed) from 5.0.3 to 5.0.3.1.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-bleach
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-01 14:24:08 +00:00
dependabot[bot]
c61f1ef716 Bump types-psycopg2 from 2.9.21.1 to 2.9.21.2 (#14558)
Bumps [types-psycopg2](https://github.com/python/typeshed) from 2.9.21.1 to 2.9.21.2.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-psycopg2
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-01 14:18:27 +00:00
Will Hunt
71f3e53ad0 Add push.enabled option to disable push notification calculation (#14551)
* Add initial option

* changelog

* Some more linting
2022-12-01 13:46:24 +00:00
David Robertson
781b14ec69 Merge branch 'release-v1.73' into develop 2022-12-01 13:43:30 +00:00
realtyem
854a6884d8 Modernize unit tests configuration settings for workers. (#14568)
Use the newer foo_instances configuration instead of the
deprecated flags to enable specific features (e.g. start_pushers).
2022-12-01 07:38:27 -05:00
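
For illustration, the two configuration styles this entry contrasts, mirrored as Python mappings (the real settings are YAML keys in homeserver.yaml):

```python
# Deprecated style: per-feature boolean flags kept in sync by hand.
legacy = {
    "start_pushers": False,        # pushers moved off the main process
    "send_federation": False,      # federation sending moved off too
}

# Newer style: name the worker instances directly. A single instance
# works; more can be listed to shard the work.
modern = {
    "pusher_instances": ["pusher1"],
    "federation_sender_instances": ["sender1", "sender2"],
}
```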
David Robertson
6a41e5022e 1.73.0rc2 2022-12-01 10:02:56 +00:00
David Robertson
89ee169556 Fix MSC3202 link in changelog 2022-12-01 09:59:55 +00:00
David Robertson
7aefc7e9fc Cite launchpad bug that says ubuntu's pkgs are old (#14517)
* Cite launchpad bug that says ubuntu's pkgs are old

* Add some cross-references while I'm here

* Changelog
2022-11-30 18:33:35 +00:00
Nick Mills-Barrett
e8bce8999f Aggregate unread notif count query for badge count calculation (#14255)
Fetch the unread notification counts used by the badge counts
in push notifications for all rooms at once (instead of fetching
them per room).
2022-11-30 08:45:06 -05:00
Mathieu Velten
4569eda944 Use servers list approx to send read receipts when in partial state (#14549)
Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
2022-11-30 13:39:47 +01:00
Richard van der Hoff
ecb6fe9d9c Stop using deprecated keyIds param on /key/v2/server (#14525)
Fixes #14523.
2022-11-30 11:59:57 +00:00
David Robertson
c29e2c6306 Revert "POC delete stale non-e2e devices for users (#14038)" (#14582) 2022-11-29 17:48:48 +00:00
Patrick Cloke
13aa29db1d Advertise support for Matrix v1.5. (#14576)
All features of Matrix v1.5 were already supported: this was
mostly a maintenance release.
2022-11-29 10:49:23 -05:00
David Robertson
99d1897078 Update changelog 2022-11-29 13:41:49 +00:00
David Robertson
807f077db2 Include fixup PR in changelog 2022-11-29 13:24:13 +00:00
David Robertson
e860316818 Fix UndefinedColumn: column "key_json" does not exist errors when handling users with more than 50 non-E2E devices (#14580) 2022-11-29 13:05:07 +00:00
David Robertson
8c5b8e6d40 1.73.0rc1 2022-11-29 12:32:02 +00:00
David Robertson
5b0dcda7f0 Fix GHA job for pushing the complement-synapse image (#14573)
Co-authored-by: Michael Kaye <1917473+michaelkaye@users.noreply.github.com>
2022-11-29 12:22:08 +00:00
Erik Johnston
c7e29ca277 POC delete stale non-e2e devices for users (#14038)
This should help reduce the number of devices that e.g. simple bots which repeatedly log in rack up.

We only delete non-e2e devices as they should be safe to delete, whereas if we delete e2e devices for a user we may accidentally break their ability to receive e2e keys for a message.

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
2022-11-29 10:36:41 +00:00
Shay
72f3e38137 Fix possible variable shadow in create_new_client_event (#14575) 2022-11-28 19:18:12 -08:00
Travis Ralston
9ccc09fe9e Support MSC1767's content.body behaviour; Add base rules from MSC3933 (#14524)
* Support MSC1767's `content.body` behaviour in push rules

* Add the base rules from MSC3933

* Changelog entry

* Flip condition around for finding `m.markup`

* Remove forgotten import
2022-11-28 18:02:41 -07:00
Travis Ralston
dd51828120 Create MSC1767 (extensible events) room version; Implement MSC3932 (#14521)
* Add MSC1767's dedicated room version, based on v10

* Only enable MSC1767 room version if the config flag is on

Using a similar technique to knocking:
https://github.com/matrix-org/synapse/pull/6739/files#diff-3af529eedb0e00279bafb7369370c9654b37792af8eafa0925400e9281d57f0a

* Support MSC3932: Extensible events room version feature flag

* Changelog entry
2022-11-28 17:22:34 -07:00
Travis Ralston
3da6450327 Initial support for MSC3931: Room version push rule feature flags (#14520)
* Add support for MSC3931: Room Version Supports push rule condition

* Create experimental flag for future work, and use it to gate MSC3931

* Changelog entry
2022-11-28 16:29:53 -07:00
Eric Eastwood
8f10c8b054 Move MSC3030 /timestamp_to_event endpoint to stable v1 location (#14471)
Fix https://github.com/matrix-org/synapse/issues/14390

 - Client API: `/_matrix/client/unstable/org.matrix.msc3030/rooms/<roomID>/timestamp_to_event?ts=<timestamp>&dir=<direction>` -> `/_matrix/client/v1/rooms/<roomID>/timestamp_to_event?ts=<timestamp>&dir=<direction>`
 - Federation API: `/_matrix/federation/unstable/org.matrix.msc3030/timestamp_to_event/<roomID>?ts=<timestamp>&dir=<direction>` -> `/_matrix/federation/v1/timestamp_to_event/<roomID>?ts=<timestamp>&dir=<direction>`

Complement test changes: https://github.com/matrix-org/complement/pull/559
2022-11-28 15:54:18 -06:00
Andrew Ferrazzutti
1183c372fa Use device_one_time_keys_count to match MSC3202 (#14565)
* Use `device_one_time_keys_count` to match MSC3202

Rename the `device_one_time_key_counts` key in responses to
`device_one_time_keys_count` to match the name specified by MSC3202.

Also change related variable/class names for consistency.

Signed-off-by: Andrew Ferrazzutti <andrewf@element.io>

* Update changelog.d/14565.misc

* Revert name change for `one_time_key_counts` key

as this is a different key altogether from `device_one_time_keys_count`,
which is used for `/sync` instead of appservice transactions.

Signed-off-by: Andrew Ferrazzutti <andrewf@element.io>
2022-11-28 16:17:29 +00:00
Sean Quah
d56f48038a Fix logging context warnings due to common usage metrics setup (#14574)
`setup()` is run under the sentinel context manager, so we wrap the
initial update in a background process. Before this change, Synapse
would log two warnings on startup:
    Starting db txn 'count_daily_users' from sentinel context
    Starting db connection from sentinel context: metrics will be lost

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-11-28 15:25:18 +00:00
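
A self-contained analog of the fix (Synapse's real helper is run_as_background_process; the stub below only shows the shape): startup work runs inside a named context instead of the sentinel one, so its database activity is attributed correctly.

```python
import asyncio

async def count_daily_users() -> int:
    await asyncio.sleep(0)  # stand-in for the database transaction
    return 42

def run_as_background_process_stub(desc: str, func):
    # Synapse's real helper additionally enters a logging context named
    # after `desc`, which is what stops the "from sentinel context"
    # warnings quoted above.
    return asyncio.ensure_future(func())

async def setup() -> None:
    # Before the fix this awaited count_daily_users() directly, under
    # the sentinel context; now the initial update is wrapped instead.
    task = run_as_background_process_stub("common_usage_metrics_setup",
                                          count_daily_users)
    await task  # awaited here only so the example exits cleanly

asyncio.run(setup())
```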
Patrick Cloke
d748bbc8f8 Include thread information when sending receipts over federation. (#14466)
Include the thread_id field when sending read receipts over
federation. This might result in the same user having multiple
read receipts per-room, meaning multiple EDUs must be sent
to encapsulate those receipts.

This restructures the PerDestinationQueue APIs to support
multiple receipt EDUs; queue_read_receipt now becomes linear
time in the number of queued threaded receipts in the room for
the given user. This is expected to be a small number, since receipt
EDUs are sent as filler in transactions.
2022-11-28 14:40:17 +00:00
Sean Quah
f792dd74e1 Remove option to skip locking of tables during emulated upserts (#14469)
To perform an emulated upsert into a table safely, we must either:
 * lock the table,
 * be the only writer upserting into the table,
 * or rely on another unique index being present.

When the 2nd or 3rd cases were applicable, we previously avoided locking
the table as an optimization. However, as seen in #14406, it is easy to
slip up when adding new schema deltas and corrupt the database.

The only time we lock when performing emulated upserts is while waiting
for background updates on postgres. On sqlite, we do no locking at all.

Let's remove the option to skip locking tables, so that we don't shoot
ourselves in the foot again.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-11-28 13:42:06 +00:00
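
A sketch of an emulated upsert with the lock always taken (simplified, not Synapse's actual database code; `txn` is assumed to be a psycopg2-style cursor inside a transaction). Without the lock, two concurrent writers can both observe "no row", both INSERT, and leave duplicates behind:

```python
def simple_upsert_emulated(txn, table: str, key: str, value: str) -> None:
    # The skip-lock optimization is gone: always take the lock first.
    txn.execute(f"LOCK TABLE {table} IN EXCLUSIVE MODE")
    txn.execute(
        f"UPDATE {table} SET value = %s WHERE key = %s", (value, key)
    )
    if txn.rowcount == 0:
        # No existing row: safe to insert, since we hold the lock.
        txn.execute(
            f"INSERT INTO {table} (key, value) VALUES (%s, %s)",
            (key, value),
        )
```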
Michael Kaye
2dad42a9fb Push complement image to a docker registry (#14509)
* GHA workflow to build complement images of key branches.

* Add changelog.d

* GHA workflow to build complement images of key branches.

* Add changelog.d

* Update complement.yml

Remove special casing for michaelk branch.

* Update complement.yml

Should run on master/develop, not main/develop

* Rename file to be more obvious

* Merge did not go correctly.

* Setup 5am builds of develop, limit to one run at once.

* Fix crontab---run once at 5AM, not every minute between 5 and 6

* Fix cron syntax again?

* Tweak workflow name

* Allow manual debug runs

* Tweak indentation

Ctrl-Alt-L in PyCharm

Co-authored-by: David Robertson <david.m.robertson1@gmail.com>
Co-authored-by: David Robertson <davidr@element.io>
2022-11-28 12:51:40 +00:00
dependabot[bot]
58383c18bd Bump serde_json from 1.0.88 to 1.0.89 (#14560)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-28 12:45:58 +00:00
dependabot[bot]
7a7ee3d6b8 Bump serde from 1.0.147 to 1.0.148 (#14559)
* Bump serde from 1.0.147 to 1.0.148

Bumps [serde](https://github.com/serde-rs/serde) from 1.0.147 to 1.0.148.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.147...v1.0.148)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-28 12:30:12 +00:00
David Robertson
105ab1c3d2 Run Rust CI when Cargo.lock changes too (#14571)
* Run Rust CI when Cargo.lock changes too

* Changelog
2022-11-28 11:47:16 +00:00
dependabot[bot]
7d24662fdd Bump dtolnay/rust-toolchain from 55c7845fad90d0ae8b2e83715cb900e5e861e8cb to e645b0cf01249a964ec099494d38d2da0f0b349f (#14557)
* Bump dtolnay/rust-toolchain

Bumps [dtolnay/rust-toolchain](https://github.com/dtolnay/rust-toolchain) from 55c7845fad90d0ae8b2e83715cb900e5e861e8cb to e645b0cf01249a964ec099494d38d2da0f0b349f.
- [Release notes](https://github.com/dtolnay/rust-toolchain/releases)
- [Commits](55c7845fad...e645b0cf01)

---
updated-dependencies:
- dependency-name: dtolnay/rust-toolchain
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-28 11:08:15 +00:00
Ashish Kumar
09de2aecb0 Add support for handling avatar with SSO login (#13917)
This commit adds support for handling a provided avatar picture URL
when logging in via SSO.

Signed-off-by: Ashish Kumar <ashfame@users.noreply.github.com>

Fixes #9357.
2022-11-25 15:16:50 +00:00
Mathieu Velten
39cde585bf Faster joins: use initial list of servers if we don't have the full state yet (#14408)
Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
2022-11-24 18:09:47 +01:00
schmop
c2e06c36d4 Fix crash admin media list api when info is None (#14537)
Fixes https://github.com/matrix-org/synapse/issues/14536
2022-11-24 10:49:04 +00:00
Benjamin Kampmann
f6c74d1cb2 Implement message forward pagination from start when no from is given, fixes #12383 (#14149)
Fixes https://github.com/matrix-org/synapse/issues/12383
2022-11-24 09:10:51 +00:00
reivilibre
9af2be192a Remove legacy Prometheus metrics names. They were deprecated in Synapse v1.69.0 and disabled by default in Synapse v1.71.0. (#14538) 2022-11-24 09:09:17 +00:00
Mathieu Velten
3b4e150868 Faster joins: use servers list approximation in assert_host_in_room (#14515)
Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
2022-11-24 09:10:47 +01:00
Erik Johnston
f38d7d79c8 Add another index to device_lists_changes_in_room (#14534)
This helps avoid reading unnecessarily large amounts of data from the
table when querying with a set of room IDs.
2022-11-23 14:09:00 +00:00
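
A hypothetical migration statement illustrating the idea (the column pairing is assumed from the description, not copied from Synapse's schema files): an index leading with room_id lets queries for a set of room IDs read only the matching rows.

```python
CREATE_INDEX_SQL = """
CREATE INDEX device_lists_changes_in_room_by_room_idx
    ON device_lists_changes_in_room (room_id, stream_id)
"""
```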
Patrick Cloke
4ae967cf63 Add missing type hints to test.util.caches (#14529) 2022-11-22 17:35:54 -05:00
Eric Eastwood
7f78b383ca Optimize filter_events_for_client for faster /messages - v2 (#14527)
Fix #14108
2022-11-22 21:56:28 +00:00
realtyem
df390a8e67 Refactor federation_sender and pusher configuration loading. (#14496)
To avoid duplicating the same logic for handling legacy configuration
settings.

This should help in applying similar logic to other worker types.
2022-11-22 21:33:58 +00:00
David Robertson
972743051b Add more prompts to bug report form (#14522) 2022-11-22 21:23:22 +00:00
Patrick Cloke
6d47b7e325 Add a type hint for get_device_handler() and fix incorrect types. (#14055)
This was the last untyped handler from the HomeServer object. Since
it was being treated as Any (and thus unchecked) it was being used
incorrectly in a few places.
2022-11-22 14:08:04 -05:00
Brendan Abolivier
9b4cb1e2ed Apply correct editorconfig to .pyi files (#14526)
The current configuration might cause some editors to misbehave when editing stub files.
2022-11-22 18:33:28 +00:00
Sean Quah
9cae44f49e Track unconverted device list outbound pokes using a position instead (#14516)
When a local device list change is added to
`device_lists_changes_in_room`, the `converted_to_destinations` flag is
set to `FALSE` and the `_handle_new_device_update_async` background
process is started. This background process looks for unconverted rows
in `device_lists_changes_in_room`, copies them to
`device_lists_outbound_pokes` and updates the flag.

To update the `converted_to_destinations` flag, the database performs a
`DELETE` and `INSERT` internally, which fragments the table. To avoid
this, track unconverted rows using a `(stream ID, room ID)` position
instead of the flag.

From now on, the `converted_to_destinations` column indicates rows that
need converting to outbound pokes, but does not indicate whether the
conversion has already taken place.

Closes #14037.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-11-22 16:46:52 +00:00
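
A schematic version of the new tracking scheme (assumed column names): instead of rewriting a flag per row, which Postgres implements as a DELETE plus INSERT and which fragments the table, the background process keeps a single (stream_id, room_id) watermark and selects rows beyond it.

```python
def get_unconverted_rows(txn, last_stream_id: int, last_room_id: str,
                         limit: int = 100):
    # Row-value comparison: everything strictly after the watermark.
    txn.execute(
        """
        SELECT stream_id, room_id FROM device_lists_changes_in_room
        WHERE (stream_id, room_id) > (%s, %s)
        ORDER BY stream_id, room_id
        LIMIT %s
        """,
        (last_stream_id, last_room_id, limit),
    )
    # Caller converts these to outbound pokes, then advances the
    # watermark; no per-row UPDATE is needed.
    return txn.fetchall()
```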
Patrick Cloke
7eb7460042 Parallelize calls to fetch bundled aggregations. (#14510)
The bundled aggregations for annotations, references, and edits
can be parallelized.
2022-11-22 09:47:32 -05:00
Patrick Cloke
6d7523ef14 Batch fetch bundled references (#14508)
Avoid an n+1 query problem and fetch the bundled aggregations for
m.reference relations in a single query instead of a query per event.

This applies similar logic as was previously done for edits in
8b309adb43 (#11660), threads
in b65acead42 (#11752), and
annotations in 1799a54a54 (#14491).
2022-11-22 09:41:09 -05:00
Patrick Cloke
1799a54a54 Batch fetch bundled annotations (#14491)
Avoid an n+1 query problem and fetch the bundled aggregations for
m.annotation relations in a single query instead of a query per event.

This applies similar logic as was previously done for edits in
8b309adb43 (#11660) and threads
in b65acead42 (#11752).
2022-11-22 07:26:11 -05:00
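
The n+1 fix described in the two entries above, as a schematic query (not Synapse's actual storage API; table and column names are assumed): one grouped query over all event IDs replaces a query per event.

```python
from typing import Dict, List

def fetch_annotation_counts(txn, event_ids: List[str]) -> Dict[str, int]:
    placeholders = ", ".join(["%s"] * len(event_ids))
    txn.execute(
        f"""
        SELECT relates_to_id, COUNT(*)
        FROM event_relations
        WHERE relation_type = 'm.annotation'
          AND relates_to_id IN ({placeholders})
        GROUP BY relates_to_id
        """,
        event_ids,
    )
    # One round-trip instead of len(event_ids) separate queries.
    return {relates_to_id: count for relates_to_id, count in txn.fetchall()}
```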
David Robertson
da933bfc3f Merge branch 'master' into develop 2022-11-22 12:22:01 +00:00
David Robertson
ececb2d6cb tweak postgres dep notice 2022-11-22 11:10:01 +00:00
David Robertson
7c005b279e Move postgres warning banner to top of readme 2022-11-22 11:00:31 +00:00
David Robertson
706b6a1ebb 1.72.0 2022-11-22 10:59:39 +00:00
reivilibre
a6514792b2 Update forgotten references to legacy metrics in the included Grafana dashboard. (#14477)
Fixes https://github.com/matrix-org/synapse/issues/14465
2022-11-22 10:51:01 +00:00
Mathieu Velten
1526ff389f Faster joins: filter out non local events when a room doesn't have its full state (#14404)
Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
2022-11-21 16:46:14 +01:00
Brennan Chapman
640cb3c81c Fix broken admin API request recommendation link (#14499)
Signed-off-by: Brennan Chapman <brennan@chapmanb.com>
2022-11-21 12:40:25 +01:00
dependabot[bot]
22036f038e Bump serde_json from 1.0.87 to 1.0.88 (#14505)
* Bump serde_json from 1.0.87 to 1.0.88

Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.87 to 1.0.88.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.87...v1.0.88)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-21 10:30:18 +00:00
dependabot[bot]
6e0cb8de79 Bump phonenumbers from 8.12.56 to 8.13.0 (#14504)
* Bump phonenumbers from 8.12.56 to 8.13.0

Bumps [phonenumbers](https://github.com/daviddrysdale/python-phonenumbers) from 8.12.56 to 8.13.0.
- [Release notes](https://github.com/daviddrysdale/python-phonenumbers/releases)
- [Commits](https://github.com/daviddrysdale/python-phonenumbers/compare/v8.12.56...v8.13.0)

---
updated-dependencies:
- dependency-name: phonenumbers
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-21 10:30:05 +00:00
dependabot[bot]
d988fb5e7b Bump towncrier from 21.9.0 to 22.8.0 (#14503)
* Bump towncrier from 21.9.0 to 22.8.0

Bumps [towncrier](https://github.com/hawkowl/towncrier) from 21.9.0 to 22.8.0.
- [Release notes](https://github.com/hawkowl/towncrier/releases)
- [Changelog](https://github.com/twisted/towncrier/blob/trunk/NEWS.rst)
- [Commits](https://github.com/hawkowl/towncrier/compare/21.9.0...22.8.0)

---
updated-dependencies:
- dependency-name: towncrier
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-21 10:29:54 +00:00
dependabot[bot]
8f77418edd Bump pygithub from 1.56 to 1.57 (#14500)
* Bump pygithub from 1.56 to 1.57

Bumps [pygithub](https://github.com/pygithub/pygithub) from 1.56 to 1.57.
- [Release notes](https://github.com/pygithub/pygithub/releases)
- [Changelog](https://github.com/PyGithub/PyGithub/blob/master/doc/changes.rst)
- [Commits](https://github.com/pygithub/pygithub/compare/v1.56...v1.57)

---
updated-dependencies:
- dependency-name: pygithub
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-21 10:29:42 +00:00
dependabot[bot]
78867f302f Bump types-pillow from 9.2.2.1 to 9.3.0.1 (#14502)
* Bump types-pillow from 9.2.2.1 to 9.3.0.1

Bumps [types-pillow](https://github.com/python/typeshed) from 9.2.2.1 to 9.3.0.1.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pillow
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-21 10:29:24 +00:00
dependabot[bot]
8718322130 Bump sentry-sdk from 1.10.1 to 1.11.0 (#14501)
* Bump sentry-sdk from 1.10.1 to 1.11.0

Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 1.10.1 to 1.11.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/1.10.1...1.11.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-21 10:28:57 +00:00
Richard van der Hoff
8d133a8464 Fixes to federation_client dev script (#14479)
* Attempt to fix federation-client devscript handling of .well-known

The script was setting the wrong value in the Host header

* Fix TLS verification

Turns out that actually doing TLS verification isn't that hard. Let's enable
it.
2022-11-20 17:41:17 +00:00
David Robertson
e1b15f25f3 Fix /key/v2/server calls with URL-unsafe key IDs (#14490)
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2022-11-18 19:56:42 +00:00
Sean Quah
78e23eea05 Reduce default third party invite rate limit to 216 invites per day (#14487)
The previous default was the same as the `rc_message` rate limit, which
defaults to 17,280 per day.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-11-18 18:10:01 +00:00
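
Where these per-day numbers come from: Synapse rate limits are expressed as a sustained per-second rate (plus a burst), so the daily ceiling is rate * 86,400. The 0.0025 figure below is an assumption chosen to match the stated ceiling, not a quoted config value.

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# rc_message defaults to 0.2 messages per second sustained:
assert round(0.2 * SECONDS_PER_DAY) == 17_280

# A per-second rate of 0.0025 yields the new 216-invites-per-day default:
assert round(0.0025 * SECONDS_PER_DAY) == 216
```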
Andrew Morgan
ae22e6e94f Enable 'strict_equality' checking for mypy (#14452) 2022-11-17 18:34:09 +00:00
David Robertson
01a0527892 Fix version that worker_main_http_uri is redundant from (#14476)
* Fix version that `worker_main_http_uri` is redundant from

* Changelog
2022-11-17 16:11:08 +00:00
Andrew Morgan
e7132c3f81 Fix check to ignore blank lines in incoming TCP replication (#14449) 2022-11-17 16:09:56 +00:00
Mathieu Velten
75888c2b1f Faster joins: do not wait for full state when creating events to send (#14403)
Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
2022-11-17 17:01:14 +01:00
David Robertson
115f0eb233 Reintroduce #14376, with bugfix for monoliths (#14468)
* Add tests for StreamIdGenerator

* Drive-by: annotate all defs

* Revert "Revert "Remove slaved id tracker (#14376)" (#14463)"

This reverts commit d63814fd73, which in
turn reverted 36097e88c4. This restores
the latter.

* Fix StreamIdGenerator not handling unpersisted IDs

Spotted by @erikjohnston.

Closes #14456.

* Changelog

Co-authored-by: Nick Mills-Barrett <nick@fizzadar.com>
Co-authored-by: Erik Johnston <erik@matrix.org>
2022-11-16 22:16:46 +00:00
realtyem
c15e9a0edb Remove need for worker_main_http_uri setting to use /keys/upload. (#14400) 2022-11-16 22:16:25 +00:00
Erik Johnston
a84744fba0 Merge branch 'release-v1.72' into develop 2022-11-16 18:22:04 +00:00
Erik Johnston
7f44f3aee3 Update changelog 2022-11-16 16:58:03 +00:00
Erik Johnston
f0d18772f3 Point to our deprecation policy 2022-11-16 16:37:22 +00:00
Erik Johnston
e6b5ca1a9f Update changelog 2022-11-16 16:32:56 +00:00
Andrew Morgan
618e4ab81b Fix an invalid comparison of UserPresenceState to str (#14393) 2022-11-16 15:25:35 +00:00
Patrick Cloke
d8cc86eff4 Remove redundant types from comments. (#14412)
Remove type hints from comments which have been added
as Python type hints. This helps avoid drift between comments
and reality, as well as removing redundant information.

Also adds some missing type hints which were simple to fill in.
2022-11-16 15:25:24 +00:00
Erik Johnston
1a8cd8bec0 1.72.0rc1 2022-11-16 15:11:06 +00:00
Sean Quah
882277008c Fix background updates failing to add unique indexes on receipts (#14453)
As part of the database migration to support threaded receipts, there is
a possible window in between
`73/08thread_receipts_non_null.sql.postgres` removing the original
unique constraints on `receipts_linearized` and `receipts_graph` and the
`receipts_linearized_unique_index` and `receipts_graph_unique_index`
background updates from `72/08thread_receipts.sql` completing, during which
the unique constraints on `receipts_linearized` and `receipts_graph` are
missing. Any emulated upserts on these tables must therefore be
performed with a lock held, otherwise duplicate rows can end up in the
tables when there are concurrent emulated upserts. Fix the missing lock.

Note that emulated upserts no longer happen by default on sqlite, since
the minimum supported version of sqlite supports native upserts by
default now.

Finally, clean up any duplicate receipts that may have crept in before
trying to create the `receipts_graph_unique_index` and
`receipts_linearized_unique_index` unique indexes.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-11-16 15:01:22 +00:00
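
A sketch of the cleanup step described above (schematic SQL with assumed column names): keep only the newest receipt per (room, receipt type, user) so the unique index can be created.

```python
DEDUP_RECEIPTS_SQL = """
DELETE FROM receipts_linearized AS r
WHERE r.stream_id < (
    SELECT MAX(r2.stream_id)
    FROM receipts_linearized AS r2
    WHERE r2.room_id = r.room_id
      AND r2.receipt_type = r.receipt_type
      AND r2.user_id = r.user_id
)
"""
```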
Erik Johnston
d63814fd73 Revert "Remove slaved id tracker (#14376)" (#14463)
This reverts commit 36097e88c4.
2022-11-16 13:50:07 +00:00
Erik Johnston
945a0928c7 Don't filter state in /context response (#14461)
We don't filter state usually, so doing so here is a waste of time. This is not much of an issue for clients that enable lazy loading of members, since there will be fewer state events.
2022-11-16 12:09:33 +00:00
Andrew Morgan
f844b470f6 Fix stub return type of PushRuleEvaluator.run (#14451) 2022-11-16 12:03:05 +00:00
Erik Johnston
5cb6ad3b87 Fix HTML templates missing correct HTML tags (#14448) 2022-11-16 11:14:38 +00:00
David Robertson
1eed795fc5 Include heroes in partial join responses' state (#14442)
* Pull out hero selection logic

* Include heroes in partial join response's state

* Changelog

* Fixup trial test

* Remove TODO
2022-11-15 17:35:19 +00:00
David Robertson
258b5285b6 Fix typechecking errors introduced in #14128 (#14455)
* Fix typechecking errors introduced in #14128

* Changelog

* Correct annotations

so that context_factory works if you don't use TLS
2022-11-15 16:36:43 +00:00
DeepBlueV7.X
63cc56affa Send content rules with pattern_type to clients (#14356) 2022-11-15 15:29:30 +00:00
Tuomas Ojamies
b5ab2c428a Support using SSL on worker endpoints. (#14128)
* Fix missing SSL support in worker endpoints.

* Add changelog

* SSL for Replication endpoint

* Remove unit test change

* Refactor listener creation to reduce duplicated code

* Fix the logger message

* Update synapse/app/_base.py

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>

* Update synapse/app/_base.py

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>

* Update synapse/app/_base.py

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>

* Add config documentation for new TLS option

Co-authored-by: Tuomas Ojamies <tojamies@palantir.com>
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
2022-11-15 12:55:00 +00:00
reivilibre
634359b083 Update docstring to clarify that get_partial_state_events_batch does not just give you completely arbitrary partial-state events. (#14417) 2022-11-15 10:43:17 +00:00
sando38
64dd8a9c6e Include additional TURN server example into documentation (#14293)
* Include eturnal TURN server configuration example

and move specific configuration examples into subfolders.

* Update docs/turn-howto.md

Co-authored-by: Dirk Klimpel <5740567+dklimpel@users.noreply.github.com>

* Update docs/setup/turn/coturn.md

Co-authored-by: Dirk Klimpel <5740567+dklimpel@users.noreply.github.com>

* Update docs/setup/turn/eturnal.md

Co-authored-by: Dirk Klimpel <5740567+dklimpel@users.noreply.github.com>

* Fix TURN relaying public IP address hint

* lint eturnal installation commands

* Adjust synapse setup to link to existing documentation

...to avoid redundant information.

* remove redundant text

* include alpine linux package link

* Create 14293.doc

* Update 14293.doc

add missing dot

* Update docs/setup/turn/eturnal.md

Co-authored-by: reivilibre <olivier@librepush.net>

* Update docs/setup/turn/eturnal.md

Co-authored-by: reivilibre <olivier@librepush.net>

* Update docs/setup/turn/coturn.md

Co-authored-by: Moritz Dietz <moritzdietz@users.noreply.github.com>

* Update docs/setup/turn/coturn.md

Co-authored-by: Moritz Dietz <moritzdietz@users.noreply.github.com>

* Update docs/setup/turn/coturn.md

Co-authored-by: Moritz Dietz <moritzdietz@users.noreply.github.com>

* Update docs/setup/turn/eturnal.md

Co-authored-by: reivilibre <olivier@librepush.net>

* Update docs/setup/turn/coturn.md

Co-authored-by: Moritz Dietz <moritzdietz@users.noreply.github.com>

* Update docs/setup/turn/coturn.md

Co-authored-by: Moritz Dietz <moritzdietz@users.noreply.github.com>

* Update eturnal.md to link to official documentation

... and to simplify some aspects

* Adjust coturn to link to default prefix

* Mention eturnalctl location

* Update docs/turn-howto.md

Co-authored-by: Saarko <sandomir@tutanotal.com>
Co-authored-by: Dirk Klimpel <5740567+dklimpel@users.noreply.github.com>
Co-authored-by: reivilibre <olivier@librepush.net>
Co-authored-by: Moritz Dietz <moritzdietz@users.noreply.github.com>
2022-11-14 17:55:10 +00:00
Nick Mills-Barrett
36097e88c4 Remove slaved id tracker (#14376)
This matches the multi instance writer ID generator class which can
both handle advancing the current token over replication and by calling
the database.
2022-11-14 17:31:36 +00:00
dependabot[bot]
e226513c0f Bump jsonschema from 4.16.0 to 4.17.0 (#14439)
* Bump jsonschema from 4.16.0 to 4.17.0

Bumps [jsonschema](https://github.com/python-jsonschema/jsonschema) from 4.16.0 to 4.17.0.
- [Release notes](https://github.com/python-jsonschema/jsonschema/releases)
- [Changelog](https://github.com/python-jsonschema/jsonschema/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/python-jsonschema/jsonschema/compare/v4.16.0...v4.17.0)

---
updated-dependencies:
- dependency-name: jsonschema
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-14 17:17:29 +00:00
dependabot[bot]
4d1de6a944 Bump flake8-comprehensions from 3.8.0 to 3.10.1 (#14438)
* Bump flake8-comprehensions from 3.8.0 to 3.10.1

Bumps [flake8-comprehensions](https://github.com/adamchainz/flake8-comprehensions) from 3.8.0 to 3.10.1.
- [Release notes](https://github.com/adamchainz/flake8-comprehensions/releases)
- [Changelog](https://github.com/adamchainz/flake8-comprehensions/blob/main/HISTORY.rst)
- [Commits](https://github.com/adamchainz/flake8-comprehensions/compare/3.8.0...3.10.1)

---
updated-dependencies:
- dependency-name: flake8-comprehensions
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-14 17:17:19 +00:00
dependabot[bot]
4a333d638b Bump types-pyopenssl from 22.0.10 to 22.1.0.2 (#14437)
* Bump types-pyopenssl from 22.0.10 to 22.1.0.2

Bumps [types-pyopenssl](https://github.com/python/typeshed) from 22.0.10 to 22.1.0.2.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pyopenssl
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-14 17:16:53 +00:00
dependabot[bot]
2cecb782c4 Bump canonicaljson from 1.6.3 to 1.6.4 (#14440)
* Bump canonicaljson from 1.6.3 to 1.6.4

Bumps [canonicaljson](https://github.com/matrix-org/python-canonicaljson) from 1.6.3 to 1.6.4.
- [Release notes](https://github.com/matrix-org/python-canonicaljson/releases)
- [Changelog](https://github.com/matrix-org/python-canonicaljson/blob/main/CHANGES.md)
- [Commits](https://github.com/matrix-org/python-canonicaljson/compare/v1.6.3...v1.6.4)

---
updated-dependencies:
- dependency-name: canonicaljson
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-14 16:35:08 +00:00
dependabot[bot]
ae54a94063 Bump types-setuptools from 65.5.0.2 to 65.5.0.3 (#14436)
* Bump types-setuptools from 65.5.0.2 to 65.5.0.3

Bumps [types-setuptools](https://github.com/python/typeshed) from 65.5.0.2 to 65.5.0.3.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-setuptools
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-14 16:34:00 +00:00
Erik Johnston
6816300588 Make Dependabot only bump Rust deps in the lock file (#14434)
This is to help downstream packagers.
2022-11-14 14:45:17 +00:00
David Robertson
2cc592584a Remove unused type-ignores (#14433)
* Remove unused type-ignores

Oversights in #14427 and #14429.

* Changelog
2022-11-14 13:46:29 +00:00
Patrick Cloke
fb66fae84b Clean-up events persistance code (#14411)
By removing unused variables and making some always-provided
arguments required.
2022-11-14 08:13:11 -05:00
dependabot[bot]
95f7a65a56 Bump gitpython from 3.1.27 to 3.1.29 (#14429)
* Bump gitpython from 3.1.27 to 3.1.29

Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.27 to 3.1.29.
- [Release notes](https://github.com/gitpython-developers/GitPython/releases)
- [Changelog](https://github.com/gitpython-developers/GitPython/blob/main/CHANGES)
- [Commits](https://github.com/gitpython-developers/GitPython/compare/3.1.27...3.1.29)

---
updated-dependencies:
- dependency-name: gitpython
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-14 12:15:35 +00:00
dependabot[bot]
683bf4af4b Bump types-pyyaml from 6.0.12.1 to 6.0.12.2 (#14428)
* Bump types-pyyaml from 6.0.12.1 to 6.0.12.2

Bumps [types-pyyaml](https://github.com/python/typeshed) from 6.0.12.1 to 6.0.12.2.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pyyaml
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-14 12:10:40 +00:00
dependabot[bot]
8e38d74313 Bump attrs from 21.4.0 to 22.1.0 (#14427)
* Bump attrs from 21.4.0 to 22.1.0

Bumps [attrs](https://github.com/python-attrs/attrs) from 21.4.0 to 22.1.0.
- [Release notes](https://github.com/python-attrs/attrs/releases)
- [Changelog](https://github.com/python-attrs/attrs/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/python-attrs/attrs/compare/21.4.0...22.1.0)

---
updated-dependencies:
- dependency-name: attrs
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-14 12:07:44 +00:00
dependabot[bot]
b7f5a3aaa6 Bump flake8 from 4.0.1 to 5.0.4 (#14431)
* Bump flake8 from 4.0.1 to 5.0.4

Bumps [flake8](https://github.com/pycqa/flake8) from 4.0.1 to 5.0.4.
- [Release notes](https://github.com/pycqa/flake8/releases)
- [Commits](https://github.com/pycqa/flake8/compare/4.0.1...5.0.4)

---
updated-dependencies:
- dependency-name: flake8
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-14 10:41:55 +00:00
dependabot[bot]
cc45808ea3 Bump types-jsonschema from 4.17.0.0 to 4.17.0.1 (#14430)
* Bump types-jsonschema from 4.17.0.0 to 4.17.0.1

Bumps [types-jsonschema](https://github.com/python/typeshed) from 4.17.0.0 to 4.17.0.1.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-jsonschema
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-14 10:41:31 +00:00
dependabot[bot]
fec1e2cb52 Bump blake2 from 0.10.4 to 0.10.5 (#14426)
* Bump blake2 from 0.10.4 to 0.10.5

Bumps [blake2](https://github.com/RustCrypto/hashes) from 0.10.4 to 0.10.5.
- [Release notes](https://github.com/RustCrypto/hashes/releases)
- [Commits](https://github.com/RustCrypto/hashes/compare/blake2-v0.10.4...blake2-v0.10.5)

---
updated-dependencies:
- dependency-name: blake2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-14 10:39:55 +00:00
dependabot[bot]
639780fc15 Bump actions/upload-artifact from 2 to 3 (#14425)
* Bump actions/upload-artifact from 2 to 3

Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 2 to 3.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-14 10:39:38 +00:00
dependabot[bot]
2e7c86c129 Bump dawidd6/action-download-artifact from 2.24.1 to 2.24.2 (#14424)
* Bump dawidd6/action-download-artifact from 2.24.1 to 2.24.2

Bumps [dawidd6/action-download-artifact](https://github.com/dawidd6/action-download-artifact) from 2.24.1 to 2.24.2.
- [Release notes](https://github.com/dawidd6/action-download-artifact/releases)
- [Commits](b12b127cf2...e6e25ac3a2)

---
updated-dependencies:
- dependency-name: dawidd6/action-download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-14 10:39:09 +00:00
Brad Jones
334a8324d3 Update sample Nginx configuration to HTTP 1.1 (#14414)
Signed-off-by: Brad Jones <brad@kinksters.dating>
2022-11-11 17:28:05 +00:00
Ashish Kumar
a3623af74e Add an Admin API endpoint for looking up users based on 3PID (#14405) 2022-11-11 15:38:17 +00:00
Nick Mills-Barrett
3a4f80f8c6 Merge/remove Slaved* stores into WorkerStores (#14375) 2022-11-11 10:51:49 +00:00
Patrick Cloke
13ca8bb2fc Remove duplicated code to evict entries. (#14410)
This code was factored out to a method, but also left in-place.

Calling this twice in a row makes no sense: the first call will reduce
the size appropriately, but the loop will immediately exit since the
cache size was already reduced.
2022-11-10 15:33:34 -05:00
Sean Quah
b2c2b03079 Fix PostgreSQL sometimes using table scans for event_search (#14409)
PostgreSQL may underestimate the number of distinct `room_id`s in
`event_search`, which can cause it to use table scans for queries for
multiple rooms.

Fix this by setting `n_distinct` on the column.

Resolves #14402.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-11-10 19:02:27 +00:00
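
The statistics override, as real Postgres syntax (the value below is illustrative; the figure Synapse settled on may differ). A negative n_distinct is a fraction of the table's rows; a positive one is an absolute count.

```python
SET_N_DISTINCT_SQL = """
ALTER TABLE event_search
    ALTER COLUMN room_id SET (n_distinct = -0.01)
"""
# A subsequent "ANALYZE event_search" makes the planner pick this up,
# steering it away from table scans for multi-room queries.
```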
David Robertson
d10a85ec9e Quieter logging for stateres failure at missing prev events (#14346) 2022-11-10 12:17:46 +00:00
Patrick Cloke
e9a4343cb2 Drop support for Postgres 10 in full text search code. (#14397) 2022-11-09 09:55:34 -05:00
dependabot[bot]
21447c9102 Bump dawidd6/action-download-artifact from 2.24.0 to 2.24.1 (#14398)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
Co-authored-by: reivilibre <oliverw@matrix.org>
2022-11-09 12:16:12 +00:00
realtyem
e9cbddc8e7 Modernize configure_workers_and_start.py bootstrapping script for Dockerfile-workers. (#14294) 2022-11-09 12:02:15 +00:00
Sean Quah
0cf48f2d5f Build Debian packages for Ubuntu 22.10 Kinetic Kudu (#14396)
Signed-off-by: Sean Quah <seanq@matrix.org>
2022-11-09 10:33:13 +00:00
Sean Quah
22d46db0ea Test against PostgreSQL 15 in CI (#14394)
Resolves #14170.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-11-09 10:32:52 +00:00
Sean Quah
a5fcdea090 Remove support for PostgreSQL 10 (#14392)
Signed-off-by: Sean Quah <seanq@matrix.org>
2022-11-08 17:17:13 +00:00
realtyem
d85cba1aa0 Add all Stream Writer worker types to configure_workers_and_start.py (#14197)
Co-authored-by: reivilibre <oliverw@matrix.org>
2022-11-08 13:14:00 +00:00
Sean Quah
5853d798a1 Merge branch 'master' into develop 2022-11-08 13:07:27 +00:00
realtyem
69814eb282 Allow override for requesting specific worker types for Complement on command line. (#14324)
* Expose getting SYNAPSE_WORKER_TYPES from external, allowing override of workers requested.

* Add WORKER_TYPES variable option to complement.sh script that passes requested workers into start_for_complement.sh entrypoint.

* Update docs to reflect this new ability.

* Changelog

* Don't rely on soft wrapping to format long strings

Good idea dklimpel. Thanks for catching that.

Co-authored-by: Dirk Klimpel <5740567+dklimpel@users.noreply.github.com>

* Small nits just noticed in docs.

* Fixup new line in docs.

Co-authored-by: Dirk Klimpel <5740567+dklimpel@users.noreply.github.com>
2022-11-08 12:34:09 +00:00
Sean Quah
f0dec49f01 Update CHANGES.md to mention PostgreSQL 10 end of life 2022-11-08 10:59:36 +00:00
Sean Quah
1d1ab0e41f Update CHANGES.md 2022-11-08 10:40:34 +00:00
Sean Quah
404404733c 1.71.0 2022-11-08 10:38:16 +00:00
Shay
7894251bce Correctly create power level event during initial room creation (#14361) 2022-11-07 13:38:50 -08:00
Richard van der Hoff
2193513346 Fix background update table-scanning events (#14374)
When this background update did its last batch, it would try to update all the
events that had been inserted since the bgupdate started, which could cause a
table-scan. Make sure we limit the update correctly.
2022-11-07 14:28:00 +00:00
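
A schematic version of the fix (assumed names, not Synapse's actual background-update code): capture the maximum stream ordering once, when the update starts, so the final batch never chases events inserted after that point.

```python
def next_batch(txn, last_pos: int, max_pos: int, batch_size: int = 100):
    # max_pos is fixed at update start; without the "<= %s" bound the
    # last batch would table-scan over newly inserted events.
    txn.execute(
        """
        SELECT stream_ordering FROM events
        WHERE stream_ordering > %s AND stream_ordering <= %s
        ORDER BY stream_ordering
        LIMIT %s
        """,
        (last_pos, max_pos, batch_size),
    )
    return [row[0] for row in txn.fetchall()]
```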
aceArt-GmbH
42f9d414c2 Add example on how to load balance /sync requests (#14297)
Signed-off-by: lukas <lukas.walter@aceart.de>

Signed-off-by: lukas <lukas.walter@aceart.de>
2022-11-07 13:51:53 +00:00
Sean Quah
e980982b59 Do not reject /sync requests with unrecognised filter fields (#14369)
For forward compatibility, Synapse needs to ignore fields it does not
recognise instead of raising an error.

Fixes #14365.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-11-07 13:49:31 +00:00
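
A self-contained sketch of the forward-compatibility rule (not Synapse's actual filter code; the field names are a small illustrative subset): pull out the fields you understand and drop the rest, instead of rejecting the whole filter.

```python
KNOWN_FIELDS = {"limit", "types", "not_types", "senders", "not_senders"}

def parse_filter(filter_json: dict) -> dict:
    # Before the fix: raise on any unrecognised key. After: ignore it.
    return {k: v for k, v in filter_json.items() if k in KNOWN_FIELDS}

parsed = parse_filter({"limit": 10, "org.example.custom_field": True})
assert parsed == {"limit": 10}
```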
dependabot[bot]
233fc6e279 Bump types-jsonschema from 4.4.6 to 4.17.0.0 (#14386)
* Bump types-jsonschema from 4.4.6 to 4.17.0.0

Bumps [types-jsonschema](https://github.com/python/typeshed) from 4.4.6 to 4.17.0.0.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-jsonschema
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-07 10:29:26 +00:00
dependabot[bot]
bd70fc1a3c Bump types-pyyaml from 6.0.12 to 6.0.12.1 (#14385)
* Bump types-pyyaml from 6.0.12 to 6.0.12.1

Bumps [types-pyyaml](https://github.com/python/typeshed) from 6.0.12 to 6.0.12.1.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pyyaml
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-07 10:29:16 +00:00
dependabot[bot]
a2a44e53a6 Bump cryptography from 36.0.1 to 38.0.3 (#14384)
* Bump cryptography from 36.0.1 to 38.0.3

Bumps [cryptography](https://github.com/pyca/cryptography) from 36.0.1 to 38.0.3.
- [Release notes](https://github.com/pyca/cryptography/releases)
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/36.0.1...38.0.3)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-07 10:29:08 +00:00
dependabot[bot]
6ac9b5c9a5 Bump pillow from 9.2.0 to 9.3.0 (#14383)
* Bump pillow from 9.2.0 to 9.3.0

Bumps [pillow](https://github.com/python-pillow/Pillow) from 9.2.0 to 9.3.0.
- [Release notes](https://github.com/python-pillow/Pillow/releases)
- [Changelog](https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst)
- [Commits](https://github.com/python-pillow/Pillow/compare/9.2.0...9.3.0)

---
updated-dependencies:
- dependency-name: pillow
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-07 10:28:50 +00:00
dependabot[bot]
7deee6763c Bump types-setuptools from 65.5.0.1 to 65.5.0.2 (#14382)
* Bump types-setuptools from 65.5.0.1 to 65.5.0.2

Bumps [types-setuptools](https://github.com/python/typeshed) from 65.5.0.1 to 65.5.0.2.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-setuptools
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-07 10:28:29 +00:00
dependabot[bot]
b03b5a5a4f Bump pyo3 from 0.17.2 to 0.17.3 (#14381)
* Bump pyo3 from 0.17.2 to 0.17.3

Bumps [pyo3](https://github.com/pyo3/pyo3) from 0.17.2 to 0.17.3.
- [Release notes](https://github.com/pyo3/pyo3/releases)
- [Changelog](https://github.com/PyO3/pyo3/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pyo3/pyo3/compare/v0.17.2...v0.17.3)

---
updated-dependencies:
- dependency-name: pyo3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-07 10:28:19 +00:00
dependabot[bot]
1df4260620 Bump regex from 1.6.0 to 1.7.0 (#14380)
* Bump regex from 1.6.0 to 1.7.0

Bumps [regex](https://github.com/rust-lang/regex) from 1.6.0 to 1.7.0.
- [Release notes](https://github.com/rust-lang/regex/releases)
- [Changelog](https://github.com/rust-lang/regex/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/regex/compare/1.6.0...1.7.0)

---
updated-dependencies:
- dependency-name: regex
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-07 10:28:08 +00:00
dependabot[bot]
04359f92f2 Bump peaceiris/actions-mdbook from 1.1.14 to 1.2.0 (#14379)
* Bump peaceiris/actions-mdbook from 1.1.14 to 1.2.0

Bumps [peaceiris/actions-mdbook](https://github.com/peaceiris/actions-mdbook) from 1.1.14 to 1.2.0.
- [Release notes](https://github.com/peaceiris/actions-mdbook/releases)
- [Changelog](https://github.com/peaceiris/actions-mdbook/blob/main/CHANGELOG.md)
- [Commits](https://github.com/peaceiris/actions-mdbook/compare/v1.1.14...adeb05db28a0c0004681db83893d56c0388ea9ea)

---
updated-dependencies:
- dependency-name: peaceiris/actions-mdbook
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-07 10:27:52 +00:00
dependabot[bot]
b2a1e75431 Bump dawidd6/action-download-artifact from 2.15.0 to 2.24.0 (#14378)
* Bump dawidd6/action-download-artifact from 2.15.0 to 2.24.0

Bumps [dawidd6/action-download-artifact](https://github.com/dawidd6/action-download-artifact) from 2.15.0 to 2.24.0.
- [Release notes](https://github.com/dawidd6/action-download-artifact/releases)
- [Commits](af92a8455a...46b4ae883b)

---
updated-dependencies:
- dependency-name: dawidd6/action-download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-07 10:27:39 +00:00
dependabot[bot]
8bcdd712b8 Bump flake8-bugbear from 22.9.23 to 22.10.27 (#14329)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
2022-11-04 18:43:14 +00:00
Brendan Abolivier
bb39fc4366 Fix the trigger path for deploying documentation PRs (#14370)
This was missed from #12947
2022-11-04 18:33:01 +00:00
Michael Telatynski
79b6c19321 Upload documentation PRs to Netlify (#12947)
Signed-off-by: Michael Telatynski <7t3chguy@gmail.com>
Co-authored-by: Erik Johnston <erik@matrix.org>
Co-authored-by: David Robertson <davidr@element.io>
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
2022-11-04 17:08:11 +00:00
Tulir Asokan
a4b1f64562 Fix /refresh endpoint version (#14364) 2022-11-04 16:43:51 +00:00
Sean Quah
e5d18956b9 Merge tag 'v1.71.0rc2' into develop
Synapse 1.71.0rc2 (2022-11-04)
==============================

Please note that, as announced in the release notes for Synapse 1.69.0, legacy Prometheus metric names are now disabled by default.
They will be removed altogether in Synapse 1.73.0.
If not already done, server administrators should update their dashboards and alerting rules to avoid using the deprecated metric names.
See the [upgrade notes](https://matrix-org.github.io/synapse/v1.71/upgrade.html#upgrading-to-v1710) for more details.

Improved Documentation
----------------------

- Document the changes to monthly active user metrics due to deprecation of legacy Prometheus metric names. ([\#14358](https://github.com/matrix-org/synapse/issues/14358), [\#14360](https://github.com/matrix-org/synapse/issues/14360))

Deprecations and Removals
-------------------------

- Disable legacy Prometheus metric names by default. They can still be re-enabled for now, but they will be removed altogether in Synapse 1.73.0. ([\#14353](https://github.com/matrix-org/synapse/issues/14353))

Internal Changes
----------------

- Run unit tests against Python 3.11. ([\#13812](https://github.com/matrix-org/synapse/issues/13812))
2022-11-04 15:22:06 +00:00
Sean Quah
af592d7d4c Update CHANGES.md 2022-11-04 12:13:10 +00:00
Sean Quah
b00294b8b1 1.71.0rc2 2022-11-04 12:01:17 +00:00
David Robertson
78909f5028 Include monthly active user metrics in the list of legacy metrics names (#14360) 2022-11-04 10:45:01 +00:00
David Robertson
2e2cffe1a2 Cherry-pick "Run trial tests against Python 3.11 (#13812)" and fixup commit
4f5d492cd6a9438de03d1b768f4c220cb662ac06

The release branch CI is failing because poetry seems unable to install
wrapt 1.13.3 when run under CPython 3.11. Develop has already bumped
wrapt for 3.11 compatibility. Cherry-pick that commit here to try and
get CI going again.
2022-11-03 21:37:17 +00:00
Will Hunt
b1379a7ca8 Update legacy synapse_admin_mau: metric names in docs (#14358)
* Rename legacy metrics in MAU docs

* changelog
2022-11-03 20:47:20 +00:00
Brendan Abolivier
86c5a710d8 Implement MSC3912: Relation-based redactions (#14260)
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
2022-11-03 16:21:31 +00:00
David Robertson
e5cd278f3f Use maintained action to install Rust in latest deps/twisted trunk jobs (#14351)
* Use maintained action to install Rust

Part of #14203. Like the changes in #14313.

* Changelog
2022-11-02 23:19:57 +00:00
reivilibre
6546308c1e Disable legacy Prometheus metric names by default. They can still be re-enabled for now, but they will be removed altogether in Synapse 1.73.0. (#14353) 2022-11-02 17:33:45 +00:00
Kat Gerasimova
19a57f4a37 Fix issue automation for Needs-Info (#14343)
Run when an issue is labelled with X-Needs-Info only. Add to triage board.

Use the itemId output by actions/add-to-project to run the mutation that updates the field value (i.e. moves the issue to the right column).
2022-11-01 19:26:15 +00:00
David Robertson
d4fac8a3e2 Fix typo in #13320 which could cause log spam (#14347) 2022-11-01 19:20:35 +00:00
Patrick Cloke
59ca73006c Enable testing MSC3874 in complement. (#14339) 2022-11-01 13:26:28 -04:00
David Robertson
2bd7f3eeab Allow PUT/GET of aliases during faster join (#14292)
without blocking on full state.
2022-11-01 15:02:39 +00:00
David Robertson
2b56aaa0b8 Merge branch 'release-v1.71' into develop 2022-11-01 14:43:52 +00:00
dependabot[bot]
1dd16e96c8 Bump twisted from 22.8.0 to 22.10.0 (#14340)
* Bump twisted from 22.8.0 to 22.10.0

Bumps [twisted](https://github.com/twisted/twisted) from 22.8.0 to 22.10.0.
- [Release notes](https://github.com/twisted/twisted/releases)
- [Changelog](https://github.com/twisted/twisted/blob/trunk/NEWS.rst)
- [Commits](https://github.com/twisted/twisted/compare/twisted-22.8.0...twisted-22.10.0)

---
updated-dependencies:
- dependency-name: twisted
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-11-01 14:31:12 +00:00
David Robertson
a62c796f63 Deal with another batch of GHA warning messages (#14313) 2022-11-01 13:58:39 +00:00
David Robertson
efdcb24328 Revert a testing commit from #13812
It (4f5d492cd6a9438de03d1b768f4c220cb662ac06) should have been reverted before the merge to develop.
2022-11-01 13:12:22 +00:00
David Robertson
5905ba12d0 Run trial tests against Python 3.11 (#13812) 2022-11-01 13:07:54 +00:00
David Robertson
051402d1df Adjust changelog 2022-11-01 12:33:19 +00:00
David Robertson
ddbba28d52 1.71.0rc1 2022-11-01 12:10:51 +00:00
David Robertson
9473ebb9e7 Revert "Fix event size checks (#13710)"
This reverts commit fab495a9e1.

As noted in
https://github.com/matrix-org/synapse/pull/13710#issuecomment-1298396007:

> We want to see this change land for the protocol's sake (and plan to
  un-revert it) but want to give this a little more time before releasing
  this.
2022-11-01 11:47:09 +00:00
reivilibre
b922b54b61 Fix type annotation causing import time error in the Complement forking launcher. (#14084)
Co-authored-by: David Robertson <davidr@element.io>
2022-11-01 10:30:43 +00:00
David Robertson
dbfc9b803e Fix dehydrated device REST checks (#14336) 2022-10-31 20:31:43 +00:00
Quentin Gliech
cc3a52b33d Support OIDC backchannel logouts (#11414)
If configured, an OIDC IdP can log a user's session out of
Synapse when the user logs out of the identity provider.

The IdP sends a request directly to Synapse (and must be
configured with an endpoint) when a user logs out.
2022-10-31 13:07:30 -04:00
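
For context, the back-channel logout flow boils down to the IdP POSTing a signed `logout_token` JWT to an endpoint on the homeserver. A minimal sketch of such a request; the endpoint path shown is an assumption for illustration, so check the Synapse docs for the real one:

```python
import requests  # third-party HTTP library, used here purely for illustration

# The IdP terminates a session by POSTing a signed JWT ("logout_token")
# to the homeserver's back-channel logout endpoint. The URL below is an
# assumption, not a documented guarantee.
resp = requests.post(
    "https://synapse.example.com/_synapse/client/oidc/backchannel_logout",
    data={"logout_token": "<signed JWT issued by the IdP>"},
    timeout=10,
)
print(resp.status_code)
```
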
dependabot[bot]
15bdb0da52 Bump sentry-sdk from 1.5.11 to 1.10.1 (#14330)
* Bump sentry-sdk from 1.5.11 to 1.10.1

Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 1.5.11 to 1.10.1.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/1.5.11...1.10.1)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-31 13:43:19 +00:00
dependabot[bot]
b2890369cd Bump psycopg2 from 2.9.4 to 2.9.5 (#14331)
* Bump psycopg2 from 2.9.4 to 2.9.5

Bumps [psycopg2](https://github.com/psycopg/psycopg2) from 2.9.4 to 2.9.5.
- [Release notes](https://github.com/psycopg/psycopg2/releases)
- [Changelog](https://github.com/psycopg/psycopg2/blob/master/NEWS)
- [Commits](https://github.com/psycopg/psycopg2/commits)

---
updated-dependencies:
- dependency-name: psycopg2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-31 13:34:00 +00:00
dependabot[bot]
278f8543be Bump twine from 3.8.0 to 4.0.1 (#14332)
* Bump twine from 3.8.0 to 4.0.1

Bumps [twine](https://github.com/pypa/twine) from 3.8.0 to 4.0.1.
- [Release notes](https://github.com/pypa/twine/releases)
- [Changelog](https://github.com/pypa/twine/blob/main/docs/changelog.rst)
- [Commits](https://github.com/pypa/twine/compare/3.8.0...4.0.1)

---
updated-dependencies:
- dependency-name: twine
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-31 13:32:04 +00:00
dependabot[bot]
00d108fce4 Bump black from 22.3.0 to 22.10.0 (#14328)
* Bump black from 22.3.0 to 22.10.0

Bumps [black](https://github.com/psf/black) from 22.3.0 to 22.10.0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/compare/22.3.0...22.10.0)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-31 13:29:14 +00:00
David Robertson
2bb2c32e8e Avoid incrementing bg process utime/stime counters by negative durations (#14323) 2022-10-31 13:02:07 +00:00
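
The underlying fix pattern is simple: a cumulative CPU-time sample can occasionally come back smaller than the previous one, and a Prometheus counter must never decrease. A minimal sketch of the clamping idea (not the actual Synapse code):

```python
def clamped_delta(prev_total: float, new_total: float) -> float:
    # utime/stime samples can occasionally go "backwards" (e.g. clock
    # quirks), and Prometheus counters must be monotonic, so treat a
    # negative delta as zero instead of propagating it.
    return max(0.0, new_total - prev_total)

assert clamped_delta(10.0, 9.8) == 0.0   # backwards sample is ignored
assert clamped_delta(10.0, 10.5) == 0.5  # normal forward progress
```
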
Andrew Morgan
7911e2835d Prevent federation user keys query from returning device names if disallowed (#14304) 2022-10-28 18:06:02 +01:00
David Robertson
730b13dbc9 Improve RawHeaders type hints (#14303) 2022-10-28 16:04:02 +00:00
Patrick Cloke
81815e0561 Switch search SQL to triple-quote strings. (#14311)
For ease of reading, we switch from concatenated strings to
triple-quoted strings.
2022-10-28 11:44:10 -04:00
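
For illustration, a before/after sketch of the readability change described above (the table and column names are stand-ins, not the exact statements touched by #14311):

```python
# Before: adjacent string literals are easy to break when edited
# (dropping a trailing space silently fuses two SQL keywords).
sql_concat = (
    "SELECT event_id FROM event_search "
    "WHERE vector @@ websearch_to_tsquery('english', ?)"
)

# After: a triple-quoted string keeps the statement readable as a block.
sql_triple = """
    SELECT event_id FROM event_search
    WHERE vector @@ websearch_to_tsquery('english', ?)
"""
```
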
Andrew Morgan
453914b472 Merge branch 'master' into develop 2022-10-28 16:30:54 +01:00
Olivier Wilkinson (reivilibre)
1335367ca7 Merge branch 'master' into develop 2022-10-28 15:59:51 +01:00
Dirk Klimpel
44f0d573cf Add docs for an empty trusted_key_servers config option (#13999)
* Add docs for an empty `trusted_key_servers` config option

* small rewording

* Tweak changelog
2022-10-28 13:55:03 +01:00
Eric Eastwood
aa70556699 Check appservice user interest against the local users instead of all users (get_users_in_room mis-use) (#13958) 2022-10-27 18:29:23 +00:00
Patrick Cloke
67583281e3 Fix tests for a behaviour change in PostgreSQL 14. (#14310)
PostgreSQL 14 changed the behaviour of `websearch_to_tsquery` to
improve its handling of certain queries.

The tests were hitting those edge cases around the handling of hanging double
quotes. This fixes the tests to take the PostgreSQL version into account.
2022-10-27 13:58:12 +00:00
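
A hedged sketch of the per-version branching such a test needs; the helper name and the returned strings are placeholders, not the real fixtures from #14310:

```python
def expected_tsquery(pg_major_version: int, raw_query: str) -> str:
    """Pick the per-version expected output for a search test.

    PostgreSQL 14 changed how websearch_to_tsquery parses hanging double
    quotes, so a single hard-coded expectation no longer works across
    versions. The returned strings here are placeholders.
    """
    if pg_major_version >= 14:
        return f"pg14+ parse of {raw_query!r}"
    return f"pre-pg14 parse of {raw_query!r}"
```
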
Dirk Klimpel
1357ae869f Add workers settings to configuration manual (#14086)
* Add workers settings to configuration manual
* Update `pusher_instances`
* update url to python logger
* update headlines
* update links after headline change
* remove link from `daemon process`

There are no docs in Synapse for this

* extend example for `federation_sender_instances` and `pusher_instances`
* more infos about stream writers
* add link to DAG
* update `pusher_instances`
* update `worker_listeners`
* update `stream_writers`
* Update `worker_name`

Co-authored-by: David Robertson <davidr@element.io>
2022-10-27 14:39:47 +01:00
Mathieu Velten
4dc05f3019 Fix presence bug introduced in 1.64 by #13313 (#14243)
* Fix presence bug introduced in 1.64 by #13313

Signed-off-by: Mathieu Velten <mathieuv@matrix.org>

* Add changelog

* Add DISTINCT

* Apply suggestions from code review

Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
2022-10-27 13:16:00 +01:00
David Robertson
cbe01ccc3f Reject history insertion during partial joins (#14291) 2022-10-27 10:52:23 +01:00
Eric Eastwood
40fa8294e3 Refactor MSC3030 /timestamp_to_event to move away from our snowflake pull from destination pattern (#14096)
1. `federation_client.timestamp_to_event(...)` now handles all `destination` looping and uses our generic `_try_destination_list(...)` helper.
 2. Consistently handling `NotRetryingDestination` and `FederationDeniedError` across `get_pdu` , backfill, and the generic `_try_destination_list` which is used for many places we use this pattern.
 3. `get_pdu(...)` now returns `PulledPduInfo` so we know which `destination` we ended up pulling the PDU from
2022-10-26 16:10:55 -05:00
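
A simplified sketch of the "try each destination" pattern this refactor consolidates; the names and exception classes below are approximations of those mentioned above, not Synapse's actual signatures:

```python
from typing import Awaitable, Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

class NotRetryingDestination(Exception):
    """Placeholder for the real Synapse exception of the same name."""

class FederationDeniedError(Exception):
    """Placeholder for the real Synapse exception of the same name."""

async def try_destination_list(
    destinations: Iterable[str],
    callback: Callable[[str], Awaitable[T]],
) -> Optional[T]:
    # Ask each candidate server in turn, skipping servers we are currently
    # backing off from or that federation rules deny, and return the first
    # successful response.
    for destination in destinations:
        try:
            return await callback(destination)
        except (NotRetryingDestination, FederationDeniedError):
            continue
    return None
```
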
David Robertson
0d59ae706a Use poetry 1.2 for complement in latest deps (#14305) 2022-10-26 17:22:26 +01:00
Ashish Kumar
0cfbb35131 fix broken avatar checks when server_name contains a port (#13927)
Fixes check_avatar_size_and_mime_type() so avatars can be updated successfully on homeservers running on non-default ports, which it would previously mistakenly treat as remote homeservers while validating the avatar's size and MIME type.

Signed-off-by: Ashish Kumar ashfame@users.noreply.github.com
2022-10-26 15:51:23 +01:00
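
The gist of the fix: an `mxc://` URI's authority must be compared against the homeserver's full `server_name`, which may legitimately include a port. An illustrative sketch, not the code from #13927:

```python
def is_local_media(mxc_uri: str, our_server_name: str) -> bool:
    """Decide whether an mxc:// URI points at this homeserver.

    our_server_name may include a port (e.g. "example.com:8448"), so we
    compare the full authority rather than just the hostname.
    """
    if not mxc_uri.startswith("mxc://"):
        raise ValueError("not an mxc URI")
    authority, _, _media_id = mxc_uri[len("mxc://"):].partition("/")
    return authority == our_server_name

assert is_local_media("mxc://example.com:8448/abc", "example.com:8448")
assert not is_local_media("mxc://other.example/abc", "example.com:8448")
```
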
Olivier Wilkinson (reivilibre)
86b7d9b886 Merge branch 'master' into develop 2022-10-26 13:05:09 +01:00
Quentin Gliech
8756d5c87e Save login tokens in database (#13844)
* Save login tokens in database

Signed-off-by: Quentin Gliech <quenting@element.io>

* Add upgrade notes

* Track login token reuse in a Prometheus metric

Signed-off-by: Quentin Gliech <quenting@element.io>
2022-10-26 11:45:41 +01:00
James Salter
d902181de9 Unified search query syntax using the full-text search capabilities of the underlying DB. (#11635)
Support a unified search query syntax which leverages more of the full-text
search of each database supported by Synapse.

Supports, with the same syntax across PostgreSQL 11+ and SQLite:

- quoted "search terms"
- `AND`, `OR`, `-` (negation) operators
- Matching words based on their stem, e.g. searches for "dog" matches
  documents containing "dogs". 

This is achieved by:

- If on PostgreSQL 11+, passing the user input to `websearch_to_tsquery`
- If on SQLite, manually parsing the query and transforming it into the SQLite-specific
  query syntax.

Note that PostgreSQL 10, which is close to end-of-life, falls back to using
`phraseto_tsquery`, which only supports a subset of the features.

Multiple terms separated by a space are implicitly ANDed.

Note that:

1. There is no escaping of full-text syntax that might be supported by the database;
  e.g. `NOT`, `NEAR`, `*` in SQLite. This runs the risk that people might discover this
  as accidental functionality and depend on something we don't guarantee.
2. English text is assumed for stemming. To support other languages, either the target
  language needs to be known at the time of indexing the message (via room metadata,
  or otherwise), or a separate index for each language supported could be created.

SQLite docs: https://www.sqlite.org/fts3.html#full_text_index_queries
Postgres docs: https://www.postgresql.org/docs/11/textsearch-controls.html
2022-10-25 14:05:22 -04:00
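
A rough sketch of the dispatch described above; the real parser in #11635 tokenises the query and handles quoting, `AND`/`OR` and negation properly, so this shows only the shape of the approach:

```python
def make_search_clause(engine: str, pg_version: int, user_query: str):
    """Choose the DB-specific full-text construct for a user query.

    Illustrative only: real code must also escape the input and map
    quoted phrases and the "-" negation operator for SQLite.
    """
    if engine == "postgres":
        if pg_version >= 11:
            return "websearch_to_tsquery('english', ?)", (user_query,)
        # PostgreSQL 10 fallback: only a subset of the syntax is supported.
        return "phraseto_tsquery('english', ?)", (user_query,)
    # SQLite FTS: naively AND the words together.
    terms = " AND ".join(f'"{word}"' for word in user_query.split())
    return "event_search MATCH ?", (terms,)

print(make_search_clause("postgres", 14, 'quick "brown fox"'))
print(make_search_clause("sqlite", 0, "quick fox"))
```
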
Olivier Wilkinson (reivilibre)
85fcbba595 Merge branch 'release-v1.70' into develop 2022-10-25 15:39:35 +01:00
Quentin Gliech
9192d74b0b Refactor OIDC tests to better mimic an actual OIDC provider. (#13910)
This implements a fake OIDC server, which intercepts calls to the HTTP client.
Improves accuracy of tests by covering more internal methods.

One particular example is the ID token validation, which was previously mocked.

This uncovered an incorrect dependency: Synapse actually requires at least
authlib 0.15.1, not 0.14.0.
2022-10-25 14:25:02 +00:00
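
A minimal sketch of the fake-server idea: an object that answers the HTTP requests the OIDC client would otherwise send over the network. This is illustrative of the approach, not Synapse's actual test helper:

```python
class FakeOidcServer:
    """A stand-in IdP that answers the HTTP requests the code under test
    would otherwise send over the network."""

    def __init__(self, issuer: str) -> None:
        self.issuer = issuer

    async def handle_request(self, method: str, uri: str, **kwargs):
        # Serve the discovery document; a fuller fake would also serve
        # /token and /jwks and sign real ID tokens for validation.
        if uri == f"{self.issuer}/.well-known/openid-configuration":
            return {
                "issuer": self.issuer,
                "authorization_endpoint": f"{self.issuer}/authorize",
                "token_endpoint": f"{self.issuer}/token",
                "jwks_uri": f"{self.issuer}/jwks",
            }
        raise AssertionError(f"unexpected request: {method} {uri}")

fake_idp = FakeOidcServer("https://idp.example.com")
# In a test, one would patch the homeserver's HTTP client (e.g. with
# unittest.mock.patch) so outgoing calls land in fake_idp.handle_request.
```
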
DeepBlueV7.X
2d0ba3f89a Implementation for MSC3664: Pushrules for relations (#11804) 2022-10-25 14:38:01 +01:00
Nick Mills-Barrett
c9dffd5b33 Remove unused @lru_cache decorator (#13595)
* Remove unused `@lru_cache` decorator

Spotted this working on something else.

Co-authored-by: David Robertson <davidr@element.io>
2022-10-25 11:39:25 +01:00
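
The lint rule behind removals like this (flake8-bugbear's B019, enabled in the `.flake8` diff further down) exists because `functools.lru_cache` on a method keeps `self` alive in a class-level cache. A small sketch of the leak and one common alternative:

```python
import functools
from typing import Dict

class Leaky:
    # Anti-pattern (flake8-bugbear B019): the cache lives on the class and
    # every entry holds a strong reference to `self`, so instances stay
    # alive for as long as their results are cached.
    @functools.lru_cache(maxsize=128)
    def compute(self, x: int) -> int:
        return x * 2

class Fixed:
    # One alternative: a per-instance cache, garbage collected together
    # with the object that owns it.
    def __init__(self) -> None:
        self._cache: Dict[int, int] = {}

    def compute(self, x: int) -> int:
        if x not in self._cache:
            self._cache[x] = x * 2
        return self._cache[x]
```
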
Erik Johnston
d125919963 Cache rust build deps in trial CI (#14287) 2022-10-25 11:27:56 +01:00
asymmetric
8c94dd3a27 Enable WAL for SQLite (#13897)
Signed-off-by: Lorenzo Manacorda <lorenzo@mailbox.org>
2022-10-25 10:22:55 +01:00
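
For reference, enabling WAL is a one-line pragma on the connection; a minimal sketch (the database path is illustrative):

```python
import sqlite3

conn = sqlite3.connect("homeserver.db")  # path is illustrative
# WAL lets readers run concurrently with a single writer. The journal mode
# is persistent: once set, it sticks to the database file across reopens.
conn.execute("PRAGMA journal_mode=WAL;")
print(conn.execute("PRAGMA journal_mode;").fetchone())  # -> ('wal',)
```
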
Ryan Miguel
19c0e55ef7 Return NOT_JSON if decode fails and defer set_timeline_upper_limit ca… (#14262)
* Return NOT_JSON if decode fails and defer set_timeline_upper_limit call until after check_valid_filter. Fixes #13661. Signed-off-by: Ryan Miguel <miguel.ryanj@gmail.com>.

* Reword changelog
2022-10-24 16:55:06 +01:00
dependabot[bot]
872ea2f4de Bump serde_json from 1.0.86 to 1.0.87 (#14279) 2022-10-24 14:08:22 +01:00
dependabot[bot]
386e72a22d Bump peaceiris/actions-gh-pages from 3.8.0 to 3.9.0 (#14276) 2022-10-24 10:16:33 +00:00
dependabot[bot]
c6987f65fe Bump peaceiris/actions-mdbook from 1.1.14 to 1.2.0 (#14275) 2022-10-24 10:13:29 +00:00
Richard van der Hoff
1469fed0e3 Add debugging to help diagnose lost device-list-update (#14268) 2022-10-24 10:45:10 +01:00
dependabot[bot]
6c82b3759f Bump pysaml2 from 7.1.2 to 7.2.1 (#14270) 2022-10-24 10:40:30 +01:00
dependabot[bot]
94f239d911 Bump jinja2 from 3.0.3 to 3.1.2 (#14271) 2022-10-24 10:40:08 +01:00
dependabot[bot]
673970bb5a Bump types-requests from 2.28.11 to 2.28.11.2 (#14272) 2022-10-24 10:39:16 +01:00
dependabot[bot]
cb76892c7d Bump setuptools-rust from 1.5.1 to 1.5.2 (#14273) 2022-10-24 10:39:00 +01:00
dependabot[bot]
cd02bfc026 Bump prometheus-client from 0.14.0 to 0.15.0 (#14274) 2022-10-24 10:38:40 +01:00
dependabot[bot]
5f06488418 Bump anyhow from 1.0.65 to 1.0.66 (#14278) 2022-10-24 10:20:13 +01:00
dependabot[bot]
278b530875 Bump serde from 1.0.145 to 1.0.147 (#14277) 2022-10-24 10:19:55 +01:00
Shay
b7a7ff6ee3 Add initial power level event to batch of bulk persisted events when creating a new room. (#14228) 2022-10-21 10:46:22 -07:00
Germain
1d45ad8b2a Improve aesthetics and reusability of HTML templates. (#13652)
Use a base template to create a cohesive feel across the HTML
templates provided by Synapse.

Adds basic styling to the base template for a more user-friendly
look and feel.
2022-10-21 17:44:00 +00:00
Richard van der Hoff
d24346f530 Fix logging error on SIGHUP (#14258) 2022-10-21 16:03:44 +01:00
Tadeusz Sośnierz
1433b5d5b6 Show erasure status when listing users in the Admin API (#14205)
* Show erasure status when listing users in the Admin API

* Use USING when joining erased_users

* Add changelog entry

* Revert "Use USING when joining erased_users"

This reverts commit 30bd2bf106415caadcfdbdd1b234ef2b106cc394.

* Make the erased check work on postgres

* Add a testcase for showing erased user status

* Appease the style linter

* Explicitly convert `erased` to bool to make SQLite consistent with Postgres

This also gives us an easy way in to fix the other columns that were accidentally left as integers.

* Move erasure status test to UsersListTestCase

* Include user erased status when fetching user info via the admin API

* Document the erase status in user_admin_api

* Appease the linter and mypy

* Signpost comments in tests

Co-authored-by: Tadeusz Sośnierz <tadeusz@sosnierz.com>
Co-authored-by: David Robertson <david.m.robertson1@gmail.com>
2022-10-21 13:52:44 +01:00
DeepBlueV7.X
fab495a9e1 Fix event size checks (#13710) 2022-10-21 09:49:47 +01:00
David Robertson
cacda2d1f5 Build wheels on macos 11, not 10.15 (#14249) 2022-10-20 22:01:08 +00:00
Patrick Cloke
755bfeee3a Use servlets for /key/ endpoints. (#14229)
To fix the response for unknown endpoints under that prefix.

See MSC3743.
2022-10-20 11:32:47 -04:00
Andrew Morgan
da2c93d4b6 Stop returning unsigned.invite_room_state in PUT /_matrix/federation/v2/invite/{roomId}/{eventId} responses (#14064)
Co-authored-by: David Robertson <davidr@element.io>
2022-10-20 15:17:45 +01:00
Erik Johnston
09c602b558 Merge branch 'release-v1.70' into develop 2022-10-20 09:47:04 +01:00
Eric Eastwood
70b3396506 Explain SynapseError and FederationError better (#14191)
Explain `SynapseError` and `FederationError` better

Spawning from https://github.com/matrix-org/synapse/pull/13816#discussion_r993262622
2022-10-19 15:39:43 -05:00
dependabot[bot]
3841900aaa Bump types-opentracing from 2.4.7 to 2.4.10 (#14133)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
Co-authored-by: reivilibre <oliverw@matrix.org>
2022-10-19 20:04:40 +00:00
dependabot[bot]
0b7830e457 Bump flake8-bugbear from 21.3.2 to 22.9.23 (#14042)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Erik Johnston <erik@matrix.org>
Co-authored-by: David Robertson <davidr@element.io>
2022-10-19 19:38:24 +00:00
Matthew Hodgson
695a85d1bc Document encryption_enabled_by_default_for_room_type under the right name (#14110)
* document encryption_enabled_by_default_for_room_type under the right name

* add changelog

* Update changelog.d/14110.doc
2022-10-19 20:17:37 +01:00
Finn
fe50738e59 let update_synapse_database run on multi-database configurations (#13422)
* Allow sharded database in db migrate script

Signed-off-by: Finn Herzfeld <finn@beeper.com>

* Update changelog.d/13422.bugfix

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>

* Remove check entirely

* remove unused import

Signed-off-by: Finn Herzfeld <finn@beeper.com>
Co-authored-by: finn <finn@beeper.com>
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2022-10-19 19:08:40 +01:00
Will Hunt
04d7f56f53 Use backend-meta edition of issue triage workflow (#14230) 2022-10-19 11:41:25 +01:00
351 changed files with 13716 additions and 5965 deletions


@@ -46,7 +46,7 @@ if not IS_PR:
"database": "sqlite",
"extras": "all",
}
for version in ("3.8", "3.9", "3.10")
for version in ("3.8", "3.9", "3.10", "3.11")
)
@@ -54,7 +54,7 @@ trial_postgres_tests = [
{
"python-version": "3.7",
"database": "postgres",
"postgres-version": "10",
"postgres-version": "11",
"extras": "all",
}
]
@@ -62,9 +62,9 @@ trial_postgres_tests = [
if not IS_PR:
trial_postgres_tests.append(
{
"python-version": "3.10",
"python-version": "3.11",
"database": "postgres",
"postgres-version": "14",
"postgres-version": "15",
"extras": "all",
}
)


@@ -4,7 +4,7 @@
root = true
# 4 space indentation
-[*.py]
+[*.{py,pyi}]
indent_style = space
indent_size = 4
max_line_length = 88


@@ -8,4 +8,11 @@
# E203: whitespace before ':' (which is contrary to pep8?)
# E731: do not assign a lambda expression, use a def
# E501: Line too long (black enforces this for us)
-ignore=W503,W504,E203,E731,E501
+#
+# flake8-bugbear runs extra checks. Its error codes are described at
+# https://github.com/PyCQA/flake8-bugbear#list-of-warnings
+# B019: Use of functools.lru_cache or functools.cache on methods can lead to memory leaks
+# B023: Functions defined inside a loop must not use variables redefined in the loop
+# B024: Abstract base class with no abstract method.
+ignore=W503,W504,E203,E731,E501,B019,B023,B024


@@ -74,6 +74,36 @@ body:
- Debian packages from packages.matrix.org
- pip (from PyPI)
- Other (please mention below)
- I don't know
validations:
required: true
- type: input
id: database
attributes:
label: Database
description: |
Are you using SQLite or PostgreSQL? What's the version of your database?
If PostgreSQL, please also answer the following:
- are you using a single PostgreSQL server
or [separate servers for `main` and `state`](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#databases)?
- have you previously ported from SQLite using the Synapse "portdb" script?
- have you previously restored from a backup?
validations:
required: true
- type: dropdown
id: workers
attributes:
label: Workers
description: |
Are you running a single Synapse process, or are you running
[2 or more workers](https://matrix-org.github.io/synapse/latest/workers.html)?
options:
- Single process
- Multiple workers
- I don't know
validations:
required: true
- type: textarea
id: platform
attributes:
@@ -83,17 +113,28 @@ body:
e.g. distro, hardware, if it's running in a vm/container, etc.
validations:
required: true
- type: textarea
id: config
attributes:
label: Configuration
description: |
Do you have any unusual config options turned on? If so, please provide details.
- Experimental or undocumented features
- [Presence](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#presence)
- [Message retention](https://matrix-org.github.io/synapse/latest/message_retention_policies.html)
- [Synapse modules](https://matrix-org.github.io/synapse/latest/modules/index.html)
- type: textarea
id: logs
attributes:
label: Relevant log output
description: |
Please copy and paste any relevant log output, ideally at INFO or DEBUG log level.
-This will be automatically formatted into code, so there is no need for backticks.
+This will be automatically formatted into code, so there is no need for backticks (`\``).
Please be careful to remove any personal or private data.
-**Bug reports are usually very difficult to diagnose without logging.**
+**Bug reports are usually impossible to diagnose without logging.**
render: shell
validations:
required: true


@@ -18,5 +18,6 @@ updates:
- package-ecosystem: "cargo"
directory: "/"
versioning-strategy: "lockfile-only"
schedule:
interval: "weekly"

.github/workflows/docs-pr-netlify.yaml (new file)

@@ -0,0 +1,34 @@
name: Deploy documentation PR preview
on:
workflow_run:
workflows: [ "Prepare documentation PR preview" ]
types:
- completed
jobs:
netlify:
if: github.event.workflow_run.conclusion == 'success' && github.event.workflow_run.event == 'pull_request'
runs-on: ubuntu-latest
steps:
# There's a 'download artifact' action, but it hasn't been updated for the workflow_run action
# (https://github.com/actions/download-artifact/issues/60) so instead we get this mess:
- name: 📥 Download artifact
uses: dawidd6/action-download-artifact@e6e25ac3a2b93187502a8be1ef9e9603afc34925 # v2.24.2
with:
workflow: docs-pr.yaml
run_id: ${{ github.event.workflow_run.id }}
name: book
path: book
- name: 📤 Deploy to Netlify
uses: matrix-org/netlify-pr-preview@v1
with:
path: book
owner: ${{ github.event.workflow_run.head_repository.owner.login }}
branch: ${{ github.event.workflow_run.head_branch }}
revision: ${{ github.event.workflow_run.head_sha }}
token: ${{ secrets.NETLIFY_AUTH_TOKEN }}
site_id: ${{ secrets.NETLIFY_SITE_ID }}
desc: Documentation preview
deployment_env: PR Documentation Preview

.github/workflows/docs-pr.yaml (new file)

@@ -0,0 +1,34 @@
name: Prepare documentation PR preview
on:
pull_request:
paths:
- docs/**
jobs:
pages:
name: GitHub Pages
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Setup mdbook
uses: peaceiris/actions-mdbook@adeb05db28a0c0004681db83893d56c0388ea9ea # v1.2.0
with:
mdbook-version: '0.4.17'
- name: Build the documentation
# mdbook will only create an index.html if we're including docs/README.md in SUMMARY.md.
# However, we're using docs/README.md for other purposes and need to pick a new page
# as the default. Let's opt for the welcome page instead.
run: |
mdbook build
cp book/welcome_and_overview.html book/index.html
- name: Upload Artifact
uses: actions/upload-artifact@v3
with:
name: book
path: book
# We'll only use this in a workflow_run, then we're done with it
retention-days: 1


@@ -20,7 +20,7 @@ jobs:
- uses: actions/checkout@v3
- name: Setup mdbook
-uses: peaceiris/actions-mdbook@4b5ef36b314c2599664ca107bb8c02412548d79d # v1.1.14
+uses: peaceiris/actions-mdbook@adeb05db28a0c0004681db83893d56c0388ea9ea # v1.2.0
with:
mdbook-version: '0.4.17'
@@ -58,7 +58,7 @@ jobs:
# Deploy to the target directory.
- name: Deploy to gh pages
-uses: peaceiris/actions-gh-pages@068dc23d9710f1ba62e86896f84735d869951305 # v3.8.0
+uses: peaceiris/actions-gh-pages@de7ea6f8efb354206b205ef54722213d99067935 # v3.9.0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./book


@@ -27,10 +27,9 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Install Rust
-uses: actions-rs/toolchain@v1
+uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
with:
-toolchain: stable
-override: true
+toolchain: stable
- uses: Swatinem/rust-cache@v2
# The dev dependencies aren't exposed in the wheel metadata (at least with current
@@ -62,10 +61,9 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
-uses: actions-rs/toolchain@v1
+uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
with:
-toolchain: stable
-override: true
+toolchain: stable
- uses: Swatinem/rust-cache@v2
- run: sudo apt-get -qq install xmlsec1
@@ -136,10 +134,9 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
-uses: actions-rs/toolchain@v1
+uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
with:
-toolchain: stable
-override: true
+toolchain: stable
- uses: Swatinem/rust-cache@v2
- name: Ensure sytest runs `pip install`
@@ -211,7 +208,7 @@ jobs:
steps:
- uses: actions/checkout@v3
-- uses: JasonEtco/create-an-issue@5d9504915f79f9cc6d791934b8ef34f2353dd74d # v2.5.0, 2020-12-06
+- uses: JasonEtco/create-an-issue@77399b6110ef82b94c1c9f9f615acf9e604f7f56 # v2.5.0, 2020-12-06
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:


@@ -0,0 +1,74 @@
# This task does not run complement tests, see tests.yaml instead.
# This task does not build docker images for synapse for use on docker hub, see docker.yaml instead
name: Store complement-synapse image in ghcr.io
on:
push:
branches: [ "master" ]
schedule:
- cron: '0 5 * * *'
workflow_dispatch:
inputs:
branch:
required: true
default: 'develop'
type: choice
options:
- develop
- master
# Only run this action once per pull request/branch; restart if a new commit arrives.
# C.f. https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#concurrency
# and https://docs.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#github-context
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
name: Build and push complement image
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout specific branch (debug build)
uses: actions/checkout@v3
if: github.event_name == 'workflow_dispatch'
with:
ref: ${{ inputs.branch }}
- name: Checkout clean copy of develop (scheduled build)
uses: actions/checkout@v3
if: github.event_name == 'schedule'
with:
ref: develop
- name: Checkout clean copy of master (on-push)
uses: actions/checkout@v3
if: github.event_name == 'push'
with:
ref: master
- name: Login to registry
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Work out labels for complement image
id: meta
uses: docker/metadata-action@v4
with:
images: ghcr.io/${{ github.repository }}/complement-synapse
tags: |
type=schedule,pattern=nightly,enable=${{ github.event_name == 'schedule'}}
type=raw,value=develop,enable=${{ github.event_name == 'schedule' || inputs.branch == 'develop' }}
type=raw,value=latest,enable=${{ github.event_name == 'push' || inputs.branch == 'master' }}
type=sha,format=long
- name: Run scripts-dev/complement.sh to generate complement-synapse:latest image.
run: scripts-dev/complement.sh --build-only
- name: Tag and push generated image
run: |
for TAG in ${{ join(fromJson(steps.meta.outputs.json).tags, ' ') }}; do
echo "tag and push $TAG"
docker tag complement-synapse $TAG
docker push $TAG
done


@@ -99,7 +99,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
-os: [ubuntu-20.04, macos-10.15]
+os: [ubuntu-20.04, macos-11]
arch: [x86_64, aarch64]
# is_pr is a flag used to exclude certain jobs from the matrix on PRs.
# It is not read by the rest of the workflow.
@@ -109,9 +109,9 @@ jobs:
exclude:
# Don't build macos wheels on PR CI.
- is_pr: true
os: "macos-10.15"
os: "macos-11"
# Don't build aarch64 wheels on mac.
- os: "macos-10.15"
- os: "macos-11"
arch: aarch64
# Don't build aarch64 wheels on PR CI.
- is_pr: true


@@ -27,12 +27,15 @@ jobs:
rust:
- 'rust/**'
- 'Cargo.toml'
- 'Cargo.lock'
check-sampleconfig:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: "3.x"
- uses: matrix-org/setup-python-poetry@v1
with:
extras: "all"
@@ -44,6 +47,8 @@ jobs:
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: "3.x"
- run: "pip install 'click==8.1.1' 'GitPython>=3.1.20'"
- run: scripts-dev/check_schema_delta.py --force-colors
@@ -68,6 +73,8 @@ jobs:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- uses: actions/setup-python@v4
with:
python-version: "3.x"
- run: "pip install 'towncrier>=18.6.0rc1'"
- run: scripts-dev/check-newsfragment.sh
env:
@@ -93,14 +100,38 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
-uses: actions-rs/toolchain@v1
+# There don't seem to be versioned releases of this action per se: for each rust
+# version there is a branch which gets constantly rebased on top of master.
+# We pin to a specific commit for paranoia's sake.
+uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
with:
toolchain: 1.58.1
-override: true
components: clippy
- uses: Swatinem/rust-cache@v2
-- run: cargo clippy
+- run: cargo clippy -- -D warnings
# We also lint against a nightly rustc so that we can lint the benchmark
# suite, which requires a nightly compiler.
lint-clippy-nightly:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@v3
- name: Install Rust
# There don't seem to be versioned releases of this action per se: for each rust
# version there is a branch which gets constantly rebased on top of master.
# We pin to a specific commit for paranoia's sake.
uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
with:
toolchain: nightly-2022-12-01
components: clippy
- uses: Swatinem/rust-cache@v2
- run: cargo clippy --all-features -- -D warnings
lint-rustfmt:
runs-on: ubuntu-latest
@@ -111,11 +142,13 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
-uses: actions-rs/toolchain@v1
+# There don't seem to be versioned releases of this action per se: for each rust
+# version there is a branch which gets constantly rebased on top of master.
+# We pin to a specific commit for paranoia's sake.
+uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
with:
-toolchain: 1.58.1
-override: true
-components: rustfmt
+toolchain: 1.58.1
+components: rustfmt
- uses: Swatinem/rust-cache@v2
- run: cargo fmt --check
@@ -143,6 +176,8 @@ jobs:
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: "3.x"
- id: get-matrix
run: .ci/scripts/calculate_jobs.py
outputs:
@@ -167,6 +202,16 @@ jobs:
-e POSTGRES_PASSWORD=postgres \
-e POSTGRES_INITDB_ARGS="--lc-collate C --lc-ctype C --encoding UTF8" \
postgres:${{ matrix.job.postgres-version }}
- name: Install Rust
# There don't seem to be versioned releases of this action per se: for each rust
# version there is a branch which gets constantly rebased on top of master.
# We pin to a specific commit for paranoia's sake.
uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
with:
toolchain: 1.58.1
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: ${{ matrix.job.python-version }}
@@ -203,10 +248,12 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
-uses: actions-rs/toolchain@v1
+# There don't seem to be versioned releases of this action per se: for each rust
+# version there is a branch which gets constantly rebased on top of master.
+# We pin to a specific commit for paranoia's sake.
+uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
with:
toolchain: 1.58.1
-override: true
- uses: Swatinem/rust-cache@v2
# There aren't wheels for some of the older deps, so we need to install
@@ -319,10 +366,12 @@ jobs:
run: cat sytest-blacklist .ci/worker-blacklist > synapse-blacklist-with-workers
- name: Install Rust
-uses: actions-rs/toolchain@v1
+# There don't seem to be versioned releases of this action per se: for each rust
+# version there is a branch which gets constantly rebased on top of master.
+# We pin to a specific commit for paranoia's sake.
+uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
with:
toolchain: 1.58.1
-override: true
- uses: Swatinem/rust-cache@v2
- name: Run SyTest
@@ -383,10 +432,10 @@ jobs:
matrix:
include:
- python-version: "3.7"
postgres-version: "10"
postgres-version: "11"
- python-version: "3.10"
postgres-version: "14"
- python-version: "3.11"
postgres-version: "15"
services:
postgres:
@@ -404,6 +453,15 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Add PostgreSQL apt repository
# We need a version of pg_dump that can handle the version of
# PostgreSQL being tested against. The Ubuntu package repository lags
# behind new releases, so we have to use the PostgreSQL apt repository.
# Steps taken from https://www.postgresql.org/download/linux/ubuntu/
run: |
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
- run: sudo apt-get -qq install xmlsec1 postgresql-client
- uses: matrix-org/setup-python-poetry@v1
with:
@@ -451,10 +509,12 @@ jobs:
path: synapse
- name: Install Rust
-uses: actions-rs/toolchain@v1
+# There don't seem to be versioned releases of this action per se: for each rust
+# version there is a branch which gets constantly rebased on top of master.
+# We pin to a specific commit for paranoia's sake.
+uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
with:
toolchain: 1.58.1
-override: true
- uses: Swatinem/rust-cache@v2
- name: Prepare Complement's Prerequisites
@@ -477,10 +537,12 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
-uses: actions-rs/toolchain@v1
+# There don't seem to be versioned releases of this action per se: for each rust
+# version there is a branch which gets constantly rebased on top of master.
+# We pin to a specific commit for paranoia's sake.
+uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
with:
toolchain: 1.58.1
-override: true
- uses: Swatinem/rust-cache@v2
- run: cargo test


@@ -5,24 +5,11 @@ on:
types: [ opened ]
jobs:
-add_new_issues:
-name: Add new issues to the triage board
-runs-on: ubuntu-latest
-steps:
-- uses: octokit/graphql-action@v2.x
-id: add_to_project
-with:
-headers: '{"GraphQL-Features": "projects_next_graphql"}'
-query: |
-mutation add_to_project($projectid:ID!,$contentid:ID!) {
-addProjectV2ItemById(input: {projectId: $projectid contentId: $contentid}) {
-item {
-id
-}
-}
-}
-projectid: ${{ env.PROJECT_ID }}
-contentid: ${{ github.event.issue.node_id }}
-env:
-PROJECT_ID: "PVT_kwDOAIB0Bs4AFDdZ"
-GITHUB_TOKEN: ${{ secrets.ELEMENT_BOT_TOKEN }}
+triage:
+uses: matrix-org/backend-meta/.github/workflows/triage-incoming.yml@v1
+with:
+project_id: 'PVT_kwDOAIB0Bs4AFDdZ'
+content_id: ${{ github.event.issue.node_id }}
+secrets:
+github_access_token: ${{ secrets.ELEMENT_BOT_TOKEN }}


@@ -11,34 +11,34 @@ jobs:
if: >
contains(github.event.issue.labels.*.name, 'X-Needs-Info')
steps:
- uses: octokit/graphql-action@v2.x
id: add_to_project
- uses: actions/add-to-project@main
id: add_project
with:
headers: '{"GraphQL-Features": "projects_next_graphql"}'
query: |
mutation {
updateProjectV2ItemFieldValue(
input: {
projectId: $projectid
itemId: $contentid
fieldId: $fieldid
value: {
singleSelectOptionId: "Todo"
project-url: "https://github.com/orgs/matrix-org/projects/67"
github-token: ${{ secrets.ELEMENT_BOT_TOKEN }}
- name: Set status
env:
GITHUB_TOKEN: ${{ secrets.ELEMENT_BOT_TOKEN }}
run: |
gh api graphql -f query='
mutation(
$project: ID!
$item: ID!
$fieldid: ID!
$columnid: String!
) {
updateProjectV2ItemFieldValue(
input: {
projectId: $project
itemId: $item
fieldId: $fieldid
value: {
singleSelectOptionId: $columnid
}
}
) {
projectV2Item {
id
}
}
) {
projectV2Item {
id
}
}
projectid: ${{ env.PROJECT_ID }}
contentid: ${{ github.event.issue.node_id }}
fieldid: ${{ env.FIELD_ID }}
optionid: ${{ env.OPTION_ID }}
env:
PROJECT_ID: "PVT_kwDOAIB0Bs4AFDdZ"
GITHUB_TOKEN: ${{ secrets.ELEMENT_BOT_TOKEN }}
FIELD_ID: "PVTSSF_lADOAIB0Bs4AFDdZzgC6ZA4"
OPTION_ID: "ba22e43c"
}' -f project="PVT_kwDOAIB0Bs4AFDdZ" -f item=${{ steps.add_project.outputs.itemId }} -f fieldid="PVTSSF_lADOAIB0Bs4AFDdZzgC6ZA4" -f columnid=ba22e43c --silent


@@ -18,10 +18,9 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
-uses: actions-rs/toolchain@v1
+uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
with:
-toolchain: stable
-override: true
+toolchain: stable
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
@@ -44,10 +43,9 @@ jobs:
- run: sudo apt-get -qq install xmlsec1
- name: Install Rust
-uses: actions-rs/toolchain@v1
+uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
with:
-toolchain: stable
-override: true
+toolchain: stable
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
@@ -84,10 +82,9 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
-uses: actions-rs/toolchain@v1
+uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
with:
-toolchain: stable
-override: true
+toolchain: stable
- uses: Swatinem/rust-cache@v2
- name: Patch dependencies
@@ -151,12 +148,11 @@ jobs:
run: |
set -x
DEBIAN_FRONTEND=noninteractive sudo apt-get install -yqq python3 pipx
-pipx install poetry==1.1.14
+pipx install poetry==1.2.0
poetry remove -n twisted
poetry add -n --extras tls git+https://github.com/twisted/twisted.git#trunk
poetry lock --no-update
-# NOT IN 1.1.14 poetry lock --check
working-directory: synapse
- run: |
@@ -178,7 +174,7 @@ jobs:
steps:
- uses: actions/checkout@v3
-- uses: JasonEtco/create-an-issue@5d9504915f79f9cc6d791934b8ef34f2353dd74d # v2.5.0, 2020-12-06
+- uses: JasonEtco/create-an-issue@77399b6110ef82b94c1c9f9f615acf9e604f7f56 # v2.5.0, 2020-12-06
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:


@@ -1,3 +1,274 @@

Synapse 1.73.0rc2 (2022-12-01)
==============================

Please note that legacy Prometheus metric names have been removed in this release; see [the upgrade notes](https://github.com/matrix-org/synapse/blob/release-v1.73/docs/upgrade.md#legacy-prometheus-metric-names-have-now-been-removed) for more details.

Bugfixes
--------

- Fix a regression in Synapse 1.73.0rc1 where Synapse's main process would stop responding to HTTP requests when a user with a large number of devices logs in. ([\#14582](https://github.com/matrix-org/synapse/issues/14582))

Synapse 1.73.0rc1 (2022-11-29)
==============================

Features
--------

- Speed up `/messages` with `filter_events_for_client` optimizations. ([\#14527](https://github.com/matrix-org/synapse/issues/14527))
- Improve DB performance by reducing amount of data that gets read in `device_lists_changes_in_room`. ([\#14534](https://github.com/matrix-org/synapse/issues/14534))
- Add support for handling avatars in SSO login. Contributed by @ashfame. ([\#13917](https://github.com/matrix-org/synapse/issues/13917))
- Move MSC3030 `/timestamp_to_event` endpoints to stable `v1` location (`/_matrix/client/v1/rooms/<roomID>/timestamp_to_event?ts=<timestamp>&dir=<direction>`, `/_matrix/federation/v1/timestamp_to_event/<roomID>?ts=<timestamp>&dir=<direction>`). ([\#14471](https://github.com/matrix-org/synapse/issues/14471))
- Reduce database load of [Client-Server endpoints](https://spec.matrix.org/v1.5/client-server-api/#aggregations) which return bundled aggregations. ([\#14491](https://github.com/matrix-org/synapse/issues/14491), [\#14508](https://github.com/matrix-org/synapse/issues/14508), [\#14510](https://github.com/matrix-org/synapse/issues/14510))
- Add unstable support for an Extensible Events room version (`org.matrix.msc1767.10`) via [MSC1767](https://github.com/matrix-org/matrix-spec-proposals/pull/1767), [MSC3931](https://github.com/matrix-org/matrix-spec-proposals/pull/3931), [MSC3932](https://github.com/matrix-org/matrix-spec-proposals/pull/3932), and [MSC3933](https://github.com/matrix-org/matrix-spec-proposals/pull/3933). ([\#14520](https://github.com/matrix-org/synapse/issues/14520), [\#14521](https://github.com/matrix-org/synapse/issues/14521), [\#14524](https://github.com/matrix-org/synapse/issues/14524))
- Prune user's old devices on login if they have too many. ([\#14038](https://github.com/matrix-org/synapse/issues/14038), [\#14580](https://github.com/matrix-org/synapse/issues/14580))

Bugfixes
--------

- Fix a long-standing bug where paginating from the start of a room did not work. Contributed by @gnunicorn. ([\#14149](https://github.com/matrix-org/synapse/issues/14149))
- Fix a bug introduced in Synapse 1.58.0 where a user with presence state `org.matrix.msc3026.busy` would mistakenly be set to `online` when calling `/sync` or `/events` on a worker process. ([\#14393](https://github.com/matrix-org/synapse/issues/14393))
- Fix a bug introduced in Synapse 1.70.0 where a receipt's thread ID was not sent over federation. ([\#14466](https://github.com/matrix-org/synapse/issues/14466))
- Fix a long-standing bug where the [List media admin API](https://matrix-org.github.io/synapse/latest/admin_api/media_admin_api.html#list-all-media-in-a-room) would fail when processing an image with broken thumbnail information. ([\#14537](https://github.com/matrix-org/synapse/issues/14537))
- Fix a bug introduced in Synapse 1.67.0 where two logging context warnings would be logged on startup. ([\#14574](https://github.com/matrix-org/synapse/issues/14574))
- In application service transactions that include the experimental `org.matrix.msc3202.device_one_time_key_counts` key, include a duplicate key of `org.matrix.msc3202.device_one_time_keys_count` to match the name proposed by [MSC3202](https://github.com/matrix-org/matrix-spec-proposals/pull/3202). ([\#14565](https://github.com/matrix-org/synapse/issues/14565))
- Fix a bug introduced in Synapse 0.9 where Synapse would fail to fetch server keys whose IDs contain a forward slash. ([\#14490](https://github.com/matrix-org/synapse/issues/14490))

Improved Documentation
----------------------

- Fixed link to 'Synapse administration endpoints'. ([\#14499](https://github.com/matrix-org/synapse/issues/14499))

Deprecations and Removals
-------------------------

- Remove legacy Prometheus metrics names. They were deprecated in Synapse v1.69.0 and disabled by default in Synapse v1.71.0. ([\#14538](https://github.com/matrix-org/synapse/issues/14538))

Internal Changes
----------------

- Improve type hinting throughout Synapse. ([\#14055](https://github.com/matrix-org/synapse/issues/14055), [\#14412](https://github.com/matrix-org/synapse/issues/14412), [\#14529](https://github.com/matrix-org/synapse/issues/14529), [\#14452](https://github.com/matrix-org/synapse/issues/14452)).
- Remove old stream ID tracking code. Contributed by Nick @Beeper (@fizzadar). ([\#14376](https://github.com/matrix-org/synapse/issues/14376), [\#14468](https://github.com/matrix-org/synapse/issues/14468))
- Remove the `worker_main_http_uri` configuration setting. This is now handled via internal replication. ([\#14400](https://github.com/matrix-org/synapse/issues/14400), [\#14476](https://github.com/matrix-org/synapse/issues/14476))
- Refactor `federation_sender` and `pusher` configuration loading. ([\#14496](https://github.com/matrix-org/synapse/issues/14496))
([\#14509](https://github.com/matrix-org/synapse/issues/14509), [\#14573](https://github.com/matrix-org/synapse/issues/14573))
- Faster joins: do not wait for full state when creating events to send. ([\#14403](https://github.com/matrix-org/synapse/issues/14403))
- Faster joins: filter out non local events when a room doesn't have its full state. ([\#14404](https://github.com/matrix-org/synapse/issues/14404))
- Faster joins: send events to initial list of servers if we don't have the full state yet. ([\#14408](https://github.com/matrix-org/synapse/issues/14408))
- Faster joins: use servers list approximation received during `send_join` (potentially updated with received membership events) in `assert_host_in_room`. ([\#14515](https://github.com/matrix-org/synapse/issues/14515))
- Fix type logic in TCP replication code that prevented correctly ignoring blank commands. ([\#14449](https://github.com/matrix-org/synapse/issues/14449))
- Remove option to skip locking of tables when performing emulated upserts, to avoid a class of bugs in future. ([\#14469](https://github.com/matrix-org/synapse/issues/14469))
- `scripts-dev/federation_client`: Fix routing on servers with `.well-known` files. ([\#14479](https://github.com/matrix-org/synapse/issues/14479))
- Reduce default third party invite rate limit to 216 invites per day. ([\#14487](https://github.com/matrix-org/synapse/issues/14487))
- Refactor conversion of device list changes in room to outbound pokes to track unconverted rows using a `(stream ID, room ID)` position instead of updating the `converted_to_destinations` flag on every row. ([\#14516](https://github.com/matrix-org/synapse/issues/14516))
- Add more prompts to the bug report form. ([\#14522](https://github.com/matrix-org/synapse/issues/14522))
- Extend editorconfig rules on indent and line length to `.pyi` files. ([\#14526](https://github.com/matrix-org/synapse/issues/14526))
- Run Rust CI when `Cargo.lock` changes. This is particularly useful for dependabot updates. ([\#14571](https://github.com/matrix-org/synapse/issues/14571))
- Fix a possible variable shadow in `create_new_client_event`. ([\#14575](https://github.com/matrix-org/synapse/issues/14575))
- Bump various dependencies in the `poetry.lock` file and in CI scripts. ([\#14557](https://github.com/matrix-org/synapse/issues/14557), [\#14559](https://github.com/matrix-org/synapse/issues/14559), [\#14560](https://github.com/matrix-org/synapse/issues/14560), [\#14500](https://github.com/matrix-org/synapse/issues/14500), [\#14501](https://github.com/matrix-org/synapse/issues/14501), [\#14502](https://github.com/matrix-org/synapse/issues/14502), [\#14503](https://github.com/matrix-org/synapse/issues/14503), [\#14504](https://github.com/matrix-org/synapse/issues/14504), [\#14505](https://github.com/matrix-org/synapse/issues/14505)).

Synapse 1.72.0 (2022-11-22)
===========================

Please note that Synapse now only supports PostgreSQL 11+, because PostgreSQL 10 has reached end-of-life, c.f. our [Deprecation Policy](https://github.com/matrix-org/synapse/blob/develop/docs/deprecation_policy.md).

Bugfixes
--------

- Update forgotten references to legacy metrics in the included Grafana dashboard. ([\#14477](https://github.com/matrix-org/synapse/issues/14477))

Synapse 1.72.0rc1 (2022-11-16)
==============================

Features
--------

- Add experimental support for [MSC3912](https://github.com/matrix-org/matrix-spec-proposals/pull/3912): Relation-based redactions. ([\#14260](https://github.com/matrix-org/synapse/issues/14260))
- Build Debian packages for Ubuntu 22.10 (Kinetic Kudu). ([\#14396](https://github.com/matrix-org/synapse/issues/14396))
- Add an [Admin API](https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/index.html) endpoint for user lookup based on third-party ID (3PID). Contributed by @ashfame. ([\#14405](https://github.com/matrix-org/synapse/issues/14405))
- Faster joins: include heroes' membership events in the partial join response, for rooms without a name or canonical alias. ([\#14442](https://github.com/matrix-org/synapse/issues/14442))

Bugfixes
--------

- Faster joins: do not block creation of or queries for room aliases during the resync. ([\#14292](https://github.com/matrix-org/synapse/issues/14292))
- Fix a bug introduced in Synapse 1.64.0rc1 which could cause log spam when fetching events from other homeservers. ([\#14347](https://github.com/matrix-org/synapse/issues/14347))
- Fix a bug introduced in 1.66 which would not send certain pushrules to clients. Contributed by Nico. ([\#14356](https://github.com/matrix-org/synapse/issues/14356))
- Fix a bug introduced in v1.71.0rc1 where the power level event was incorrectly created during initial room creation. ([\#14361](https://github.com/matrix-org/synapse/issues/14361))
- Fix the refresh token endpoint to be under /r0 and /v3 instead of /v1. Contributed by Tulir @ Beeper. ([\#14364](https://github.com/matrix-org/synapse/issues/14364))
- Fix a long-standing bug where Synapse would raise an error when encountering an unrecognised field in a `/sync` filter, instead of ignoring it for forward compatibility. ([\#14369](https://github.com/matrix-org/synapse/issues/14369))
- Fix a background database update, introduced in Synapse 1.64.0, which could cause poor database performance. ([\#14374](https://github.com/matrix-org/synapse/issues/14374))
- Fix PostgreSQL sometimes using table scans for queries against the `event_search` table, taking a long time and a large amount of IO. ([\#14409](https://github.com/matrix-org/synapse/issues/14409))
- Fix rendering of some HTML templates (including emails). Introduced in v1.71.0. ([\#14448](https://github.com/matrix-org/synapse/issues/14448))
- Fix a bug introduced in Synapse 1.70.0 where the background updates to add non-thread unique indexes on receipts could fail when upgrading from 1.67.0 or earlier. ([\#14453](https://github.com/matrix-org/synapse/issues/14453))

Updates to the Docker image
---------------------------

- Add all Stream Writer worker types to `configure_workers_and_start.py`. ([\#14197](https://github.com/matrix-org/synapse/issues/14197))
- Remove references to legacy worker types in the multi-worker Dockerfile. ([\#14294](https://github.com/matrix-org/synapse/issues/14294))
Improved Documentation
----------------------
- Upload documentation PRs to Netlify. ([\#12947](https://github.com/matrix-org/synapse/issues/12947), [\#14370](https://github.com/matrix-org/synapse/issues/14370))
- Add an additional TURN server configuration example based on [eturnal](https://github.com/processone/eturnal) and adjust the general TURN server doc structure. ([\#14293](https://github.com/matrix-org/synapse/issues/14293))
- Add an example of how to load-balance `/sync` requests. Contributed by [aceArt](https://aceart.de). ([\#14297](https://github.com/matrix-org/synapse/issues/14297))
- Edit sample Nginx reverse proxy configuration to use HTTP/1.1. Contributed by Brad Jones. ([\#14414](https://github.com/matrix-org/synapse/issues/14414))
Deprecations and Removals
-------------------------
- Remove support for PostgreSQL 10. ([\#14392](https://github.com/matrix-org/synapse/issues/14392), [\#14397](https://github.com/matrix-org/synapse/issues/14397))
Internal Changes
----------------
- Run unit tests against Python 3.11. ([\#13812](https://github.com/matrix-org/synapse/issues/13812))
- Add TLS support for generic worker endpoints. ([\#14128](https://github.com/matrix-org/synapse/issues/14128), [\#14455](https://github.com/matrix-org/synapse/issues/14455))
- Switch to a maintained action for installing Rust in CI. ([\#14313](https://github.com/matrix-org/synapse/issues/14313))
- Add override ability to `complement.sh` command line script to request certain types of workers. ([\#14324](https://github.com/matrix-org/synapse/issues/14324))
- Enable testing of [MSC3874](https://github.com/matrix-org/matrix-spec-proposals/pull/3874) (filtering of `/messages` by relation type) in Complement. ([\#14339](https://github.com/matrix-org/synapse/issues/14339))
- Concisely log a failure to resolve state due to missing `prev_events`. ([\#14346](https://github.com/matrix-org/synapse/issues/14346))
- Use a maintained GitHub action to install Rust. ([\#14351](https://github.com/matrix-org/synapse/issues/14351))
- Clean up old worker datastore classes. Contributed by Nick @ Beeper (@fizzadar). ([\#14375](https://github.com/matrix-org/synapse/issues/14375))
- Test against PostgreSQL 15 in CI. ([\#14394](https://github.com/matrix-org/synapse/issues/14394))
- Remove unreachable code. ([\#14410](https://github.com/matrix-org/synapse/issues/14410))
- Clean up event persistence code. ([\#14411](https://github.com/matrix-org/synapse/issues/14411))
- Update docstring to clarify that `get_partial_state_events_batch` does not just give you completely arbitrary partial-state events. ([\#14417](https://github.com/matrix-org/synapse/issues/14417))
- Fix mypy errors introduced by bumping the locked version of `attrs` and `gitpython`. ([\#14433](https://github.com/matrix-org/synapse/issues/14433))
- Make Dependabot only bump Rust deps in the lock file. ([\#14434](https://github.com/matrix-org/synapse/issues/14434))
- Fix an incorrect stub return type for `PushRuleEvaluator.run`. ([\#14451](https://github.com/matrix-org/synapse/issues/14451))
- Improve performance of `/context` in large rooms. ([\#14461](https://github.com/matrix-org/synapse/issues/14461))
Synapse 1.71.0 (2022-11-08)
===========================
Please note that, as announced in the release notes for Synapse 1.69.0, legacy Prometheus metric names are now disabled by default.
They will be removed altogether in Synapse 1.73.0.
If not already done, server administrators should update their dashboards and alerting rules to avoid using the deprecated metric names.
See the [upgrade notes](https://matrix-org.github.io/synapse/v1.71/upgrade.html#upgrading-to-v1710) for more details.
**Note:** in line with our [deprecation policy](https://matrix-org.github.io/synapse/latest/deprecation_policy.html) for platform dependencies, this will be the last release to support PostgreSQL 10, which reaches upstream end-of-life on November 10th, 2022. Future releases of Synapse will require PostgreSQL 11+.
No significant changes since 1.71.0rc2.
Synapse 1.71.0rc2 (2022-11-04)
==============================
Improved Documentation
----------------------
- Document the changes to monthly active user metrics due to deprecation of legacy Prometheus metric names. ([\#14358](https://github.com/matrix-org/synapse/issues/14358), [\#14360](https://github.com/matrix-org/synapse/issues/14360))
Deprecations and Removals
-------------------------
- Disable legacy Prometheus metric names by default. They can still be re-enabled for now, but they will be removed altogether in Synapse 1.73.0. ([\#14353](https://github.com/matrix-org/synapse/issues/14353))
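For illustration, a server that still needs the old names during the transition can opt back in via the `enable_legacy_metrics` option in `homeserver.yaml` (a minimal sketch; see the upgrade notes for the authoritative description):

```yaml
# homeserver.yaml — re-enable legacy Prometheus metric names until their removal in 1.73.0.
enable_legacy_metrics: true
```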
Internal Changes
----------------
- Run unit tests against Python 3.11. ([\#13812](https://github.com/matrix-org/synapse/issues/13812))
Synapse 1.71.0rc1 (2022-11-01)
==============================
Features
--------
- Support back-channel logouts from OpenID Connect providers. ([\#11414](https://github.com/matrix-org/synapse/issues/11414))
- Allow use of Postgres and SQLite full-text search operators in search queries. ([\#11635](https://github.com/matrix-org/synapse/issues/11635), [\#14310](https://github.com/matrix-org/synapse/issues/14310), [\#14311](https://github.com/matrix-org/synapse/issues/14311))
- Implement [MSC3664](https://github.com/matrix-org/matrix-doc/pull/3664), Pushrules for relations. Contributed by Nico. ([\#11804](https://github.com/matrix-org/synapse/issues/11804))
- Improve aesthetics of HTML templates. Note that these changes do not retroactively apply to templates which have been [customised](https://matrix-org.github.io/synapse/latest/templates.html#templates) by server admins. ([\#13652](https://github.com/matrix-org/synapse/issues/13652))
- Enable write-ahead logging for SQLite installations. Contributed by [@asymmetric](https://github.com/asymmetric). ([\#13897](https://github.com/matrix-org/synapse/issues/13897))
- Show erasure status when [listing users](https://matrix-org.github.io/synapse/latest/admin_api/user_admin_api.html#query-user-account) in the Admin API. ([\#14205](https://github.com/matrix-org/synapse/issues/14205))
- Provide a specific error code when a `/sync` request provides a filter which doesn't represent a JSON object. ([\#14262](https://github.com/matrix-org/synapse/issues/14262))
Bugfixes
--------
- Fix a long-standing bug where the `update_synapse_database` script could not be run with multiple databases. Contributed by @thefinn93 @ Beeper. ([\#13422](https://github.com/matrix-org/synapse/issues/13422))
- Fix a bug which prevented setting an avatar on homeservers which have an explicit port in their `server_name` and have `max_avatar_size` and/or `allowed_avatar_mimetypes` configuration. Contributed by @ashfame. ([\#13927](https://github.com/matrix-org/synapse/issues/13927))
- Check appservice user interest against the local users instead of all users in the room to align with [MSC3905](https://github.com/matrix-org/matrix-spec-proposals/pull/3905). ([\#13958](https://github.com/matrix-org/synapse/issues/13958))
- Fix a long-standing bug where Synapse would accidentally include extra information in the response to [`PUT /_matrix/federation/v2/invite/{roomId}/{eventId}`](https://spec.matrix.org/v1.4/server-server-api/#put_matrixfederationv2inviteroomideventid). ([\#14064](https://github.com/matrix-org/synapse/issues/14064))
- Fix a bug introduced in Synapse 1.64.0 where presence updates could be missing from `/sync` responses. ([\#14243](https://github.com/matrix-org/synapse/issues/14243))
- Fix a bug introduced in Synapse 1.60.0 which caused an error to be logged when Synapse received a SIGHUP signal if debug logging was enabled. ([\#14258](https://github.com/matrix-org/synapse/issues/14258))
- Prevent history insertion ([MSC2716](https://github.com/matrix-org/matrix-spec-proposals/pull/2716)) during a partial join ([MSC3706](https://github.com/matrix-org/matrix-spec-proposals/pull/3706)). ([\#14291](https://github.com/matrix-org/synapse/issues/14291))
- Fix a bug introduced in Synapse 1.34.0 where device names would be returned via a federation user key query request when `allow_device_name_lookup_over_federation` was set to `false`. ([\#14304](https://github.com/matrix-org/synapse/issues/14304))
- Fix a bug introduced in Synapse 0.34.0 where logs could include error spam when background processes are measured as taking a negative amount of time. ([\#14323](https://github.com/matrix-org/synapse/issues/14323))
- Fix a bug introduced in Synapse 1.70.0 where clients were unable to PUT new [dehydrated devices](https://github.com/matrix-org/matrix-spec-proposals/pull/2697). ([\#14336](https://github.com/matrix-org/synapse/issues/14336))
Improved Documentation
----------------------
- Explain how to disable the use of [`trusted_key_servers`](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#trusted_key_servers). ([\#13999](https://github.com/matrix-org/synapse/issues/13999))
- Add workers settings to [configuration manual](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#individual-worker-configuration). ([\#14086](https://github.com/matrix-org/synapse/issues/14086))
- Correct the name of the config option [`encryption_enabled_by_default_for_room_type`](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#encryption_enabled_by_default_for_room_type). ([\#14110](https://github.com/matrix-org/synapse/issues/14110))
- Update docstrings of `SynapseError` and `FederationError` to better describe what they are used for and what the effects of using them are. ([\#14191](https://github.com/matrix-org/synapse/issues/14191))
Internal Changes
----------------
- Remove unused `@lru_cache` decorator. ([\#13595](https://github.com/matrix-org/synapse/issues/13595))
- Save login tokens in database and prevent login token reuse. ([\#13844](https://github.com/matrix-org/synapse/issues/13844))
- Refactor OIDC tests to better mimic an actual OIDC provider. ([\#13910](https://github.com/matrix-org/synapse/issues/13910))
- Fix type annotation causing import time error in the Complement forking launcher. ([\#14084](https://github.com/matrix-org/synapse/issues/14084))
- Refactor [MSC3030](https://github.com/matrix-org/matrix-spec-proposals/pull/3030) `/timestamp_to_event` endpoint to loop over federation destinations with standard pattern and error handling. ([\#14096](https://github.com/matrix-org/synapse/issues/14096))
- Add initial power level event to batch of bulk persisted events when creating a new room. ([\#14228](https://github.com/matrix-org/synapse/issues/14228))
- Refactor `/key/` endpoints to use `RestServlet` classes. ([\#14229](https://github.com/matrix-org/synapse/issues/14229))
- Switch to using the `matrix-org/backend-meta` version of `triage-incoming` for new issues in CI. ([\#14230](https://github.com/matrix-org/synapse/issues/14230))
- Build wheels on macOS 11, not 10.15. ([\#14249](https://github.com/matrix-org/synapse/issues/14249))
- Add debugging to help diagnose lost device list updates. ([\#14268](https://github.com/matrix-org/synapse/issues/14268))
- Add Rust cache to CI for `trial` runs. ([\#14287](https://github.com/matrix-org/synapse/issues/14287))
- Improve type hinting of `RawHeaders`. ([\#14303](https://github.com/matrix-org/synapse/issues/14303))
- Use Poetry 1.2.0 in the Twisted Trunk CI job. ([\#14305](https://github.com/matrix-org/synapse/issues/14305))
<details>
<summary>Dependency updates</summary>
Runtime:
- Bump anyhow from 1.0.65 to 1.0.66. ([\#14278](https://github.com/matrix-org/synapse/issues/14278))
- Bump jinja2 from 3.0.3 to 3.1.2. ([\#14271](https://github.com/matrix-org/synapse/issues/14271))
- Bump prometheus-client from 0.14.0 to 0.15.0. ([\#14274](https://github.com/matrix-org/synapse/issues/14274))
- Bump psycopg2 from 2.9.4 to 2.9.5. ([\#14331](https://github.com/matrix-org/synapse/issues/14331))
- Bump pysaml2 from 7.1.2 to 7.2.1. ([\#14270](https://github.com/matrix-org/synapse/issues/14270))
- Bump sentry-sdk from 1.5.11 to 1.10.1. ([\#14330](https://github.com/matrix-org/synapse/issues/14330))
- Bump serde from 1.0.145 to 1.0.147. ([\#14277](https://github.com/matrix-org/synapse/issues/14277))
- Bump serde_json from 1.0.86 to 1.0.87. ([\#14279](https://github.com/matrix-org/synapse/issues/14279))
Tooling and CI:
- Bump black from 22.3.0 to 22.10.0. ([\#14328](https://github.com/matrix-org/synapse/issues/14328))
- Bump flake8-bugbear from 21.3.2 to 22.9.23. ([\#14042](https://github.com/matrix-org/synapse/issues/14042))
- Bump peaceiris/actions-gh-pages from 3.8.0 to 3.9.0. ([\#14276](https://github.com/matrix-org/synapse/issues/14276))
- Bump peaceiris/actions-mdbook from 1.1.14 to 1.2.0. ([\#14275](https://github.com/matrix-org/synapse/issues/14275))
- Bump setuptools-rust from 1.5.1 to 1.5.2. ([\#14273](https://github.com/matrix-org/synapse/issues/14273))
- Bump twine from 3.8.0 to 4.0.1. ([\#14332](https://github.com/matrix-org/synapse/issues/14332))
- Bump types-opentracing from 2.4.7 to 2.4.10. ([\#14133](https://github.com/matrix-org/synapse/issues/14133))
- Bump types-requests from 2.28.11 to 2.28.11.2. ([\#14272](https://github.com/matrix-org/synapse/issues/14272))
</details>
Synapse 1.70.1 (2022-10-28)
===========================

Cargo.lock

@@ -13,9 +13,9 @@ dependencies = [
[[package]]
name = "anyhow"
version = "1.0.65"
version = "1.0.66"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "98161a4e3e2184da77bb14f02184cdd111e83bbbcc9979dfee3c44b9a85f5602"
checksum = "216261ddc8289130e551ddcd5ce8a064710c0d064a4d2895c67151c92b5443f6"
[[package]]
name = "arc-swap"
@@ -37,9 +37,9 @@ checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a"
[[package]]
name = "blake2"
version = "0.10.4"
version = "0.10.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b9cf849ee05b2ee5fba5e36f97ff8ec2533916700fc0758d40d92136a42f3388"
checksum = "b12e5fd123190ce1c2e559308a94c9bacad77907d4c6005d9e58fe1a0689e55e"
dependencies = [
"digest",
]
@@ -194,9 +194,9 @@ dependencies = [
[[package]]
name = "pyo3"
version = "0.17.2"
version = "0.17.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "201b6887e5576bf2f945fe65172c1fcbf3fcf285b23e4d71eb171d9736e38d32"
checksum = "268be0c73583c183f2b14052337465768c07726936a260f480f0857cb95ba543"
dependencies = [
"anyhow",
"cfg-if",
@@ -212,9 +212,9 @@ dependencies = [
[[package]]
name = "pyo3-build-config"
version = "0.17.2"
version = "0.17.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bf0708c9ed01692635cbf056e286008e5a2927ab1a5e48cdd3aeb1ba5a6fef47"
checksum = "28fcd1e73f06ec85bf3280c48c67e731d8290ad3d730f8be9dc07946923005c8"
dependencies = [
"once_cell",
"target-lexicon",
@@ -222,9 +222,9 @@ dependencies = [
[[package]]
name = "pyo3-ffi"
version = "0.17.2"
version = "0.17.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "90352dea4f486932b72ddf776264d293f85b79a1d214de1d023927b41461132d"
checksum = "0f6cb136e222e49115b3c51c32792886defbfb0adead26a688142b346a0b9ffc"
dependencies = [
"libc",
"pyo3-build-config",
@@ -243,9 +243,9 @@ dependencies = [
[[package]]
name = "pyo3-macros"
version = "0.17.2"
version = "0.17.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7eb24b804a2d9e88bfcc480a5a6dd76f006c1e3edaf064e8250423336e2cd79d"
checksum = "94144a1266e236b1c932682136dc35a9dee8d3589728f68130c7c3861ef96b28"
dependencies = [
"proc-macro2",
"pyo3-macros-backend",
@@ -255,9 +255,9 @@ dependencies = [
[[package]]
name = "pyo3-macros-backend"
version = "0.17.2"
version = "0.17.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f22bb49f6a7348c253d7ac67a6875f2dc65f36c2ae64a82c381d528972bea6d6"
checksum = "c8df9be978a2d2f0cdebabb03206ed73b11314701a5bfe71b0d753b81997777f"
dependencies = [
"proc-macro2",
"quote",
@@ -294,9 +294,9 @@ dependencies = [
[[package]]
name = "regex"
version = "1.6.0"
version = "1.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4c4eb3267174b8c6c2f654116623910a0fef09c4753f8dd83db29c48a0df988b"
checksum = "e076559ef8e241f2ae3479e36f97bd5741c0330689e217ad51ce2c76808b868a"
dependencies = [
"aho-corasick",
"memchr",
@@ -323,18 +323,18 @@ checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd"
[[package]]
name = "serde"
version = "1.0.145"
version = "1.0.148"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "728eb6351430bccb993660dfffc5a72f91ccc1295abaa8ce19b27ebe4f75568b"
checksum = "e53f64bb4ba0191d6d0676e1b141ca55047d83b74f5607e6d8eb88126c52c2dc"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.145"
version = "1.0.148"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "81fa1584d3d1bcacd84c277a0dfe21f5b0f6accf4a23d04d4c6d61f1af522b4c"
checksum = "a55492425aa53521babf6137309e7d34c20bbfbbfcfe2c7f3a047fd1f6b92c0c"
dependencies = [
"proc-macro2",
"quote",
@@ -343,9 +343,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.86"
version = "1.0.89"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "41feea4228a6f1cd09ec7a3593a682276702cd67b5273544757dae23c096f074"
checksum = "020ff22c755c2ed3f8cf162dbb41a7268d934702f3ed3631656ea597e08fc3db"
dependencies = [
"itoa",
"ryu",
@@ -366,9 +366,9 @@ checksum = "6bdef32e8150c2a081110b42772ffe7d7c9032b606bc226c8260fd97e0976601"
[[package]]
name = "syn"
version = "1.0.102"
version = "1.0.104"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3fcd952facd492f9be3ef0d0b7032a6e442ee9b361d4acc2b1d0c4aaa5f613a1"
checksum = "4ae548ec36cf198c0ef7710d3c230987c2d6d7bd98ad6edc0274462724c585ce"
dependencies = [
"proc-macro2",
"quote",


@@ -3,3 +3,7 @@
[workspace]
members = ["rust"]
[profile.dbgrelease]
inherits = "release"
debug = true

changelog.d/14255.misc

@@ -0,0 +1 @@
Optimise push badge count calculations. Contributed by Nick @ Beeper (@fizzadar).


@@ -0,0 +1 @@
Stop using deprecated `keyIds` parameter when calling `/_matrix/key/v2/server`.

changelog.d/14493.doc

@@ -0,0 +1 @@
Update worker settings for `pusher` and `federation_sender` functionality.

changelog.d/14517.doc

@@ -0,0 +1 @@
Add links to third party package repositories, and point to the bug which highlights Ubuntu's out-of-date packages.


@@ -0,0 +1 @@
Stop using deprecated `keyIds` parameter when calling `/_matrix/key/v2/server`.

changelog.d/14528.misc

@@ -0,0 +1 @@
Share the `ClientRestResource` for both workers and the main process.

changelog.d/14549.misc

@@ -0,0 +1 @@
Faster joins: use servers list approximation to send read receipts when in partial state instead of waiting for the full state of the room.


@@ -0,0 +1 @@
Add new `push.enabled` config option to allow opting out of push notification calculation.

changelog.d/14568.misc

@@ -0,0 +1 @@
Modernize unit test configuration related to workers.


@@ -0,0 +1 @@
Advertise support for Matrix 1.5 on `/_matrix/client/versions`.

changelog.d/14591.misc

@@ -0,0 +1 @@
Bump jsonschema from 4.17.0 to 4.17.3.

changelog.d/14592.bugfix

@@ -0,0 +1 @@
Fix a long-standing bug where a device list update might not be sent to clients in certain circumstances.

changelog.d/14597.misc

@@ -0,0 +1 @@
Add missing type hints.

changelog.d/14602.misc

@@ -0,0 +1 @@
Fix Rust lint CI.

changelog.d/14607.misc

@@ -0,0 +1 @@
Bump JasonEtco/create-an-issue from 2.5.0 to 2.8.1.

File diff suppressed because it is too large.

debian/changelog

@@ -1,3 +1,45 @@
matrix-synapse-py3 (1.73.0~rc2) stable; urgency=medium
* New Synapse release 1.73.0rc2.
-- Synapse Packaging team <packages@matrix.org> Thu, 01 Dec 2022 10:02:19 +0000
matrix-synapse-py3 (1.73.0~rc1) stable; urgency=medium
* New Synapse release 1.73.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 29 Nov 2022 12:28:13 +0000
matrix-synapse-py3 (1.72.0) stable; urgency=medium
* New Synapse release 1.72.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 22 Nov 2022 10:57:30 +0000
matrix-synapse-py3 (1.72.0~rc1) stable; urgency=medium
* New Synapse release 1.72.0rc1.
-- Synapse Packaging team <packages@matrix.org> Wed, 16 Nov 2022 15:10:59 +0000
matrix-synapse-py3 (1.71.0) stable; urgency=medium
* New Synapse release 1.71.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 08 Nov 2022 10:38:10 +0000
matrix-synapse-py3 (1.71.0~rc2) stable; urgency=medium
* New Synapse release 1.71.0rc2.
-- Synapse Packaging team <packages@matrix.org> Fri, 04 Nov 2022 12:00:33 +0000
matrix-synapse-py3 (1.71.0~rc1) stable; urgency=medium
* New Synapse release 1.71.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 01 Nov 2022 12:10:17 +0000
matrix-synapse-py3 (1.70.1) stable; urgency=medium
* New Synapse release 1.70.1.


@@ -45,7 +45,12 @@ esac
if [[ -n "$SYNAPSE_COMPLEMENT_USE_WORKERS" ]]; then
# Specify the workers to test with
export SYNAPSE_WORKER_TYPES="\
# Allow overriding by explicitly setting SYNAPSE_WORKER_TYPES outside, while still
# utilizing WORKERS=1 for backwards compatibility.
# -n True if the length of string is non-zero.
# -z True if the length of string is zero.
if [[ -z "$SYNAPSE_WORKER_TYPES" ]]; then
export SYNAPSE_WORKER_TYPES="\
event_persister, \
event_persister, \
background_worker, \
@@ -61,6 +66,8 @@ if [[ -n "$SYNAPSE_COMPLEMENT_USE_WORKERS" ]]; then
appservice, \
pusher"
fi
log "Workers requested: $SYNAPSE_WORKER_TYPES"
# Improve startup times by using a launcher based on fork()
export SYNAPSE_USE_EXPERIMENTAL_FORKING_LAUNCHER=1
else


@@ -92,8 +92,6 @@ allow_device_name_lookup_over_federation: true
## Experimental Features ##
experimental_features:
# Enable spaces support
spaces_enabled: true
# Enable history backfilling support
msc2716_enabled: true
# server-side support for partial state in /send_join responses
@@ -102,8 +100,8 @@ experimental_features:
# client-side support for partial state in /send_join responses
faster_joins: true
{% endif %}
# Enable jump to date endpoint
msc3030_enabled: true
# Filtering /messages by relation type.
msc3874_enabled: true
server_notices:
system_mxid_localpart: _server


@@ -20,7 +20,7 @@
# * SYNAPSE_SERVER_NAME: The desired server_name of the homeserver.
# * SYNAPSE_REPORT_STATS: Whether to report stats.
# * SYNAPSE_WORKER_TYPES: A comma separated list of worker names as specified in WORKER_CONFIG
# below. Leave empty for no workers, or set to '*' for all possible workers.
# below. Leave empty for no workers.
# * SYNAPSE_AS_REGISTRATION_DIR: If specified, a directory in which .yaml and .yml files
# will be treated as Application Service registration files.
# * SYNAPSE_TLS_CERT: Path to a TLS certificate in PEM format.
@@ -50,13 +50,18 @@ from jinja2 import Environment, FileSystemLoader
MAIN_PROCESS_HTTP_LISTENER_PORT = 8080
# Workers with exposed endpoints need either "client", "federation", or "media" listener_resources
# Watching /_matrix/client needs a "client" listener
# Watching /_matrix/federation needs a "federation" listener
# Watching /_matrix/media and related needs a "media" listener
# Stream Writers require "client" and "replication" listeners because they
# have to attach by instance_map to the master process and have client endpoints.
WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
"pusher": {
"app": "synapse.app.pusher",
"app": "synapse.app.generic_worker",
"listener_resources": [],
"endpoint_patterns": [],
"shared_extra_conf": {"start_pushers": False},
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"user_dir": {
@@ -79,7 +84,11 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
"^/_synapse/admin/v1/media/.*$",
"^/_synapse/admin/v1/quarantine_media/.*$",
],
"shared_extra_conf": {"enable_media_repo": False},
# The first configured media worker will run the media background jobs
"shared_extra_conf": {
"enable_media_repo": False,
"media_instance_running_background_jobs": "media_repository1",
},
"worker_extra_conf": "enable_media_repo: true",
},
"appservice": {
@@ -90,10 +99,10 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
"worker_extra_conf": "",
},
"federation_sender": {
"app": "synapse.app.federation_sender",
"app": "synapse.app.generic_worker",
"listener_resources": [],
"endpoint_patterns": [],
"shared_extra_conf": {"send_federation": False},
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"synchrotron": {
@@ -131,6 +140,7 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
"^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/event",
"^/_matrix/client/(api/v1|r0|v3|unstable)/joined_rooms",
"^/_matrix/client/(api/v1|r0|v3|unstable/.*)/rooms/.*/aliases",
"^/_matrix/client/v1/rooms/.*/timestamp_to_event$",
"^/_matrix/client/(api/v1|r0|v3|unstable)/search",
],
"shared_extra_conf": {},
@@ -154,6 +164,7 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
"^/_matrix/federation/(v1|v2)/invite/",
"^/_matrix/federation/(v1|v2)/query_auth/",
"^/_matrix/federation/(v1|v2)/event_auth/",
"^/_matrix/federation/v1/timestamp_to_event/",
"^/_matrix/federation/(v1|v2)/exchange_third_party_invite/",
"^/_matrix/federation/(v1|v2)/user/devices/",
"^/_matrix/federation/(v1|v2)/get_groups_publicised$",
@@ -200,14 +211,54 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
"worker_extra_conf": "",
},
"frontend_proxy": {
"app": "synapse.app.frontend_proxy",
"app": "synapse.app.generic_worker",
"listener_resources": ["client", "replication"],
"endpoint_patterns": ["^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload"],
"shared_extra_conf": {},
"worker_extra_conf": (
"worker_main_http_uri: http://127.0.0.1:%d"
% (MAIN_PROCESS_HTTP_LISTENER_PORT,)
),
"worker_extra_conf": "",
},
"account_data": {
"app": "synapse.app.generic_worker",
"listener_resources": ["client", "replication"],
"endpoint_patterns": [
"^/_matrix/client/(r0|v3|unstable)/.*/tags",
"^/_matrix/client/(r0|v3|unstable)/.*/account_data",
],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"presence": {
"app": "synapse.app.generic_worker",
"listener_resources": ["client", "replication"],
"endpoint_patterns": ["^/_matrix/client/(api/v1|r0|v3|unstable)/presence/"],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"receipts": {
"app": "synapse.app.generic_worker",
"listener_resources": ["client", "replication"],
"endpoint_patterns": [
"^/_matrix/client/(r0|v3|unstable)/rooms/.*/receipt",
"^/_matrix/client/(r0|v3|unstable)/rooms/.*/read_markers",
],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"to_device": {
"app": "synapse.app.generic_worker",
"listener_resources": ["client", "replication"],
"endpoint_patterns": ["^/_matrix/client/(r0|v3|unstable)/sendToDevice/"],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"typing": {
"app": "synapse.app.generic_worker",
"listener_resources": ["client", "replication"],
"endpoint_patterns": [
"^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/typing"
],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
}
@@ -271,14 +322,14 @@ def convert(src: str, dst: str, **template_vars: object) -> None:
outfile.write(rendered)
def add_sharding_to_shared_config(
def add_worker_roles_to_shared_config(
shared_config: dict,
worker_type: str,
worker_name: str,
worker_port: int,
) -> None:
"""Given a dictionary representing a config file shared across all workers,
append sharded worker information to it for the current worker_type instance.
append appropriate worker information to it for the current worker_type instance.
Args:
shared_config: The config dict that all worker instances share (after being converted to YAML)
@@ -309,9 +360,19 @@ def add_sharding_to_shared_config(
"port": worker_port,
}
elif worker_type == "media_repository":
# The first configured media worker will run the media background jobs
shared_config.setdefault("media_instance_running_background_jobs", worker_name)
elif worker_type in ["account_data", "presence", "receipts", "to_device", "typing"]:
# Update the list of stream writers
# It's convenient that the name of the worker type is the same as the stream to write
shared_config.setdefault("stream_writers", {}).setdefault(
worker_type, []
).append(worker_name)
# Map of stream writer instance names to host/ports combos
# For now, all stream writers need http replication ports
instance_map[worker_name] = {
"host": "localhost",
"port": worker_port,
}
def generate_base_homeserver_config() -> None:
@@ -421,8 +482,7 @@ def generate_worker_files(
if worker_config:
worker_config = worker_config.copy()
else:
log(worker_type + " is an unknown worker type! It will be ignored")
continue
error(worker_type + " is an unknown worker type! Please fix!")
new_worker_count = worker_type_counter.setdefault(worker_type, 0) + 1
worker_type_counter[worker_type] = new_worker_count
@@ -441,11 +501,11 @@ def generate_worker_files(
# Check if more than one instance of this worker type has been specified
worker_type_total_count = worker_types.count(worker_type)
if worker_type_total_count > 1:
# Update the shared config with sharding-related options if necessary
add_sharding_to_shared_config(
shared_config, worker_type, worker_name, worker_port
)
# Update the shared config with sharding-related options if necessary
add_worker_roles_to_shared_config(
shared_config, worker_type, worker_name, worker_port
)
# Enable the worker in supervisord
worker_descriptors.append(worker_config)


@@ -9,6 +9,8 @@
- [Configuring a Reverse Proxy](reverse_proxy.md)
- [Configuring a Forward/Outbound Proxy](setup/forward_proxy.md)
- [Configuring a Turn Server](turn-howto.md)
- [coturn TURN server](setup/turn/coturn.md)
- [eturnal TURN server](setup/turn/eturnal.md)
- [Delegation](delegate.md)
# Upgrading


@@ -37,6 +37,7 @@ It returns a JSON body like the following:
"is_guest": 0,
"admin": 0,
"deactivated": 0,
"erased": false,
"shadow_banned": 0,
"creation_ts": 1560432506,
"appservice_id": null,
@@ -167,6 +168,7 @@ A response body like the following is returned:
"admin": 0,
"user_type": null,
"deactivated": 0,
"erased": false,
"shadow_banned": 0,
"displayname": "<User One>",
"avatar_url": null,
@@ -177,6 +179,7 @@ A response body like the following is returned:
"admin": 1,
"user_type": null,
"deactivated": 0,
"erased": false,
"shadow_banned": 0,
"displayname": "<User Two>",
"avatar_url": "<avatar_url>",
@@ -247,6 +250,7 @@ The following fields are returned in the JSON response body:
- `user_type` - string - Type of the user. Normal users are type `None`.
This allows user type specific behaviour. There are also types `support` and `bot`.
- `deactivated` - bool - Status if that user has been marked as deactivated.
- `erased` - bool - Status if that user has been marked as erased.
- `shadow_banned` - bool - Status if that user has been marked as shadow banned.
- `displayname` - string - The user's display name if they have set one.
- `avatar_url` - string - The user's avatar URL if they have set one.
@@ -1193,3 +1197,42 @@ Returns a `404` HTTP status code if no user was found, with a response body like
```
_Added in Synapse 1.68.0._
### Find a user based on their Third Party ID (ThreePID or 3PID)
The API is:
```
GET /_synapse/admin/v1/threepid/$medium/users/$address
```
If a user matches the given address for the given medium, an HTTP code `200` is returned with a response body like the following:
```json
{
"user_id": "@hello:example.org"
}
```
**Parameters**
The following parameters should be set in the URL:
- `medium` - Kind of third-party ID, either `email` or `msisdn`.
- `address` - Value of the third-party ID.
The `address` may have characters that are not URL-safe, so it is advised to URL-encode those parameters.
**Errors**
Returns a `404` HTTP status code if no user was found, with a response body like this:
```json
{
"errcode":"M_NOT_FOUND",
"error":"User not found"
}
```
_Added in Synapse 1.72.0._
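As a hedged illustration of calling this endpoint (the homeserver URL and admin access token are placeholders), note the URL-encoded `@` in the address:

```sh
# Look up the user bound to an email 3PID; requires an admin access token.
curl --header "Authorization: Bearer <admin_access_token>" \
  "https://synapse.example.com/_synapse/admin/v1/threepid/email/users/alice%40example.org"
# On a match this returns {"user_id": "..."}; otherwise an M_NOT_FOUND error.
```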


@@ -324,6 +324,12 @@ The above will run a monolithic (single-process) Synapse with SQLite as the data
- Passing `POSTGRES=1` as an environment variable to use the Postgres database instead.
- Passing `WORKERS=1` as an environment variable to use a workerised setup instead. This option implies the use of Postgres.
- If setting `WORKERS=1`, optionally set `WORKER_TYPES=` to declare which worker
types you wish to test: a simple comma-delimited string containing the worker types
defined in the `WORKERS_CONFIG` template
[here](https://github.com/matrix-org/synapse/blob/develop/docker/configure_workers_and_start.py#L54).
A safe example would be `WORKER_TYPES="federation_inbound, federation_sender, synchrotron"`; a sketch of a full invocation follows this list.
See the [worker documentation](../workers.md) for additional information on workers.
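A hedged sketch of a full invocation, assuming the script lives at `scripts-dev/complement.sh` in a Synapse checkout and using the safe example worker list from above:

```sh
# Run Complement against a workerised Synapse with an explicit set of worker types.
# WORKERS=1 implies the use of Postgres.
WORKERS=1 \
WORKER_TYPES="federation_inbound, federation_sender, synchrotron" \
./scripts-dev/complement.sh
```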
To increase the log level for the tests, set `SYNAPSE_TEST_LOG_LEVEL`, e.g:
```sh


@@ -209,6 +209,9 @@ altogether in Synapse v1.73.0.**
| synapse_http_httppusher_http_pushes_failed_total | synapse_http_httppusher_http_pushes_failed |
| synapse_http_httppusher_badge_updates_processed_total | synapse_http_httppusher_badge_updates_processed |
| synapse_http_httppusher_badge_updates_failed_total | synapse_http_httppusher_badge_updates_failed |
| synapse_admin_mau_current | synapse_admin_mau:current |
| synapse_admin_mau_max | synapse_admin_mau:max |
| synapse_admin_mau_registered_reserved_users | synapse_admin_mau:registered_reserved_users |
Removal of deprecated metrics & time based counters becoming histograms in 0.31.0
---------------------------------------------------------------------------------


@@ -49,6 +49,13 @@ setting in your configuration file.
See the [configuration manual](usage/configuration/config_documentation.md#oidc_providers) for some sample settings, as well as
the text below for example configurations for specific providers.
## OIDC Back-Channel Logout
Synapse supports receiving [OpenID Connect Back-Channel Logout](https://openid.net/specs/openid-connect-backchannel-1_0.html) notifications.
This lets the OpenID Connect Provider notify Synapse when a user logs out, so that Synapse can end that user session.
This feature can be enabled by setting the `backchannel_logout_enabled` property to `true` in the provider configuration, and by setting the following URL as the destination for Back-Channel Logout notifications in your OpenID Connect Provider: `[synapse public baseurl]/_synapse/client/oidc/backchannel_logout`
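For illustration, a minimal sketch of a provider entry with this option enabled; the `idp_id`, issuer and client details below are placeholders, not a recommended configuration:

```yaml
oidc_providers:
  - idp_id: my_idp                      # placeholder identifier
    idp_name: "My IdP"                  # placeholder display name
    issuer: "https://idp.example.com/"  # placeholder issuer URL
    client_id: "synapse"                # placeholder
    client_secret: "copy-from-your-idp" # placeholder
    backchannel_logout_enabled: true    # accept Back-Channel Logout notifications
```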
## Sample configs
Here are a few configs for providers that should work with Synapse.
@@ -123,6 +130,9 @@ oidc_providers:
[Keycloak][keycloak-idp] is an opensource IdP maintained by Red Hat.
Keycloak supports OIDC Back-Channel Logout, which sends logout notifications to Synapse, so that Synapse users get logged out when they log out from Keycloak.
This can optionally be enabled by setting `backchannel_logout_enabled` to `true` in the Synapse configuration, and by setting the "Backchannel Logout URL" in Keycloak.
Follow the [Getting Started Guide](https://www.keycloak.org/getting-started) to install Keycloak and set up a realm.
1. Click `Clients` in the sidebar and click `Create`
@@ -144,6 +154,8 @@ Follow the [Getting Started Guide](https://www.keycloak.org/getting-started) to
| Client Protocol | `openid-connect` |
| Access Type | `confidential` |
| Valid Redirect URIs | `[synapse public baseurl]/_synapse/client/oidc/callback` |
| Backchannel Logout URL (optional) | `[synapse public baseurl]/_synapse/client/oidc/backchannel_logout` |
| Backchannel Logout Session Required (optional) | `On` |
5. Click `Save`
6. On the Credentials tab, update the fields:
@@ -167,7 +179,9 @@ oidc_providers:
config:
localpart_template: "{{ user.preferred_username }}"
display_name_template: "{{ user.name }}"
backchannel_logout_enabled: true # Optional
```
### Auth0
[Auth0][auth0] is a hosted SaaS IdP solution.


@@ -79,6 +79,9 @@ server {
# Nginx by default only allows file uploads up to 1M in size
# Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
client_max_body_size 50M;
# Synapse responses may be chunked, which is an HTTP/1.1 feature.
proxy_http_version 1.1;
}
}
```


@@ -6,7 +6,7 @@
# Synapse also supports structured logging for machine readable logs which can
# be ingested by ELK stacks. See [2] for details.
#
# [1]: https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
# [1]: https://docs.python.org/3/library/logging.config.html#configuration-dictionary-schema
# [2]: https://matrix-org.github.io/synapse/latest/structured_logging.html
version: 1


@@ -84,7 +84,9 @@ file when you upgrade the Debian package to a later version.
##### Downstream Debian packages
Andrej Shadura maintains a `matrix-synapse` package in the Debian repositories.
Andrej Shadura maintains a
[`matrix-synapse`](https://packages.debian.org/sid/matrix-synapse) package in
the Debian repositories.
For `bookworm` and `sid`, it can be installed simply with:
```sh
@@ -100,23 +102,27 @@ for information on how to use backports.
##### Downstream Ubuntu packages
We do not recommend using the packages in the default Ubuntu repository
at this time, as they are old and suffer from known security vulnerabilities.
at this time, as they are [old and suffer from known security vulnerabilities](
https://bugs.launchpad.net/ubuntu/+source/matrix-synapse/+bug/1848709
).
The latest version of Synapse can be installed from [our repository](#matrixorg-packages).
#### Fedora
Synapse is in the Fedora repositories as `matrix-synapse`:
Synapse is in the Fedora repositories as
[`matrix-synapse`](https://src.fedoraproject.org/rpms/matrix-synapse):
```sh
sudo dnf install matrix-synapse
```
Oleg Girko provides Fedora RPMs at
Additionally, Oleg Girko provides Fedora RPMs at
<https://obs.infoserver.lv/project/monitor/matrix-synapse>
#### OpenSUSE
Synapse is in the OpenSUSE repositories as `matrix-synapse`:
Synapse is in the OpenSUSE repositories as
[`matrix-synapse`](https://software.opensuse.org/package/matrix-synapse):
```sh
sudo zypper install matrix-synapse
@@ -151,7 +157,8 @@ sudo pip install py-bcrypt
#### Void Linux
Synapse can be found in the void repositories as 'synapse':
Synapse can be found in the void repositories as
['synapse'](https://github.com/void-linux/void-packages/tree/master/srcpkgs/synapse):
```sh
xbps-install -Su

docs/setup/turn/coturn.md

@@ -0,0 +1,188 @@
# coturn TURN server
The following sections describe how to install [coturn](<https://github.com/coturn/coturn>) (which implements the TURN REST API).
## `coturn` setup
### Initial installation
The TURN daemon `coturn` is available from a variety of sources such as native package managers, or installation from source.
#### Debian and Ubuntu based distributions
Just install the debian package:
```sh
sudo apt install coturn
```
This will install and start a systemd service called `coturn`.
#### Source installation
1. Download the [latest release](https://github.com/coturn/coturn/releases/latest) from github. Unpack it and `cd` into the directory.
1. Configure it:
```sh
./configure
```
You may need to install `libevent2`: if so, you should do so in
the way recommended by your operating system. You can ignore
warnings about lack of database support: a database is unnecessary
for this purpose.
1. Build and install it:
```sh
make
sudo make install
```
### Configuration
1. Create or edit the config file in `/etc/turnserver.conf`. The relevant
lines, with example values, are:
```
use-auth-secret
static-auth-secret=[your secret key here]
realm=turn.myserver.org
```
See `turnserver.conf` for explanations of the options. One way to generate
the `static-auth-secret` is with `pwgen`:
```sh
pwgen -s 64 1
```
A `realm` must be specified, but its value is somewhat arbitrary. (It is
sent to clients as part of the authentication flow.) It is conventional to
set it to be your server name.
1. You will most likely want to configure `coturn` to write logs somewhere. The
easiest way is normally to send them to the syslog:
```sh
syslog
```
(in which case, the logs will be available via `journalctl -u coturn` on a
systemd system). Alternatively, `coturn` can be configured to write to a
logfile - check the example config file supplied with `coturn`.
1. Consider your security settings. TURN lets users request a relay which will
connect to arbitrary IP addresses and ports. The following configuration is
suggested as a minimum starting point:
```
# VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
no-tcp-relay
# don't let the relay ever try to connect to private IP address ranges within your network (if any)
# given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
denied-peer-ip=10.0.0.0-10.255.255.255
denied-peer-ip=192.168.0.0-192.168.255.255
denied-peer-ip=172.16.0.0-172.31.255.255
# recommended additional local peers to block, to mitigate external access to internal services.
# https://www.rtcsec.com/article/slack-webrtc-turn-compromise-and-bug-bounty/#how-to-fix-an-open-turn-relay-to-address-this-vulnerability
no-multicast-peers
denied-peer-ip=0.0.0.0-0.255.255.255
denied-peer-ip=100.64.0.0-100.127.255.255
denied-peer-ip=127.0.0.0-127.255.255.255
denied-peer-ip=169.254.0.0-169.254.255.255
denied-peer-ip=192.0.0.0-192.0.0.255
denied-peer-ip=192.0.2.0-192.0.2.255
denied-peer-ip=192.88.99.0-192.88.99.255
denied-peer-ip=198.18.0.0-198.19.255.255
denied-peer-ip=198.51.100.0-198.51.100.255
denied-peer-ip=203.0.113.0-203.0.113.255
denied-peer-ip=240.0.0.0-255.255.255.255
# special case the turn server itself so that client->TURN->TURN->client flows work
# this should be one of the turn server's listening IPs
allowed-peer-ip=10.0.0.1
# consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
total-quota=1200
```
1. Also consider supporting TLS/DTLS. To do this, add the following settings
to `turnserver.conf`:
```
# TLS certificates, including intermediate certs.
# For Let's Encrypt certificates, use `fullchain.pem` here.
cert=/path/to/fullchain.pem
# TLS private key file
pkey=/path/to/privkey.pem
# Ensure the configuration lines that disable TLS/DTLS are commented-out or removed
#no-tls
#no-dtls
```
In this case, replace the `turn:` schemes in the `turn_uris` settings below
with `turns:`.
We recommend that you only try to set up TLS/DTLS once you have set up a
basic installation and got it working.
NB: If your TLS certificate was provided by Let's Encrypt, TLS/DTLS will
not work with any Matrix client that uses Chromium's WebRTC library. This
currently includes Element Android & iOS; for more details, see their
[respective](https://github.com/vector-im/element-android/issues/1533)
[issues](https://github.com/vector-im/element-ios/issues/2712) as well as the underlying
[WebRTC issue](https://bugs.chromium.org/p/webrtc/issues/detail?id=11710).
Consider using a ZeroSSL certificate for your TURN server as a working alternative.
1. Ensure your firewall allows traffic into the TURN server on the ports
you've configured it to listen on (By default: 3478 and 5349 for TURN
traffic (remember to allow both TCP and UDP traffic), and ports 49152-65535
for the UDP relay.)
1. If your TURN server is behind NAT, the NAT gateway must have an external,
publicly-reachable IP address. You must configure `coturn` to advertise that
address to connecting clients:
```
external-ip=EXTERNAL_NAT_IPv4_ADDRESS
```
You may optionally limit the TURN server to listen only on the local
address that is mapped by NAT to the external address:
```
listening-ip=INTERNAL_TURNSERVER_IPv4_ADDRESS
```
If your NAT gateway is reachable over both IPv4 and IPv6, you may
configure `coturn` to advertise each available address:
```
external-ip=EXTERNAL_NAT_IPv4_ADDRESS
external-ip=EXTERNAL_NAT_IPv6_ADDRESS
```
When advertising an external IPv6 address, ensure that the firewall and
network settings of the system running your TURN server are configured to
accept IPv6 traffic, and that the TURN server is listening on the local
IPv6 address that is mapped by NAT to the external IPv6 address.
1. (Re)start the turn server:
* If you used the Debian package (or have set up a systemd unit yourself):
```sh
sudo systemctl restart coturn
```
* If you built from source:
```sh
/usr/local/bin/turnserver -o
```

docs/setup/turn/eturnal.md

@@ -0,0 +1,170 @@
# eturnal TURN server
The following sections describe how to install [eturnal](<https://github.com/processone/eturnal>)
(which implements the TURN REST API).
## `eturnal` setup
### Initial installation
The `eturnal` TURN server implementation is available from a variety of sources
such as native package managers, binary packages, installation from source or
[container image](https://eturnal.net/documentation/code/docker.html). They are
all described [here](https://github.com/processone/eturnal#installation).
Quick-Test instructions in a [Linux Shell](https://github.com/processone/eturnal/blob/master/QUICK-TEST.md)
or with [Docker](https://github.com/processone/eturnal/blob/master/docker-k8s/QUICK-TEST.md)
are available as well.
### Configuration
After installation, `eturnal` usually ships a [default configuration file](https://github.com/processone/eturnal/blob/master/config/eturnal.yml)
at `/etc/eturnal.yml` (if it is not found there, a fallback copy lives at
`/opt/eturnal/etc/eturnal.yml`). It uses the (indentation-sensitive!) [YAML](https://en.wikipedia.org/wiki/YAML)
format, and the file itself contains further explanations.
Here are some hints on how to configure eturnal on your [host machine](https://github.com/processone/eturnal#configuration)
or when using e.g. [Docker](https://eturnal.net/documentation/code/docker.html).
You may also dive deeper into the [reference documentation](https://eturnal.net/documentation/).
`eturnal` runs out of the box with the default configuration. To enable TURN and
to integrate it with your homeserver, some aspects of `eturnal`'s default configuration file
must be edited:
1. Homeserver's [`turn_shared_secret`](../../usage/configuration/config_documentation.md#turn_shared_secret)
and eturnal's shared `secret` for authentication
Both need to have the same value. Uncomment and adjust this line in `eturnal`'s
configuration file:
```yaml
secret: "long-and-cryptic" # Shared secret, CHANGE THIS.
```
One way to generate a `secret` is with `pwgen`:
```sh
pwgen -s 64 1
```
1. Public IP address
If your TURN server is behind NAT, the NAT gateway must have an external,
publicly-reachable IP address. `eturnal` tries to autodetect the public IP address,
however, it may also be configured by uncommenting and adjusting this line, so
`eturnal` advertises that address to connecting clients:
```yaml
relay_ipv4_addr: "203.0.113.4" # The server's public IPv4 address.
```
If your NAT gateway is reachable over both IPv4 and IPv6, you may
configure `eturnal` to advertise each available address:
```yaml
relay_ipv4_addr: "203.0.113.4" # The server's public IPv4 address.
relay_ipv6_addr: "2001:db8::4" # The server's public IPv6 address (optional).
```
When advertising an external IPv6 address, ensure that the firewall and
network settings of the system running your TURN server are configured to
accept IPv6 traffic, and that the TURN server is listening on the local
IPv6 address that is mapped by NAT to the external IPv6 address.
1. Logging
If `eturnal` was started by systemd, log files are written into the
`/var/log/eturnal` directory by default. In order to log to the [journal](https://www.freedesktop.org/software/systemd/man/systemd-journald.service.html)
instead, the `log_dir` option can be set to `stdout` in the configuration file.
1. Security considerations
Consider your security settings. TURN lets users request a relay which will
connect to arbitrary IP addresses and ports. The following configuration is
suggested as a minimum starting point, [see also the official documentation](https://eturnal.net/documentation/#blacklist):
```yaml
## Reject TURN relaying from/to the following addresses/networks:
blacklist: # This is the default blacklist.
- "127.0.0.0/8" # IPv4 loopback.
- "::1" # IPv6 loopback.
- recommended # Expands to a number of networks recommended to be
# blocked, but includes private networks. Those
# would have to be 'whitelist'ed if eturnal serves
# local clients/peers within such networks.
```
To whitelist IP addresses or specific (private) networks, you need to **add** a
whitelist part into the configuration file, e.g.:
```yaml
whitelist:
- "192.168.0.0/16"
- "203.0.113.113"
- "2001:db8::/64"
```
The more specific, the better.
1. TURNS (TURN via TLS/DTLS)
Also consider supporting TLS/DTLS. To do this, adjust the following settings
in the `eturnal.yml` configuration file (TLS parts should not be commented anymore):
```yaml
listen:
- ip: "::"
port: 3478
transport: udp
- ip: "::"
port: 3478
transport: tcp
- ip: "::"
port: 5349
transport: tls
## TLS certificate/key files (must be readable by 'eturnal' user!):
tls_crt_file: /etc/eturnal/tls/crt.pem
tls_key_file: /etc/eturnal/tls/key.pem
```
In this case, replace the `turn:` schemes in homeserver's `turn_uris` settings
with `turns:`. More is described [here](../../usage/configuration/config_documentation.md#turn_uris).
We recommend that you only try to set up TLS/DTLS once you have set up a
basic installation and got it working.
NB: If your TLS certificate was provided by Let's Encrypt, TLS/DTLS will
not work with any Matrix client that uses Chromium's WebRTC library. This
currently includes Element Android & iOS; for more details, see their
[respective](https://github.com/vector-im/element-android/issues/1533)
[issues](https://github.com/vector-im/element-ios/issues/2712) as well as the underlying
[WebRTC issue](https://bugs.chromium.org/p/webrtc/issues/detail?id=11710).
Consider using a ZeroSSL certificate for your TURN server as a working alternative.
1. Firewall
Ensure your firewall allows traffic into the TURN server on the ports
you've configured it to listen on (By default: 3478 and 5349 for TURN
traffic (remember to allow both TCP and UDP traffic), and ports 49152-65535
for the UDP relay.)
1. Reloading/restarting `eturnal`
Changes in the configuration file require `eturnal` to reload/restart; this
can be achieved by:
```sh
eturnalctl reload
```
`eturnal` performs a configuration check before actually reloading/restarting
and provides hints if something is not correctly configured.
### eturnalctl operations script
`eturnal` offers a handy [operations script](https://eturnal.net/documentation/#Operation)
which can be called e.g. to check whether the service is up, to restart the service,
to query how many active sessions exist, to change logging behaviour, and so on.
Hint: If `eturnalctl` is not part of your `$PATH`, consider either sym-linking it (e.g. `ln -s /opt/eturnal/bin/eturnalctl /usr/local/bin/eturnalctl`) or calling it directly from the default `eturnal` directory: e.g. `/opt/eturnal/bin/eturnalctl info`


@@ -9,222 +9,28 @@ allows the homeserver to generate credentials that are valid for use on the
TURN server through the use of a secret shared between the homeserver and the
TURN server.
The following sections describe how to install [coturn](<https://github.com/coturn/coturn>) (which implements the TURN REST API) and integrate it with synapse.
This documentation provides two TURN server configuration examples:
* [coturn](setup/turn/coturn.md)
* [eturnal](setup/turn/eturnal.md)
## Requirements
For TURN relaying with `coturn` to work, it must be hosted on a server/endpoint with a public IP.
For TURN relaying to work, the TURN service must be hosted on a server/endpoint with a public IP.
Hosting TURN behind NAT requires port forwarding and for the NAT gateway to have a public IP.
However, even with appropriate configuration, NAT is known to cause issues and to often not work.
## `coturn` setup
### Initial installation
The TURN daemon `coturn` is available from a variety of sources such as native package managers, or installation from source.
#### Debian installation
Just install the debian package:
```sh
apt install coturn
```
This will install and start a systemd service called `coturn`.
#### Source installation
1. Download the [latest release](https://github.com/coturn/coturn/releases/latest) from github. Unpack it and `cd` into the directory.
1. Configure it:
```sh
./configure
```
You may need to install `libevent2`: if so, you should do so in
the way recommended by your operating system. You can ignore
warnings about lack of database support: a database is unnecessary
for this purpose.
1. Build and install it:
```sh
make
make install
```
### Configuration
1. Create or edit the config file in `/etc/turnserver.conf`. The relevant
lines, with example values, are:
```
use-auth-secret
static-auth-secret=[your secret key here]
realm=turn.myserver.org
```
See `turnserver.conf` for explanations of the options. One way to generate
the `static-auth-secret` is with `pwgen`:
```sh
pwgen -s 64 1
```
A `realm` must be specified, but its value is somewhat arbitrary. (It is
sent to clients as part of the authentication flow.) It is conventional to
set it to be your server name.
1. You will most likely want to configure coturn to write logs somewhere. The
easiest way is normally to send them to the syslog:
```sh
syslog
```
(in which case, the logs will be available via `journalctl -u coturn` on a
systemd system). Alternatively, coturn can be configured to write to a
logfile - check the example config file supplied with coturn.
1. Consider your security settings. TURN lets users request a relay which will
connect to arbitrary IP addresses and ports. The following configuration is
suggested as a minimum starting point:
```
# VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
no-tcp-relay
# don't let the relay ever try to connect to private IP address ranges within your network (if any)
# given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
denied-peer-ip=10.0.0.0-10.255.255.255
denied-peer-ip=192.168.0.0-192.168.255.255
denied-peer-ip=172.16.0.0-172.31.255.255
# recommended additional local peers to block, to mitigate external access to internal services.
# https://www.rtcsec.com/article/slack-webrtc-turn-compromise-and-bug-bounty/#how-to-fix-an-open-turn-relay-to-address-this-vulnerability
no-multicast-peers
denied-peer-ip=0.0.0.0-0.255.255.255
denied-peer-ip=100.64.0.0-100.127.255.255
denied-peer-ip=127.0.0.0-127.255.255.255
denied-peer-ip=169.254.0.0-169.254.255.255
denied-peer-ip=192.0.0.0-192.0.0.255
denied-peer-ip=192.0.2.0-192.0.2.255
denied-peer-ip=192.88.99.0-192.88.99.255
denied-peer-ip=198.18.0.0-198.19.255.255
denied-peer-ip=198.51.100.0-198.51.100.255
denied-peer-ip=203.0.113.0-203.0.113.255
denied-peer-ip=240.0.0.0-255.255.255.255
# special case the turn server itself so that client->TURN->TURN->client flows work
# this should be one of the turn server's listening IPs
allowed-peer-ip=10.0.0.1
# consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
total-quota=1200
```
1. Also consider supporting TLS/DTLS. To do this, add the following settings
to `turnserver.conf`:
```
# TLS certificates, including intermediate certs.
# For Let's Encrypt certificates, use `fullchain.pem` here.
cert=/path/to/fullchain.pem
# TLS private key file
pkey=/path/to/privkey.pem
# Ensure the configuration lines that disable TLS/DTLS are commented-out or removed
#no-tls
#no-dtls
```
In this case, replace the `turn:` schemes in the `turn_uris` settings below
with `turns:`.
We recommend that you only try to set up TLS/DTLS once you have set up a
basic installation and got it working.
NB: If your TLS certificate was provided by Let's Encrypt, TLS/DTLS will
not work with any Matrix client that uses Chromium's WebRTC library. This
currently includes Element Android & iOS; for more details, see their
[respective](https://github.com/vector-im/element-android/issues/1533)
[issues](https://github.com/vector-im/element-ios/issues/2712) as well as the underlying
[WebRTC issue](https://bugs.chromium.org/p/webrtc/issues/detail?id=11710).
Consider using a ZeroSSL certificate for your TURN server as a working alternative.
1. Ensure your firewall allows traffic into the TURN server on the ports
you've configured it to listen on (by default: 3478 and 5349 for TURN
traffic, both TCP and UDP, and ports 49152-65535 for the UDP relay).
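For instance, with `ufw` (an assumed firewall frontend; adapt to whatever firewall you actually use):
```sh
ufw allow 3478/tcp
ufw allow 3478/udp
ufw allow 5349/tcp
ufw allow 5349/udp
ufw allow 49152:65535/udp
```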
1. If your TURN server is behind NAT, the NAT gateway must have an external,
publicly-reachable IP address. You must configure coturn to advertise that
address to connecting clients:
```
external-ip=EXTERNAL_NAT_IPv4_ADDRESS
```
You may optionally limit the TURN server to listen only on the local
address that is mapped by NAT to the external address:
```
listening-ip=INTERNAL_TURNSERVER_IPv4_ADDRESS
```
If your NAT gateway is reachable over both IPv4 and IPv6, you may
configure coturn to advertise each available address:
```
external-ip=EXTERNAL_NAT_IPv4_ADDRESS
external-ip=EXTERNAL_NAT_IPv6_ADDRESS
```
When advertising an external IPv6 address, ensure that the firewall and
network settings of the system running your TURN server are configured to
accept IPv6 traffic, and that the TURN server is listening on the local
IPv6 address that is mapped by NAT to the external IPv6 address.
1. (Re)start the turn server:
* If you used the Debian package (or have set up a systemd unit yourself):
```sh
systemctl restart coturn
```
* If you installed from source:
```sh
bin/turnserver -o
```
Afterwards, the homeserver needs some further configuration.
## Synapse setup
Your homeserver configuration file needs the following extra keys:
1. "`turn_uris`": This needs to be a yaml list of public-facing URIs
for your TURN server to be given out to your clients. Add separate
entries for each transport your TURN server supports.
2. "`turn_shared_secret`": This is the secret shared between your
homeserver and your TURN server, so you should set it to the same
string you used in turnserver.conf.
3. "`turn_user_lifetime`": This is the amount of time credentials
generated by your homeserver are valid for (in milliseconds).
Shorter times offer less potential for abuse at the expense of
increased traffic between web clients and your homeserver to
refresh credentials. The TURN REST API specification recommends
one day (86400000).
4. "`turn_allow_guests`": Whether to allow guest users to use the
TURN server. This is enabled by default, as otherwise VoIP will
not work reliably for guests. However, it does introduce a
security risk as it lets guests connect to arbitrary endpoints
without having gone through a CAPTCHA or similar to register a
real account.
1. [`turn_uris`](usage/configuration/config_documentation.md#turn_uris)
2. [`turn_shared_secret`](usage/configuration/config_documentation.md#turn_shared_secret)
3. [`turn_user_lifetime`](usage/configuration/config_documentation.md#turn_user_lifetime)
4. [`turn_allow_guests`](usage/configuration/config_documentation.md#turn_allow_guests)
As an example, here is the relevant section of the config file for `matrix.org`. The
`turn_uris` are appropriate for TURN servers listening on the default ports, with no TLS.
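The example itself is elided in this diff; a sketch of the shape such a section takes (the hostname and secret are placeholders, not matrix.org's real values):
```yaml
turn_uris: [ "turn:turn.example.org?transport=udp", "turn:turn.example.org?transport=tcp" ]
turn_shared_secret: "YOUR_SHARED_SECRET"
turn_user_lifetime: 86400000
turn_allow_guests: true
```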
@@ -263,7 +69,7 @@ Here are a few things to try:
* Check that you have opened your firewall to allow UDP traffic to the UDP
relay ports (49152-65535 by default).
* Try disabling your TURN server's TLS/DTLS listeners and enable only its (unencrypted)
TCP/UDP listeners. (This will only leave signaling traffic unencrypted;
voice & video WebRTC traffic is always encrypted.)
@@ -288,12 +94,19 @@ Here are a few things to try:
* ensure that your TURN server uses the NAT gateway as its default route.
* Enable more verbose logging. In `coturn`, use the `verbose` setting:
```
verbose
```
or, with `eturnal`, via the shell command `eturnalctl loglevel debug`, or in the configuration file (the service needs to [reload](https://eturnal.net/documentation/#Operation) for it to become effective):
```yaml
## Logging configuration:
log_level: debug
```
... and then see if there are any clues in its logs.
* If you are using a browser-based client under Chrome, check
@@ -317,7 +130,7 @@ Here are a few things to try:
matrix client to your homeserver in your browser's network inspector. In
the response you should see `username` and `password`. Or:
* Use the following shell commands for `coturn`:
```sh
secret=staticAuthSecretHere
# The two lines below are elided in this diff; this reconstruction assumes the
# usual TURN REST API scheme: the username is an expiry timestamp joined to a
# user id, and the password is the base64-encoded HMAC-SHA1 of that username.
u=$(( $(date +%s) + 3600 )):test
p=$(echo -n $u | openssl dgst -hmac $secret -sha1 -binary | base64)
echo -e "username: $u\npassword: $p"
```
or for `eturnal`:
```sh
eturnalctl credentials
```
* Or (**coturn only**): Temporarily configure `coturn` to accept a static
username/password. To do this, comment out `use-auth-secret` and
`static-auth-secret` and add the following:
```
lt-cred-mech
```

@@ -88,6 +88,82 @@ process, for example:
```sh
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.73.0
## Legacy Prometheus metric names have now been removed
Synapse v1.69.0 included the deprecation of legacy Prometheus metric names
and offered an option to disable them.
Synapse v1.71.0 disabled legacy Prometheus metric names by default.
This version, v1.73.0, removes those legacy Prometheus metric names entirely.
This also means that the `enable_legacy_metrics` configuration option has been
removed; it will no longer be possible to re-enable the legacy metric names.
If you use metrics and have not yet updated your Grafana dashboard(s),
Prometheus console(s) or alerting rule(s), please consider doing so when upgrading
to this version.
Note that the included Grafana dashboard was updated in v1.72.0 to correct some
metric names which were missed when legacy metrics were disabled by default.
See [v1.69.0: Deprecation of legacy Prometheus metric names](#deprecation-of-legacy-prometheus-metric-names)
for more context.
# Upgrading to v1.72.0
## Dropping support for PostgreSQL 10
In line with our [deprecation policy](deprecation_policy.md), we've dropped
support for PostgreSQL 10, as it is no longer supported upstream.
This release of Synapse requires PostgreSQL 11+.
# Upgrading to v1.71.0
## Removal of the `generate_short_term_login_token` module API method
As announced with the release of [Synapse 1.69.0](#deprecation-of-the-generate_short_term_login_token-module-api-method), the deprecated `generate_short_term_login_token` module method has been removed.
Modules relying on it can instead use the `create_login_token` method.
## Changes to the events received by application services (interest)
To align with spec (changed in
[MSC3905](https://github.com/matrix-org/matrix-spec-proposals/pull/3905)), Synapse now
only considers local users to be interesting. In other words, the `users` namespace
regex is only applied against local users of the homeserver.
Please note, this probably doesn't affect the expected behavior of your application
service, since an interesting local user in a room still means all messages in the room
(from local or remote users) will still be considered interesting. And matching a room
with the `rooms` or `aliases` namespace regex will still consider all events sent in the
room to be interesting to the application service.
If one of your application service's `users` regex was intending to match a remote user,
this will no longer match as you expect. The behavioral mismatch between matching all
local users and some remote users is why the spec was changed/clarified and this
caveat is no longer supported.
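As an illustration (a hypothetical registration fragment, not from the original notes), a namespace such as the following previously could match bridged remote users like `@_bridge_foo:remote.example`, but from v1.71 it only matches users local to the homeserver:
```yaml
namespaces:
  users:
    - exclusive: false
      regex: "@_bridge_.*"
```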
## Legacy Prometheus metric names are now disabled by default
Synapse v1.71.0 disables legacy Prometheus metric names by default.
For administrators that still rely on them and have not yet had a chance to update their
uses of the metrics, it's still possible to specify `enable_legacy_metrics: true` in
the configuration to re-enable them temporarily.
Synapse v1.73.0 will **remove legacy metric names altogether** and at that point,
it will no longer be possible to re-enable them.
If you do not use metrics or you have already updated your Grafana dashboard(s),
Prometheus console(s) and alerting rule(s), there is no action needed.
See [v1.69.0: Deprecation of legacy Prometheus metric names](#deprecation-of-legacy-prometheus-metric-names).
# Upgrading to v1.69.0
## Changes to the receipts replication streams

@@ -19,7 +19,7 @@ already on your `$PATH` depending on how Synapse was installed.
Finding your user's `access_token` is client-dependent, but will usually be shown in the client's settings.
## Making an Admin API request
For security reasons, we [recommend](../../../reverse_proxy.md#synapse-administration-endpoints)
that the Admin API (`/_synapse/admin/...`) should be hidden from public view using a
reverse proxy. This means you should typically query the Admin API from a terminal on
the machine which runs Synapse.
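For instance, a typical request from that terminal might look like this (a sketch: the endpoint shown queries the server version, `<access_token>` must belong to an admin user, and port 8008 is the assumed default client listener):
```sh
curl --header "Authorization: Bearer <access_token>" \
    http://localhost:8008/_synapse/admin/v1/server_version
```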

@@ -73,12 +73,12 @@ When a request is blocked, the response will have the `errcode` `M_RESOURCE_LIMI
Synapse records several different prometheus metrics for MAU.
`synapse_admin_mau_current` records the current MAU figure for native (non-application-service) users.
`synapse_admin_mau_max` records the maximum MAU as dictated by the `max_mau_value` config value.
`synapse_admin_mau_current_mau_by_service` records the current MAU including application service users. The label `app_service` can be used
to filter by a specific service ID. This *also* includes non-application-service users under `app_service=native` .
`synapse_admin_mau_registered_reserved_users` records the number of users specified in `mau_limits_reserved_threepids` which have
registered accounts on the homeserver.

@@ -99,7 +99,7 @@ modules:
config: {}
```
---
## Server
Define your homeserver name and other base options.
@@ -159,7 +159,7 @@ including _matrix/...). This is the same URL a user might enter into the
'Custom Homeserver URL' field on their client. If you use Synapse with a
reverse proxy, this should be the URL to reach Synapse via the proxy.
Otherwise, it should be the URL to reach Synapse's client HTTP listener (see
['listeners'](#listeners) below).
Defaults to `https://<server_name>/`.
@@ -570,7 +570,7 @@ Example configuration:
```yaml
delete_stale_devices_after: 1y
```
## Homeserver blocking
Useful options for Synapse admins.
---
@@ -858,7 +858,7 @@ which are older than the room's maximum retention period. Synapse will also
filter events received over federation so that events that should have been
purged are ignored and not stored again.
The message retention policies feature is disabled by default. Please be advised
that enabling this feature carries some risk. There are known bugs with the implementation
which can cause database corruption. Setting retention to delete older history
is less risky than deleting newer history but in general caution is advised when enabling this
@@ -922,7 +922,7 @@ retention:
```yaml
interval: 1d
```
---
## TLS
Options related to TLS.
@@ -1012,7 +1012,7 @@ federation_custom_ca_list:
```yaml
- myCA3.pem
```
---
## Federation
Options related to federation.
@@ -1071,7 +1071,7 @@ Example configuration:
```yaml
allow_device_name_lookup_over_federation: true
```
---
## Caching
Options related to caching.
@@ -1185,7 +1185,7 @@ file in Synapse's `contrib` directory, you can send a `SIGHUP` signal by using
`systemctl reload matrix-synapse`.
---
## Database
Config options related to database settings.
---
@@ -1332,20 +1332,21 @@ databases:
```yaml
cp_max: 10
```
---
## Logging
Config options related to logging.
---
### `log_config`
This option specifies a yaml python logging config file as described
[here](https://docs.python.org/3/library/logging.config.html#configuration-dictionary-schema).
Example configuration:
```yaml
log_config: "CONFDIR/SERVERNAME.log.config"
```
---
## Ratelimiting
Options related to ratelimiting in Synapse.
Each ratelimiting configuration is made of two parameters:
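(The parameter list itself is elided in this diff; the two parameters are `per_second` and `burst_count`. A representative option, with values chosen purely for illustration:)
```yaml
rc_message:
  per_second: 0.2
  burst_count: 10
```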
@@ -1576,7 +1577,7 @@ Example configuration:
```yaml
federation_rr_transactions_per_room_per_second: 40
```
---
## Media Store
Config options related to Synapse's media store.
---
@@ -1766,7 +1767,7 @@ url_preview_ip_range_blacklist:
```yaml
- 'ff00::/8'
- 'fec0::/10'
```
---
### `url_preview_ip_range_whitelist`
This option sets a list of IP address CIDR ranges that the URL preview spider is allowed
@@ -1860,7 +1861,7 @@ Example configuration:
```yaml
- 'fr;q=0.8'
- '*;q=0.7'
```
---
### `oembed`
oEmbed allows for easier embedding content from a website. It can be
@@ -1877,7 +1878,7 @@ oembed:
```yaml
- oembed/my_providers.json
```
---
## Captcha
See [here](../../CAPTCHA_SETUP.md) for full details on setting up captcha.
@@ -1926,7 +1927,7 @@ Example configuration:
```yaml
recaptcha_siteverify_api: "https://my.recaptcha.site"
```
---
## TURN
Options related to adding a TURN server to Synapse.
---
@@ -1947,7 +1948,7 @@ Example configuration:
```yaml
turn_shared_secret: "YOUR_SHARED_SECRET"
```
---
### `turn_username` and `turn_password`
The username and password, if the TURN server needs them and does not use a token.
@@ -2366,7 +2367,7 @@ Example configuration:
```yaml
session_lifetime: 24h
```
---
### `refresh_access_token_lifetime`
Time that an access token remains valid for, if the session is using refresh tokens.
@@ -2422,7 +2423,7 @@
```yaml
nonrefreshable_access_token_lifetime: 24h
```
---
## Metrics
Config options related to metrics.
---
@@ -2436,31 +2437,6 @@ Example configuration:
```yaml
enable_metrics: true
```
---
### `sentry`
Use this option to enable sentry integration. Provide the DSN assigned to you by sentry
@@ -2519,7 +2495,7 @@ Example configuration:
```yaml
report_stats_endpoint: https://example.com/report-usage-stats/push
```
---
## API Configuration
Config settings related to the client/server API
---
@@ -2619,7 +2595,7 @@ Example configuration:
```yaml
form_secret: <PRIVATE STRING>
```
---
## Signing Keys
Config options relating to signing keys
---
@@ -2680,6 +2656,12 @@ is still supported for backwards-compatibility, but it is deprecated.
warning on start-up. To suppress this warning, set
`suppress_key_server_warning` to true.
If the use of a trusted key server has to be deactivated, e.g. in a private
federation or for privacy reasons, this can be realised by setting
an empty array (`trusted_key_servers: []`). Synapse will then request the keys
directly from the server that owns them; if Synapse cannot get the keys
directly, events from that server will be rejected.
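For instance, to disable trusted key servers entirely:
```yaml
trusted_key_servers: []
```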
Options for each entry in the list include:
* `server_name`: the name of the server. Required.
* `verify_keys`: an optional map from key id to base64-encoded public key.
@@ -2728,7 +2710,7 @@ Example configuration:
key_server_signing_keys_path: "key_server_signing_keys.key"
```
---
## Single sign-on integration
The following settings can be used to make Synapse use a single sign-on
provider for authentication, instead of its internal password database.
@@ -2986,10 +2968,17 @@ Options for each entry include:
For the default provider, the following settings are available:
* `subject_claim`: name of the claim containing a unique identifier
for the user. Defaults to 'sub', which OpenID Connect
compliant providers should provide.
* `picture_claim`: name of the claim containing a URL for the user's profile picture.
Defaults to 'picture', which OpenID Connect compliant providers should provide
and which has to refer to a direct image file such as a PNG, JPEG, or GIF.
Currently only supported in monolithic (single-process) server configurations
where the media repository runs within the Synapse process.
* `localpart_template`: Jinja2 template for the localpart of the MXID.
If this is not set, the user will be prompted to choose their
own username (see the documentation for the `sso_auth_account_details.html`
@@ -3014,6 +3003,15 @@ Options for each entry include:
which is set to the claims returned by the UserInfo Endpoint and/or
in the ID Token.
* `backchannel_logout_enabled`: set to `true` to process OIDC Back-Channel Logout notifications.
Those notifications are expected to be received on `/_synapse/client/oidc/backchannel_logout`.
Defaults to `false`.
* `backchannel_logout_ignore_sub`: by default, the OIDC Back-Channel Logout feature checks that the
`sub` claim matches the subject claim received during login. This check can be disabled by setting
this to `true`. Defaults to `false`.
You might want to disable this if the `subject_claim` returned by the mapping provider is not `sub`.
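A fragment showing these sub-options in context (the provider id and issuer are illustrative, and a real entry needs the usual client settings as well):
```yaml
oidc_providers:
  - idp_id: my_idp                     # illustrative provider id
    issuer: "https://idp.example.org/" # illustrative issuer
    backchannel_logout_enabled: true
    backchannel_logout_ignore_sub: true
```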
It is possible to configure Synapse to only allow logins if certain attributes
match particular values in the OIDC userinfo. The requirements can be listed under
@@ -3348,7 +3346,7 @@ email:
```yaml
email_validation: "[%(server_name)s] Validate your email"
```
---
## Push
Configuration settings related to push notifications
---
@@ -3357,6 +3355,10 @@ Configuration settings related to push notifications
This setting defines options for push notifications.
This option has a number of sub-options. They are as follows:
* `enable_push`: Enables or disables push notification calculation. Note, disabling this will also
stop unread counts being calculated for rooms. This mode of operation is intended
for homeservers which may only have bots or appservice users connected, or are otherwise
not interested in push/unread counters. This is enabled by default.
* `include_content`: Clients requesting push notifications can either have the body of
the message sent in the notification poke along with other details
like the sender, or just the event ID and room ID (`event_id_only`).
@@ -3377,15 +3379,16 @@ This option has a number of sub-options. They are as follows:
Example configuration:
```yaml
push:
  enable_push: true
  include_content: false
  group_unread_count_by_room: false
```
---
## Rooms
Config options relating to rooms.
---
### `encryption_enabled_by_default_for_room_type`
Controls whether locally-created rooms should be end-to-end encrypted by
default.
@@ -3422,7 +3425,7 @@ This option has the following sub-options:
NB. If you set this to true, and the last time the user_directory search
indexes were (re)built was before Synapse 1.44, you'll have to
rebuild the indexes in order to search through all known users.
These indexes are built the first time Synapse starts; admins can
manually trigger a rebuild via the API following the instructions
[for running background updates](../administration/admin_api/background_updates.md#run),
@@ -3627,7 +3630,7 @@ default_power_level_content_override:
```
---
## Opentracing
Configuration options related to Opentracing support.
---
@@ -3670,14 +3673,78 @@ opentracing:
false
```
---
## Coordinating workers
Configuration options related to workers which belong in the main config file
(usually called `homeserver.yaml`).
A Synapse deployment can scale horizontally by running multiple Synapse processes
called _workers_. Incoming requests are distributed between workers to handle higher
loads. Some workers are privileged and can accept requests from other workers.
As a result, the worker configuration is divided into two parts.
1. The first part (in this section of the manual) defines which shardable tasks
are delegated to privileged workers. This allows unprivileged workers to make
requests to a privileged worker to act on their behalf.
1. [The second part](#individual-worker-configuration)
controls the behaviour of individual workers in isolation.
For guidance on setting up workers, see the [worker documentation](../../workers.md).
---
### `worker_replication_secret`
A shared secret used by the replication APIs on the main process to authenticate
HTTP requests from workers.
By default this value is omitted (equivalently `null`), which means that
traffic between the workers and the main process is not authenticated.
Example configuration:
```yaml
worker_replication_secret: "secret_secret"
```
---
### `start_pushers`
Unnecessary to set if using [`pusher_instances`](#pusher_instances) with [`generic_workers`](../../workers.md#synapseappgeneric_worker).
Controls sending of push notifications on the main process. Set to `false`
if using a [pusher worker](../../workers.md#synapseapppusher). Defaults to `true`.
Example configuration:
```yaml
start_pushers: false
```
---
### `pusher_instances`
It is possible to scale the processes that handle sending push notifications to [sygnal](https://github.com/matrix-org/sygnal)
and email by running a [`generic_worker`](../../workers.md#synapseappgeneric_worker) and adding its [`worker_name`](#worker_name) to
a `pusher_instances` map. Doing so will remove handling of this function from the main
process. Multiple workers can be added to this map, in which case the work is balanced
across them. Ensure the main process and all pusher workers are restarted after changing
this option.
Example configuration for a single worker:
```yaml
pusher_instances:
- pusher_worker1
```
And for multiple workers:
```yaml
pusher_instances:
- pusher_worker1
- pusher_worker2
```
---
### `send_federation`
Unnecessary to set if using [`federation_sender_instances`](#federation_sender_instances) with [`generic_workers`](../../workers.md#synapseappgeneric_worker).
Controls sending of outbound federation transactions on the main process.
Set to `false` if using a [federation sender worker](../../workers.md#synapseappfederation_sender).
Defaults to `true`.
Example configuration:
```yaml
send_federation: false
```
@@ -3686,24 +3753,37 @@
---
### `federation_sender_instances`
It is possible to scale the processes that handle sending outbound federation requests
by running a [`generic_worker`](../../workers.md#synapseappgeneric_worker) and adding its [`worker_name`](#worker_name) to
a `federation_sender_instances` map. Doing so will remove handling of this function from
the main process. Multiple workers can be added to this map, in which case the work is
balanced across them.
This configuration setting must be shared between all workers handling federation
sending, and if changed all federation sender workers must be stopped at the same time
and then started, to ensure that all instances are running with the same config (otherwise
events may be dropped).
Example configuration for a single worker:
```yaml
federation_sender_instances:
- federation_sender1
```
And for multiple workers:
```yaml
federation_sender_instances:
- federation_sender1
- federation_sender2
```
---
### `instance_map`
When using workers this should be a map from [`worker_name`](#worker_name) to the
HTTP replication listener of the worker, if configured.
Each worker declared under [`stream_writers`](../../workers.md#stream-writers) needs
a HTTP replication listener, and that listener should be included in the `instance_map`.
(The main process also needs an HTTP replication listener, but it should not be
listed in the `instance_map`.)
Example configuration:
```yaml
instance_map:
  worker1:
    host: localhost  # host and port reconstructed for illustration; the original values are elided in this diff
    port: 8034
```
### `stream_writers`
Experimental: When using workers you can define which workers should
handle writing to streams such as event persistence and typing notifications.
Any worker specified here must also be in the [`instance_map`](#instance_map).
See the list of available streams in the
[worker documentation](../../workers.md#stream-writers).
Example configuration:
```yaml
stream_writers:
  typing: worker1  # illustrative assignment; the original example is elided in this diff
```
---
### `run_background_tasks_on`
The [worker](../../workers.md#background-tasks) that is used to run
background tasks (e.g. cleaning up expired data). If not provided this
defaults to the main process.
Example configuration:
```yaml
run_background_tasks_on: worker1
```
---
### `redis`
Configuration for Redis when using workers. This *must* be enabled when using workers.
This setting has the following sub-options:
* `enabled`: whether to use Redis support. Defaults to false.
* `host` and `port`: Optional host and port to use to connect to redis. Defaults to
@@ -3765,7 +3837,143 @@
```yaml
redis:
  port: 6379
  password: <secret_password>
```
---
## Individual worker configuration
These options configure an individual worker, in its worker configuration file.
They should not be provided when configuring the main process.
Note also the configuration above for
[coordinating a cluster of workers](#coordinating-workers).
For guidance on setting up workers, see the [worker documentation](../../workers.md).
---
### `worker_app`
The type of worker. The currently available worker applications are listed
in [worker documentation](../../workers.md#available-worker-applications).
The most common worker is the
[`synapse.app.generic_worker`](../../workers.md#synapseappgeneric_worker).
Example configuration:
```yaml
worker_app: synapse.app.generic_worker
```
---
### `worker_name`
A unique name for the worker. The worker needs a name so that it can be
referenced by other options and identified in log files. We strongly recommend
giving each worker a unique `worker_name`.
Example configuration:
```yaml
worker_name: generic_worker1
```
---
### `worker_replication_host`
The HTTP replication endpoint that the worker should talk to on the main Synapse process.
The main Synapse process defines this with a `replication` resource in
[`listeners` option](#listeners).
Example configuration:
```yaml
worker_replication_host: 127.0.0.1
```
---
### `worker_replication_http_port`
The HTTP replication port that the worker should talk to on the main Synapse process.
The main Synapse process defines this with a `replication` resource in
[`listeners` option](#listeners).
Example configuration:
```yaml
worker_replication_http_port: 9093
```
---
### `worker_replication_http_tls`
Whether TLS should be used for talking to the HTTP replication port on the main
Synapse process.
The main Synapse process defines this with the `tls` option on its [listener](#listeners) that
has the `replication` resource enabled.
**Please note:** by default, it is not safe to expose replication ports to the
public Internet, even with TLS enabled.
See [`worker_replication_secret`](#worker_replication_secret).
Defaults to `false`.
*Added in Synapse 1.72.0.*
Example configuration:
```yaml
worker_replication_http_tls: true
```
---
### `worker_listeners`
A worker can handle HTTP requests. To do so, a `worker_listeners` option
must be declared, in the same way as the [`listeners` option](#listeners)
in the shared config.
Workers declared in [`stream_writers`](#stream_writers) will need to include a
`replication` listener here, in order to accept internal HTTP requests from
other workers.
Example configuration:
```yaml
worker_listeners:
  - type: http
    port: 8083
    resources:
      - names: [client, federation]
```
---
### `worker_daemonize`
Specifies whether the worker should be started as a daemon process.
If Synapse is being managed by [systemd](../../systemd-with-workers/README.md), this option
must be omitted or set to `false`.
Defaults to `false`.
Example configuration:
```yaml
worker_daemonize: true
```
---
### `worker_pid_file`
When running a worker as a daemon, we need a place to store the
[PID](https://en.wikipedia.org/wiki/Process_identifier) of the worker.
This option defines the location of that "pid file".
This option is required if `worker_daemonize` is `true` and ignored
otherwise. It has no default.
See also the [`pid_file`](#pid_file) option for the main Synapse process.
Example configuration:
```yaml
worker_pid_file: DATADIR/generic_worker1.pid
```
---
### `worker_log_config`
This option specifies a yaml python logging config file as described
[here](https://docs.python.org/3/library/logging.config.html#configuration-dictionary-schema).
See also the [`log_config`](#log_config) option for the main Synapse process.
Example configuration:
```yaml
worker_log_config: /etc/matrix-synapse/generic-worker-log.yaml
```
---
## Background Updates
Configuration settings related to background updates.
---
@@ -3794,4 +4002,3 @@
```yaml
background_updates:
  min_batch_size: 10
  default_batch_size: 50
```

@@ -88,10 +88,12 @@ shared configuration file.
### Shared configuration
Normally, only a couple of changes are needed to make an existing configuration
file suitable for use with workers. First, you need to enable an
["HTTP replication listener"](usage/configuration/config_documentation.md#listeners)
for the main process; and secondly, you need to enable
[redis-based replication](usage/configuration/config_documentation.md#redis).
Optionally, a [shared secret](usage/configuration/config_documentation.md#worker_replication_secret)
can be used to authenticate HTTP traffic between workers. For example:
```yaml
# extend the existing `listeners` section. This defines the ports that the
# main process will listen on.
# [... listener and secret lines elided in this diff ...]
redis:
  enabled: true
```
See the [configuration manual](usage/configuration/config_documentation.md)
for the full documentation of each option.
Under **no circumstances** should the replication listener be exposed to the
public internet; replication traffic is:
* always unencrypted
* unauthenticated, unless [`worker_replication_secret`](usage/configuration/config_documentation.md#worker_replication_secret)
is configured
### Worker configuration
In the config file for each worker, you must specify:
* The type of worker ([`worker_app`](usage/configuration/config_documentation.md#worker_app)).
The currently available worker applications are listed [below](#available-worker-applications).
* A unique name for the worker ([`worker_name`](usage/configuration/config_documentation.md#worker_name)).
* The HTTP replication endpoint that it should talk to on the main synapse process
([`worker_replication_host`](usage/configuration/config_documentation.md#worker_replication_host) and
[`worker_replication_http_port`](usage/configuration/config_documentation.md#worker_replication_http_port)).
* If handling HTTP requests, a [`worker_listeners`](usage/configuration/config_documentation.md#worker_listeners) option
with an `http` listener.
* **Synapse 1.72 and older:** if handling the `^/_matrix/client/v3/keys/upload` endpoint, the HTTP URI for
the main process (`worker_main_http_uri`). This config option is no longer required and is ignored when running Synapse 1.73 and newer.
For example:
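(The original example file is elided in this diff; below is a sketch assembled from the options above, with an illustrative worker name, ports, and log path:)
```yaml
worker_app: synapse.app.generic_worker
worker_name: generic_worker1

# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

worker_listeners:
  - type: http
    port: 8083
    resources:
      - names: [client, federation]

worker_log_config: /etc/matrix-synapse/generic-worker-log.yaml
```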
@@ -146,7 +151,6 @@ plain HTTP endpoint on port 8083 separately serving various endpoints, e.g.
Obviously you should configure your reverse-proxy to route the relevant
endpoints to the worker (`localhost:8083` in the above example).
### Running Synapse with workers
Finally, you need to start your worker processes. This can be done with either
@@ -187,6 +191,7 @@ information.
^/_matrix/federation/(v1|v2)/send_leave/
^/_matrix/federation/(v1|v2)/invite/
^/_matrix/federation/v1/event_auth/
^/_matrix/federation/v1/timestamp_to_event/
^/_matrix/federation/v1/exchange_third_party_invite/
^/_matrix/federation/v1/user/devices/
^/_matrix/key/v2/query
@@ -214,10 +219,10 @@ information.
^/_matrix/client/(api/v1|r0|v3|unstable)/voip/turnServer$
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/event/
^/_matrix/client/(api/v1|r0|v3|unstable)/joined_rooms$
^/_matrix/client/v1/rooms/.*/timestamp_to_event$
^/_matrix/client/(api/v1|r0|v3|unstable)/search$
# Encryption requests
^/_matrix/client/(r0|v3|unstable)/keys/query$
^/_matrix/client/(r0|v3|unstable)/keys/changes$
^/_matrix/client/(r0|v3|unstable)/keys/claim$
@@ -288,7 +293,8 @@ For multiple workers not handling the SSO endpoints properly, see
[#9427](https://github.com/matrix-org/synapse/issues/9427).
Note that a [HTTP listener](usage/configuration/config_documentation.md#listeners)
with `client` and `federation` `resources` must be configured in the
[`worker_listeners`](usage/configuration/config_documentation.md#worker_listeners)
option in the worker config.
#### Load balancing
@@ -300,9 +306,11 @@ may wish to run multiple groups of workers handling different endpoints so that
load balancing can be done in different ways.
For `/sync` and `/initialSync` requests it will be more efficient if all
requests from a particular user are routed to a single instance. This can
be done e.g. in nginx via IP `hash $http_x_forwarded_for;` or via
`hash $http_authorization consistent;` which contains the user's access token.
Admins may additionally wish to separate out `/sync`
requests that have a `since` query parameter from those that don't (and
`/initialSync`), as requests that don't are "initial syncs", which happen
when a user logs in on a new device and can be *very* resource intensive, so
@@ -331,9 +339,10 @@ of the main process to a particular worker.
To enable this, the worker must have a
[HTTP `replication` listener](usage/configuration/config_documentation.md#listeners) configured,
have a [`worker_name`](usage/configuration/config_documentation.md#worker_name)
and be listed in the [`instance_map`](usage/configuration/config_documentation.md#instance_map)
config. The same worker can handle multiple streams, but unless otherwise documented,
each stream can only have a single writer.
For example, to move event persistence off to a dedicated worker, the shared
configuration would include:
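(The example itself is elided in this diff; a sketch under the assumption of a single persister named `event_persister1` on an illustrative host and port:)
```yaml
instance_map:
  event_persister1:
    host: localhost
    port: 8034

stream_writers:
  events: event_persister1
```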
@@ -360,9 +369,26 @@ streams and the endpoints associated with them:
##### The `events` stream
The `events` stream experimentally supports having multiple writer workers, where load
is sharded between them by room ID. Each writer is called an _event persister_. They are
responsible for:
- receiving new events,
- linking them to those already in the room [DAG](development/room-dag-concepts.md),
- persisting them to the DB, and finally
- updating the events stream.
Because load is sharded in this way, you *must* restart all worker instances when
adding or removing event persisters.
An `event_persister` should not be mistaken for an `event_creator`.
An `event_creator` listens for requests from clients to create new events and does
so. It will then pass those events over HTTP replication to any configured event
persisters (or the main process if none are configured).
Note that `event_creator`s and `event_persister`s are implemented using the same
[`synapse.app.generic_worker`](#synapseappgeneric_worker).
An example [`stream_writers`](usage/configuration/config_documentation.md#stream_writers)
configuration with multiple writers:
```yaml
stream_writers:
  events:
    - event_persister1
    - event_persister2  # illustrative names; the original example is elided in this diff
```
@@ -416,16 +442,18 @@ worker. Background tasks are run periodically or started via replication. Exactl
which tasks are configured to run depends on your Synapse configuration (e.g. if
stats is enabled). This worker doesn't handle any REST endpoints itself.
To enable this, the worker must have a unique
[`worker_name`](usage/configuration/config_documentation.md#worker_name)
and can be configured to run background tasks. For example, to move background tasks
to a dedicated worker, the shared configuration would include:
```yaml
run_background_tasks_on: background_worker
```
You might also wish to investigate the
[`update_user_directory_from_worker`](#updating-the-user-directory) and
[`media_instance_running_background_jobs`](#synapseappmedia_repository) settings.
An example for a dedicated background worker instance:
@@ -477,14 +505,21 @@ worker application type.
### `synapse.app.pusher`
It is likely this option will be deprecated in the future and is not recommended for new
installations. Instead, [use `synapse.app.generic_worker` with the `pusher_instances`](usage/configuration/config_documentation.md#pusher_instances).
Handles sending push notifications to sygnal and email. Doesn't handle any
REST endpoints itself, but you should set
[`start_pushers: false`](usage/configuration/config_documentation.md#start_pushers) in the
shared configuration file to stop the main synapse sending push notifications.
To run multiple instances at once the
[`pusher_instances`](usage/configuration/config_documentation.md#pusher_instances)
option should list all pusher instances by their
[`worker_name`](usage/configuration/config_documentation.md#worker_name), e.g.:
```yaml
start_pushers: false
pusher_instances:
- pusher_worker1
- pusher_worker2
```
@@ -511,16 +546,24 @@ Note this worker cannot be load-balanced: only one instance should be active.
### `synapse.app.federation_sender`
It is likely this option will be deprecated in the future and is not recommended for
new installations. Instead, [use `synapse.app.generic_worker` with the `federation_sender_instances`](usage/configuration/config_documentation.md#federation_sender_instances).
Handles sending federation traffic to other servers. Doesn't handle any
REST endpoints itself, but you should set
[`send_federation: false`](usage/configuration/config_documentation.md#send_federation)
in the shared configuration file to stop the main synapse sending this traffic.
If running multiple federation senders then you must list each
instance in the
[`federation_sender_instances`](usage/configuration/config_documentation.md#federation_sender_instances)
option by their
[`worker_name`](usage/configuration/config_documentation.md#worker_name).
All instances must be stopped and started when adding or removing instances.
For example:
```yaml
send_federation: false
federation_sender_instances:
- federation_sender1
- federation_sender2
```
@@ -547,7 +590,9 @@ Handles the media repository. It can handle all endpoints starting with:
^/_synapse/admin/v1/quarantine_media/.*$
^/_synapse/admin/v1/users/.*/media$
You should also set
[`enable_media_repo: False`](usage/configuration/config_documentation.md#enable_media_repo)
in the shared configuration
file to stop the main synapse running background jobs related to managing the
media repository. Note that doing so will prevent the main process from being
able to handle the above endpoints.
@@ -600,7 +645,9 @@ equivalent to `synapse.app.generic_worker`:
* `synapse.app.client_reader`
* `synapse.app.event_creator`
* `synapse.app.federation_reader`
* `synapse.app.federation_sender`
* `synapse.app.frontend_proxy`
* `synapse.app.pusher`
* `synapse.app.synchrotron`

@@ -11,6 +11,7 @@ warn_unused_ignores = True
local_partial_types = True
no_implicit_optional = True
disallow_untyped_defs = True
strict_equality = True
files =
docker/,
@@ -56,24 +57,8 @@ exclude = (?x)
|tests/rest/media/v1/test_media_storage.py
|tests/server.py
|tests/server_notices/test_resource_limits_server_notices.py
|tests/test_metrics.py
|tests/test_state.py
|tests/test_terms_auth.py
|tests/util/caches/test_cached_call.py
|tests/util/caches/test_deferred_cache.py
|tests/util/caches/test_descriptors.py
|tests/util/caches/test_response_cache.py
|tests/util/caches/test_ttlcache.py
|tests/util/test_async_helpers.py
|tests/util/test_batching_queue.py
|tests/util/test_dict_cache.py
|tests/util/test_expiring_cache.py
|tests/util/test_file_consumer.py
|tests/util/test_linearizer.py
|tests/util/test_logcontext.py
|tests/util/test_lrucache.py
|tests/util/test_rwlock.py
|tests/util/test_wheel_timer.py
)$
[mypy-synapse.federation.transport.client]
@@ -106,6 +91,9 @@ disallow_untyped_defs = False
[mypy-tests.handlers.test_user_directory]
disallow_untyped_defs = True
[mypy-tests.metrics.test_background_process_metrics]
disallow_untyped_defs = True
[mypy-tests.push.test_bulk_push_rule_evaluator]
disallow_untyped_defs = True
@@ -115,9 +103,15 @@ disallow_untyped_defs = True
[mypy-tests.state.test_profile]
disallow_untyped_defs = True
[mypy-tests.storage.test_id_generators]
disallow_untyped_defs = True
[mypy-tests.storage.test_profile]
disallow_untyped_defs = True
[mypy-tests.handlers.test_sso]
disallow_untyped_defs = True
[mypy-tests.storage.test_user_directory]
disallow_untyped_defs = True
@@ -127,9 +121,17 @@ disallow_untyped_defs = True
[mypy-tests.federation.transport.test_client]
disallow_untyped_defs = True
[mypy-tests.utils]
[mypy-tests.util.caches.*]
disallow_untyped_defs = True
[mypy-tests.util.caches.test_descriptors]
disallow_untyped_defs = False
[mypy-tests.util.*]
disallow_untyped_defs = True
[mypy-tests.utils]
disallow_untyped_defs = True
;; Dependencies without annotations
;; Before ignoring a module, check to see if type stubs are available.

poetry.lock (generated, 689 changes): diff suppressed because it is too large.
@@ -57,7 +57,7 @@ manifest-path = "rust/Cargo.toml"
[tool.poetry]
name = "matrix-synapse"
version = "1.70.1"
version = "1.73.0rc2"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"
@@ -192,7 +192,7 @@ psycopg2 = { version = ">=2.8", markers = "platform_python_implementation != 'Py
psycopg2cffi = { version = ">=2.8", markers = "platform_python_implementation == 'PyPy'", optional = true }
psycopg2cffi-compat = { version = "==1.1", markers = "platform_python_implementation == 'PyPy'", optional = true }
pysaml2 = { version = ">=4.5.0", optional = true }
authlib = { version = ">=0.14.0", optional = true }
authlib = { version = ">=0.15.1", optional = true }
# systemd-python is necessary for logging to the systemd journal via
# `systemd.journal.JournalHandler`, as is documented in
# `contrib/systemd/log_config.yaml`.

@@ -33,10 +33,12 @@ fn bench_match_exact(b: &mut Bencher) {
let eval = PushRuleEvaluator::py_new(
flattened_keys,
10,
Some(0),
Default::default(),
Default::default(),
true,
vec![],
false,
)
.unwrap();
@@ -67,10 +69,12 @@ fn bench_match_word(b: &mut Bencher) {
let eval = PushRuleEvaluator::py_new(
flattened_keys,
10,
Some(0),
Default::default(),
Default::default(),
true,
vec![],
false,
)
.unwrap();
@@ -101,10 +105,12 @@ fn bench_match_word_miss(b: &mut Bencher) {
let eval = PushRuleEvaluator::py_new(
flattened_keys,
10,
Some(0),
Default::default(),
Default::default(),
true,
vec![],
false,
)
.unwrap();
@@ -135,10 +141,12 @@ fn bench_eval_message(b: &mut Bencher) {
let eval = PushRuleEvaluator::py_new(
flattened_keys,
10,
Some(0),
Default::default(),
Default::default(),
true,
vec![],
false,
)
.unwrap();

@@ -0,0 +1,77 @@
// Copyright 2022 The Matrix.org Foundation C.I.C.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![feature(test)]
use synapse::tree_cache::TreeCache;
use test::Bencher;
extern crate test;
#[bench]
fn bench_tree_cache_get_non_empty(b: &mut Bencher) {
    let mut cache: TreeCache<&str, &str> = TreeCache::new();
    cache.set(["a", "b", "c", "d"], "f").unwrap();
    b.iter(|| cache.get(&["a", "b", "c", "d"]));
}

#[bench]
fn bench_tree_cache_get_empty(b: &mut Bencher) {
    let cache: TreeCache<&str, &str> = TreeCache::new();
    b.iter(|| cache.get(&["a", "b", "c", "d"]));
}

#[bench]
fn bench_tree_cache_set(b: &mut Bencher) {
    let mut cache: TreeCache<&str, &str> = TreeCache::new();
    b.iter(|| cache.set(["a", "b", "c", "d"], "f").unwrap());
}

#[bench]
fn bench_tree_cache_length(b: &mut Bencher) {
    let mut cache: TreeCache<u32, u32> = TreeCache::new();
    for c1 in 0..=10 {
        for c2 in 0..=10 {
            for c3 in 0..=10 {
                for c4 in 0..=10 {
                    cache.set([c1, c2, c3, c4], 1).unwrap()
                }
            }
        }
    }
    b.iter(|| cache.len());
}

#[bench]
fn tree_cache_iterate(b: &mut Bencher) {
    let mut cache: TreeCache<u32, u32> = TreeCache::new();
    for c1 in 0..=10 {
        for c2 in 0..=10 {
            for c3 in 0..=10 {
                for c4 in 0..=10 {
                    cache.set([c1, c2, c3, c4], 1).unwrap()
                }
            }
        }
    }
    b.iter(|| cache.items().count());
}

@@ -1,6 +1,7 @@
use pyo3::prelude::*;
pub mod push;
pub mod tree_cache;
/// Returns the hash of all the rust source files at the time it was compiled.
///
@@ -26,6 +27,7 @@ fn synapse_rust(py: Python<'_>, m: &PyModule) -> PyResult<()> {
m.add_function(wrap_pyfunction!(get_rust_file_digest, m)?)?;
push::register_module(py, m)?;
tree_cache::binding::register_module(py, m)?;
Ok(())
}

@@ -25,6 +25,7 @@ use crate::push::Action;
use crate::push::Condition;
use crate::push::EventMatchCondition;
use crate::push::PushRule;
use crate::push::RelatedEventMatchCondition;
use crate::push::SetTweak;
use crate::push::TweakValue;
@@ -114,6 +115,22 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.im.nheko.msc3664.reply"),
priority_class: 5,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::RelatedEventMatch(
RelatedEventMatchCondition {
key: Some(Cow::Borrowed("sender")),
pattern: None,
pattern_type: Some(Cow::Borrowed("user_id")),
rel_type: Cow::Borrowed("m.in_reply_to"),
include_fallbacks: None,
},
))]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_ACTION, SOUND_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.contains_display_name"),
priority_class: 5,
@@ -257,6 +274,156 @@ pub const BASE_APPEND_UNDERRIDE_RULES: &[PushRule] = &[
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed(
"global/underride/.org.matrix.msc3933.rule.extensible.encrypted_room_one_to_one",
),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
// MSC3933: Type changed from template rule - see MSC.
pattern: Some(Cow::Borrowed("org.matrix.msc1767.encrypted")),
pattern_type: None,
})),
Condition::Known(KnownCondition::RoomMemberCount {
is: Some(Cow::Borrowed("2")),
}),
// MSC3933: Add condition on top of template rule - see MSC.
Condition::Known(KnownCondition::RoomVersionSupports {
// RoomVersionFeatures::ExtensibleEvents.as_str(), ideally
feature: Cow::Borrowed("org.matrix.msc3932.extensible_events"),
}),
]),
actions: Cow::Borrowed(&[Action::Notify, SOUND_ACTION, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed(
"global/underride/.org.matrix.msc3933.rule.extensible.message.room_one_to_one",
),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
// MSC3933: Type changed from template rule - see MSC.
pattern: Some(Cow::Borrowed("org.matrix.msc1767.message")),
pattern_type: None,
})),
Condition::Known(KnownCondition::RoomMemberCount {
is: Some(Cow::Borrowed("2")),
}),
// MSC3933: Add condition on top of template rule - see MSC.
Condition::Known(KnownCondition::RoomVersionSupports {
// RoomVersionFeatures::ExtensibleEvents.as_str(), ideally
feature: Cow::Borrowed("org.matrix.msc3932.extensible_events"),
}),
]),
actions: Cow::Borrowed(&[Action::Notify, SOUND_ACTION, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed(
"global/underride/.org.matrix.msc3933.rule.extensible.file.room_one_to_one",
),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
// MSC3933: Type changed from template rule - see MSC.
pattern: Some(Cow::Borrowed("org.matrix.msc1767.file")),
pattern_type: None,
})),
Condition::Known(KnownCondition::RoomMemberCount {
is: Some(Cow::Borrowed("2")),
}),
// MSC3933: Add condition on top of template rule - see MSC.
Condition::Known(KnownCondition::RoomVersionSupports {
// RoomVersionFeatures::ExtensibleEvents.as_str(), ideally
feature: Cow::Borrowed("org.matrix.msc3932.extensible_events"),
}),
]),
actions: Cow::Borrowed(&[Action::Notify, SOUND_ACTION, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed(
"global/underride/.org.matrix.msc3933.rule.extensible.image.room_one_to_one",
),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
// MSC3933: Type changed from template rule - see MSC.
pattern: Some(Cow::Borrowed("org.matrix.msc1767.image")),
pattern_type: None,
})),
Condition::Known(KnownCondition::RoomMemberCount {
is: Some(Cow::Borrowed("2")),
}),
// MSC3933: Add condition on top of template rule - see MSC.
Condition::Known(KnownCondition::RoomVersionSupports {
// RoomVersionFeatures::ExtensibleEvents.as_str(), ideally
feature: Cow::Borrowed("org.matrix.msc3932.extensible_events"),
}),
]),
actions: Cow::Borrowed(&[Action::Notify, SOUND_ACTION, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed(
"global/underride/.org.matrix.msc3933.rule.extensible.video.room_one_to_one",
),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
// MSC3933: Type changed from template rule - see MSC.
pattern: Some(Cow::Borrowed("org.matrix.msc1767.video")),
pattern_type: None,
})),
Condition::Known(KnownCondition::RoomMemberCount {
is: Some(Cow::Borrowed("2")),
}),
// MSC3933: Add condition on top of template rule - see MSC.
Condition::Known(KnownCondition::RoomVersionSupports {
// RoomVersionFeatures::ExtensibleEvents.as_str(), ideally
feature: Cow::Borrowed("org.matrix.msc3932.extensible_events"),
}),
]),
actions: Cow::Borrowed(&[Action::Notify, SOUND_ACTION, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed(
"global/underride/.org.matrix.msc3933.rule.extensible.audio.room_one_to_one",
),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
// MSC3933: Type changed from template rule - see MSC.
pattern: Some(Cow::Borrowed("org.matrix.msc1767.audio")),
pattern_type: None,
})),
Condition::Known(KnownCondition::RoomMemberCount {
is: Some(Cow::Borrowed("2")),
}),
// MSC3933: Add condition on top of template rule - see MSC.
Condition::Known(KnownCondition::RoomVersionSupports {
// RoomVersionFeatures::ExtensibleEvents.as_str(), ideally
feature: Cow::Borrowed("org.matrix.msc3932.extensible_events"),
}),
]),
actions: Cow::Borrowed(&[Action::Notify, SOUND_ACTION, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.m.rule.message"),
priority_class: 1,
@@ -285,6 +452,126 @@ pub const BASE_APPEND_UNDERRIDE_RULES: &[PushRule] = &[
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.org.matrix.msc1767.rule.extensible.encrypted"),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
// MSC3933: Type changed from template rule - see MSC.
pattern: Some(Cow::Borrowed("m.encrypted")),
pattern_type: None,
})),
// MSC3933: Add condition on top of template rule - see MSC.
Condition::Known(KnownCondition::RoomVersionSupports {
// RoomVersionFeatures::ExtensibleEvents.as_str(), ideally
feature: Cow::Borrowed("org.matrix.msc3932.extensible_events"),
}),
]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.org.matrix.msc1767.rule.extensible.message"),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
// MSC3933: Type changed from template rule - see MSC.
pattern: Some(Cow::Borrowed("m.message")),
pattern_type: None,
})),
// MSC3933: Add condition on top of template rule - see MSC.
Condition::Known(KnownCondition::RoomVersionSupports {
// RoomVersionFeatures::ExtensibleEvents.as_str(), ideally
feature: Cow::Borrowed("org.matrix.msc3932.extensible_events"),
}),
]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.org.matrix.msc1767.rule.extensible.file"),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
// MSC3933: Type changed from template rule - see MSC.
pattern: Some(Cow::Borrowed("m.file")),
pattern_type: None,
})),
// MSC3933: Add condition on top of template rule - see MSC.
Condition::Known(KnownCondition::RoomVersionSupports {
// RoomVersionFeatures::ExtensibleEvents.as_str(), ideally
feature: Cow::Borrowed("org.matrix.msc3932.extensible_events"),
}),
]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.org.matrix.msc1767.rule.extensible.image"),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
// MSC3933: Type changed from template rule - see MSC.
pattern: Some(Cow::Borrowed("m.image")),
pattern_type: None,
})),
// MSC3933: Add condition on top of template rule - see MSC.
Condition::Known(KnownCondition::RoomVersionSupports {
// RoomVersionFeatures::ExtensibleEvents.as_str(), ideally
feature: Cow::Borrowed("org.matrix.msc3932.extensible_events"),
}),
]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.org.matrix.msc1767.rule.extensible.video"),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
// MSC3933: Type changed from template rule - see MSC.
pattern: Some(Cow::Borrowed("m.video")),
pattern_type: None,
})),
// MSC3933: Add condition on top of template rule - see MSC.
Condition::Known(KnownCondition::RoomVersionSupports {
// RoomVersionFeatures::ExtensibleEvents.as_str(), ideally
feature: Cow::Borrowed("org.matrix.msc3932.extensible_events"),
}),
]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.org.matrix.msc1767.rule.extensible.audio"),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
// MSC3933: Type changed from template rule - see MSC.
pattern: Some(Cow::Borrowed("m.audio")),
pattern_type: None,
})),
// MSC3933: Add condition on top of template rule - see MSC.
Condition::Known(KnownCondition::RoomVersionSupports {
// RoomVersionFeatures::ExtensibleEvents.as_str(), ideally
feature: Cow::Borrowed("org.matrix.msc3932.extensible_events"),
}),
]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.im.vector.jitsi"),
priority_class: 1,


@@ -23,11 +23,39 @@ use regex::Regex;
use super::{
utils::{get_glob_matcher, get_localpart_from_id, GlobMatchType},
Action, Condition, EventMatchCondition, FilteredPushRules, KnownCondition,
RelatedEventMatchCondition,
};
lazy_static! {
/// Used to parse the `is` clause in the room member count condition.
static ref INEQUALITY_EXPR: Regex = Regex::new(r"^([=<>]*)([0-9]+)$").expect("valid regex");
/// Used to determine which MSC3931 room version feature flags are actually known to
/// the push evaluator.
static ref KNOWN_RVER_FLAGS: Vec<String> = vec![
RoomVersionFeatures::ExtensibleEvents.as_str().to_string(),
];
/// The "safe" rule IDs which are not affected by MSC3932's behaviour (room versions which
/// declare Extensible Events support ultimately *disable* push rules which do not declare
/// *any* MSC3931 room_version_supports condition).
static ref SAFE_EXTENSIBLE_EVENTS_RULE_IDS: Vec<String> = vec![
"global/override/.m.rule.master".to_string(),
"global/override/.m.rule.roomnotif".to_string(),
"global/content/.m.rule.contains_user_name".to_string(),
];
}
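Taken together, these two statics implement the MSC3932 gating described above: in a room version that advertises extensible-events support, a rule only fires if it either carries its own `room_version_supports` condition or appears on the safe list. A condensed sketch of that predicate (the real check lives inline in `PushRuleEvaluator::run` below):
// Condensed sketch of the MSC3932 gate applied to each rule in `run`.
fn rule_is_gated_off(
    rule_id: &str,
    has_room_version_condition: bool,
    room_supports_extensible_events: bool,
) -> bool {
    room_supports_extensible_events
        && !has_room_version_condition
        && !SAFE_EXTENSIBLE_EVENTS_RULE_IDS
            .iter()
            .any(|id| id.as_str() == rule_id)
}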
enum RoomVersionFeatures {
ExtensibleEvents,
}
impl RoomVersionFeatures {
fn as_str(&self) -> &'static str {
match self {
RoomVersionFeatures::ExtensibleEvents => "org.matrix.msc3932.extensible_events",
}
}
}
/// Allows running a set of push rules against a particular event.
@@ -49,17 +77,36 @@ pub struct PushRuleEvaluator {
/// The power level of the sender of the event, or None if the event is an
/// outlier.
sender_power_level: Option<i64>,
/// The related events, indexed by relation type. Flattened in the same manner as
/// `flattened_keys`.
related_events_flattened: BTreeMap<String, BTreeMap<String, String>>,
/// Whether MSC3664 (push rules for related events) is enabled.
related_event_match_enabled: bool,
/// If MSC3931 is applicable, the feature flags for the room version.
room_version_feature_flags: Vec<String>,
/// Whether MSC3931 (room version feature flags) is enabled. Usually controlled
/// by the same flag as MSC1767 (extensible events core).
msc3931_enabled: bool,
}
#[pymethods]
impl PushRuleEvaluator {
/// Create a new `PushRuleEvaluator`. See struct docstring for details.
#[allow(clippy::too_many_arguments)]
#[new]
pub fn py_new(
flattened_keys: BTreeMap<String, String>,
room_member_count: u64,
sender_power_level: Option<i64>,
notification_power_levels: BTreeMap<String, i64>,
related_events_flattened: BTreeMap<String, BTreeMap<String, String>>,
related_event_match_enabled: bool,
room_version_feature_flags: Vec<String>,
msc3931_enabled: bool,
) -> Result<Self, Error> {
let body = flattened_keys
.get("content.body")
@@ -72,6 +119,10 @@ impl PushRuleEvaluator {
room_member_count,
notification_power_levels,
sender_power_level,
related_events_flattened,
related_event_match_enabled,
room_version_feature_flags,
msc3931_enabled,
})
}
@@ -94,7 +145,19 @@ impl PushRuleEvaluator {
continue;
}
let rule_id = &push_rule.rule_id().to_string();
let extev_flag = &RoomVersionFeatures::ExtensibleEvents.as_str().to_string();
let supports_extensible_events = self.room_version_feature_flags.contains(extev_flag);
let safe_from_rver_condition = SAFE_EXTENSIBLE_EVENTS_RULE_IDS.contains(rule_id);
let mut has_rver_condition = false;
for condition in push_rule.conditions.iter() {
has_rver_condition |= matches!(
condition,
// per MSC3932, we just need *any* room version condition to match
Condition::Known(KnownCondition::RoomVersionSupports { feature: _ }),
);
match self.match_condition(condition, user_id, display_name) {
Ok(true) => {}
Ok(false) => continue 'outer,
@@ -105,6 +168,13 @@ impl PushRuleEvaluator {
}
}
// MSC3932: Disable push rules in extensible event-supporting room versions if they
// don't describe *any* MSC3931 room version condition, unless the rule is on the
// safe list.
if !has_rver_condition && !safe_from_rver_condition && supports_extensible_events {
continue;
}
let actions = push_rule
.actions
.iter()
@@ -156,6 +226,9 @@ impl PushRuleEvaluator {
KnownCondition::EventMatch(event_match) => {
self.match_event_match(event_match, user_id)?
}
KnownCondition::RelatedEventMatch(event_match) => {
self.match_related_event_match(event_match, user_id)?
}
KnownCondition::ContainsDisplayName => {
if let Some(dn) = display_name {
if !dn.is_empty() {
@@ -189,6 +262,15 @@ impl PushRuleEvaluator {
false
}
}
KnownCondition::RoomVersionSupports { feature } => {
if !self.msc3931_enabled {
false
} else {
let flag = feature.to_string();
KNOWN_RVER_FLAGS.contains(&flag)
&& self.room_version_feature_flags.contains(&flag)
}
}
};
Ok(result)
@@ -239,6 +321,79 @@ impl PushRuleEvaluator {
compiled_pattern.is_match(haystack)
}
/// Evaluates a `related_event_match` condition. (MSC3664)
fn match_related_event_match(
&self,
event_match: &RelatedEventMatchCondition,
user_id: Option<&str>,
) -> Result<bool, Error> {
// First check if related event matching is enabled...
if !self.related_event_match_enabled {
return Ok(false);
}
// Get the related event; the condition cannot match if there is none.
let event = if let Some(event) = self.related_events_flattened.get(&*event_match.rel_type) {
event
} else {
return Ok(false);
};
// If we are not matching fallbacks, don't match if the special key indicating
// that this is a fallback relation is present.
if !event_match.include_fallbacks.unwrap_or(false)
&& event.contains_key("im.vector.is_falling_back")
{
return Ok(false);
}
// If no key is given, the mere existence of the related event counts as a
// match.
let key = if let Some(key) = &event_match.key {
key
} else {
return Ok(true);
};
let pattern = if let Some(pattern) = &event_match.pattern {
pattern
} else if let Some(pattern_type) = &event_match.pattern_type {
// The `pattern_type` can either be "user_id" or "user_localpart",
// either way if we don't have a `user_id` then the condition can't
// match.
let user_id = if let Some(user_id) = user_id {
user_id
} else {
return Ok(false);
};
match &**pattern_type {
"user_id" => user_id,
"user_localpart" => get_localpart_from_id(user_id)?,
_ => return Ok(false),
}
} else {
return Ok(false);
};
let haystack = if let Some(haystack) = event.get(&**key) {
haystack
} else {
return Ok(false);
};
// For the content.body we match against "words", but for everything
// else we match against the entire value.
let match_type = if key == "content.body" {
GlobMatchType::Word
} else {
GlobMatchType::Whole
};
let mut compiled_pattern = get_glob_matcher(pattern, match_type)?;
compiled_pattern.is_match(haystack)
}
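For reference, the wire form of a condition this method evaluates looks like the following (the field names match `RelatedEventMatchCondition` and the deserialization test further down; the values are purely illustrative):
// Illustrative MSC3664 condition as it would appear inside a push rule:
//     {
//         "kind": "im.nheko.msc3664.related_event_match",
//         "rel_type": "m.in_reply_to",
//         "key": "content.body",
//         "pattern": "coffee"
//     }
// Setting "include_fallbacks": true would additionally allow related events
// carrying the "im.vector.is_falling_back" marker to match.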
/// Match the member count against an 'is' condition.
/// The `is` condition can be things like '>2', '==3' or even just '4'.
fn match_member_count(&self, is: &str) -> Result<bool, Error> {
@@ -267,9 +422,70 @@ impl PushRuleEvaluator {
fn push_rule_evaluator() {
let mut flattened_keys = BTreeMap::new();
flattened_keys.insert("content.body".to_string(), "foo bar bob hello".to_string());
let evaluator =
PushRuleEvaluator::py_new(flattened_keys, 10, Some(0), BTreeMap::new()).unwrap();
let evaluator = PushRuleEvaluator::py_new(
flattened_keys,
10,
Some(0),
BTreeMap::new(),
BTreeMap::new(),
true,
vec![],
true,
)
.unwrap();
let result = evaluator.run(&FilteredPushRules::default(), None, Some("bob"));
assert_eq!(result.len(), 3);
}
#[test]
fn test_requires_room_version_supports_condition() {
use std::borrow::Cow;
use crate::push::{PushRule, PushRules};
let mut flattened_keys = BTreeMap::new();
flattened_keys.insert("content.body".to_string(), "foo bar bob hello".to_string());
let flags = vec![RoomVersionFeatures::ExtensibleEvents.as_str().to_string()];
let evaluator = PushRuleEvaluator::py_new(
flattened_keys,
10,
Some(0),
BTreeMap::new(),
BTreeMap::new(),
false,
flags,
true,
)
.unwrap();
// first test: are the master and contains_user_name rules excluded from the "requires room
// version condition" check?
let mut result = evaluator.run(
&FilteredPushRules::default(),
Some("@bob:example.org"),
None,
);
assert_eq!(result.len(), 3);
// second test: if an appropriate push rule is in play, does it get handled?
let custom_rule = PushRule {
rule_id: Cow::from("global/underride/.org.example.extensible"),
priority_class: 1, // underride
conditions: Cow::from(vec![Condition::Known(
KnownCondition::RoomVersionSupports {
feature: Cow::from(RoomVersionFeatures::ExtensibleEvents.as_str().to_string()),
},
)]),
actions: Cow::from(vec![Action::Notify]),
default: false,
default_enabled: true,
};
let rules = PushRules::new(vec![custom_rule]);
result = evaluator.run(
&FilteredPushRules::py_new(rules, BTreeMap::new(), true, true),
None,
None,
);
assert_eq!(result.len(), 1);
}


@@ -267,6 +267,8 @@ pub enum Condition {
#[serde(tag = "kind")]
pub enum KnownCondition {
EventMatch(EventMatchCondition),
#[serde(rename = "im.nheko.msc3664.related_event_match")]
RelatedEventMatch(RelatedEventMatchCondition),
ContainsDisplayName,
RoomMemberCount {
#[serde(skip_serializing_if = "Option::is_none")]
@@ -275,6 +277,10 @@ pub enum KnownCondition {
SenderNotificationPermission {
key: Cow<'static, str>,
},
#[serde(rename = "org.matrix.msc3931.room_version_supports")]
RoomVersionSupports {
feature: Cow<'static, str>,
},
}
impl IntoPy<PyObject> for Condition {
@@ -299,6 +305,20 @@ pub struct EventMatchCondition {
pub pattern_type: Option<Cow<'static, str>>,
}
/// The body of a [`Condition::RelatedEventMatch`]
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct RelatedEventMatchCondition {
#[serde(skip_serializing_if = "Option::is_none")]
pub key: Option<Cow<'static, str>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub pattern: Option<Cow<'static, str>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub pattern_type: Option<Cow<'static, str>>,
pub rel_type: Cow<'static, str>,
#[serde(skip_serializing_if = "Option::is_none")]
pub include_fallbacks: Option<bool>,
}
/// The collection of push rules for a user.
#[derive(Debug, Clone, Default)]
#[pyclass(frozen)]
@@ -391,15 +411,24 @@ impl PushRules {
pub struct FilteredPushRules {
push_rules: PushRules,
enabled_map: BTreeMap<String, bool>,
msc3664_enabled: bool,
msc1767_enabled: bool,
}
#[pymethods]
impl FilteredPushRules {
#[new]
pub fn py_new(push_rules: PushRules, enabled_map: BTreeMap<String, bool>) -> Self {
pub fn py_new(
push_rules: PushRules,
enabled_map: BTreeMap<String, bool>,
msc3664_enabled: bool,
msc1767_enabled: bool,
) -> Self {
Self {
push_rules,
enabled_map,
msc3664_enabled,
msc1767_enabled,
}
}
@@ -414,13 +443,29 @@ impl FilteredPushRules {
/// Iterates over all the rules and their enabled state, including base
/// rules, in the order they should be executed in.
fn iter(&self) -> impl Iterator<Item = (&PushRule, bool)> {
self.push_rules.iter().map(|r| {
let enabled = *self
.enabled_map
.get(&*r.rule_id)
.unwrap_or(&r.default_enabled);
(r, enabled)
})
self.push_rules
.iter()
.filter(|rule| {
// Ignore disabled experimental push rules
if !self.msc3664_enabled
&& rule.rule_id == "global/override/.im.nheko.msc3664.reply"
{
return false;
}
if !self.msc1767_enabled && rule.rule_id.contains("org.matrix.msc1767") {
return false;
}
true
})
.map(|r| {
let enabled = *self
.enabled_map
.get(&*r.rule_id)
.unwrap_or(&r.default_enabled);
(r, enabled)
})
}
}
@@ -446,6 +491,29 @@ fn test_deserialize_condition() {
let _: Condition = serde_json::from_str(json).unwrap();
}
#[test]
fn test_deserialize_unstable_msc3664_condition() {
let json = r#"{"kind":"im.nheko.msc3664.related_event_match","key":"content.body","pattern":"coffee","rel_type":"m.in_reply_to"}"#;
let condition: Condition = serde_json::from_str(json).unwrap();
assert!(matches!(
condition,
Condition::Known(KnownCondition::RelatedEventMatch(_))
));
}
#[test]
fn test_deserialize_unstable_msc3931_condition() {
let json =
r#"{"kind":"org.matrix.msc3931.room_version_supports","feature":"org.example.feature"}"#;
let condition: Condition = serde_json::from_str(json).unwrap();
assert!(matches!(
condition,
Condition::Known(KnownCondition::RoomVersionSupports { feature: _ })
));
}
#[test]
fn test_deserialize_custom_condition() {
let json = r#"{"kind":"custom_tag"}"#;


@@ -0,0 +1,247 @@
use std::hash::Hash;
use anyhow::Error;
use pyo3::{
pyclass, pymethods,
types::{PyModule, PyTuple},
IntoPy, PyAny, PyObject, PyResult, Python, ToPyObject,
};
use super::TreeCache;
pub fn register_module(py: Python<'_>, m: &PyModule) -> PyResult<()> {
let child_module = PyModule::new(py, "tree_cache")?;
child_module.add_class::<PythonTreeCache>()?;
child_module.add_class::<StringTreeCache>()?;
m.add_submodule(child_module)?;
// We need to manually add the module to sys.modules to make `from
// synapse.synapse_rust import tree_cache` work.
py.import("sys")?
.getattr("modules")?
.set_item("synapse.synapse_rust.tree_cache", child_module)?;
Ok(())
}
#[derive(Clone)]
struct HashablePyObject {
obj: PyObject,
hash: isize,
}
impl HashablePyObject {
pub fn new(obj: &PyAny) -> Result<Self, Error> {
let hash = obj.hash()?;
Ok(HashablePyObject {
obj: obj.to_object(obj.py()),
hash,
})
}
}
impl IntoPy<PyObject> for HashablePyObject {
fn into_py(self, _: Python<'_>) -> PyObject {
self.obj.clone()
}
}
impl IntoPy<PyObject> for &HashablePyObject {
fn into_py(self, _: Python<'_>) -> PyObject {
self.obj.clone()
}
}
impl ToPyObject for HashablePyObject {
fn to_object(&self, _py: Python<'_>) -> PyObject {
self.obj.clone()
}
}
impl Hash for HashablePyObject {
fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
self.hash.hash(state);
}
}
impl PartialEq for HashablePyObject {
fn eq(&self, other: &Self) -> bool {
let equal = Python::with_gil(|py| {
let result = self.obj.as_ref(py).eq(other.obj.as_ref(py));
result.unwrap_or(false)
});
equal
}
}
impl Eq for HashablePyObject {}
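Note that the Python hash is computed once at construction, so `HashMap` probes never re-enter Python; only `eq` (needed to resolve hash collisions) takes the GIL. A minimal sketch of the behaviour this buys, using two distinct but equal Python tuples (illustrative only, not part of the bindings):
fn equal_objects_collide() -> Result<(), Error> {
    Python::with_gil(|py| {
        // Two distinct Python objects that compare equal also hash equally,
        // so they land in the same map bucket; `eq` breaks the tie via the
        // objects' __eq__ under the GIL.
        let a = HashablePyObject::new(py.eval("(1, 2)", None, None)?)?;
        let b = HashablePyObject::new(py.eval("(1, 2)", None, None)?)?;
        assert_eq!(a.hash, b.hash);
        assert!(a == b);
        Ok(())
    })
}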
#[pyclass]
struct PythonTreeCache(TreeCache<HashablePyObject, PyObject>);
#[pymethods]
impl PythonTreeCache {
#[new]
fn new() -> Self {
PythonTreeCache(Default::default())
}
pub fn set(&mut self, key: &PyAny, value: PyObject) -> Result<(), Error> {
let v: Vec<HashablePyObject> = key
.iter()?
.map(|obj| HashablePyObject::new(obj?))
.collect::<Result<_, _>>()?;
self.0.set(v, value)?;
Ok(())
}
pub fn get_node<'a>(
&'a self,
py: Python<'a>,
key: &'a PyAny,
) -> Result<Option<Vec<(&'a PyTuple, &'a PyObject)>>, Error> {
let v: Vec<HashablePyObject> = key
.iter()?
.map(|obj| HashablePyObject::new(obj?))
.collect::<Result<_, _>>()?;
let Some(node) = self.0.get_node(v.clone())? else {
return Ok(None)
};
let items = node
.items()
.map(|(k, value)| {
let vec = v.iter().chain(k.iter().map(|a| *a)).collect::<Vec<_>>();
let nk = PyTuple::new(py, vec);
(nk, value)
})
.collect::<Vec<_>>();
Ok(Some(items))
}
pub fn get(&self, key: &PyAny) -> Result<Option<&PyObject>, Error> {
let v: Vec<HashablePyObject> = key
.iter()?
.map(|obj| HashablePyObject::new(obj?))
.collect::<Result<_, _>>()?;
Ok(self.0.get(&v)?)
}
pub fn pop_node<'a>(
&'a mut self,
py: Python<'a>,
key: &'a PyAny,
) -> Result<Option<Vec<(&'a PyTuple, PyObject)>>, Error> {
let v: Vec<HashablePyObject> = key
.iter()?
.map(|obj| HashablePyObject::new(obj?))
.collect::<Result<_, _>>()?;
let Some(node) = self.0.pop_node(v.clone())? else {
return Ok(None)
};
let items = node
.into_items()
.map(|(k, value)| {
let vec = v.iter().chain(k.iter()).collect::<Vec<_>>();
let nk = PyTuple::new(py, vec);
(nk, value)
})
.collect::<Vec<_>>();
Ok(Some(items))
}
pub fn pop(&mut self, key: &PyAny) -> Result<Option<PyObject>, Error> {
let v: Vec<HashablePyObject> = key
.iter()?
.map(|obj| HashablePyObject::new(obj?))
.collect::<Result<_, _>>()?;
Ok(self.0.pop(&v)?)
}
pub fn clear(&mut self) {
self.0.clear()
}
pub fn len(&self) -> usize {
self.0.len()
}
pub fn values(&self) -> Vec<&PyObject> {
self.0.values().collect()
}
pub fn items(&self) -> Vec<(Vec<&HashablePyObject>, &PyObject)> {
todo!()
}
}
#[pyclass]
struct StringTreeCache(TreeCache<String, String>);
#[pymethods]
impl StringTreeCache {
#[new]
fn new() -> Self {
StringTreeCache(Default::default())
}
pub fn set(&mut self, key: &PyAny, value: String) -> Result<(), Error> {
let key = key
.iter()?
.map(|o| o.expect("iter failed").extract().expect("not a string"));
self.0.set(key, value)?;
Ok(())
}
// pub fn get_node(&self, key: &PyAny) -> Result<Option<&TreeCacheNode<K, PyObject>>, Error> {
// todo!()
// }
pub fn get(&self, key: &PyAny) -> Result<Option<&String>, Error> {
let key = key.iter()?.map(|o| {
o.expect("iter failed")
.extract::<String>()
.expect("not a string")
});
Ok(self.0.get(key)?)
}
// pub fn pop_node(&mut self, key: &PyAny) -> Result<Option<TreeCacheNode<K, PyObject>>, Error> {
// todo!()
// }
pub fn pop(&mut self, key: Vec<String>) -> Result<Option<String>, Error> {
Ok(self.0.pop(&key)?)
}
pub fn clear(&mut self) {
self.0.clear()
}
pub fn len(&self) -> usize {
self.0.len()
}
pub fn values(&self) -> Vec<&String> {
self.0.values().collect()
}
pub fn items(&self) -> Vec<(Vec<&String>, &String)> {
todo!()
}
}

rust/src/tree_cache/mod.rs

@@ -0,0 +1,421 @@
use std::{borrow::Borrow, collections::HashMap, hash::Hash};
use anyhow::{bail, Error};
pub mod binding;
pub enum TreeCacheNode<K, V> {
Leaf(V),
Branch(usize, HashMap<K, TreeCacheNode<K, V>>),
}
impl<K, V> TreeCacheNode<K, V> {
pub fn new_branch() -> Self {
TreeCacheNode::Branch(0, Default::default())
}
fn len(&self) -> usize {
match self {
TreeCacheNode::Leaf(_) => 1,
TreeCacheNode::Branch(size, _) => *size,
}
}
}
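A `Branch` caches the number of leaf values stored beneath it, which is what makes `len()` O(1); every mutation below has to keep that count in sync. A recursive invariant checker, as a sketch (not part of the cache itself):
// Sketch: verify that each branch's cached size matches the number of
// leaves actually reachable beneath it.
fn check_size<K, V>(node: &TreeCacheNode<K, V>) -> usize {
    match node {
        TreeCacheNode::Leaf(_) => 1,
        TreeCacheNode::Branch(size, map) => {
            let actual: usize = map.values().map(check_size).sum();
            assert_eq!(actual, *size, "cached branch size out of sync");
            actual
        }
    }
}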
impl<'a, K: Eq + Hash + 'a, V> TreeCacheNode<K, V> {
pub fn set(
&mut self,
mut key: impl Iterator<Item = K>,
value: V,
) -> Result<(usize, usize), Error> {
if let Some(k) = key.next() {
match self {
TreeCacheNode::Leaf(_) => bail!("Given key is too long"),
TreeCacheNode::Branch(size, map) => {
let node = map.entry(k).or_insert_with(TreeCacheNode::new_branch);
let (added, removed) = node.set(key, value)?;
*size += added;
*size -= removed;
Ok((added, removed))
}
}
} else {
let added = if let TreeCacheNode::Branch(size, _) = self {
// Replacing a whole branch with a leaf adds one value and removes
// every value stored beneath it (the cached `size`, not just the
// number of direct children).
(1, *size)
} else {
(0, 0)
};
*self = TreeCacheNode::Leaf(value);
Ok(added)
}
}
pub fn pop<Q>(
&mut self,
current_key: Q,
mut next_keys: impl Iterator<Item = Q>,
) -> Result<Option<TreeCacheNode<K, V>>, Error>
where
Q: Borrow<K>,
Q: Hash + Eq + 'a,
{
if let Some(next_key) = next_keys.next() {
match self {
TreeCacheNode::Leaf(_) => bail!("Given key is too long"),
TreeCacheNode::Branch(size, map) => {
let node = if let Some(node) = map.get_mut(current_key.borrow()) {
node
} else {
return Ok(None);
};
if let Some(popped) = node.pop(next_key, next_keys)? {
// Decrement by the size of what was actually removed, not by what
// remains under the child node.
*size -= popped.len();
Ok(Some(popped))
} else {
Ok(None)
}
}
}
} else {
match self {
TreeCacheNode::Leaf(_) => bail!("Given key is too long"),
TreeCacheNode::Branch(size, map) => {
if let Some(node) = map.remove(current_key.borrow()) {
*size -= node.len();
Ok(Some(node))
} else {
Ok(None)
}
}
}
}
}
pub fn items(&'a self) -> impl Iterator<Item = (Vec<&K>, &V)> {
// To avoid a lot of mallocs we guess the length of the key. Ideally
// we'd know this.
let capacity_guesstimate = 10;
let mut stack = vec![(Vec::with_capacity(capacity_guesstimate), self)];
std::iter::from_fn(move || {
while let Some((prefix, node)) = stack.pop() {
match node {
TreeCacheNode::Leaf(value) => return Some((prefix, value)),
TreeCacheNode::Branch(_, map) => {
stack.extend(map.iter().map(|(k, v)| {
let mut new_prefix = Vec::with_capacity(capacity_guesstimate);
new_prefix.extend_from_slice(&prefix);
new_prefix.push(k);
(new_prefix, v)
}));
}
}
}
None
})
}
pub fn values(&'a self) -> impl Iterator<Item = &V> {
let mut stack = vec![self];
std::iter::from_fn(move || {
while let Some(node) = stack.pop() {
match node {
TreeCacheNode::Leaf(value) => return Some(value),
TreeCacheNode::Branch(_, map) => {
stack.extend(map.iter().map(|(_k, v)| v));
}
}
}
None
})
}
}
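Because these iterators drive an explicit stack over `HashMap` children, traversal order is unspecified; the tests at the bottom of this file collect results into a `BTreeSet` before comparing for exactly that reason.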
impl<'a, K: Clone + Eq + Hash + 'a, V> TreeCacheNode<K, V> {
pub fn into_items(self) -> impl Iterator<Item = (Vec<K>, V)> {
let mut stack = vec![(Vec::new(), self)];
std::iter::from_fn(move || {
while let Some((prefix, node)) = stack.pop() {
match node {
TreeCacheNode::Leaf(value) => return Some((prefix, value)),
TreeCacheNode::Branch(_, map) => {
stack.extend(map.into_iter().map(|(k, v)| {
let mut prefix = prefix.clone();
prefix.push(k);
(prefix, v)
}));
}
}
}
None
})
}
}
impl<K, V> Default for TreeCacheNode<K, V> {
fn default() -> Self {
TreeCacheNode::new_branch()
}
}
pub struct TreeCache<K, V> {
root: TreeCacheNode<K, V>,
}
impl<K, V> TreeCache<K, V> {
pub fn new() -> Self {
TreeCache {
root: TreeCacheNode::new_branch(),
}
}
}
impl<'a, K: Eq + Hash + 'a, V> TreeCache<K, V> {
pub fn set(&mut self, key: impl IntoIterator<Item = K>, value: V) -> Result<(), Error> {
self.root.set(key.into_iter(), value)?;
Ok(())
}
pub fn get_node<Q>(
&self,
key: impl IntoIterator<Item = Q>,
) -> Result<Option<&TreeCacheNode<K, V>>, Error>
where
Q: Borrow<K>,
Q: Hash + Eq + 'a,
{
let mut node = &self.root;
for k in key {
match node {
TreeCacheNode::Leaf(_) => bail!("Given key is too long"),
TreeCacheNode::Branch(_, map) => {
node = if let Some(node) = map.get(k.borrow()) {
node
} else {
return Ok(None);
};
}
}
}
Ok(Some(node))
}
pub fn get<Q>(&self, key: impl IntoIterator<Item = Q>) -> Result<Option<&V>, Error>
where
Q: Borrow<K>,
Q: Hash + Eq + 'a,
{
if let Some(node) = self.get_node(key)? {
match node {
TreeCacheNode::Leaf(value) => Ok(Some(value)),
TreeCacheNode::Branch(_, _) => bail!("Given key is too short"),
}
} else {
Ok(None)
}
}
pub fn pop_node<Q>(
&mut self,
key: impl IntoIterator<Item = Q>,
) -> Result<Option<TreeCacheNode<K, V>>, Error>
where
Q: Borrow<K>,
Q: Hash + Eq + 'a,
{
let mut key_iter = key.into_iter();
let k = if let Some(k) = key_iter.next() {
k
} else {
let node = std::mem::replace(&mut self.root, TreeCacheNode::new_branch());
return Ok(Some(node));
};
self.root.pop(k, key_iter)
}
pub fn pop(&mut self, key: &[K]) -> Result<Option<V>, Error> {
if let Some(node) = self.pop_node(key)? {
match node {
TreeCacheNode::Leaf(value) => Ok(Some(value)),
TreeCacheNode::Branch(_, _) => bail!("Given key is too short"),
}
} else {
Ok(None)
}
}
pub fn clear(&mut self) {
self.root = TreeCacheNode::new_branch();
}
pub fn len(&self) -> usize {
match self.root {
TreeCacheNode::Leaf(_) => 1,
TreeCacheNode::Branch(size, _) => size,
}
}
pub fn values(&self) -> impl Iterator<Item = &V> {
let mut stack = vec![&self.root];
std::iter::from_fn(move || {
while let Some(node) = stack.pop() {
match node {
TreeCacheNode::Leaf(value) => return Some(value),
TreeCacheNode::Branch(_, map) => {
stack.extend(map.values());
}
}
}
None
})
}
pub fn items(&self) -> impl Iterator<Item = (Vec<&K>, &V)> {
self.root.items()
}
}
impl<K, V> Default for TreeCache<K, V> {
fn default() -> Self {
TreeCache::new()
}
}
#[cfg(test)]
mod test {
use std::collections::BTreeSet;
use super::*;
#[test]
fn get_set() -> Result<(), Error> {
let mut cache = TreeCache::new();
cache.set(vec!["a", "b"], "c")?;
assert_eq!(cache.get(&["a", "b"])?, Some(&"c"));
let node = cache.get_node(&["a"])?.unwrap();
match node {
TreeCacheNode::Leaf(_) => bail!("expected branch"),
TreeCacheNode::Branch(_, map) => {
assert_eq!(map.len(), 1);
assert!(map.contains_key("b"));
}
}
Ok(())
}
#[test]
fn length() -> Result<(), Error> {
let mut cache = TreeCache::new();
cache.set(vec!["a", "b"], "c")?;
assert_eq!(cache.len(), 1);
cache.set(vec!["a", "b"], "d")?;
assert_eq!(cache.len(), 1);
cache.set(vec!["e", "f"], "g")?;
assert_eq!(cache.len(), 2);
cache.set(vec!["e", "h"], "i")?;
assert_eq!(cache.len(), 3);
cache.set(vec!["e"], "i")?;
assert_eq!(cache.len(), 2);
cache.pop_node(&["a"])?;
assert_eq!(cache.len(), 1);
Ok(())
}
#[test]
fn clear() -> Result<(), Error> {
let mut cache = TreeCache::new();
cache.set(vec!["a", "b"], "c")?;
assert_eq!(cache.len(), 1);
cache.clear();
assert_eq!(cache.len(), 0);
assert_eq!(cache.get(&["a", "b"])?, None);
Ok(())
}
#[test]
fn pop() -> Result<(), Error> {
let mut cache = TreeCache::new();
cache.set(vec!["a", "b"], "c")?;
assert_eq!(cache.pop(&["a", "b"])?, Some("c"));
assert_eq!(cache.pop(&["a", "b"])?, None);
Ok(())
}
#[test]
fn values() -> Result<(), Error> {
let mut cache = TreeCache::new();
cache.set(vec!["a", "b"], "c")?;
let expected = ["c"].iter().collect();
assert_eq!(cache.values().collect::<BTreeSet<_>>(), expected);
cache.set(vec!["d", "e"], "f")?;
let expected = ["c", "f"].iter().collect();
assert_eq!(cache.values().collect::<BTreeSet<_>>(), expected);
Ok(())
}
#[test]
fn items() -> Result<(), Error> {
let mut cache = TreeCache::new();
cache.set(vec!["a", "b"], "c")?;
cache.set(vec!["d", "e"], "f")?;
let expected = [(vec![&"a", &"b"], &"c"), (vec![&"d", &"e"], &"f")]
.into_iter()
.collect();
assert_eq!(cache.items().collect::<BTreeSet<_>>(), expected);
Ok(())
}
}
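The reason the cache is keyed on sequences at all is prefix invalidation: popping an intermediate node drops every entry beneath it in one call, which is how Synapse's existing Python `TreeCache` is used to invalidate whole families of cache keys. A sketch of that usage against the API above (hypothetical keys, for illustration):
// Sketch: invalidate every cache entry sharing a key prefix.
fn invalidate_prefix_example() -> Result<(), Error> {
    let mut cache = TreeCache::new();
    cache.set(vec!["room1", "alice"], "a")?;
    cache.set(vec!["room1", "bob"], "b")?;
    cache.set(vec!["room2", "alice"], "c")?;
    // Dropping the "room1" node removes both entries beneath it at once.
    let removed = cache.pop_node(&["room1"])?.expect("node should exist");
    assert_eq!(removed.into_items().count(), 2);
    assert_eq!(cache.len(), 1);
    Ok(())
}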


@@ -27,6 +27,7 @@ DISTS = (
"debian:sid",
"ubuntu:focal", # 20.04 LTS (our EOL forced by Py38 on 2024-10-14)
"ubuntu:jammy", # 22.04 LTS (EOL 2027-04)
"ubuntu:kinetic", # 22.10 (EOL 2023-07-20)
)
DESC = """\


@@ -126,7 +126,7 @@ export COMPLEMENT_BASE_IMAGE=complement-synapse
extra_test_args=()
test_tags="synapse_blacklist,msc3787"
test_tags="synapse_blacklist,msc3787,msc3874"
# All environment variables starting with PASS_ will be shared.
# (The prefix is stripped off before reaching the container.)
@@ -139,6 +139,9 @@ if [[ -n "$WORKERS" ]]; then
# Use workers.
export PASS_SYNAPSE_COMPLEMENT_USE_WORKERS=true
# Pass through the workers defined. If none, it will be an empty string
export PASS_SYNAPSE_WORKER_TYPES="$WORKER_TYPES"
# Workers can only use Postgres as a database.
export PASS_SYNAPSE_COMPLEMENT_DATABASE=postgres
@@ -159,9 +162,9 @@ else
# We only test faster room joins on monoliths, because they are purposefully
# being developed without worker support to start with.
#
# The tests for importing historical messages (MSC2716) and jump to date (MSC3030)
# also only pass with monoliths, currently.
test_tags="$test_tags,faster_joins,msc2716,msc3030"
# The tests for importing historical messages (MSC2716) also only pass with monoliths,
# currently.
test_tags="$test_tags,faster_joins,msc2716"
fi


@@ -46,11 +46,12 @@ import signedjson.key
import signedjson.types
import srvlookup
import yaml
from requests import PreparedRequest, Response
from requests.adapters import HTTPAdapter
from urllib3 import HTTPConnectionPool
# uncomment the following to enable debug logging of http requests
# from httplib import HTTPConnection
# from http.client import HTTPConnection
# HTTPConnection.debuglevel = 1
@@ -103,6 +104,7 @@ def request(
destination: str,
path: str,
content: Optional[str],
verify_tls: bool,
) -> requests.Response:
if method is None:
if content is None:
@@ -141,7 +143,6 @@ def request(
s.mount("matrix://", MatrixConnectionAdapter())
headers: Dict[str, str] = {
"Host": destination,
"Authorization": authorization_headers[0],
}
@@ -152,7 +153,7 @@ def request(
method=method,
url=dest,
headers=headers,
verify=False,
verify=verify_tls,
data=content,
stream=True,
)
@@ -202,6 +203,12 @@ def main() -> None:
parser.add_argument("--body", help="Data to send as the body of the HTTP request")
parser.add_argument(
"--insecure",
action="store_true",
help="Disable TLS certificate verification",
)
parser.add_argument(
"path", help="request path, including the '/_matrix/federation/...' prefix."
)
@@ -227,6 +234,7 @@ def main() -> None:
args.destination,
args.path,
content=args.body,
verify_tls=not args.insecure,
)
sys.stderr.write("Status Code: %d\n" % (result.status_code,))
@@ -254,36 +262,93 @@ def read_args_from_config(args: argparse.Namespace) -> None:
class MatrixConnectionAdapter(HTTPAdapter):
@staticmethod
def lookup(s: str, skip_well_known: bool = False) -> Tuple[str, int]:
if s[-1] == "]":
# ipv6 literal (with no port)
return s, 8448
def send(
self,
request: PreparedRequest,
*args: Any,
**kwargs: Any,
) -> Response:
# overrides the send() method in the base class.
if ":" in s:
out = s.rsplit(":", 1)
# We need to look for .well-known redirects before passing the request up to
# HTTPAdapter.send().
assert isinstance(request.url, str)
parsed = urlparse.urlsplit(request.url)
server_name = parsed.netloc
well_known = self._get_well_known(parsed.netloc)
if well_known:
server_name = well_known
# replace the scheme in the uri with https, so that cert verification is done
# also replace the hostname if we got a .well-known result
request.url = urlparse.urlunsplit(
("https", server_name, parsed.path, parsed.query, parsed.fragment)
)
# at this point we also add the host header (otherwise urllib will add one
# based on the `host` from the connection returned by `get_connection`,
# which will be wrong if there is an SRV record).
request.headers["Host"] = server_name
return super().send(request, *args, **kwargs)
def get_connection(
self, url: str, proxies: Optional[Dict[str, str]] = None
) -> HTTPConnectionPool:
# overrides the get_connection() method in the base class
parsed = urlparse.urlsplit(url)
(host, port, ssl_server_name) = self._lookup(parsed.netloc)
print(
f"Connecting to {host}:{port} with SNI {ssl_server_name}", file=sys.stderr
)
return self.poolmanager.connection_from_host(
host,
port=port,
scheme="https",
pool_kwargs={"server_hostname": ssl_server_name},
)
@staticmethod
def _lookup(server_name: str) -> Tuple[str, int, str]:
"""
Do an SRV lookup on a server name and return the host:port to connect to
Given the server_name (after any .well-known lookup), return the host, port and
the ssl server name
"""
if server_name[-1] == "]":
# ipv6 literal (with no port)
return server_name, 8448, server_name
if ":" in server_name:
# explicit port
out = server_name.rsplit(":", 1)
try:
port = int(out[1])
except ValueError:
raise ValueError("Invalid host:port '%s'" % s)
return out[0], port
# try a .well-known lookup
if not skip_well_known:
well_known = MatrixConnectionAdapter.get_well_known(s)
if well_known:
return MatrixConnectionAdapter.lookup(well_known, skip_well_known=True)
raise ValueError("Invalid host:port '%s'" % (server_name,))
return out[0], port, out[0]
try:
srv = srvlookup.lookup("matrix", "tcp", s)[0]
return srv.host, srv.port
srv = srvlookup.lookup("matrix", "tcp", server_name)[0]
print(
f"SRV lookup on _matrix._tcp.{server_name} gave {srv}",
file=sys.stderr,
)
return srv.host, srv.port, server_name
except Exception:
return s, 8448
return server_name, 8448, server_name
@staticmethod
def get_well_known(server_name: str) -> Optional[str]:
uri = "https://%s/.well-known/matrix/server" % (server_name,)
print("fetching %s" % (uri,), file=sys.stderr)
def _get_well_known(server_name: str) -> Optional[str]:
if ":" in server_name:
# explicit port, or ipv6 literal. Either way, no .well-known
return None
# TODO: check for ipv4 literals
uri = f"https://{server_name}/.well-known/matrix/server"
print(f"fetching {uri}", file=sys.stderr)
try:
resp = requests.get(uri)
@@ -304,19 +369,6 @@ class MatrixConnectionAdapter(HTTPAdapter):
print("Invalid response from %s: %s" % (uri, e), file=sys.stderr)
return None
def get_connection(
self, url: str, proxies: Optional[Dict[str, str]] = None
) -> HTTPConnectionPool:
parsed = urlparse.urlparse(url)
(host, port) = self.lookup(parsed.netloc)
netloc = "%s:%d" % (host, port)
print("Connecting to %s" % (netloc,), file=sys.stderr)
url = urlparse.urlunparse(
("https", netloc, parsed.path, parsed.params, parsed.query, parsed.fragment)
)
return super().get_connection(url, proxies)
if __name__ == "__main__":
main()


@@ -219,9 +219,7 @@ def _prepare() -> None:
update_branch(repo)
# Create the new release branch
# Type ignore will no longer be needed after GitPython 3.1.28.
# See https://github.com/gitpython-developers/GitPython/pull/1419
repo.create_head(release_branch_name, commit=base_branch) # type: ignore[arg-type]
repo.create_head(release_branch_name, commit=base_branch)
# Special-case SyTest: we don't actually prepare any files so we may
# as well push it now (and only when we create a release branch;


@@ -1,4 +1,4 @@
from typing import Any, Collection, Dict, Mapping, Optional, Sequence, Set, Tuple, Union
from typing import Any, Collection, Dict, Mapping, Optional, Sequence, Tuple, Union
from synapse.types import JsonDict
@@ -25,7 +25,13 @@ class PushRules:
def rules(self) -> Collection[PushRule]: ...
class FilteredPushRules:
def __init__(self, push_rules: PushRules, enabled_map: Dict[str, bool]): ...
def __init__(
self,
push_rules: PushRules,
enabled_map: Dict[str, bool],
msc3664_enabled: bool,
msc1767_enabled: bool,
): ...
def rules(self) -> Collection[Tuple[PushRule, bool]]: ...
def get_base_rule_ids() -> Collection[str]: ...
@@ -37,10 +43,14 @@ class PushRuleEvaluator:
room_member_count: int,
sender_power_level: Optional[int],
notification_power_levels: Mapping[str, int],
related_events_flattened: Mapping[str, Mapping[str, str]],
related_event_match_enabled: bool,
room_version_feature_flags: list[str],
msc3931_enabled: bool,
): ...
def run(
self,
push_rules: FilteredPushRules,
user_id: Optional[str],
display_name: Optional[str],
) -> Collection[dict]: ...
) -> Collection[Union[Mapping, str]]: ...

synapse/_scripts/update_synapse_database.py (Executable file → Normal file)

@@ -15,7 +15,6 @@
import argparse
import logging
import sys
from typing import cast
import yaml
@@ -100,13 +99,6 @@ def main() -> None:
# Load, process and sanity-check the config.
hs_config = yaml.safe_load(args.database_config)
if "database" not in hs_config and "databases" not in hs_config:
sys.stderr.write(
"The configuration file must have a 'database' or 'databases' section. "
"See https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#database"
)
sys.exit(4)
config = HomeServerConfig()
config.parse_config_dict(hs_config, "", "")


@@ -125,6 +125,8 @@ class EventTypes:
MSC2716_BATCH: Final = "org.matrix.msc2716.batch"
MSC2716_MARKER: Final = "org.matrix.msc2716.marker"
Reaction: Final = "m.reaction"
class ToDeviceEventTypes:
RoomKeyRequest: Final = "m.room_key_request"


@@ -155,7 +155,13 @@ class RedirectException(CodeMessageException):
class SynapseError(CodeMessageException):
"""A base exception type for matrix errors which have an errcode and error
message (as well as an HTTP status code).
message (as well as an HTTP status code). These often bubble all the way up to
the client API response, so the error code and status frequently reach the
client exactly as defined here. If an error doesn't make sense to present to a
client, then it probably shouldn't be a `SynapseError`. For example, if we
contact another homeserver over federation, we shouldn't automatically ferry
response errors back to the client on our end (a 500 from a remote server does
not make sense to a client when our server did not experience a 500).
Attributes:
errcode: Matrix error code e.g 'M_FORBIDDEN'
@@ -600,8 +606,20 @@ def cs_error(msg: str, code: str = Codes.UNKNOWN, **kwargs: Any) -> "JsonDict":
class FederationError(RuntimeError):
"""This class is used to inform remote homeservers about erroneous
PDUs they sent us.
"""
Raised when we process an erroneous PDU.
There are two kinds of scenarios where this exception can be raised:
1. We may pull an invalid PDU from a remote homeserver (e.g. during backfill). We
raise this exception to signal an error to the rest of the application.
2. We may be pushed an invalid PDU as part of a `/send` transaction from a remote
homeserver. We raise so that we can respond to the transaction and include the
error string in the "PDU Processing Result". That message will likely be
ignored by the remote homeserver and is not machine parse-able, since it's
just a string.
TODO: In the future, we should split these usage scenarios into their own error types.
FATAL: The remote server could not interpret the source event.
(e.g., it was missing a required field)
@@ -695,7 +713,7 @@ class HttpResponseException(CodeMessageException):
set to the reason code from the HTTP response.
Returns:
SynapseError:
The error converted to a SynapseError.
"""
# try to parse the body as json, to get better errcode/msg, but
# default to M_UNKNOWN with the HTTP status as the error text


@@ -43,7 +43,7 @@ if TYPE_CHECKING:
from synapse.server import HomeServer
FILTER_SCHEMA = {
"additionalProperties": False,
"additionalProperties": True, # Allow new fields for forward compatibility
"type": "object",
"properties": {
"limit": {"type": "number"},
@@ -63,7 +63,7 @@ FILTER_SCHEMA = {
}
ROOM_FILTER_SCHEMA = {
"additionalProperties": False,
"additionalProperties": True, # Allow new fields for forward compatibility
"type": "object",
"properties": {
"not_rooms": {"$ref": "#/definitions/room_id_array"},
@@ -77,7 +77,7 @@ ROOM_FILTER_SCHEMA = {
}
ROOM_EVENT_FILTER_SCHEMA = {
"additionalProperties": False,
"additionalProperties": True, # Allow new fields for forward compatibility
"type": "object",
"properties": {
"limit": {"type": "number"},
@@ -143,7 +143,7 @@ USER_FILTER_SCHEMA = {
},
},
},
"additionalProperties": False,
"additionalProperties": True, # Allow new fields for forward compatibility
}


@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Callable, Dict, Optional
from typing import Callable, Dict, List, Optional
import attr
@@ -51,6 +51,13 @@ class RoomDisposition:
UNSTABLE = "unstable"
class PushRuleRoomFlag:
"""Enum for listing possible MSC3931 room version feature flags, for push rules"""
# MSC3932: Room version supports MSC1767 Extensible Events.
EXTENSIBLE_EVENTS = "org.matrix.msc3932.extensible_events"
@attr.s(slots=True, frozen=True, auto_attribs=True)
class RoomVersion:
"""An object which describes the unique attributes of a room version."""
@@ -91,6 +98,12 @@ class RoomVersion:
msc3787_knock_restricted_join_rule: bool
# MSC3667: Enforce integer power levels
msc3667_int_only_power_levels: bool
# MSC3931: Adds a push rule condition for "room version feature flags", making
# some push rules room version dependent. Note that adding a flag to this list
# is not enough to mark it "supported": the push rule evaluator also needs to
# support the flag. Unknown flags are ignored by the evaluator, so conditions
# that use them will fail to match.
msc3931_push_features: List[str] # values from PushRuleRoomFlag
class RoomVersions:
@@ -111,6 +124,7 @@ class RoomVersions:
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3931_push_features=[],
)
V2 = RoomVersion(
"2",
@@ -129,6 +143,7 @@ class RoomVersions:
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3931_push_features=[],
)
V3 = RoomVersion(
"3",
@@ -147,6 +162,7 @@ class RoomVersions:
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3931_push_features=[],
)
V4 = RoomVersion(
"4",
@@ -165,6 +181,7 @@ class RoomVersions:
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3931_push_features=[],
)
V5 = RoomVersion(
"5",
@@ -183,6 +200,7 @@ class RoomVersions:
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3931_push_features=[],
)
V6 = RoomVersion(
"6",
@@ -201,6 +219,7 @@ class RoomVersions:
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3931_push_features=[],
)
MSC2176 = RoomVersion(
"org.matrix.msc2176",
@@ -219,6 +238,7 @@ class RoomVersions:
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3931_push_features=[],
)
V7 = RoomVersion(
"7",
@@ -237,6 +257,7 @@ class RoomVersions:
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3931_push_features=[],
)
V8 = RoomVersion(
"8",
@@ -255,6 +276,7 @@ class RoomVersions:
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3931_push_features=[],
)
V9 = RoomVersion(
"9",
@@ -273,6 +295,7 @@ class RoomVersions:
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3931_push_features=[],
)
MSC3787 = RoomVersion(
"org.matrix.msc3787",
@@ -291,6 +314,7 @@ class RoomVersions:
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=False,
msc3931_push_features=[],
)
V10 = RoomVersion(
"10",
@@ -309,6 +333,7 @@ class RoomVersions:
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=True,
msc3931_push_features=[],
)
MSC2716v4 = RoomVersion(
"org.matrix.msc2716v4",
@@ -327,6 +352,27 @@ class RoomVersions:
msc2716_redactions=True,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3931_push_features=[],
)
MSC1767v10 = RoomVersion(
# MSC1767 (Extensible Events) based on room version "10"
"org.matrix.msc1767.10",
RoomDisposition.UNSTABLE,
EventFormatVersions.ROOM_V4_PLUS,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=True,
msc3931_push_features=[PushRuleRoomFlag.EXTENSIBLE_EVENTS],
)


@@ -28,7 +28,7 @@ FEDERATION_V1_PREFIX = FEDERATION_PREFIX + "/v1"
FEDERATION_V2_PREFIX = FEDERATION_PREFIX + "/v2"
FEDERATION_UNSTABLE_PREFIX = FEDERATION_PREFIX + "/unstable"
STATIC_PREFIX = "/_matrix/static"
SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
SERVER_KEY_PREFIX = "/_matrix/key"
MEDIA_R0_PREFIX = "/_matrix/media/r0"
MEDIA_V3_PREFIX = "/_matrix/media/v3"
LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"


@@ -47,6 +47,7 @@ from twisted.internet.tcp import Port
from twisted.logger import LoggingFile, LogLevel
from twisted.protocols.tls import TLSMemoryBIOFactory
from twisted.python.threadpool import ThreadPool
from twisted.web.resource import Resource
import synapse.util.caches
from synapse.api.constants import MAX_PDU_SIZE
@@ -55,12 +56,13 @@ from synapse.app.phone_stats_home import start_phone_stats_home
from synapse.config import ConfigError
from synapse.config._base import format_config_error
from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import ManholeConfig
from synapse.config.server import ListenerConfig, ManholeConfig
from synapse.crypto import context_factory
from synapse.events.presence_router import load_legacy_presence_router
from synapse.events.spamcheck import load_legacy_spam_checkers
from synapse.events.third_party_rules import load_legacy_third_party_event_rules
from synapse.handlers.auth import load_legacy_password_auth_providers
from synapse.http.site import SynapseSite
from synapse.logging.context import PreserveLoggingContext
from synapse.logging.opentracing import init_tracer
from synapse.metrics import install_gc_manager, register_threadpool
@@ -264,26 +266,18 @@ def register_start(
reactor.callWhenRunning(lambda: defer.ensureDeferred(wrapper()))
def listen_metrics(
bind_addresses: Iterable[str], port: int, enable_legacy_metric_names: bool
) -> None:
def listen_metrics(bind_addresses: Iterable[str], port: int) -> None:
"""
Start Prometheus metrics server.
"""
from prometheus_client import start_http_server as start_http_server_prometheus
from synapse.metrics import (
RegistryProxy,
start_http_server as start_http_server_legacy,
)
from synapse.metrics import RegistryProxy
for host in bind_addresses:
logger.info("Starting metrics listener on %s:%d", host, port)
if enable_legacy_metric_names:
start_http_server_legacy(port, addr=host, registry=RegistryProxy)
else:
_set_prometheus_client_use_created_metrics(False)
start_http_server_prometheus(port, addr=host, registry=RegistryProxy)
_set_prometheus_client_use_created_metrics(False)
start_http_server_prometheus(port, addr=host, registry=RegistryProxy)
def _set_prometheus_client_use_created_metrics(new_value: bool) -> None:
@@ -357,6 +351,55 @@ def listen_tcp(
return r # type: ignore[return-value]
def listen_http(
listener_config: ListenerConfig,
root_resource: Resource,
version_string: str,
max_request_body_size: int,
context_factory: Optional[IOpenSSLContextFactory],
reactor: ISynapseReactor = reactor,
) -> List[Port]:
port = listener_config.port
bind_addresses = listener_config.bind_addresses
tls = listener_config.tls
assert listener_config.http_options is not None
site_tag = listener_config.http_options.tag
if site_tag is None:
site_tag = str(port)
site = SynapseSite(
"synapse.access.%s.%s" % ("https" if tls else "http", site_tag),
site_tag,
listener_config,
root_resource,
version_string,
max_request_body_size=max_request_body_size,
reactor=reactor,
)
if tls:
# refresh_certificate should have been called before this.
assert context_factory is not None
ports = listen_ssl(
bind_addresses,
port,
site,
context_factory,
reactor=reactor,
)
logger.info("Synapse now listening on TCP port %d (TLS)", port)
else:
ports = listen_tcp(
bind_addresses,
port,
site,
reactor=reactor,
)
logger.info("Synapse now listening on TCP port %d", port)
return ports
def listen_ssl(
bind_addresses: Collection[str],
port: int,
@@ -558,7 +601,7 @@ def reload_cache_config(config: HomeServerConfig) -> None:
logger.warning(f)
else:
logger.debug(
"New cache config. Was:\n %s\nNow:\n",
"New cache config. Was:\n %s\nNow:\n %s",
previous_cache_config.__dict__,
config.caches.__dict__,
)


@@ -28,10 +28,6 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.events import EventBase
from synapse.handlers.admin import ExfiltrationWriter
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.server import HomeServer
from synapse.storage.database import DatabasePool, LoggingDatabaseConnection
from synapse.storage.databases.main.account_data import AccountDataWorkerStore
@@ -40,10 +36,24 @@ from synapse.storage.databases.main.appservice import (
ApplicationServiceWorkerStore,
)
from synapse.storage.databases.main.deviceinbox import DeviceInboxWorkerStore
from synapse.storage.databases.main.devices import DeviceWorkerStore
from synapse.storage.databases.main.event_federation import EventFederationWorkerStore
from synapse.storage.databases.main.event_push_actions import (
EventPushActionsWorkerStore,
)
from synapse.storage.databases.main.events_worker import EventsWorkerStore
from synapse.storage.databases.main.filtering import FilteringWorkerStore
from synapse.storage.databases.main.push_rule import PushRulesWorkerStore
from synapse.storage.databases.main.receipts import ReceiptsWorkerStore
from synapse.storage.databases.main.registration import RegistrationWorkerStore
from synapse.storage.databases.main.relations import RelationsWorkerStore
from synapse.storage.databases.main.room import RoomWorkerStore
from synapse.storage.databases.main.roommember import RoomMemberWorkerStore
from synapse.storage.databases.main.signatures import SignatureWorkerStore
from synapse.storage.databases.main.state import StateGroupWorkerStore
from synapse.storage.databases.main.stream import StreamWorkerStore
from synapse.storage.databases.main.tags import TagsWorkerStore
from synapse.storage.databases.main.user_erasure_store import UserErasureWorkerStore
from synapse.types import StateMap
from synapse.util import SYNAPSE_VERSION
from synapse.util.logcontext import LoggingContext
@@ -52,17 +62,25 @@ logger = logging.getLogger("synapse.app.admin_cmd")
class AdminCmdSlavedStore(
SlavedFilteringStore,
SlavedPushRuleStore,
SlavedEventStore,
SlavedDeviceStore,
FilteringWorkerStore,
DeviceWorkerStore,
TagsWorkerStore,
DeviceInboxWorkerStore,
AccountDataWorkerStore,
PushRulesWorkerStore,
ApplicationServiceTransactionWorkerStore,
ApplicationServiceWorkerStore,
RegistrationWorkerStore,
RoomMemberWorkerStore,
RelationsWorkerStore,
EventFederationWorkerStore,
EventPushActionsWorkerStore,
StateGroupWorkerStore,
SignatureWorkerStore,
UserErasureWorkerStore,
ReceiptsWorkerStore,
StreamWorkerStore,
EventsWorkerStore,
RegistrationWorkerStore,
RoomWorkerStore,
):
def __init__(


@@ -55,13 +55,13 @@ import os
import signal
import sys
from types import FrameType
from typing import Any, Callable, List, Optional
from typing import Any, Callable, Dict, List, Optional
from twisted.internet.main import installReactor
# a list of the original signal handlers, before we installed our custom ones.
# We restore these in our child processes.
_original_signal_handlers: dict[int, Any] = {}
_original_signal_handlers: Dict[int, Any] = {}
class ProxiedReactor:


@@ -14,21 +14,19 @@
# limitations under the License.
import logging
import sys
from typing import Dict, List, Optional, Tuple
from typing import Dict, List
from twisted.internet import address
from twisted.web.resource import Resource
import synapse
import synapse.events
from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError
from synapse.api.urls import (
CLIENT_API_PREFIX,
FEDERATION_PREFIX,
LEGACY_MEDIA_PREFIX,
MEDIA_R0_PREFIX,
MEDIA_V3_PREFIX,
SERVER_KEY_V2_PREFIX,
SERVER_KEY_PREFIX,
)
from synapse.app import _base
from synapse.app._base import (
@@ -43,53 +41,13 @@ from synapse.config.logger import setup_logging
from synapse.config.server import ListenerConfig
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.server import JsonResource, OptionsResource
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseRequest, SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.rest import ClientRestResource
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.rest.client import (
account_data,
events,
initial_sync,
login,
presence,
profile,
push_rule,
read_marker,
receipts,
relations,
room,
room_batch,
room_keys,
sendtodevice,
sync,
tags,
user_directory,
versions,
voip,
)
from synapse.rest.client._base import client_patterns
from synapse.rest.client.account import ThreepidRestServlet, WhoamiRestServlet
from synapse.rest.client.devices import DevicesRestServlet
from synapse.rest.client.keys import (
KeyChangesServlet,
KeyQueryServlet,
OneTimeKeyServlet,
)
from synapse.rest.client.register import (
RegisterRestServlet,
RegistrationTokenValidityRestServlet,
)
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.key.v2 import KeyResource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.rest.well_known import well_known_resource
from synapse.server import HomeServer
@@ -101,8 +59,16 @@ from synapse.storage.databases.main.appservice import (
from synapse.storage.databases.main.censor_events import CensorEventsStore
from synapse.storage.databases.main.client_ips import ClientIpWorkerStore
from synapse.storage.databases.main.deviceinbox import DeviceInboxWorkerStore
from synapse.storage.databases.main.devices import DeviceWorkerStore
from synapse.storage.databases.main.directory import DirectoryWorkerStore
from synapse.storage.databases.main.e2e_room_keys import EndToEndRoomKeyStore
from synapse.storage.databases.main.event_federation import EventFederationWorkerStore
from synapse.storage.databases.main.event_push_actions import (
EventPushActionsWorkerStore,
)
from synapse.storage.databases.main.events_worker import EventsWorkerStore
from synapse.storage.databases.main.filtering import FilteringWorkerStore
from synapse.storage.databases.main.keys import KeyStore
from synapse.storage.databases.main.lock import LockStore
from synapse.storage.databases.main.media_repository import MediaRepositoryStore
from synapse.storage.databases.main.metrics import ServerMetricsStore
@@ -111,118 +77,31 @@ from synapse.storage.databases.main.monthly_active_users import (
)
from synapse.storage.databases.main.presence import PresenceStore
from synapse.storage.databases.main.profile import ProfileWorkerStore
from synapse.storage.databases.main.push_rule import PushRulesWorkerStore
from synapse.storage.databases.main.pusher import PusherWorkerStore
from synapse.storage.databases.main.receipts import ReceiptsWorkerStore
from synapse.storage.databases.main.registration import RegistrationWorkerStore
from synapse.storage.databases.main.relations import RelationsWorkerStore
from synapse.storage.databases.main.room import RoomWorkerStore
from synapse.storage.databases.main.room_batch import RoomBatchStore
from synapse.storage.databases.main.roommember import RoomMemberWorkerStore
from synapse.storage.databases.main.search import SearchStore
from synapse.storage.databases.main.session import SessionStore
from synapse.storage.databases.main.signatures import SignatureWorkerStore
from synapse.storage.databases.main.state import StateGroupWorkerStore
from synapse.storage.databases.main.stats import StatsStore
from synapse.storage.databases.main.stream import StreamWorkerStore
from synapse.storage.databases.main.tags import TagsWorkerStore
from synapse.storage.databases.main.transactions import TransactionWorkerStore
from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore
from synapse.storage.databases.main.user_directory import UserDirectoryStore
from synapse.types import JsonDict
from synapse.storage.databases.main.user_erasure_store import UserErasureWorkerStore
from synapse.util import SYNAPSE_VERSION
from synapse.util.httpresourcetree import create_resource_tree
logger = logging.getLogger("synapse.app.generic_worker")
class KeyUploadServlet(RestServlet):
"""An implementation of the `KeyUploadServlet` that responds to read only
requests, but otherwise proxies through to the master instance.
"""
PATTERNS = client_patterns("/keys/upload(/(?P<device_id>[^/]+))?$")
def __init__(self, hs: HomeServer):
"""
Args:
hs: server
"""
super().__init__()
self.auth = hs.get_auth()
self.store = hs.get_datastores().main
self.http_client = hs.get_simple_http_client()
self.main_uri = hs.config.worker.worker_main_http_uri
async def on_POST(
self, request: SynapseRequest, device_id: Optional[str]
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request, allow_guest=True)
user_id = requester.user.to_string()
body = parse_json_object_from_request(request)
if device_id is not None:
# passing the device_id here is deprecated; however, we allow it
# for now for compatibility with older clients.
if requester.device_id is not None and device_id != requester.device_id:
logger.warning(
"Client uploading keys for a different device "
"(logged in as %s, uploading for %s)",
requester.device_id,
device_id,
)
else:
device_id = requester.device_id
if device_id is None:
raise SynapseError(
400, "To upload keys, you must pass device_id when authenticating"
)
if body:
# They're actually trying to upload something, proxy to main synapse.
# Proxy headers from the original request, such as the auth headers
# (in case the access token is there) and the original IP /
# User-Agent of the request.
headers = {
header: request.requestHeaders.getRawHeaders(header, [])
for header in (b"Authorization", b"User-Agent")
}
# Add the previous hop to the X-Forwarded-For header.
x_forwarded_for = request.requestHeaders.getRawHeaders(
b"X-Forwarded-For", []
)
# we use request.client here, since we want the previous hop, not the
# original client (as returned by request.getClientAddress()).
if isinstance(request.client, (address.IPv4Address, address.IPv6Address)):
previous_host = request.client.host.encode("ascii")
# If the header exists, add to the comma-separated list of the first
# instance of the header. Otherwise, generate a new header.
if x_forwarded_for:
x_forwarded_for = [
x_forwarded_for[0] + b", " + previous_host
] + x_forwarded_for[1:]
else:
x_forwarded_for = [previous_host]
headers[b"X-Forwarded-For"] = x_forwarded_for
# Replicate the original X-Forwarded-Proto header. Note that
# XForwardedForRequest overrides isSecure() to give us the original protocol
# used by the client, as opposed to the protocol used by our upstream proxy
# - which is what we want here.
headers[b"X-Forwarded-Proto"] = [
b"https" if request.isSecure() else b"http"
]
try:
result = await self.http_client.post_json_get_json(
self.main_uri + request.uri.decode("ascii"), body, headers=headers
)
except HttpResponseException as e:
raise e.to_synapse_error() from e
except RequestSendFailed as e:
raise SynapseError(502, "Failed to talk to master") from e
return 200, result
else:
# Just interested in counts.
result = await self.store.count_e2e_one_time_keys(user_id, device_id)
return 200, {"one_time_key_counts": result}
class GenericWorkerSlavedStore(
# FIXME(#3714): We need to add UserDirectoryStore as we write directly
# rather than going via the correct worker.
@@ -232,26 +111,36 @@ class GenericWorkerSlavedStore(
EndToEndRoomKeyStore,
PresenceStore,
DeviceInboxWorkerStore,
SlavedDeviceStore,
SlavedPushRuleStore,
DeviceWorkerStore,
TagsWorkerStore,
AccountDataWorkerStore,
SlavedPusherStore,
CensorEventsStore,
ClientIpWorkerStore,
SlavedEventStore,
SlavedKeyStore,
# KeyStore isn't really safe to use from a worker, but for now we do so and hope that
# the races it creates aren't too bad.
KeyStore,
RoomWorkerStore,
RoomBatchStore,
DirectoryWorkerStore,
PushRulesWorkerStore,
ApplicationServiceTransactionWorkerStore,
ApplicationServiceWorkerStore,
ProfileWorkerStore,
SlavedFilteringStore,
FilteringWorkerStore,
MonthlyActiveUsersWorkerStore,
MediaRepositoryStore,
ServerMetricsStore,
PusherWorkerStore,
RoomMemberWorkerStore,
RelationsWorkerStore,
EventFederationWorkerStore,
EventPushActionsWorkerStore,
StateGroupWorkerStore,
SignatureWorkerStore,
UserErasureWorkerStore,
ReceiptsWorkerStore,
StreamWorkerStore,
EventsWorkerStore,
RegistrationWorkerStore,
SearchStore,
TransactionWorkerStore,
@@ -268,15 +157,9 @@ class GenericWorkerServer(HomeServer):
DATASTORE_CLASS = GenericWorkerSlavedStore # type: ignore
def _listen_http(self, listener_config: ListenerConfig) -> None:
port = listener_config.port
bind_addresses = listener_config.bind_addresses
assert listener_config.http_options is not None
site_tag = listener_config.http_options.tag
if site_tag is None:
site_tag = str(port)
# We always include a health resource.
resources: Dict[str, Resource] = {"/health": HealthResource()}
@@ -285,53 +168,15 @@ class GenericWorkerServer(HomeServer):
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
resource: Resource = ClientRestResource(self)
RegisterRestServlet(self).register(resource)
RegistrationTokenValidityRestServlet(self).register(resource)
login.register_servlets(self, resource)
ThreepidRestServlet(self).register(resource)
WhoamiRestServlet(self).register(resource)
DevicesRestServlet(self).register(resource)
# Read-only
KeyUploadServlet(self).register(resource)
KeyQueryServlet(self).register(resource)
KeyChangesServlet(self).register(resource)
OneTimeKeyServlet(self).register(resource)
voip.register_servlets(self, resource)
push_rule.register_servlets(self, resource)
versions.register_servlets(self, resource)
profile.register_servlets(self, resource)
sync.register_servlets(self, resource)
events.register_servlets(self, resource)
room.register_servlets(self, resource, is_worker=True)
relations.register_servlets(self, resource)
room.register_deprecated_servlets(self, resource)
initial_sync.register_servlets(self, resource)
room_batch.register_servlets(self, resource)
room_keys.register_servlets(self, resource)
tags.register_servlets(self, resource)
account_data.register_servlets(self, resource)
receipts.register_servlets(self, resource)
read_marker.register_servlets(self, resource)
sendtodevice.register_servlets(self, resource)
user_directory.register_servlets(self, resource)
presence.register_servlets(self, resource)
resources.update({CLIENT_API_PREFIX: resource})
resources[CLIENT_API_PREFIX] = resource
resources.update(build_synapse_client_resource_tree(self))
resources.update({"/.well-known": well_known_resource(self)})
resources["/.well-known"] = well_known_resource(self)
elif name == "federation":
resources.update({FEDERATION_PREFIX: TransportLayerServer(self)})
resources[FEDERATION_PREFIX] = TransportLayerServer(self)
elif name == "media":
if self.config.media.can_load_media_repo:
media_repo = self.get_media_repository_resource()
@@ -359,16 +204,12 @@ class GenericWorkerServer(HomeServer):
# Only load the openid resource separately if federation resource
# is not specified since federation resource includes openid
# resource.
resources.update(
{
FEDERATION_PREFIX: TransportLayerServer(
self, servlet_groups=["openid"]
)
}
resources[FEDERATION_PREFIX] = TransportLayerServer(
self, servlet_groups=["openid"]
)
if name in ["keys", "federation"]:
resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)
resources[SERVER_KEY_PREFIX] = KeyResource(self)
if name == "replication":
resources[REPLICATION_PREFIX] = ReplicationRestResource(self)
@@ -379,23 +220,15 @@ class GenericWorkerServer(HomeServer):
root_resource = create_resource_tree(resources, OptionsResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
self.version_string,
max_request_body_size=max_request_body_size(self.config),
reactor=self.get_reactor(),
),
_base.listen_http(
listener_config,
root_resource,
self.version_string,
max_request_body_size(self.config),
self.tls_server_context_factory,
reactor=self.get_reactor(),
)
logger.info("Synapse worker now listening on port %d", port)
def start_listening(self) -> None:
for listener in self.config.worker.worker_listeners:
if listener.type == "http":
@@ -417,7 +250,6 @@ class GenericWorkerServer(HomeServer):
_base.listen_metrics(
listener.bind_addresses,
listener.port,
enable_legacy_metric_names=self.config.metrics.enable_legacy_metrics,
)
else:
logger.warning("Unsupported listener type: %s", listener.type)


@@ -31,14 +31,13 @@ from synapse.api.urls import (
LEGACY_MEDIA_PREFIX,
MEDIA_R0_PREFIX,
MEDIA_V3_PREFIX,
SERVER_KEY_V2_PREFIX,
SERVER_KEY_PREFIX,
STATIC_PREFIX,
)
from synapse.app import _base
from synapse.app._base import (
handle_startup_exception,
listen_ssl,
listen_tcp,
listen_http,
max_request_body_size,
redirect_stdio_to_logs,
register_start,
@@ -53,14 +52,13 @@ from synapse.http.server import (
RootOptionsRedirectResource,
StaticResource,
)
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.rest import ClientRestResource
from synapse.rest.admin import AdminRestResource
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.key.v2 import KeyResource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.rest.well_known import well_known_resource
from synapse.server import HomeServer
@@ -83,8 +81,6 @@ class SynapseHomeServer(HomeServer):
self, config: HomeServerConfig, listener_config: ListenerConfig
) -> Iterable[Port]:
port = listener_config.port
bind_addresses = listener_config.bind_addresses
tls = listener_config.tls
# Must exist since this is an HTTP listener.
assert listener_config.http_options is not None
site_tag = listener_config.http_options.tag
@@ -140,37 +136,15 @@ class SynapseHomeServer(HomeServer):
else:
root_resource = OptionsResource()
site = SynapseSite(
"synapse.access.%s.%s" % ("https" if tls else "http", site_tag),
site_tag,
ports = listen_http(
listener_config,
create_resource_tree(resources, root_resource),
self.version_string,
max_request_body_size=max_request_body_size(self.config),
max_request_body_size(self.config),
self.tls_server_context_factory,
reactor=self.get_reactor(),
)
if tls:
# refresh_certificate should have been called before this.
assert self.tls_server_context_factory is not None
ports = listen_ssl(
bind_addresses,
port,
site,
self.tls_server_context_factory,
reactor=self.get_reactor(),
)
logger.info("Synapse now listening on TCP port %d (TLS)", port)
else:
ports = listen_tcp(
bind_addresses,
port,
site,
reactor=self.get_reactor(),
)
logger.info("Synapse now listening on TCP port %d", port)
return ports
def _configure_named_resource(
@@ -215,30 +189,22 @@ class SynapseHomeServer(HomeServer):
consent_resource: Resource = ConsentResource(self)
if compress:
consent_resource = gz_wrap(consent_resource)
resources.update({"/_matrix/consent": consent_resource})
resources["/_matrix/consent"] = consent_resource
if name == "federation":
federation_resource: Resource = TransportLayerServer(self)
if compress:
federation_resource = gz_wrap(federation_resource)
resources.update({FEDERATION_PREFIX: federation_resource})
resources[FEDERATION_PREFIX] = federation_resource
if name == "openid":
resources.update(
{
FEDERATION_PREFIX: TransportLayerServer(
self, servlet_groups=["openid"]
)
}
resources[FEDERATION_PREFIX] = TransportLayerServer(
self, servlet_groups=["openid"]
)
if name in ["static", "client"]:
resources.update(
{
STATIC_PREFIX: StaticResource(
os.path.join(os.path.dirname(synapse.__file__), "static")
)
}
resources[STATIC_PREFIX] = StaticResource(
os.path.join(os.path.dirname(synapse.__file__), "static")
)
if name in ["media", "federation", "client"]:
@@ -257,7 +223,7 @@ class SynapseHomeServer(HomeServer):
)
if name in ["keys", "federation"]:
resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)
resources[SERVER_KEY_PREFIX] = KeyResource(self)
if name == "metrics" and self.config.metrics.enable_metrics:
metrics_resource: Resource = MetricsResource(RegistryProxy)
@@ -299,7 +265,6 @@ class SynapseHomeServer(HomeServer):
_base.listen_metrics(
listener.bind_addresses,
listener.port,
enable_legacy_metric_names=self.config.metrics.enable_legacy_metrics,
)
else:
# this shouldn't happen, as the listener type should have been checked


@@ -32,9 +32,9 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
# Type for the `device_one_time_key_counts` field in an appservice transaction
# Type for the `device_one_time_keys_count` field in an appservice transaction
# user ID -> {device ID -> {algorithm -> count}}
TransactionOneTimeKeyCounts = Dict[str, Dict[str, Dict[str, int]]]
TransactionOneTimeKeysCount = Dict[str, Dict[str, Dict[str, int]]]
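# A hypothetical value of this type (identifiers invented):
#   {"@alice:example.com": {"DEVICEID": {"signed_curve25519": 50}}}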
# Type for the `device_unused_fallback_key_types` field in an appservice transaction
# user ID -> {device ID -> [algorithm]}
@@ -172,12 +172,24 @@ class ApplicationService:
Returns:
True if this service would like to know about this room.
"""
member_list = await store.get_users_in_room(
# We can use `get_local_users_in_room(...)` here because an application service
# can only be interested in local users of the server it's on (ignore any remote
# users that might match the user namespace regex).
#
# In the future, we can consider re-using
# `store.get_app_service_users_in_room`, which is very similar to this
# function but performs slightly worse, because this function can escape
# early as soon as it finds a single user that the appservice is
# interested in. The juice would be worth the squeeze if
# `store.get_app_service_users_in_room` were used in more places besides
# an experimental MSC, but for now we avoid the extra work.
local_user_ids = await store.get_local_users_in_room(
room_id, on_invalidate=cache_context.invalidate
)
# check joined member events
for user_id in member_list:
for user_id in local_user_ids:
if self.is_interested_in_user(user_id):
return True
return False
@@ -364,7 +376,7 @@ class AppServiceTransaction:
events: List[EventBase],
ephemeral: List[JsonDict],
to_device_messages: List[JsonDict],
one_time_key_counts: TransactionOneTimeKeyCounts,
one_time_keys_count: TransactionOneTimeKeysCount,
unused_fallback_keys: TransactionUnusedFallbackKeys,
device_list_summary: DeviceListUpdates,
):
@@ -373,7 +385,7 @@ class AppServiceTransaction:
self.events = events
self.ephemeral = ephemeral
self.to_device_messages = to_device_messages
self.one_time_key_counts = one_time_key_counts
self.one_time_keys_count = one_time_keys_count
self.unused_fallback_keys = unused_fallback_keys
self.device_list_summary = device_list_summary
@@ -390,7 +402,7 @@ class AppServiceTransaction:
events=self.events,
ephemeral=self.ephemeral,
to_device_messages=self.to_device_messages,
one_time_key_counts=self.one_time_key_counts,
one_time_keys_count=self.one_time_keys_count,
unused_fallback_keys=self.unused_fallback_keys,
device_list_summary=self.device_list_summary,
txn_id=self.id,


@@ -23,7 +23,7 @@ from synapse.api.constants import EventTypes, Membership, ThirdPartyEntityKind
from synapse.api.errors import CodeMessageException
from synapse.appservice import (
ApplicationService,
TransactionOneTimeKeyCounts,
TransactionOneTimeKeysCount,
TransactionUnusedFallbackKeys,
)
from synapse.events import EventBase
@@ -262,7 +262,7 @@ class ApplicationServiceApi(SimpleHttpClient):
events: List[EventBase],
ephemeral: List[JsonDict],
to_device_messages: List[JsonDict],
one_time_key_counts: TransactionOneTimeKeyCounts,
one_time_keys_count: TransactionOneTimeKeysCount,
unused_fallback_keys: TransactionUnusedFallbackKeys,
device_list_summary: DeviceListUpdates,
txn_id: Optional[int] = None,
@@ -310,10 +310,13 @@ class ApplicationServiceApi(SimpleHttpClient):
# TODO: Update to stable prefixes once MSC3202 completes FCP merge
if service.msc3202_transaction_extensions:
if one_time_key_counts:
if one_time_keys_count:
body[
"org.matrix.msc3202.device_one_time_key_counts"
] = one_time_key_counts
] = one_time_keys_count
body[
"org.matrix.msc3202.device_one_time_keys_count"
] = one_time_keys_count
if unused_fallback_keys:
body[
"org.matrix.msc3202.device_unused_fallback_key_types"


@@ -64,7 +64,7 @@ from typing import (
from synapse.appservice import (
ApplicationService,
ApplicationServiceState,
TransactionOneTimeKeyCounts,
TransactionOneTimeKeysCount,
TransactionUnusedFallbackKeys,
)
from synapse.appservice.api import ApplicationServiceApi
@@ -258,7 +258,7 @@ class _ServiceQueuer:
):
return
one_time_key_counts: Optional[TransactionOneTimeKeyCounts] = None
one_time_keys_count: Optional[TransactionOneTimeKeysCount] = None
unused_fallback_keys: Optional[TransactionUnusedFallbackKeys] = None
if (
@@ -269,7 +269,7 @@ class _ServiceQueuer:
# for the users which are mentioned in this transaction,
# as well as the appservice's sender.
(
one_time_key_counts,
one_time_keys_count,
unused_fallback_keys,
) = await self._compute_msc3202_otk_counts_and_fallback_keys(
service, events, ephemeral, to_device_messages_to_send
@@ -281,7 +281,7 @@ class _ServiceQueuer:
events,
ephemeral,
to_device_messages_to_send,
one_time_key_counts,
one_time_keys_count,
unused_fallback_keys,
device_list_summary,
)
@@ -296,7 +296,7 @@ class _ServiceQueuer:
events: Iterable[EventBase],
ephemerals: Iterable[JsonDict],
to_device_messages: Iterable[JsonDict],
) -> Tuple[TransactionOneTimeKeyCounts, TransactionUnusedFallbackKeys]:
) -> Tuple[TransactionOneTimeKeysCount, TransactionUnusedFallbackKeys]:
"""
Given a list of the events, ephemeral messages and to-device messages,
- first computes a list of application services users that may have
@@ -367,7 +367,7 @@ class _TransactionController:
events: List[EventBase],
ephemeral: Optional[List[JsonDict]] = None,
to_device_messages: Optional[List[JsonDict]] = None,
one_time_key_counts: Optional[TransactionOneTimeKeyCounts] = None,
one_time_keys_count: Optional[TransactionOneTimeKeysCount] = None,
unused_fallback_keys: Optional[TransactionUnusedFallbackKeys] = None,
device_list_summary: Optional[DeviceListUpdates] = None,
) -> None:
@@ -380,7 +380,7 @@ class _TransactionController:
events: The persistent events to include in the transaction.
ephemeral: The ephemeral events to include in the transaction.
to_device_messages: The to-device messages to include in the transaction.
one_time_key_counts: Counts of remaining one-time keys for relevant
one_time_keys_count: Counts of remaining one-time keys for relevant
appservice devices in the transaction.
unused_fallback_keys: Lists of unused fallback keys for relevant
appservice devices in the transaction.
@@ -397,7 +397,7 @@ class _TransactionController:
events=events,
ephemeral=ephemeral or [],
to_device_messages=to_device_messages or [],
one_time_key_counts=one_time_key_counts or {},
one_time_keys_count=one_time_keys_count or {},
unused_fallback_keys=unused_fallback_keys or {},
device_list_summary=device_list_summary or DeviceListUpdates(),
)


@@ -16,6 +16,7 @@ from typing import Any, Optional
import attr
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
from synapse.config._base import Config
from synapse.types import JsonDict
@@ -53,9 +54,6 @@ class ExperimentalConfig(Config):
# MSC3266 (room summary api)
self.msc3266_enabled: bool = experimental.get("msc3266_enabled", False)
# MSC3030 (Jump to date API endpoint)
self.msc3030_enabled: bool = experimental.get("msc3030_enabled", False)
# MSC2409 (this setting only relates to optionally sending to-device messages).
# Presence, typing and read receipt EDUs are already sent to application services that
# have opted in to receive them. If enabled, this adds to-device messages to that list.
@@ -98,6 +96,9 @@ class ExperimentalConfig(Config):
# MSC3773: Thread notifications
self.msc3773_enabled: bool = experimental.get("msc3773_enabled", False)
# MSC3664: Pushrules to match on related events
self.msc3664_enabled: bool = experimental.get("msc3664_enabled", False)
# MSC3848: Introduce errcodes for specific event sending failures
self.msc3848_enabled: bool = experimental.get("msc3848_enabled", False)
@@ -125,3 +126,13 @@ class ExperimentalConfig(Config):
self.msc3886_endpoint: Optional[str] = experimental.get(
"msc3886_endpoint", None
)
# MSC3912: Relation-based redactions.
self.msc3912_enabled: bool = experimental.get("msc3912_enabled", False)
# MSC1767 and friends: Extensible Events
self.msc1767_enabled: bool = experimental.get("msc1767_enabled", False)
if self.msc1767_enabled:
# Enable room version (and thus applicable push rules from MSC3931/3932)
version_id = RoomVersions.MSC1767v10.identifier
KNOWN_ROOM_VERSIONS[version_id] = RoomVersions.MSC1767v10


@@ -53,7 +53,7 @@ DEFAULT_LOG_CONFIG = Template(
# Synapse also supports structured logging for machine readable logs which can
# be ingested by ELK stacks. See [2] for details.
#
# [1]: https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
# [1]: https://docs.python.org/3/library/logging.config.html#configuration-dictionary-schema
# [2]: https://matrix-org.github.io/synapse/latest/structured_logging.html
version: 1
@@ -317,10 +317,9 @@ def setup_logging(
Set up the logging subsystem.
Args:
config (LoggingConfig | synapse.config.worker.WorkerConfig):
configuration data
config: configuration data
use_worker_options (bool): True to use the 'worker_log_config' option
use_worker_options: True to use the 'worker_log_config' option
instead of 'log_config'.
logBeginner: The Twisted logBeginner to use.


@@ -43,8 +43,6 @@ class MetricsConfig(Config):
def read_config(self, config: JsonDict, **kwargs: Any) -> None:
self.enable_metrics = config.get("enable_metrics", False)
self.enable_legacy_metrics = config.get("enable_legacy_metrics", True)
self.report_stats = config.get("report_stats", None)
self.report_stats_endpoint = config.get(
"report_stats_endpoint", "https://matrix.org/report-usage-stats/push"


@@ -123,6 +123,8 @@ OIDC_PROVIDER_CONFIG_SCHEMA = {
"userinfo_endpoint": {"type": "string"},
"jwks_uri": {"type": "string"},
"skip_verification": {"type": "boolean"},
"backchannel_logout_enabled": {"type": "boolean"},
"backchannel_logout_ignore_sub": {"type": "boolean"},
"user_profile_method": {
"type": "string",
"enum": ["auto", "userinfo_endpoint"],
@@ -292,6 +294,10 @@ def _parse_oidc_config_dict(
token_endpoint=oidc_config.get("token_endpoint"),
userinfo_endpoint=oidc_config.get("userinfo_endpoint"),
jwks_uri=oidc_config.get("jwks_uri"),
backchannel_logout_enabled=oidc_config.get("backchannel_logout_enabled", False),
backchannel_logout_ignore_sub=oidc_config.get(
"backchannel_logout_ignore_sub", False
),
skip_verification=oidc_config.get("skip_verification", False),
user_profile_method=oidc_config.get("user_profile_method", "auto"),
allow_existing_users=oidc_config.get("allow_existing_users", False),
@@ -368,6 +374,12 @@ class OidcProviderConfig:
# "openid" scope is used.
jwks_uri: Optional[str]
# Whether Synapse should react to backchannel logouts
backchannel_logout_enabled: bool
# Whether Synapse should ignore the `sub` claim in backchannel logouts or not.
backchannel_logout_ignore_sub: bool
# Whether to skip metadata verification
skip_verification: bool
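# A hedged sketch of a provider entry enabling this (dict form of the YAML
# config; the `idp_id` value is invented and other required provider fields
# are omitted):
#
#     oidc_config = {
#         "idp_id": "my_idp",
#         "backchannel_logout_enabled": True,
#         "backchannel_logout_ignore_sub": False,
#     }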


@@ -26,6 +26,7 @@ class PushConfig(Config):
def read_config(self, config: JsonDict, **kwargs: Any) -> None:
push_config = config.get("push") or {}
self.push_include_content = push_config.get("include_content", True)
self.enable_push = push_config.get("enabled", True)
self.push_group_unread_count_by_room = push_config.get(
"group_unread_count_by_room", True
)


@@ -150,8 +150,5 @@ class RatelimitConfig(Config):
self.rc_third_party_invite = RatelimitSettings(
config.get("rc_third_party_invite", {}),
defaults={
"per_second": self.rc_message.per_second,
"burst_count": self.rc_message.burst_count,
},
defaults={"per_second": 0.0025, "burst_count": 5},
)


@@ -29,20 +29,6 @@ from ._base import (
)
from .server import DIRECT_TCP_ERROR, ListenerConfig, parse_listener_def
_FEDERATION_SENDER_WITH_SEND_FEDERATION_ENABLED_ERROR = """
The send_federation config option must be disabled in the main
synapse process before they can be run in a separate worker.
Please add ``send_federation: false`` to the main config
"""
_PUSHER_WITH_START_PUSHERS_ENABLED_ERROR = """
The start_pushers config option must be disabled in the main
synapse process before they can be run in a separate worker.
Please add ``start_pushers: false`` to the main config
"""
_DEPRECATED_WORKER_DUTY_OPTION_USED = """
The '%s' configuration option is deprecated and will be removed in a future
Synapse version. Please use ``%s: name_of_worker`` instead.
@@ -67,6 +53,7 @@ class InstanceLocationConfig:
host: str
port: int
tls: bool = False
@attr.s
@@ -149,13 +136,25 @@ class WorkerConfig(Config):
# The port on the main synapse for HTTP replication endpoint
self.worker_replication_http_port = config.get("worker_replication_http_port")
# The tls mode on the main synapse for HTTP replication endpoint.
# For backward compatibility this defaults to False.
self.worker_replication_http_tls = config.get(
"worker_replication_http_tls", False
)
# The shared secret used for authentication when connecting to the main synapse.
self.worker_replication_secret = config.get("worker_replication_secret", None)
self.worker_name = config.get("worker_name", self.worker_app)
self.instance_name = self.worker_name or "master"
# FIXME: Remove this check after a suitable amount of time.
self.worker_main_http_uri = config.get("worker_main_http_uri", None)
if self.worker_main_http_uri is not None:
logger.warning(
"The config option worker_main_http_uri is unused since Synapse 1.73. "
"It can be safely removed from your configuration."
)
# This option is really only here to support `--manhole` command line
# argument.
@@ -169,40 +168,12 @@ class WorkerConfig(Config):
)
)
# Handle federation sender configuration.
#
# There are two ways of configuring which instances handle federation
# sending:
# 1. The old way where "send_federation" is set to false and running a
# `synapse.app.federation_sender` worker app.
# 2. Specifying the workers sending federation in
# `federation_sender_instances`.
#
send_federation = config.get("send_federation", True)
federation_sender_instances = config.get("federation_sender_instances")
if federation_sender_instances is None:
# Default to an empty list, which means "another, unknown, worker is
# responsible for it".
federation_sender_instances = []
# If no federation sender instances are set we check if
# `send_federation` is set, which means use master
if send_federation:
federation_sender_instances = ["master"]
if self.worker_app == "synapse.app.federation_sender":
if send_federation:
# If we're running federation senders, and not using
# `federation_sender_instances`, then we should have
# explicitly set `send_federation` to false.
raise ConfigError(
_FEDERATION_SENDER_WITH_SEND_FEDERATION_ENABLED_ERROR
)
federation_sender_instances = [self.worker_name]
federation_sender_instances = self._worker_names_performing_this_duty(
config,
"send_federation",
"synapse.app.federation_sender",
"federation_sender_instances",
)
self.send_federation = self.instance_name in federation_sender_instances
self.federation_shard_config = ShardedWorkerHandlingConfig(
federation_sender_instances
@@ -269,27 +240,12 @@ class WorkerConfig(Config):
)
# Handle sharded push
start_pushers = config.get("start_pushers", True)
pusher_instances = config.get("pusher_instances")
if pusher_instances is None:
# Default to an empty list, which means "another, unknown, worker is
# responsible for it".
pusher_instances = []
# If no pushers instances are set we check if `start_pushers` is
# set, which means use master
if start_pushers:
pusher_instances = ["master"]
if self.worker_app == "synapse.app.pusher":
if start_pushers:
# If we're running pushers, and not using
# `pusher_instances`, then we should have explicitly set
# `start_pushers` to false.
raise ConfigError(_PUSHER_WITH_START_PUSHERS_ENABLED_ERROR)
pusher_instances = [self.instance_name]
pusher_instances = self._worker_names_performing_this_duty(
config,
"start_pushers",
"synapse.app.pusher",
"pusher_instances",
)
self.start_pushers = self.instance_name in pusher_instances
self.pusher_shard_config = ShardedWorkerHandlingConfig(pusher_instances)
@@ -412,6 +368,64 @@ class WorkerConfig(Config):
# (By this point, these are either the same value or only one is not None.)
return bool(new_option_should_run_here or legacy_option_should_run_here)
def _worker_names_performing_this_duty(
self,
config: Dict[str, Any],
legacy_option_name: str,
legacy_app_name: str,
modern_instance_list_name: str,
) -> List[str]:
"""
Retrieves the names of the workers handling a given duty, by either legacy
option or instance list.
There are two ways of configuring which instances handle a given duty, e.g.
for configuring pushers:
1. The old way where "start_pushers" is set to false and running a
`synapse.app.pusher'` worker app.
2. Specifying the workers sending federation in `pusher_instances`.
Args:
config: settings read from yaml.
legacy_option_name: the old way of enabling options. e.g. 'start_pushers'
legacy_app_name: The historical app name. e.g. 'synapse.app.pusher'
modern_instance_list_name: the string name of the new instance_list. e.g.
'pusher_instances'
Returns:
A list of worker instance names handling the given duty.
"""
legacy_option = config.get(legacy_option_name, True)
worker_instances = config.get(modern_instance_list_name)
if worker_instances is None:
# Default to an empty list, which means "another, unknown, worker is
# responsible for it".
worker_instances = []
# If no worker instances are set we check if the legacy option
# is set, which means use the main process.
if legacy_option:
worker_instances = ["master"]
if self.worker_app == legacy_app_name:
if legacy_option:
# If we're using `legacy_app_name`, and not using
# `modern_instance_list_name`, then we should have
# explicitly set `legacy_option_name` to false.
raise ConfigError(
f"The '{legacy_option_name}' config option must be disabled in "
"the main synapse process before they can be run in a separate "
"worker.\n"
f"Please add `{legacy_option_name}: false` to the main config.\n",
)
worker_instances = [self.worker_name]
return worker_instances
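# A minimal sketch of what this helper returns under each style (worker name
# invented). With the modern instance list:
#
#     config = {"pusher_instances": ["pusher_worker1"]}
#     # -> ["pusher_worker1"]
#
# With neither option set, the legacy option defaults to true and the duty
# stays on the main process:
#
#     config = {}
#     # -> ["master"]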
def read_arguments(self, args: argparse.Namespace) -> None:
# We support a bunch of command line arguments that override options in
# the config. A lot of these options have a worker_* prefix when running


@@ -14,7 +14,6 @@
import abc
import logging
import urllib
from typing import TYPE_CHECKING, Callable, Dict, Iterable, List, Optional, Tuple
import attr
@@ -213,7 +212,7 @@ class Keyring:
def verify_json_objects_for_server(
self, server_and_json: Iterable[Tuple[str, dict, int]]
) -> List[defer.Deferred]:
) -> List["defer.Deferred[None]"]:
"""Bulk verifies signatures of json objects, bulk fetching keys as
necessary.
@@ -226,10 +225,9 @@ class Keyring:
valid.
Returns:
List<Deferred[None]>: for each input triplet, a deferred indicating success
or failure to verify each json object's signature for the given
server_name. The deferreds run their callbacks in the sentinel
logcontext.
For each input triplet, a deferred indicating success or failure to
verify each json object's signature for the given server_name. The
deferreds run their callbacks in the sentinel logcontext.
"""
return [
run_in_background(
@@ -814,31 +812,27 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
results = {}
async def get_key(key_to_fetch_item: _FetchKeyRequest) -> None:
async def get_keys(key_to_fetch_item: _FetchKeyRequest) -> None:
server_name = key_to_fetch_item.server_name
key_ids = key_to_fetch_item.key_ids
try:
keys = await self.get_server_verify_key_v2_direct(server_name, key_ids)
keys = await self.get_server_verify_keys_v2_direct(server_name)
results[server_name] = keys
except KeyLookupError as e:
logger.warning(
"Error looking up keys %s from %s: %s", key_ids, server_name, e
)
logger.warning("Error looking up keys from %s: %s", server_name, e)
except Exception:
logger.exception("Error getting keys %s from %s", key_ids, server_name)
logger.exception("Error getting keys from %s", server_name)
await yieldable_gather_results(get_key, keys_to_fetch)
await yieldable_gather_results(get_keys, keys_to_fetch)
return results
async def get_server_verify_key_v2_direct(
self, server_name: str, key_ids: Iterable[str]
async def get_server_verify_keys_v2_direct(
self, server_name: str
) -> Dict[str, FetchKeyResult]:
"""
Args:
server_name:
key_ids:
server_name: Server to request keys from
Returns:
Map from key ID to lookup result
@@ -846,57 +840,41 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
Raises:
KeyLookupError if there was a problem making the lookup
"""
keys: Dict[str, FetchKeyResult] = {}
for requested_key_id in key_ids:
# we may have found this key as a side-effect of asking for another.
if requested_key_id in keys:
continue
time_now_ms = self.clock.time_msec()
try:
response = await self.client.get_json(
destination=server_name,
path="/_matrix/key/v2/server/"
+ urllib.parse.quote(requested_key_id),
ignore_backoff=True,
# we only give the remote server 10s to respond. It should be an
# easy request to handle, so if it doesn't reply within 10s, it's
# probably not going to.
#
# Furthermore, when we are acting as a notary server, we cannot
# wait all day for all of the origin servers, as the requesting
# server will otherwise time out before we can respond.
#
# (Note that get_json may make 4 attempts, so this can still take
# almost 45 seconds to fetch the headers, plus up to another 60s to
# read the response).
timeout=10000,
)
except (NotRetryingDestination, RequestSendFailed) as e:
# these both have str() representations which we can't really improve
# upon
raise KeyLookupError(str(e))
except HttpResponseException as e:
raise KeyLookupError("Remote server returned an error: %s" % (e,))
assert isinstance(response, dict)
if response["server_name"] != server_name:
raise KeyLookupError(
"Expected a response for server %r not %r"
% (server_name, response["server_name"])
)
response_keys = await self.process_v2_response(
from_server=server_name,
response_json=response,
time_added_ms=time_now_ms,
time_now_ms = self.clock.time_msec()
try:
response = await self.client.get_json(
destination=server_name,
path="/_matrix/key/v2/server",
ignore_backoff=True,
# we only give the remote server 10s to respond. It should be an
# easy request to handle, so if it doesn't reply within 10s, it's
# probably not going to.
#
# Furthermore, when we are acting as a notary server, we cannot
# wait all day for all of the origin servers, as the requesting
# server will otherwise time out before we can respond.
#
# (Note that get_json may make 4 attempts, so this can still take
# almost 45 seconds to fetch the headers, plus up to another 60s to
# read the response).
timeout=10000,
)
await self.store.store_server_verify_keys(
server_name,
time_now_ms,
((server_name, key_id, key) for key_id, key in response_keys.items()),
)
keys.update(response_keys)
except (NotRetryingDestination, RequestSendFailed) as e:
# these both have str() representations which we can't really improve
# upon
raise KeyLookupError(str(e))
except HttpResponseException as e:
raise KeyLookupError("Remote server returned an error: %s" % (e,))
return keys
assert isinstance(response, dict)
if response["server_name"] != server_name:
raise KeyLookupError(
"Expected a response for server %r not %r"
% (server_name, response["server_name"])
)
return await self.process_v2_response(
from_server=server_name,
response_json=response,
time_added_ms=time_now_ms,
)


@@ -597,8 +597,7 @@ def _event_type_from_format_version(
format_version: The event format version
Returns:
type: A type that can be initialized as per the initializer of
`FrozenEvent`
A type that can be initialized as per the initializer of `FrozenEvent`
"""
if format_version == EventFormatVersions.ROOM_V1_V2:


@@ -128,6 +128,7 @@ class EventBuilder:
state_filter=StateFilter.from_types(
auth_types_for_event(self.room_version, self)
),
await_full_state=False,
)
auth_event_ids = self._event_auth_handler.compute_auth_events(
self, state_ids


@@ -80,6 +80,18 @@ PDU_RETRY_TIME_MS = 1 * 60 * 1000
T = TypeVar("T")
@attr.s(frozen=True, slots=True, auto_attribs=True)
class PulledPduInfo:
"""
A result object that stores the PDU and information about it, such as which
homeserver we pulled it from (`pull_origin`).
"""
pdu: EventBase
# Which homeserver we pulled the PDU from
pull_origin: str
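# A hedged usage sketch (caller names assumed) showing how call sites unpack
# the new return type of `get_pdu`:
#
#     pulled_pdu_info = await client.get_pdu(destinations, event_id, room_version)
#     if pulled_pdu_info is not None:
#         event = pulled_pdu_info.pdu
#         origin = pulled_pdu_info.pull_origin  # which server actually answered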
class InvalidResponseError(RuntimeError):
"""Helper for _try_destination_list: indicates that the server returned a response
we couldn't parse
@@ -114,7 +126,9 @@ class FederationClient(FederationBase):
self.hostname = hs.hostname
self.signing_key = hs.signing_key
self._get_pdu_cache: ExpiringCache[str, EventBase] = ExpiringCache(
# Cache mapping `event_id` to a tuple of the event itself and the `pull_origin`
# (which server we pulled the event from)
self._get_pdu_cache: ExpiringCache[str, Tuple[EventBase, str]] = ExpiringCache(
cache_name="get_pdu_cache",
clock=self._clock,
max_len=1000,
@@ -352,11 +366,11 @@ class FederationClient(FederationBase):
@tag_args
async def get_pdu(
self,
destinations: Iterable[str],
destinations: Collection[str],
event_id: str,
room_version: RoomVersion,
timeout: Optional[int] = None,
) -> Optional[EventBase]:
) -> Optional[PulledPduInfo]:
"""Requests the PDU with given origin and ID from the remote home
servers.
@@ -371,11 +385,11 @@ class FederationClient(FederationBase):
moving to the next destination. None indicates no timeout.
Returns:
The requested PDU, or None if we were unable to find it.
The requested PDU wrapped in `PulledPduInfo`, or None if we were unable to find it.
"""
logger.debug(
"get_pdu: event_id=%s from destinations=%s", event_id, destinations
"get_pdu(event_id=%s): from destinations=%s", event_id, destinations
)
# TODO: Rate limit the number of times we try and get the same event.
@@ -384,19 +398,25 @@ class FederationClient(FederationBase):
# it gets persisted to the database), so we cache the results of the lookup.
# Note that this is separate to the regular get_event cache which caches
# events once they have been persisted.
event = self._get_pdu_cache.get(event_id)
get_pdu_cache_entry = self._get_pdu_cache.get(event_id)
event = None
pull_origin = None
if get_pdu_cache_entry:
event, pull_origin = get_pdu_cache_entry
# If we don't see the event in the cache, go try to fetch it from the
# provided remote federated destinations
if not event:
else:
pdu_attempts = self.pdu_destination_tried.setdefault(event_id, {})
# TODO: We can probably refactor this to use `_try_destination_list`
for destination in destinations:
now = self._clock.time_msec()
last_attempt = pdu_attempts.get(destination, 0)
if last_attempt + PDU_RETRY_TIME_MS > now:
logger.debug(
"get_pdu: skipping destination=%s because we tried it recently last_attempt=%s and we only check every %s (now=%s)",
"get_pdu(event_id=%s): skipping destination=%s because we tried it recently last_attempt=%s and we only check every %s (now=%s)",
event_id,
destination,
last_attempt,
PDU_RETRY_TIME_MS,
@@ -411,43 +431,48 @@ class FederationClient(FederationBase):
room_version=room_version,
timeout=timeout,
)
pull_origin = destination
pdu_attempts[destination] = now
if event:
# Prime the cache
self._get_pdu_cache[event.event_id] = event
self._get_pdu_cache[event.event_id] = (event, pull_origin)
# Now that we have an event, we can break out of this
# loop and stop asking other destinations.
break
except NotRetryingDestination as e:
logger.info("get_pdu(event_id=%s): %s", event_id, e)
continue
except FederationDeniedError:
logger.info(
"get_pdu(event_id=%s): Not attempting to fetch PDU from %s because the homeserver is not on our federation whitelist",
event_id,
destination,
)
continue
except SynapseError as e:
logger.info(
"Failed to get PDU %s from %s because %s",
"get_pdu(event_id=%s): Failed to get PDU from %s because %s",
event_id,
destination,
e,
)
continue
except NotRetryingDestination as e:
logger.info(str(e))
continue
except FederationDeniedError as e:
logger.info(str(e))
continue
except Exception as e:
pdu_attempts[destination] = now
logger.info(
"Failed to get PDU %s from %s because %s",
"get_pdu(event_id=%s): Failed to get PDU from %s because %s",
event_id,
destination,
e,
)
continue
if not event:
if not event or not pull_origin:
return None
# `event` now refers to an object stored in `get_pdu_cache`. Our
@@ -459,7 +484,7 @@ class FederationClient(FederationBase):
event.room_version,
)
return event_copy
return PulledPduInfo(event_copy, pull_origin)
@trace
@tag_args
@@ -699,12 +724,14 @@ class FederationClient(FederationBase):
pdu_origin = get_domain_from_id(pdu.sender)
if not res and pdu_origin != origin:
try:
res = await self.get_pdu(
pulled_pdu_info = await self.get_pdu(
destinations=[pdu_origin],
event_id=pdu.event_id,
room_version=room_version,
timeout=10000,
)
if pulled_pdu_info is not None:
res = pulled_pdu_info.pdu
except SynapseError:
pass
@@ -806,6 +833,7 @@ class FederationClient(FederationBase):
)
for destination in destinations:
# We don't want to ask our own server for information we don't have
if destination == self.server_name:
continue
@@ -814,9 +842,21 @@ class FederationClient(FederationBase):
except (
RequestSendFailed,
InvalidResponseError,
NotRetryingDestination,
) as e:
logger.warning("Failed to %s via %s: %s", description, destination, e)
# Skip to the next homeserver in the list to try.
continue
except NotRetryingDestination as e:
logger.info("%s: %s", description, e)
continue
except FederationDeniedError:
logger.info(
"%s: Not attempting to %s from %s because the homeserver is not on our federation whitelist",
description,
description,
destination,
)
continue
except UnsupportedRoomVersionError:
raise
except HttpResponseException as e:
@@ -1609,6 +1649,64 @@ class FederationClient(FederationBase):
return result
async def timestamp_to_event(
self, *, destinations: List[str], room_id: str, timestamp: int, direction: str
) -> Optional["TimestampToEventResponse"]:
"""
Calls each remote federated server in `destinations`, asking for its closest
event to the given timestamp in the given direction, until we get a response.
Also validates the response to always return the expected keys or raises an
error.
Args:
destinations: The domains of homeservers to try fetching from
room_id: Room to fetch the event from
timestamp: The point in time (inclusive) we should navigate from in
the given direction to find the closest event.
direction: ["f"|"b"] to indicate whether we should navigate forward
or backward from the given timestamp to find the closest event.
Returns:
A parsed TimestampToEventResponse including the closest event_id
and origin_server_ts or None if no destination has a response.
"""
async def _timestamp_to_event_from_destination(
destination: str,
) -> TimestampToEventResponse:
return await self._timestamp_to_event_from_destination(
destination, room_id, timestamp, direction
)
try:
# Loop through each homeserver candidate until we get a successful response
timestamp_to_event_response = await self._try_destination_list(
"timestamp_to_event",
destinations,
# TODO: The requested timestamp may lie in a part of the
# event graph that the remote server *also* didn't have,
# in which case they will have returned another event
# which may be nowhere near the requested timestamp. In
# the future, we may need to reconcile that gap and ask
# other homeservers, and/or extend `/timestamp_to_event`
# to return events on *both* sides of the timestamp to
# help reconcile the gap faster.
_timestamp_to_event_from_destination,
# Since this endpoint is new, we should try other servers before giving up.
# We can safely remove this in a year (remove after 2023-11-16).
failover_on_unknown_endpoint=True,
)
return timestamp_to_event_response
except SynapseError as e:
logger.warning(
"timestamp_to_event(room_id=%s, timestamp=%s, direction=%s): encountered error when trying to fetch from destinations: %s",
room_id,
timestamp,
direction,
e,
)
return None
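# A hypothetical call-site sketch (destinations and timestamp invented):
#
#     response = await client.timestamp_to_event(
#         destinations=["remote1.example.org", "remote2.example.org"],
#         room_id="!room:example.org",
#         timestamp=1668556800000,  # ms since the epoch
#         direction="f",
#     )
#     if response is not None:
#         closest = (response.event_id, response.origin_server_ts)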
async def _timestamp_to_event_from_destination(
self, destination: str, room_id: str, timestamp: int, direction: str
) -> "TimestampToEventResponse":
"""


@@ -74,6 +74,8 @@ from synapse.replication.http.federation import (
)
from synapse.storage.databases.main.events import PartialStateConflictError
from synapse.storage.databases.main.lock import Lock
from synapse.storage.databases.main.roommember import extract_heroes_from_room_summary
from synapse.storage.roommember import MemberSummary
from synapse.types import JsonDict, StateMap, get_domain_from_id
from synapse.util import json_decoder, unwrapFirstError
from synapse.util.async_helpers import Linearizer, concurrently_execute, gather_results
@@ -481,6 +483,14 @@ class FederationServer(FederationBase):
pdu_results[pdu.event_id] = await process_pdu(pdu)
async def process_pdu(pdu: EventBase) -> JsonDict:
"""
Processes a pushed PDU sent to us via a `/send` transaction
Returns:
JsonDict representing a "PDU Processing Result" that will be bundled up
with the other processed PDUs in the `/send` transaction and sent back
to the remote homeserver.
"""
event_id = pdu.event_id
with nested_logging_context(event_id):
try:
@@ -683,8 +693,9 @@ class FederationServer(FederationBase):
state_event_ids: Collection[str]
servers_in_room: Optional[Collection[str]]
if caller_supports_partial_state:
summary = await self.store.get_room_summary(room_id)
state_event_ids = _get_event_ids_for_partial_state_join(
event, prev_state_ids
event, prev_state_ids, summary
)
servers_in_room = await self.state.get_hosts_in_room_at_events(
room_id, event_ids=event.prev_event_ids()
@@ -1487,6 +1498,7 @@ class FederationHandlerRegistry:
def _get_event_ids_for_partial_state_join(
join_event: EventBase,
prev_state_ids: StateMap[str],
summary: Dict[str, MemberSummary],
) -> Collection[str]:
"""Calculate state to be retuned in a partial_state send_join
@@ -1513,8 +1525,19 @@ def _get_event_ids_for_partial_state_join(
if current_membership_event_id is not None:
state_event_ids.add(current_membership_event_id)
# TODO: return a few more members:
# - those with invites
# - those that are kicked? / banned
name_id = prev_state_ids.get((EventTypes.Name, ""))
canonical_alias_id = prev_state_ids.get((EventTypes.CanonicalAlias, ""))
if not name_id and not canonical_alias_id:
# Also include the hero members of the room (for DM rooms without a title).
# To do this properly, we should select the correct subset of membership events
# from `prev_state_ids`. Instead, we are lazier and use the (cached)
# `get_room_summary` function, which is based on the current state of the room.
# This introduces races; we choose to ignore them because a) they should be rare
# and b) even if it's wrong, joining servers will get the full state eventually.
heroes = extract_heroes_from_room_summary(summary, join_event.state_key)
for hero in heroes:
membership_event_id = prev_state_ids.get((EventTypes.Member, hero))
if membership_event_id:
state_event_ids.add(membership_event_id)
return state_event_ids
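# For example (users invented): in an unnamed DM between @alice:a.example and
# @bob:b.example, the hero calculation pulls in both membership events so the
# joining server can render a room name before the full state arrives.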


@@ -434,7 +434,23 @@ class FederationSender(AbstractFederationSender):
# If there are no prev event IDs then the state is empty
# and so no remote servers in the room
destinations = set()
else:
if destinations is None:
# During partial join we use the set of servers that we got
# when beginning the join. It's still possible that we send
# events to servers that left the room in the meantime, but
# we consider that an acceptable risk since it is only our own
# events that we leak and not other servers'.
partial_state_destinations = (
await self.store.get_partial_state_servers_at_join(
event.room_id
)
)
if len(partial_state_destinations) > 0:
destinations = partial_state_destinations
if destinations is None:
# We check the external cache for the destinations, which is
# stored per state group.
@@ -631,7 +647,7 @@ class FederationSender(AbstractFederationSender):
room_id = receipt.room_id
# Work out which remote servers should be poked and poke them.
domains_set = await self._storage_controllers.state.get_current_hosts_in_room(
domains_set = await self._storage_controllers.state.get_current_hosts_in_room_or_partial_state_approximation(
room_id
)
domains = [


@@ -35,7 +35,7 @@ from synapse.logging import issue9533_logger
from synapse.logging.opentracing import SynapseTags, set_tag
from synapse.metrics import sent_transactions_counter
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import ReadReceipt
from synapse.types import JsonDict, ReadReceipt
from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
from synapse.visibility import filter_events_for_server
@@ -136,8 +136,11 @@ class PerDestinationQueue:
# destination
self._pending_presence: Dict[str, UserPresenceState] = {}
# room_id -> receipt_type -> user_id -> receipt_dict
self._pending_rrs: Dict[str, Dict[str, Dict[str, dict]]] = {}
# List of room_id -> receipt_type -> user_id -> receipt_dict,
#
# Each EDU can only contain a single receipt per
# (room ID, receipt type, user ID, thread ID) tuple.
self._pending_receipt_edus: List[Dict[str, Dict[str, Dict[str, dict]]]] = []
self._rrs_pending_flush = False
# stream_id of last successfully sent to-device message.
@@ -202,17 +205,53 @@ class PerDestinationQueue:
Args:
receipt: receipt to be queued
"""
self._pending_rrs.setdefault(receipt.room_id, {}).setdefault(
receipt.receipt_type, {}
)[receipt.user_id] = {"event_ids": receipt.event_ids, "data": receipt.data}
serialized_receipt: JsonDict = {
"event_ids": receipt.event_ids,
"data": receipt.data,
}
if receipt.thread_id is not None:
serialized_receipt["data"]["thread_id"] = receipt.thread_id
# Find which EDU to add this receipt to. There are three situations depending
# on the (room ID, receipt type, user, thread ID) tuple:
#
# 1. If it fully matches, clobber the information.
# 2. If it is missing, add the information.
# 3. If the subset tuple of (room ID, receipt type, user) matches, check
# the next EDU (or add a new EDU).
for edu in self._pending_receipt_edus:
receipt_content = edu.setdefault(receipt.room_id, {}).setdefault(
receipt.receipt_type, {}
)
# If this room ID, receipt type, user ID is not in this EDU, OR if
# the full tuple matches, use the current EDU.
if (
receipt.user_id not in receipt_content
or receipt_content[receipt.user_id].get("thread_id")
== receipt.thread_id
):
receipt_content[receipt.user_id] = serialized_receipt
break
# If no matching EDU was found, create a new one.
else:
self._pending_receipt_edus.append(
{
receipt.room_id: {
receipt.receipt_type: {receipt.user_id: serialized_receipt}
}
}
)
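# As an illustration (IDs invented): a second receipt from @alice:example.com
# for the same (room, type, thread) clobbers the earlier entry (case 1), while
# a receipt for a different thread in the same room falls through to the next
# EDU, or to a brand new one (case 3).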
def flush_read_receipts_for_room(self, room_id: str) -> None:
# if we don't have any read-receipts for this room, it may be that we've already
# sent them out, so we don't need to flush.
if room_id not in self._pending_rrs:
return
self._rrs_pending_flush = True
self.attempt_new_transaction()
# If there are any pending receipts for this room then force-flush them
# in a new transaction.
for edu in self._pending_receipt_edus:
if room_id in edu:
self._rrs_pending_flush = True
self.attempt_new_transaction()
# No use in checking remaining EDUs if the room was found.
break
def send_keyed_edu(self, edu: Edu, key: Hashable) -> None:
self._pending_edus_keyed[(edu.edu_type, key)] = edu
@@ -351,7 +390,7 @@ class PerDestinationQueue:
self._pending_edus = []
self._pending_edus_keyed = {}
self._pending_presence = {}
self._pending_rrs = {}
self._pending_receipt_edus = []
self._start_catching_up()
except FederationDeniedError as e:
@@ -505,6 +544,7 @@ class PerDestinationQueue:
new_pdus = await filter_events_for_server(
self._storage_controllers,
self._destination,
self._server_name,
new_pdus,
redact=False,
)
@@ -542,22 +582,27 @@ class PerDestinationQueue:
self._destination, last_successful_stream_ordering
)
def _get_rr_edus(self, force_flush: bool) -> Iterable[Edu]:
if not self._pending_rrs:
def _get_receipt_edus(self, force_flush: bool, limit: int) -> Iterable[Edu]:
if not self._pending_receipt_edus:
return
if not force_flush and not self._rrs_pending_flush:
# not yet time for this lot
return
edu = Edu(
origin=self._server_name,
destination=self._destination,
edu_type=EduTypes.RECEIPT,
content=self._pending_rrs,
)
self._pending_rrs = {}
self._rrs_pending_flush = False
yield edu
# Send at most limit EDUs for receipts.
for content in self._pending_receipt_edus[:limit]:
yield Edu(
origin=self._server_name,
destination=self._destination,
edu_type=EduTypes.RECEIPT,
content=content,
)
self._pending_receipt_edus = self._pending_receipt_edus[limit:]
# If there are still pending read-receipts, don't reset the pending flush
# flag.
if not self._pending_receipt_edus:
self._rrs_pending_flush = False
def _pop_pending_edus(self, limit: int) -> List[Edu]:
pending_edus = self._pending_edus
@@ -644,40 +689,20 @@ class _TransactionQueueManager:
async def __aenter__(self) -> Tuple[List[EventBase], List[Edu]]:
# First we calculate the EDUs we want to send, if any.
# We start by fetching device related EDUs, i.e device updates and to
# device messages. We have to keep 2 free slots for presence and rr_edus.
device_edu_limit = MAX_EDUS_PER_TRANSACTION - 2
# There's a maximum number of EDUs that can be sent with a transaction,
# generally device updates and to-device messages get priority, but we
# want to ensure that there's room for some other EDUs as well.
#
# This is done by:
#
# * Add a presence EDU, if one exists.
# * Add up-to a small limit of read receipt EDUs.
# * Add to-device EDUs, but leave some space for device list updates.
# * Add device list updates EDUs.
# * If there's any remaining room, add other EDUs.
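# Worked through (assuming MAX_EDUS_PER_TRANSACTION remains 100, as in this
# file, and the receipt limit of 5 used below): one presence EDU plus up to
# five receipt EDUs leaves at least 94 slots; to-device messages may take up
# to 84 of those (ten are held back for device list updates), and anything
# still free goes to the remaining EDU types.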
pending_edus = []
# We prioritize to-device messages so that existing encryption channels
# work. We also keep a few slots spare (by reducing the limit) so that
# we can still trickle out some device list updates.
(
to_device_edus,
device_stream_id,
) = await self.queue._get_to_device_message_edus(device_edu_limit - 10)
if to_device_edus:
self._device_stream_id = device_stream_id
else:
self.queue._last_device_stream_id = device_stream_id
device_edu_limit -= len(to_device_edus)
device_update_edus, dev_list_id = await self.queue._get_device_update_edus(
device_edu_limit
)
if device_update_edus:
self._device_list_id = dev_list_id
else:
self.queue._last_device_list_stream_id = dev_list_id
pending_edus = device_update_edus + to_device_edus
# Now add the read receipt EDU.
pending_edus.extend(self.queue._get_rr_edus(force_flush=False))
# And presence EDU.
# Add presence EDU.
if self.queue._pending_presence:
pending_edus.append(
Edu(
@@ -696,16 +721,47 @@ class _TransactionQueueManager:
)
self.queue._pending_presence = {}
# Finally add any other types of EDUs if there is room.
pending_edus.extend(
self.queue._pop_pending_edus(MAX_EDUS_PER_TRANSACTION - len(pending_edus))
# Add read receipt EDUs.
pending_edus.extend(self.queue._get_receipt_edus(force_flush=False, limit=5))
edu_limit = MAX_EDUS_PER_TRANSACTION - len(pending_edus)
# Next, prioritize to-device messages so that existing encryption channels
# work. We also keep a few slots spare (by reducing the limit) so that
# we can still trickle out some device list updates.
(
to_device_edus,
device_stream_id,
) = await self.queue._get_to_device_message_edus(edu_limit - 10)
if to_device_edus:
self._device_stream_id = device_stream_id
else:
self.queue._last_device_stream_id = device_stream_id
pending_edus.extend(to_device_edus)
edu_limit -= len(to_device_edus)
# Add device list update EDUs.
device_update_edus, dev_list_id = await self.queue._get_device_update_edus(
edu_limit
)
while (
len(pending_edus) < MAX_EDUS_PER_TRANSACTION
and self.queue._pending_edus_keyed
):
if device_update_edus:
self._device_list_id = dev_list_id
else:
self.queue._last_device_list_stream_id = dev_list_id
pending_edus.extend(device_update_edus)
edu_limit -= len(device_update_edus)
# Finally add any other types of EDUs if there is room.
other_edus = self.queue._pop_pending_edus(edu_limit)
pending_edus.extend(other_edus)
edu_limit -= len(other_edus)
while edu_limit > 0 and self.queue._pending_edus_keyed:
_, val = self.queue._pending_edus_keyed.popitem()
pending_edus.append(val)
edu_limit -= 1
# Now we look for any PDUs to send, by getting up to 50 PDUs from the
# queue
@@ -716,8 +772,10 @@ class _TransactionQueueManager:
# if we've decided to send a transaction anyway, and we have room, we
# may as well send any pending RRs
if len(pending_edus) < MAX_EDUS_PER_TRANSACTION:
pending_edus.extend(self.queue._get_rr_edus(force_flush=True))
if edu_limit:
pending_edus.extend(
self.queue._get_receipt_edus(force_flush=True, limit=edu_limit)
)
if self._pdus:
self._last_stream_ordering = self._pdus[


@@ -185,9 +185,8 @@ class TransportLayerClient:
Raises:
Various exceptions when the request fails
"""
path = _create_path(
FEDERATION_UNSTABLE_PREFIX,
"/org.matrix.msc3030/timestamp_to_event/%s",
path = _create_v1_path(
"/timestamp_to_event/%s",
room_id,
)
@@ -280,12 +279,11 @@ class TransportLayerClient:
Note that this does not append any events to any graphs.
Args:
destination (str): address of remote homeserver
room_id (str): room to join/leave
user_id (str): user to be joined/left
membership (str): one of join/leave
params (dict[str, str|Iterable[str]]): Query parameters to include in the
request.
destination: address of remote homeserver
room_id: room to join/leave
user_id: user to be joined/left
membership: one of join/leave
params: Query parameters to include in the request.
Returns:
Succeeds when we get a 2xx HTTP response. The result


@@ -25,7 +25,6 @@ from synapse.federation.transport.server._base import (
from synapse.federation.transport.server.federation import (
FEDERATION_SERVLET_CLASSES,
FederationAccountStatusServlet,
FederationTimestampLookupServlet,
)
from synapse.http.server import HttpServer, JsonResource
from synapse.http.servlet import (
@@ -291,13 +290,6 @@ def register_servlets(
)
for servletclass in SERVLET_GROUPS[servlet_group]:
# Only allow the `/timestamp_to_event` servlet if msc3030 is enabled
if (
servletclass == FederationTimestampLookupServlet
and not hs.config.experimental.msc3030_enabled
):
continue
# Only allow the `/account_status` servlet if msc3720 is enabled
if (
servletclass == FederationAccountStatusServlet

Some files were not shown because too many files have changed in this diff.