Mirror of https://github.com/element-hq/synapse.git (synced 2025-12-07 01:20:16 +00:00)

Compare commits: erikj/opti ... dependabot (80 commits)
Commits compared (SHA1; the author, message, and date columns were empty in the source view):

16beed5ffe, 69e9b75373, 5d0514f29b, 4e5410fdae, 12d65a6778, 1006c12eb2, 57efc8c03e, a5e16a4ab5, 80ad02e10e, 9512b84a72, 22aa925523, 0ab99369a1, 6ececb8f2a, 2ce7a1edf7, ec885ffd33, 11bc9a1b3a, c5b379de66, adda2a4613, 5d47138b46, d025b5ab50, ae6179b382, 5dd6157972, 1266138b66, 24975eca4d, 451a9dc7b9, f6a3e5e1c2, 05576f0b4b, 60aebdb27e, b1b4b2944d, bdcc9fa388, 6a0c21fabd, f40641c29b, 1bb528ee44, 165f4ca776, 475e192cbe, 43040a4051, b3e2d10f39, a5986ac229, 006251a5d0, 422f3ecec1, 4e90221d87, e2610de208, e8c8924b81, e8e0f0fad7, beb7a951f4, d34f827ed8, 9920417723, 316d635906, 8bbe66a9b9, d4e3ad04cd, 55c0391cc8, 81e0f57800, ae4862c38f, 602956ef64, 444b565c76, 8068f31146, 5210565c12, de955293cf, 93889eb2e7, ece66ba61c, ef9ef99f59, cfbddc258f, 302534c348, f144b4c7e9, 13dea6949b, 386cabda83, f53a3a56e2, 2fc43e4219, b0d2aca164, f68e8d0021, 89e7609f5c, b89a66f831, b066b3aa04, e4b0cd87cc, 985b3ab58d, afc3af7763, af2da0e47a, ac8c9ac50d, 443a9eb335, aad26cb93f
@@ -53,7 +53,7 @@ if not IS_PR:
             "database": "sqlite",
             "extras": "all",
         }
-        for version in ("3.9", "3.10", "3.11", "3.12")
+        for version in ("3.9", "3.10", "3.11", "3.12", "3.13")
     )

 trial_postgres_tests = [
@@ -68,9 +68,9 @@ trial_postgres_tests = [
 if not IS_PR:
     trial_postgres_tests.append(
         {
-            "python-version": "3.12",
+            "python-version": "3.13",
             "database": "postgres",
-            "postgres-version": "16",
+            "postgres-version": "17",
             "extras": "all",
         }
     )
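The SQLite matrix above is produced by a comprehension over the supported Python versions. A minimal sketch of that generator (the `trial_sqlite_tests` name and surrounding structure are assumptions, since this hunk only shows its tail):

```python
# Sketch: one trial job per Python version, all against SQLite.
trial_sqlite_tests = [
    {
        "python-version": version,
        "database": "sqlite",
        "extras": "all",
    }
    for version in ("3.9", "3.10", "3.11", "3.12", "3.13")
]
```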
.github/workflows/docker.yml (vendored, 2 changed lines)
@@ -30,7 +30,7 @@ jobs:
         run: docker buildx inspect

       - name: Install Cosign
-        uses: sigstore/cosign-installer@v3.6.0
+        uses: sigstore/cosign-installer@v3.7.0

       - name: Checkout repository
         uses: actions/checkout@v4
CHANGES.md (126 changed lines)
@@ -1,3 +1,129 @@
# Synapse 1.117.0 (2024-10-15)

No significant changes since 1.117.0rc1.


# Synapse 1.117.0rc1 (2024-10-08)

### Features

- Add config option `redis.password_path`. ([\#17717](https://github.com/element-hq/synapse/issues/17717))

### Bugfixes

- Fix a rare bug introduced in v1.29.0 where invalidating a user's access token from a worker could raise an error. ([\#17779](https://github.com/element-hq/synapse/issues/17779))
- In the response to `GET /_matrix/client/versions`, set the `unstable_features` flag for [MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140) to `false` when server configuration disables support for delayed events. ([\#17780](https://github.com/element-hq/synapse/issues/17780))
- Improve input validation and room membership checks in admin redaction API. ([\#17792](https://github.com/element-hq/synapse/issues/17792))

### Improved Documentation

- Clarify the docstring of `test_forget_when_not_left`. ([\#17628](https://github.com/element-hq/synapse/issues/17628))
- Add documentation note about PYTHONMALLOC for accurate jemalloc memory tracking. Contributed by @hensg. ([\#17709](https://github.com/element-hq/synapse/issues/17709))
- Remove spurious "TODO UPDATE ALL THIS" note in the Debian installation docs. ([\#17749](https://github.com/element-hq/synapse/issues/17749))
- Explain how load balancing works for `federation_sender_instances`. ([\#17776](https://github.com/element-hq/synapse/issues/17776))

### Internal Changes

- Minor performance increase for large accounts using sliding sync. ([\#17751](https://github.com/element-hq/synapse/issues/17751))
- Increase performance of the notifier when there are many syncing users. ([\#17765](https://github.com/element-hq/synapse/issues/17765), [\#17766](https://github.com/element-hq/synapse/issues/17766))
- Fix performance of streams that don't change often. ([\#17767](https://github.com/element-hq/synapse/issues/17767))
- Improve performance of sliding sync connections that do not ask for any rooms. ([\#17768](https://github.com/element-hq/synapse/issues/17768))
- Reduce overhead of sliding sync E2EE loops. ([\#17771](https://github.com/element-hq/synapse/issues/17771))
- Sliding sync minor performance speed up using new table. ([\#17787](https://github.com/element-hq/synapse/issues/17787))
- Sliding sync minor performance improvement by omitting unchanged data from incremental responses. ([\#17788](https://github.com/element-hq/synapse/issues/17788))
- Speed up sliding sync when there are many active subscriptions. ([\#17789](https://github.com/element-hq/synapse/issues/17789))
- Add missing license headers on new source files. ([\#17799](https://github.com/element-hq/synapse/issues/17799))

### Updates to locked dependencies

* Bump phonenumbers from 8.13.45 to 8.13.46. ([\#17773](https://github.com/element-hq/synapse/issues/17773))
* Bump python-multipart from 0.0.10 to 0.0.12. ([\#17772](https://github.com/element-hq/synapse/issues/17772))
* Bump regex from 1.10.6 to 1.11.0. ([\#17770](https://github.com/element-hq/synapse/issues/17770))
* Bump ruff from 0.6.7 to 0.6.8. ([\#17774](https://github.com/element-hq/synapse/issues/17774))

# Synapse 1.116.0 (2024-10-01)

No significant changes since 1.116.0rc2.


# Synapse 1.116.0rc2 (2024-09-26)

### Features

- Add implementation of restricting who can overwrite a state event as proposed by [MSC3757](https://github.com/matrix-org/matrix-spec-proposals/pull/3757). ([\#17513](https://github.com/element-hq/synapse/issues/17513))


# Synapse 1.116.0rc1 (2024-09-25)

### Features

- Add initial implementation of delayed events as proposed by [MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140). ([\#17326](https://github.com/element-hq/synapse/issues/17326))
- Add an asynchronous Admin API endpoint [to redact all a user's events](https://element-hq.github.io/synapse/v1.116/admin_api/user_admin_api.html#redact-all-the-events-of-a-user),
  and [an endpoint to check on the status of that redaction task](https://element-hq.github.io/synapse/v1.116/admin_api/user_admin_api.html#check-the-status-of-a-redaction-process). ([\#17506](https://github.com/element-hq/synapse/issues/17506))
- Add support for the `tags` and `not_tags` filters for [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync. ([\#17662](https://github.com/element-hq/synapse/issues/17662))
- Guests can use the new media endpoints to download media, as described by [MSC4189](https://github.com/matrix-org/matrix-spec-proposals/pull/4189). ([\#17675](https://github.com/element-hq/synapse/issues/17675))
- Add config option `turn_shared_secret_path`. ([\#17690](https://github.com/element-hq/synapse/issues/17690))
- Return room tags in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync account data extension. ([\#17707](https://github.com/element-hq/synapse/issues/17707))

### Bugfixes

- Make sure we get up-to-date state information when using the new [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync tables to derive room membership. ([\#17692](https://github.com/element-hq/synapse/issues/17692))
- Fix bug where room account data would not correctly be sent down [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync for old rooms. ([\#17695](https://github.com/element-hq/synapse/issues/17695))
- Fix a bug in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync which could prevent /sync from working for certain user accounts. ([\#17727](https://github.com/element-hq/synapse/issues/17727), [\#17733](https://github.com/element-hq/synapse/issues/17733))
- Ignore invites from ignored users in Sliding Sync. ([\#17729](https://github.com/element-hq/synapse/issues/17729))
- Fix bug in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync where the server would incorrectly return a negative bump stamp, which caused Element X apps to stop syncing. ([\#17748](https://github.com/element-hq/synapse/issues/17748))

### Internal Changes

- Import pydantic objects from the `_pydantic_compat` module.
  This allows `check_pydantic_models.py` to mock those pydantic objects
  only in the synapse module, and not interfere with pydantic objects in
  external dependencies. ([\#17667](https://github.com/element-hq/synapse/issues/17667))
- Use [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync tables as a bulk shortcut for getting the max `event_stream_ordering` of rooms. ([\#17693](https://github.com/element-hq/synapse/issues/17693))
- Speed up [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) sliding sync requests a bit where there are many room changes. ([\#17696](https://github.com/element-hq/synapse/issues/17696))
- Refactor [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) sliding sync filter unit tests so the sliding sync API has better test coverage. ([\#17703](https://github.com/element-hq/synapse/issues/17703))
- Fetch `bump_stamp`s more efficiently in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync. ([\#17723](https://github.com/element-hq/synapse/issues/17723))
- Shortcut for checking if certain background updates have completed (utilized in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync). ([\#17724](https://github.com/element-hq/synapse/issues/17724))
- More efficiently fetch rooms for [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync. ([\#17725](https://github.com/element-hq/synapse/issues/17725))
- Fix `_bulk_get_max_event_pos` being inefficient. ([\#17728](https://github.com/element-hq/synapse/issues/17728))
- Add cache to `get_tags_for_room(...)`. ([\#17730](https://github.com/element-hq/synapse/issues/17730))
- Small performance improvement in speeding up [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync. ([\#17731](https://github.com/element-hq/synapse/issues/17731))
- Minor speed up of initial [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) sliding sync requests. ([\#17734](https://github.com/element-hq/synapse/issues/17734))
- Remove usage of the deprecated `cgi` module, deprecated in Python 3.11 and removed in Python 3.13. ([\#17741](https://github.com/element-hq/synapse/issues/17741))
- Fix typing of a variable that is not `Unknown` anymore after updating `treq`. ([\#17744](https://github.com/element-hq/synapse/issues/17744))

### Updates to locked dependencies

* Bump anyhow from 1.0.86 to 1.0.89. ([\#17685](https://github.com/element-hq/synapse/issues/17685), [\#17716](https://github.com/element-hq/synapse/issues/17716))
* Bump bytes from 1.7.1 to 1.7.2. ([\#17743](https://github.com/element-hq/synapse/issues/17743))
* Bump cryptography from 43.0.0 to 43.0.1. ([\#17689](https://github.com/element-hq/synapse/issues/17689))
* Bump idna from 3.8 to 3.10. ([\#17758](https://github.com/element-hq/synapse/issues/17758))
* Bump msgpack from 1.0.8 to 1.1.0. ([\#17759](https://github.com/element-hq/synapse/issues/17759))
* Bump phonenumbers from 8.13.44 to 8.13.45. ([\#17762](https://github.com/element-hq/synapse/issues/17762))
* Bump prometheus-client from 0.20.0 to 0.21.0. ([\#17746](https://github.com/element-hq/synapse/issues/17746))
* Bump pyasn1 from 0.6.0 to 0.6.1. ([\#17714](https://github.com/element-hq/synapse/issues/17714))
* Bump pyasn1-modules from 0.4.0 to 0.4.1. ([\#17747](https://github.com/element-hq/synapse/issues/17747))
* Bump pydantic from 2.8.2 to 2.9.2. ([\#17756](https://github.com/element-hq/synapse/issues/17756))
* Bump python-multipart from 0.0.9 to 0.0.10. ([\#17745](https://github.com/element-hq/synapse/issues/17745))
* Bump ruff from 0.6.4 to 0.6.7. ([\#17715](https://github.com/element-hq/synapse/issues/17715), [\#17760](https://github.com/element-hq/synapse/issues/17760))
* Bump sentry-sdk from 2.13.0 to 2.14.0. ([\#17712](https://github.com/element-hq/synapse/issues/17712))
* Bump serde from 1.0.209 to 1.0.210. ([\#17686](https://github.com/element-hq/synapse/issues/17686))
* Bump serde_json from 1.0.127 to 1.0.128. ([\#17687](https://github.com/element-hq/synapse/issues/17687))
* Bump treq from 23.11.0 to 24.9.1. ([\#17744](https://github.com/element-hq/synapse/issues/17744))
* Bump types-pyyaml from 6.0.12.20240808 to 6.0.12.20240917. ([\#17755](https://github.com/element-hq/synapse/issues/17755))
* Bump types-requests from 2.32.0.20240712 to 2.32.0.20240914. ([\#17713](https://github.com/element-hq/synapse/issues/17713))
* Bump types-setuptools from 74.1.0.20240907 to 75.1.0.20240917. ([\#17757](https://github.com/element-hq/synapse/issues/17757))

# Synapse 1.115.0 (2024-09-17)

No significant changes since 1.115.0rc2.
Cargo.lock (generated, 24 changed lines)
@@ -13,9 +13,9 @@ dependencies = [

 [[package]]
 name = "anyhow"
-version = "1.0.89"
+version = "1.0.90"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "86fdf8605db99b54d3cd748a44c6d04df638eb5dafb219b135d0149bd0db01f6"
+checksum = "37bf3594c4c988a53154954629820791dde498571819ae4ca50ca811e060cc95"

 [[package]]
 name = "arc-swap"
@@ -67,9 +67,9 @@ checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c"

 [[package]]
 name = "bytes"
-version = "1.7.1"
+version = "1.7.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8318a53db07bb3f8dca91a600466bdb3f2eaadeedfdbcf02e1accbad9271ba50"
+checksum = "428d9aa8fbc0670b7b8d6030a7fadd0f86151cae55e4dbbece15f3780a3dfaf3"

 [[package]]
 name = "cfg-if"
@@ -444,9 +444,9 @@ dependencies = [

 [[package]]
 name = "regex"
-version = "1.10.6"
+version = "1.11.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4219d74c6b67a3654a9fbebc4b419e22126d13d2f3c4a07ee0cb61ff79a79619"
+checksum = "38200e5ee88914975b69f657f0801b6f6dccafd44fd9326302a4aaeecfacb1d8"
 dependencies = [
  "aho-corasick",
  "memchr",
@@ -456,9 +456,9 @@ dependencies = [

 [[package]]
 name = "regex-automata"
-version = "0.4.6"
+version = "0.4.8"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "86b83b8b9847f9bf95ef68afb0b8e6cdb80f498442f5179a29fad448fcc1eaea"
+checksum = "368758f23274712b504848e9d5a6f010445cc8b87a7cdb4d7cbee666c1288da3"
 dependencies = [
  "aho-corasick",
  "memchr",
@@ -467,9 +467,9 @@ dependencies = [

 [[package]]
 name = "regex-syntax"
-version = "0.8.3"
+version = "0.8.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "adad44e29e4c806119491a7f06f03de4d1af22c3a680dd47f1e6e179439d1f56"
+checksum = "2b15c43186be67a4fd63bee50d0303afffcef381492ebe2c5d87f324e1b8815c"

 [[package]]
 name = "ryu"
@@ -505,9 +505,9 @@ dependencies = [

 [[package]]
 name = "serde_json"
-version = "1.0.128"
+version = "1.0.132"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6ff5456707a1de34e7e37f2a6fd3d3f808c318259cbd01ab6377795054b483d8"
+checksum = "d726bfaff4b320266d395898905d0eba0345aae23b54aee3a737e260fd46db03"
 dependencies = [
  "itoa",
  "memchr",
@@ -1,2 +0,0 @@
-Add initial implementation of delayed events as proposed by [MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140).

@@ -1,2 +0,0 @@
-Add an asynchronous Admin API endpoint [to redact all a user's events](https://element-hq.github.io/synapse/v1.116/admin_api/user_admin_api.html#redact-all-the-events-of-a-user),
-and [an endpoint to check on the status of that redaction task](https://element-hq.github.io/synapse/v1.116/admin_api/user_admin_api.html#check-the-status-of-a-redaction-process).
changelog.d/17627.doc (new file)
@@ -0,0 +1 @@
+Clarify when the `user_may_invite` and `user_may_send_3pid_invite` module callbacks are called.
@@ -1 +0,0 @@
-Add support for the `tags` and `not_tags` filters for simplified sliding sync.

@@ -1,5 +0,0 @@
-Import pydantic objects from the `_pydantic_compat` module.
-
-This allows `check_pydantic_models.py` to mock those pydantic objects
-only in the synapse module, and not interfere with pydantic objects in
-external dependencies.

@@ -1 +0,0 @@
-Guests can use the new media endpoints to download media, as described by [MSC4189](https://github.com/matrix-org/matrix-spec-proposals/pull/4189).

@@ -1 +0,0 @@
-Add config option `turn_shared_secret_path`.

@@ -1 +0,0 @@
-Make sure we get up-to-date state information when using the new Sliding Sync tables to derive room membership.

@@ -1 +0,0 @@
-Use Sliding Sync tables as a bulk shortcut for getting the max `event_stream_ordering` of rooms.

@@ -1 +0,0 @@
-Fix bug where room account data would not correctly be sent down sliding sync for old rooms.

@@ -1 +0,0 @@
-Speed up sliding sync requests a bit where there are many room changes.

@@ -1 +0,0 @@
-Refactor sliding sync filter unit tests so the sliding sync API has better test coverage.

@@ -1 +0,0 @@
-Return room tags in Sliding Sync account data extension.
changelog.d/17708.feature (new file)
@@ -0,0 +1 @@
+Added the `display_name_claim` option to the JWT configuration. This option allows specifying the claim key that contains the user's display name in the JWT payload.
changelog.d/17718.misc (new file)
@@ -0,0 +1 @@
+Slight optimization when fetching state/events for Sliding Sync.
@@ -1 +0,0 @@
-Fetch `bump_stamp`'s more efficiently in Sliding Sync.

@@ -1 +0,0 @@
-Shortcut for checking if certain background updates have completed (utilized in Sliding Sync).

@@ -1 +0,0 @@
-More efficiently fetch rooms for Sliding Sync.

@@ -1 +0,0 @@
-Fix a bug in SSS which could prevent /sync from working for certain user accounts.

@@ -1 +0,0 @@
-Fix `_bulk_get_max_event_pos` being inefficient.

@@ -1 +0,0 @@
-Ignore invites from ignored users in Sliding Sync.

@@ -1 +0,0 @@
-Add cache to `get_tags_for_room(...)`.

@@ -1 +0,0 @@
-Small performance improvement in speeding up Sliding Sync.

@@ -1 +0,0 @@
-Fix a bug in SSS which could prevent /sync from working for certain user accounts.

@@ -1 +0,0 @@
-Minor speed up of initial sliding sync requests.
changelog.d/17736.bugfix (new file)
@@ -0,0 +1 @@
+Fix saving of PNG thumbnails, when the original image is in the CMYK color space.

changelog.d/17752.misc (new file)
@@ -0,0 +1 @@
+Add Python 3.13 and Postgres 17 to the test matrix.

changelog.d/17783.feature (new file)
@@ -0,0 +1 @@
+Implement [MSC4210](https://github.com/matrix-org/matrix-spec-proposals/pull/4210): Remove legacy mentions. Contributed by @tulir @ Beeper.

changelog.d/17785.bugfix (new file)
@@ -0,0 +1 @@
+Fix bug with sliding sync where the server would not return state that was added to the `required_state` config.

changelog.d/17786.misc (new file)
@@ -0,0 +1 @@
+Add a test for downloading and thumbnailing a CMYK JPEG.

changelog.d/17802.doc (new file)
@@ -0,0 +1 @@
+Correct documentation to refer to the `--config-path` argument instead of `--config-file`.

changelog.d/17803.misc (new file)
@@ -0,0 +1 @@
+Test github token before running release script steps.

changelog.d/17805.bugfix (new file)
@@ -0,0 +1 @@
+Fix bug with sliding sync where the server would not return state that was added to the `required_state` config.

changelog.d/17824.misc (new file)
@@ -0,0 +1 @@
+Build debian packages for new Ubuntu versions, and stop building for no longer supported versions.

changelog.d/17825.doc (new file)
@@ -0,0 +1 @@
+Fix typo in `target_cache_memory_usage` docs.

changelog.d/17826.misc (new file)
@@ -0,0 +1 @@
+Enable the `.org.matrix.msc4028.encrypted_event` push rule by default in accordance with [MSC4028](https://github.com/matrix-org/matrix-spec-proposals/pull/4028). Note that the corresponding experimental feature must still be switched on for this push rule to have any effect.

changelog.d/17835.bugfix (new file)
@@ -0,0 +1 @@
+Fix a bug in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync that would cause rooms to stay forgotten and hidden even after rejoining.

changelog.d/17842.misc (new file)
@@ -0,0 +1 @@
+Fix some typing issues uncovered by upgrading mypy to 1.11.x.
@@ -20,8 +20,8 @@
 #

 import argparse
-import cgi
 import datetime
+import html
 import json
 import urllib.request
 from typing import List
@@ -85,7 +85,7 @@ def make_graph(pdus: List[dict], filename_prefix: str) -> None:
             "name": name,
             "type": pdu.get("pdu_type"),
             "state_key": pdu.get("state_key"),
-            "content": cgi.escape(json.dumps(pdu.get("content")), quote=True),
+            "content": html.escape(json.dumps(pdu.get("content")), quote=True),
             "time": t,
             "depth": pdu.get("depth"),
         }
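For reference, `html.escape` is the standard-library replacement for the removed `cgi.escape`; with `quote=True` it escapes quote characters in addition to `&`, `<`, and `>`:

```python
import html
import json

# Same call shape as the patched line above, applied to sample PDU content.
content = {"body": "<b>hello & goodbye</b>"}
escaped = html.escape(json.dumps(content), quote=True)
print(escaped)  # {&quot;body&quot;: &quot;&lt;b&gt;hello &amp; goodbye&lt;/b&gt;&quot;}
```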
debian/changelog (vendored, 30 changed lines)
@@ -1,3 +1,33 @@
+matrix-synapse-py3 (1.117.0) stable; urgency=medium
+
+  * New Synapse release 1.117.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 15 Oct 2024 10:46:30 +0100
+
+matrix-synapse-py3 (1.117.0~rc1) stable; urgency=medium
+
+  * New Synapse release 1.117.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 08 Oct 2024 14:37:11 +0100
+
+matrix-synapse-py3 (1.116.0) stable; urgency=medium
+
+  * New Synapse release 1.116.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 01 Oct 2024 11:14:07 +0100
+
+matrix-synapse-py3 (1.116.0~rc2) stable; urgency=medium
+
+  * New synapse release 1.116.0rc2.
+
+ -- Synapse Packaging team <packages@matrix.org>  Thu, 26 Sep 2024 13:28:43 +0000
+
+matrix-synapse-py3 (1.116.0~rc1) stable; urgency=medium
+
+  * New synapse release 1.116.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Wed, 25 Sep 2024 09:34:07 +0000
+
 matrix-synapse-py3 (1.115.0) stable; urgency=medium

   * New Synapse release 1.115.0.
@@ -76,8 +76,9 @@ _Changed in Synapse v1.62.0: `synapse.module_api.NOT_SPAM` and `synapse.module_a
 async def user_may_invite(inviter: str, invitee: str, room_id: str) -> Union["synapse.module_api.NOT_SPAM", "synapse.module_api.errors.Codes", bool]
 ```

-Called when processing an invitation. Both inviter and invitee are
-represented by their Matrix user ID (e.g. `@alice:example.com`).
+Called when processing an invitation, both when one is created locally or when
+receiving an invite over federation. Both inviter and invitee are represented by
+their Matrix user ID (e.g. `@alice:example.com`).

 The callback must return one of:
@@ -112,7 +113,9 @@ async def user_may_send_3pid_invite(
 ```

 Called when processing an invitation using a third-party identifier (also called a 3PID,
-e.g. an email address or a phone number).
+e.g. an email address or a phone number). It is only called when a 3PID invite is created
+locally - not when one is received in a room over federation. If the 3PID is already associated
+with a Matrix ID, the spam check will go through the `user_may_invite` callback instead.

 The inviter is represented by their Matrix user ID (e.g. `@alice:example.com`), and the
 invitee is represented by its medium (e.g. "email") and its address
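A minimal module sketch wiring up the `user_may_invite` callback described above (the module name, room ID, and blocking logic are purely illustrative; registration is via Synapse's standard module API):

```python
# Hypothetical example module: rejects invites into one specific room.
from typing import Union

import synapse.module_api


class ExampleInviteChecker:
    def __init__(self, config: dict, api: "synapse.module_api.ModuleApi"):
        # Register the spam-checker callback documented above.
        api.register_spam_checker_callbacks(user_may_invite=self.user_may_invite)

    async def user_may_invite(
        self, inviter: str, invitee: str, room_id: str
    ) -> Union["synapse.module_api.NOT_SPAM", "synapse.module_api.errors.Codes"]:
        if room_id == "!blocked:example.com":  # illustrative room ID
            return synapse.module_api.errors.Codes.FORBIDDEN
        return synapse.module_api.NOT_SPAM
```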
@@ -52,8 +52,6 @@ architecture via <https://packages.matrix.org/debian/>.

 To install the latest release:

-TODO UPDATE ALL THIS
-
 ```sh
 sudo apt install -y lsb-release wget apt-transport-https
 sudo wget -O /usr/share/keyrings/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
@@ -316,7 +314,7 @@ sudo dnf group install "Development Tools"

 *Note: The term "RHEL" below refers to both Red Hat Enterprise Linux and Rocky Linux. The distributions are 1:1 binary compatible.*

-It's recommended to use the latest Python versions. 
+It's recommended to use the latest Python versions.

 RHEL 8 in particular ships with Python 3.6 by default which is EOL and therefore no longer supported by Synapse. RHEL 9 ship with Python 3.9 which is still supported by the Python core team as of this writing. However, newer Python versions provide significant performance improvements and they're available in official distributions' repositories. Therefore it's recommended to use them.
@@ -346,7 +344,7 @@ dnf install python3.12 python3.12-devel
 ```
 Finally, install common prerequisites
 ```bash
-dnf install libicu libicu-devel libpq5 libpq5-devel lz4 pkgconf 
+dnf install libicu libicu-devel libpq5 libpq5-devel lz4 pkgconf
 dnf group install "Development Tools"
 ```
 ###### Using venv module instead of virtualenv command
@@ -355,7 +353,7 @@ It's recommended to use Python venv module directly rather than the virtualenv c
 * On RHEL 9, virtualenv is only available on [EPEL](https://docs.fedoraproject.org/en-US/epel/).
 * On RHEL 8, virtualenv is based on Python 3.6. It does not support creating 3.11/3.12 virtual environments.

-Here's an example of creating Python 3.12 virtual environment and installing Synapse from PyPI. 
+Here's an example of creating Python 3.12 virtual environment and installing Synapse from PyPI.

 ```bash
 mkdir -p ~/synapse
@@ -255,6 +255,8 @@ line to `/etc/default/matrix-synapse`:

     LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2

+*Note*: You may need to set `PYTHONMALLOC=malloc` to ensure that `jemalloc` can accurately calculate memory usage. By default, Python uses its internal small-object allocator, which may interfere with jemalloc's ability to track memory consumption correctly. This could prevent the [cache_autotuning](../configuration/config_documentation.md#caches-and-associated-values) feature from functioning as expected, as the Python allocator may not reach the memory threshold set by `max_cache_memory_usage`, thus not triggering the cache eviction process.
+
 This made a significant difference on Python 2.7 - it's unclear how
 much of an improvement it provides on Python 3.x.
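Putting the two settings together, `/etc/default/matrix-synapse` would contain something like the following (the jemalloc path varies by distribution and architecture):

```sh
# Preload jemalloc, and bypass Python's small-object allocator so jemalloc
# can account for all allocations.
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
PYTHONMALLOC=malloc
```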
@@ -1434,7 +1434,7 @@ number of entries that can be stored.
 Please see the [Config Conventions](#config-conventions) for information on how to specify memory size and cache expiry
 durations.
 * `max_cache_memory_usage` sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
-  They will continue to be evicted until the memory usage drops below the `target_memory_usage`, set in
+  They will continue to be evicted until the memory usage drops below the `target_cache_memory_usage`, set in
   the setting below, or until the `min_cache_ttl` is hit. There is no default value for this option.
 * `target_cache_memory_usage` sets a rough target for the desired memory usage of the caches. There is no default value
   for this option.
@@ -3722,6 +3722,8 @@ Additional sub-options for this setting include:
    Required if `enabled` is set to true.
 * `subject_claim`: Name of the claim containing a unique identifier for the user.
   Optional, defaults to `sub`.
+* `display_name_claim`: Name of the claim containing the display name for the user. Optional.
+  If provided, the display name will be set to the value of this claim upon first login.
 * `issuer`: The issuer to validate the "iss" claim against. Optional. If provided the
   "iss" claim will be required and validated for all JSON web tokens.
 * `audiences`: A list of audiences to validate the "aud" claim against. Optional.
@@ -3736,6 +3738,7 @@ jwt_config:
    secret: "provided-by-your-issuer"
    algorithm: "provided-by-your-issuer"
    subject_claim: "name_of_claim"
+   display_name_claim: "name_of_claim"
    issuer: "provided-by-your-issuer"
    audiences:
     - "provided-by-your-issuer"
@@ -4368,7 +4371,13 @@ It is possible to scale the processes that handle sending outbound federation re
 by running a [`generic_worker`](../../workers.md#synapseappgeneric_worker) and adding it's [`worker_name`](#worker_name) to
 a `federation_sender_instances` map. Doing so will remove handling of this function from
 the main process. Multiple workers can be added to this map, in which case the work is
-balanced across them. 
+balanced across them.
+
+The way that the load balancing works is any outbound federation request will be assigned
+to a federation sender worker based on the hash of the destination server name. This
+means that all requests being sent to the same destination will be processed by the same
+worker instance. Multiple `federation_sender_instances` are useful if there is a federation
+with multiple servers.

 This configuration setting must be shared between all workers handling federation
 sending, and if changed all federation sender workers must be stopped at the same time
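A rough sketch of that destination-based assignment (illustrative only; this is not Synapse's actual hashing code):

```python
import hashlib


def pick_sender_instance(destination: str, instances: list) -> str:
    """Map a destination server name to a fixed federation sender worker."""
    digest = hashlib.sha256(destination.encode("utf-8")).digest()
    return instances[int.from_bytes(digest[:8], "big") % len(instances)]


# Requests for the same destination always land on the same worker:
workers = ["federation_sender1", "federation_sender2"]
assert pick_sender_instance("matrix.org", workers) == pick_sender_instance(
    "matrix.org", workers
)
```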
@@ -4518,6 +4527,9 @@ This setting has the following sub-options:
 * `path`: The full path to a local Unix socket file. **If this is used, `host` and
   `port` are ignored.** Defaults to `/tmp/redis.sock'
 * `password`: Optional password if configured on the Redis instance.
+* `password_path`: Alternative to `password`, reading the password from an
+  external file. The file should be a plain text file, containing only the
+  password. Synapse reads the password from the given file once at startup.
 * `dbid`: Optional redis dbid if needs to connect to specific redis logical db.
 * `use_tls`: Whether to use tls connection. Defaults to false.
 * `certificate_file`: Optional path to the certificate file
@@ -4531,13 +4543,16 @@ This setting has the following sub-options:

 _Changed in Synapse 1.85.0: Added path option to use a local Unix socket_

+_Changed in Synapse 1.116.0: Added password\_path_
+
 Example configuration:
 ```yaml
 redis:
   enabled: true
   host: localhost
   port: 6379
-  password: <secret_password>
+  password_path: <path_to_the_password_file>
+  # OR password: <secret_password>
   dbid: <dbid>
   #use_tls: True
   #certificate_file: <path_to_the_certificate_file>
 ```
@@ -177,11 +177,11 @@ The following applies to Synapse installations that have been installed from sou

 You can start the main Synapse process with Poetry by running the following command:
 ```console
-poetry run synapse_homeserver --config-file [your homeserver.yaml]
+poetry run synapse_homeserver --config-path [your homeserver.yaml]
 ```
 For worker setups, you can run the following command
 ```console
-poetry run synapse_worker --config-file [your homeserver.yaml] --config-file [your worker.yaml]
+poetry run synapse_worker --config-path [your homeserver.yaml] --config-path [your worker.yaml]
 ```
 ## Available worker applications
poetry.lock (generated, 806 changed lines): file diff suppressed because it is too large.
@@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"

 [tool.poetry]
 name = "matrix-synapse"
-version = "1.115.0"
+version = "1.117.0"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "AGPL-3.0-or-later"
@@ -320,7 +320,7 @@ all = [
 # failing on new releases. Keeping lower bounds loose here means that dependabot
 # can bump versions without having to update the content-hash in the lockfile.
 # This helps prevents merge conflicts when running a batch of dependabot updates.
-ruff = "0.6.5"
+ruff = "0.6.9"
 # Type checking only works with the pydantic.v1 compat module from pydantic v2
 pydantic = "^2"
@@ -60,6 +60,7 @@ fn bench_match_exact(b: &mut Bencher) {
         true,
         vec![],
         false,
+        false,
     )
     .unwrap();

@@ -105,6 +106,7 @@ fn bench_match_word(b: &mut Bencher) {
         true,
         vec![],
         false,
+        false,
     )
     .unwrap();

@@ -150,6 +152,7 @@ fn bench_match_word_miss(b: &mut Bencher) {
         true,
         vec![],
         false,
+        false,
     )
     .unwrap();

@@ -195,6 +198,7 @@ fn bench_eval_message(b: &mut Bencher) {
         true,
         vec![],
         false,
+        false,
     )
     .unwrap();

@@ -205,6 +209,7 @@ fn bench_eval_message(b: &mut Bencher) {
         false,
         false,
         false,
+        false,
     );

     b.iter(|| eval.run(&rules, Some("bob"), Some("person")));
@@ -81,7 +81,7 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
         ))]),
         actions: Cow::Borrowed(&[Action::Notify]),
         default: true,
-        default_enabled: false,
+        default_enabled: true,
     },
     PushRule {
         rule_id: Cow::Borrowed("global/override/.m.rule.suppress_notices"),
@@ -105,6 +105,9 @@ pub struct PushRuleEvaluator {
     /// If MSC3931 (room version feature flags) is enabled. Usually controlled by the same
     /// flag as MSC1767 (extensible events core).
     msc3931_enabled: bool,
+
+    // If MSC4210 (remove legacy mentions) is enabled.
+    msc4210_enabled: bool,
 }

 #[pymethods]
@@ -122,6 +125,7 @@ impl PushRuleEvaluator {
         related_event_match_enabled,
         room_version_feature_flags,
         msc3931_enabled,
+        msc4210_enabled,
     ))]
     pub fn py_new(
         flattened_keys: BTreeMap<String, JsonValue>,
@@ -133,6 +137,7 @@ impl PushRuleEvaluator {
         related_event_match_enabled: bool,
         room_version_feature_flags: Vec<String>,
         msc3931_enabled: bool,
+        msc4210_enabled: bool,
     ) -> Result<Self, Error> {
         let body = match flattened_keys.get("content.body") {
             Some(JsonValue::Value(SimpleJsonValue::Str(s))) => s.clone().into_owned(),
@@ -150,6 +155,7 @@ impl PushRuleEvaluator {
             related_event_match_enabled,
             room_version_feature_flags,
             msc3931_enabled,
+            msc4210_enabled,
         })
     }

@@ -176,7 +182,8 @@ impl PushRuleEvaluator {

         // For backwards-compatibility the legacy mention rules are disabled
         // if the event contains the 'm.mentions' property.
-        if self.has_mentions
+        // Additionally, MSC4210 always disables the legacy rules.
+        if (self.has_mentions || self.msc4210_enabled)
             && (rule_id == "global/override/.m.rule.contains_display_name"
                 || rule_id == "global/content/.m.rule.contains_user_name"
                 || rule_id == "global/override/.m.rule.roomnotif")
@@ -526,6 +533,7 @@ fn push_rule_evaluator() {
         true,
         vec![],
         true,
+        false,
     )
     .unwrap();

@@ -555,6 +563,7 @@ fn test_requires_room_version_supports_condition() {
         false,
         flags,
         true,
+        false,
     )
     .unwrap();

@@ -582,7 +591,7 @@ fn test_requires_room_version_supports_condition() {
     };
     let rules = PushRules::new(vec![custom_rule]);
     result = evaluator.run(
-        &FilteredPushRules::py_new(rules, BTreeMap::new(), true, false, true, false),
+        &FilteredPushRules::py_new(rules, BTreeMap::new(), true, false, true, false, false),
         None,
         None,
     );
@@ -534,6 +534,7 @@ pub struct FilteredPushRules {
     msc3381_polls_enabled: bool,
     msc3664_enabled: bool,
     msc4028_push_encrypted_events: bool,
+    msc4210_enabled: bool,
 }

 #[pymethods]
@@ -546,6 +547,7 @@ impl FilteredPushRules {
         msc3381_polls_enabled: bool,
         msc3664_enabled: bool,
         msc4028_push_encrypted_events: bool,
+        msc4210_enabled: bool,
     ) -> Self {
         Self {
             push_rules,
@@ -554,6 +556,7 @@ impl FilteredPushRules {
             msc3381_polls_enabled,
             msc3664_enabled,
             msc4028_push_encrypted_events,
+            msc4210_enabled,
         }
     }

@@ -596,6 +599,14 @@ impl FilteredPushRules {
                     return false;
                 }

+                if self.msc4210_enabled
+                    && (rule.rule_id == "global/override/.m.rule.contains_display_name"
+                        || rule.rule_id == "global/content/.m.rule.contains_user_name"
+                        || rule.rule_id == "global/override/.m.rule.roomnotif")
+                {
+                    return false;
+                }
+
                 true
             })
             .map(|r| {
@@ -32,8 +32,8 @@ DISTS = (
     "debian:sid",  # (EOL not specified yet) (our EOL forced by Python 3.11 is 2027-10-24)
     "ubuntu:focal",  # 20.04 LTS (EOL 2025-04) (our EOL forced by Python 3.8 is 2024-10-14)
     "ubuntu:jammy",  # 22.04 LTS (EOL 2027-04) (our EOL forced by Python 3.10 is 2026-10-04)
-    "ubuntu:lunar",  # 23.04 (EOL 2024-01) (our EOL forced by Python 3.11 is 2027-10-24)
-    "ubuntu:mantic",  # 23.10 (EOL 2024-07) (our EOL forced by Python 3.11 is 2027-10-24)
     "ubuntu:noble",  # 24.04 LTS (EOL 2029-06)
+    "ubuntu:oracular",  # 24.10 (EOL 2025-07)
+    "debian:trixie",  # (EOL not specified yet)
 )
@@ -220,6 +220,7 @@ test_packages=(
     ./tests/msc3874
     ./tests/msc3890
     ./tests/msc3391
+    ./tests/msc3757
     ./tests/msc3930
     ./tests/msc3902
     ./tests/msc3967
@@ -360,7 +360,7 @@ def is_cacheable(
         # For a type alias, check if the underlying real type is cachable.
         return is_cacheable(mypy.types.get_proper_type(rt), signature, verbose)

-    elif isinstance(rt, UninhabitedType) and rt.is_noreturn:
+    elif isinstance(rt, UninhabitedType):
         # There is no return value, just consider it cachable. This is only used
         # in tests.
         return True, None
@@ -40,7 +40,7 @@ import commonmark
 import git
 from click.exceptions import ClickException
 from git import GitCommandError, Repo
-from github import Github
+from github import BadCredentialsException, Github
 from packaging import version

@@ -323,10 +323,8 @@ def tag(gh_token: Optional[str]) -> None:
 def _tag(gh_token: Optional[str]) -> None:
     """Tags the release and generates a draft GitHub release"""

-    if gh_token:
-        # Test that the GH Token is valid before continuing.
-        gh = Github(gh_token)
-        gh.get_user()
+    # Test that the GH Token is valid before continuing.
+    check_valid_gh_token(gh_token)

     # Make sure we're in a git repo.
     repo = get_repo_and_check_clean_checkout()
@@ -469,10 +467,8 @@ def upload(gh_token: Optional[str]) -> None:
 def _upload(gh_token: Optional[str]) -> None:
     """Upload release to pypi."""

-    if gh_token:
-        # Test that the GH Token is valid before continuing.
-        gh = Github(gh_token)
-        gh.get_user()
+    # Test that the GH Token is valid before continuing.
+    check_valid_gh_token(gh_token)

     current_version = get_package_version()
     tag_name = f"v{current_version}"
@@ -569,10 +565,8 @@ def wait_for_actions(gh_token: Optional[str]) -> None:


 def _wait_for_actions(gh_token: Optional[str]) -> None:
-    if gh_token:
-        # Test that the GH Token is valid before continuing.
-        gh = Github(gh_token)
-        gh.get_user()
+    # Test that the GH Token is valid before continuing.
+    check_valid_gh_token(gh_token)

     # Find out the version and tag name.
     current_version = get_package_version()
@@ -806,6 +800,22 @@ def get_repo_and_check_clean_checkout(
     return repo


+def check_valid_gh_token(gh_token: Optional[str]) -> None:
+    """Check that a github token is valid, if supplied"""
+
+    if not gh_token:
+        # No github token supplied, so nothing to do.
+        return
+
+    try:
+        gh = Github(gh_token)
+
+        # We need to lookup name to trigger a request.
+        _name = gh.get_user().name
+    except BadCredentialsException as e:
+        raise click.ClickException(f"Github credentials are bad: {e}")
+
+
 def find_ref(repo: git.Repo, ref_name: str) -> Optional[git.HEAD]:
     """Find the branch/ref, looking first locally then in the remote."""
     if ref_name in repo.references:
@@ -338,7 +338,7 @@ class MSC3861DelegatedAuth(BaseAuth):
             logger.exception("Failed to introspect token")
             raise SynapseError(503, "Unable to introspect the access token")

-        logger.info(f"Introspection result: {introspection_result!r}")
+        logger.debug("Introspection result: %r", introspection_result)

         # TODO: introspection verification should be more extensive, especially:
         # - verify the audience
@@ -107,6 +107,8 @@ class RoomVersion:
     # support the flag. Unknown flags are ignored by the evaluator, making conditions
     # fail if used.
     msc3931_push_features: Tuple[str, ...]  # values from PushRuleRoomFlag
+    # MSC3757: Restricting who can overwrite a state event
+    msc3757_enabled: bool


 class RoomVersions:
@@ -128,6 +130,7 @@ class RoomVersions:
         knock_restricted_join_rule=False,
         enforce_int_power_levels=False,
         msc3931_push_features=(),
+        msc3757_enabled=False,
     )
     V2 = RoomVersion(
         "2",
@@ -147,6 +150,7 @@ class RoomVersions:
         knock_restricted_join_rule=False,
         enforce_int_power_levels=False,
         msc3931_push_features=(),
+        msc3757_enabled=False,
     )
     V3 = RoomVersion(
         "3",
@@ -166,6 +170,7 @@ class RoomVersions:
         knock_restricted_join_rule=False,
         enforce_int_power_levels=False,
         msc3931_push_features=(),
+        msc3757_enabled=False,
     )
     V4 = RoomVersion(
         "4",
@@ -185,6 +190,7 @@ class RoomVersions:
         knock_restricted_join_rule=False,
         enforce_int_power_levels=False,
         msc3931_push_features=(),
+        msc3757_enabled=False,
     )
     V5 = RoomVersion(
         "5",
@@ -204,6 +210,7 @@ class RoomVersions:
         knock_restricted_join_rule=False,
         enforce_int_power_levels=False,
         msc3931_push_features=(),
+        msc3757_enabled=False,
     )
     V6 = RoomVersion(
         "6",
@@ -223,6 +230,7 @@ class RoomVersions:
         knock_restricted_join_rule=False,
         enforce_int_power_levels=False,
         msc3931_push_features=(),
+        msc3757_enabled=False,
     )
     V7 = RoomVersion(
         "7",
@@ -242,6 +250,7 @@ class RoomVersions:
         knock_restricted_join_rule=False,
         enforce_int_power_levels=False,
         msc3931_push_features=(),
+        msc3757_enabled=False,
     )
     V8 = RoomVersion(
         "8",
@@ -261,6 +270,7 @@ class RoomVersions:
         knock_restricted_join_rule=False,
         enforce_int_power_levels=False,
         msc3931_push_features=(),
+        msc3757_enabled=False,
     )
     V9 = RoomVersion(
         "9",
@@ -280,6 +290,7 @@ class RoomVersions:
         knock_restricted_join_rule=False,
         enforce_int_power_levels=False,
         msc3931_push_features=(),
+        msc3757_enabled=False,
     )
     V10 = RoomVersion(
         "10",
@@ -299,6 +310,7 @@ class RoomVersions:
         knock_restricted_join_rule=True,
         enforce_int_power_levels=True,
         msc3931_push_features=(),
+        msc3757_enabled=False,
     )
     MSC1767v10 = RoomVersion(
         # MSC1767 (Extensible Events) based on room version "10"
@@ -319,6 +331,28 @@ class RoomVersions:
         knock_restricted_join_rule=True,
         enforce_int_power_levels=True,
         msc3931_push_features=(PushRuleRoomFlag.EXTENSIBLE_EVENTS,),
+        msc3757_enabled=False,
     )
+    MSC3757v10 = RoomVersion(
+        # MSC3757 (Restricting who can overwrite a state event) based on room version "10"
+        "org.matrix.msc3757.10",
+        RoomDisposition.UNSTABLE,
+        EventFormatVersions.ROOM_V4_PLUS,
+        StateResolutionVersions.V2,
+        enforce_key_validity=True,
+        special_case_aliases_auth=False,
+        strict_canonicaljson=True,
+        limit_notifications_power_levels=True,
+        implicit_room_creator=False,
+        updated_redaction_rules=False,
+        restricted_join_rule=True,
+        restricted_join_rule_fix=True,
+        knock_join_rule=True,
+        msc3389_relation_redactions=False,
+        knock_restricted_join_rule=True,
+        enforce_int_power_levels=True,
+        msc3931_push_features=(),
+        msc3757_enabled=True,
+    )
     V11 = RoomVersion(
         "11",
@@ -338,6 +372,28 @@ class RoomVersions:
         knock_restricted_join_rule=True,
         enforce_int_power_levels=True,
         msc3931_push_features=(),
+        msc3757_enabled=False,
     )
+    MSC3757v11 = RoomVersion(
+        # MSC3757 (Restricting who can overwrite a state event) based on room version "11"
+        "org.matrix.msc3757.11",
+        RoomDisposition.UNSTABLE,
+        EventFormatVersions.ROOM_V4_PLUS,
+        StateResolutionVersions.V2,
+        enforce_key_validity=True,
+        special_case_aliases_auth=False,
+        strict_canonicaljson=True,
+        limit_notifications_power_levels=True,
+        implicit_room_creator=True,  # Used by MSC3820
+        updated_redaction_rules=True,  # Used by MSC3820
+        restricted_join_rule=True,
+        restricted_join_rule_fix=True,
+        knock_join_rule=True,
+        msc3389_relation_redactions=False,
+        knock_restricted_join_rule=True,
+        enforce_int_power_levels=True,
+        msc3931_push_features=(),
+        msc3757_enabled=True,
+    )

@@ -355,6 +411,8 @@ KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
         RoomVersions.V9,
         RoomVersions.V10,
         RoomVersions.V11,
+        RoomVersions.MSC3757v10,
+        RoomVersions.MSC3757v11,
     )
 }
@@ -3,7 +3,7 @@
 #
 # Copyright 2020 The Matrix.org Foundation C.I.C.
 # Copyright 2016 OpenMarket Ltd
-# Copyright (C) 2023 New Vector, Ltd
+# Copyright (C) 2023-2024 New Vector, Ltd
 #
 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU Affero General Public License as
@@ -447,3 +447,6 @@ class ExperimentalConfig(Config):

         # MSC4151: Report room API (Client-Server API)
         self.msc4151_enabled: bool = experimental.get("msc4151_enabled", False)
+
+        # MSC4210: Remove legacy mentions
+        self.msc4210_enabled: bool = experimental.get("msc4210_enabled", False)
@@ -38,6 +38,7 @@ class JWTConfig(Config):
             self.jwt_algorithm = jwt_config["algorithm"]

             self.jwt_subject_claim = jwt_config.get("subject_claim", "sub")
+            self.jwt_display_name_claim = jwt_config.get("display_name_claim")

             # The issuer and audiences are optional, if provided, it is asserted
             # that the claims exist on the JWT.
@@ -49,5 +50,6 @@ class JWTConfig(Config):
             self.jwt_secret = None
             self.jwt_algorithm = None
             self.jwt_subject_claim = None
+            self.jwt_display_name_claim = None
             self.jwt_issuer = None
             self.jwt_audiences = None
@@ -21,10 +21,15 @@

 from typing import Any

-from synapse.config._base import Config
+from synapse.config._base import Config, ConfigError, read_file
 from synapse.types import JsonDict
 from synapse.util.check_dependencies import check_requirements

+CONFLICTING_PASSWORD_OPTS_ERROR = """\
+You have configured both `redis.password` and `redis.password_path`.
+These are mutually incompatible.
+"""
+

 class RedisConfig(Config):
     section = "redis"
@@ -43,6 +48,17 @@ class RedisConfig(Config):
         self.redis_path = redis_config.get("path", None)
         self.redis_dbid = redis_config.get("dbid", None)
         self.redis_password = redis_config.get("password")
+        redis_password_path = redis_config.get("password_path")
+        if redis_password_path:
+            if self.redis_password:
+                raise ConfigError(CONFLICTING_PASSWORD_OPTS_ERROR)
+            self.redis_password = read_file(
+                redis_password_path,
+                (
+                    "redis",
+                    "password_path",
+                ),
+            ).strip()

         self.redis_use_tls = redis_config.get("use_tls", False)
         self.redis_certificate = redis_config.get("certificate_file", None)
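In configuration terms, the new code path corresponds to a fragment like this (values are placeholders):

```yaml
redis:
  enabled: true
  # File containing only the Redis password; read once at startup.
  # Configuring `password` as well now fails with a ConfigError.
  password_path: /etc/synapse/redis_password
```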
@@ -388,6 +388,7 @@ LENIENT_EVENT_BYTE_LIMITS_ROOM_VERSIONS = {
     RoomVersions.V9,
     RoomVersions.V10,
     RoomVersions.MSC1767v10,
+    RoomVersions.MSC3757v10,
 }
@@ -790,9 +791,10 @@ def get_send_level(


 def _can_send_event(event: "EventBase", auth_events: StateMap["EventBase"]) -> bool:
+    state_key = event.get_state_key()
     power_levels_event = get_power_level_event(auth_events)

-    send_level = get_send_level(event.type, event.get("state_key"), power_levels_event)
+    send_level = get_send_level(event.type, state_key, power_levels_event)
     user_level = get_user_power_level(event.user_id, auth_events)

     if user_level < send_level:
@@ -803,11 +805,34 @@ def _can_send_event(event: "EventBase", auth_events: StateMap["EventBase"]) -> b
             errcode=Codes.INSUFFICIENT_POWER,
         )

     # Check state_key
-    if hasattr(event, "state_key"):
-        if event.state_key.startswith("@"):
-            if event.state_key != event.user_id:
-                raise AuthError(403, "You are not allowed to set others state")
+    if (
+        state_key is not None
+        and state_key.startswith("@")
+        and state_key != event.user_id
+    ):
+        if event.room_version.msc3757_enabled:
+            try:
+                colon_idx = state_key.index(":", 1)
+                suffix_idx = state_key.find("_", colon_idx + 1)
+                state_key_user_id = (
+                    state_key[:suffix_idx] if suffix_idx != -1 else state_key
+                )
+                if not UserID.is_valid(state_key_user_id):
+                    raise ValueError
+            except ValueError:
+                raise SynapseError(
+                    400,
+                    "State key neither equals a valid user ID, nor starts with one plus an underscore",
+                    errcode=Codes.BAD_JSON,
+                )
+            if (
+                # sender is owner of the state key
+                state_key_user_id == event.user_id
+                # sender has higher PL than the owner of the state key
+                or user_level > get_user_power_level(state_key_user_id, auth_events)
+            ):
+                return True
+        raise AuthError(403, "You are not allowed to set others state")

     return True
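A worked example of the MSC3757 owner extraction above, with an illustrative state key:

```python
# "@alice:example.com_light1": find the first colon, then the first
# underscore after it; everything before that underscore is the "owner"
# of the state key, and must be a valid user ID.
state_key = "@alice:example.com_light1"
colon_idx = state_key.index(":", 1)              # 6
suffix_idx = state_key.find("_", colon_idx + 1)  # 18
owner = state_key[:suffix_idx] if suffix_idx != -1 else state_key
print(owner)  # @alice:example.com
```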
@@ -443,8 +443,8 @@ class AdminHandler:
                 ["m.room.member", "m.room.message"],
             )
             if not event_ids:
-                # there's nothing to redact
-                return TaskStatus.COMPLETE, result, None
+                # nothing to redact in this room
+                continue

             events = await self._store.get_events_as_list(event_ids)
             for event in events:
@@ -1,3 +1,17 @@
+#
+# This file is licensed under the Affero General Public License (AGPL) version 3.
+#
+# Copyright (C) 2024 New Vector, Ltd
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+#
+# See the GNU Affero General Public License for more details:
+# <https://www.gnu.org/licenses/agpl-3.0.html>.
+#
+
 import logging
 from typing import TYPE_CHECKING, List, Optional, Set, Tuple
@@ -48,6 +48,7 @@ from synapse.metrics.background_process_metrics import (
     wrap_as_background_process,
 )
 from synapse.storage.databases.main.client_ips import DeviceLastConnectionInfo
+from synapse.storage.databases.main.roommember import EventIdMembership
 from synapse.storage.databases.main.state_deltas import StateDelta
 from synapse.types import (
     DeviceListUpdates,
@@ -222,7 +223,6 @@ class DeviceWorkerHandler:
         return changed

     @trace
-    @measure_func("device.get_user_ids_changed")
     @cancellable
     async def get_user_ids_changed(
         self, user_id: str, from_token: StreamToken
@@ -290,9 +290,11 @@ class DeviceWorkerHandler:
             memberships_to_fetch.add(delta.prev_event_id)

         # Fetch all the memberships for the membership events
-        event_id_to_memberships = await self.store.get_membership_from_event_ids(
-            memberships_to_fetch
-        )
+        event_id_to_memberships: Mapping[str, Optional[EventIdMembership]] = {}
+        if memberships_to_fetch:
+            event_id_to_memberships = await self.store.get_membership_from_event_ids(
+                memberships_to_fetch
+            )

         joined_invited_knocked = (
             Membership.JOIN,
@@ -349,7 +351,6 @@ class DeviceWorkerHandler:

         return device_list_updates

-    @measure_func("_generate_sync_entry_for_device_list")
     async def generate_sync_entry_for_device_list(
         self,
         user_id: str,
@@ -18,7 +18,7 @@
|
||||
# [This file includes modifications made by New Vector Limited]
|
||||
#
|
||||
#
|
||||
from typing import TYPE_CHECKING
|
||||
from typing import TYPE_CHECKING, Optional, Tuple
|
||||
|
||||
from authlib.jose import JsonWebToken, JWTClaims
|
||||
from authlib.jose.errors import BadSignatureError, InvalidClaimError, JoseError
|
||||
@@ -36,11 +36,12 @@ class JwtHandler:
|
||||
|
||||
self.jwt_secret = hs.config.jwt.jwt_secret
|
||||
self.jwt_subject_claim = hs.config.jwt.jwt_subject_claim
|
||||
self.jwt_display_name_claim = hs.config.jwt.jwt_display_name_claim
|
||||
self.jwt_algorithm = hs.config.jwt.jwt_algorithm
|
||||
self.jwt_issuer = hs.config.jwt.jwt_issuer
|
||||
self.jwt_audiences = hs.config.jwt.jwt_audiences
|
||||
|
||||
def validate_login(self, login_submission: JsonDict) -> str:
|
||||
def validate_login(self, login_submission: JsonDict) -> Tuple[str, Optional[str]]:
|
||||
"""
|
||||
Authenticates the user for the /login API
|
||||
|
||||
@@ -49,7 +50,8 @@ class JwtHandler:
|
||||
(including 'type' and other relevant fields)
|
||||
|
||||
Returns:
|
||||
The user ID that is logging in.
|
||||
A tuple of (user_id, display_name) of the user that is logging in.
|
||||
If the JWT does not contain a display name, the second element of the tuple will be None.
|
||||
|
||||
Raises:
|
||||
LoginError if there was an authentication problem.
|
||||
@@ -109,4 +111,10 @@ class JwtHandler:
        if user is None:
            raise LoginError(403, "Invalid JWT", errcode=Codes.FORBIDDEN)

        return UserID(user, self.hs.hostname).to_string()
        default_display_name = None
        if self.jwt_display_name_claim:
            display_name_claim = claims.get(self.jwt_display_name_claim)
            if display_name_claim is not None:
                default_display_name = display_name_claim

        return UserID(user, self.hs.hostname).to_string(), default_display_name

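For illustration only (not part of the diff): a minimal sketch of the new (user_id, display_name) return shape, using a plain dict in place of authlib's JWTClaims; the helper name is hypothetical.

# Sketch: pull an optional display name out of decoded JWT claims.
from typing import Optional, Tuple

def user_and_display_name(
    claims: dict, subject_claim: str, display_name_claim: Optional[str]
) -> Tuple[str, Optional[str]]:
    user = claims.get(subject_claim)
    if user is None:
        raise ValueError("missing subject claim")
    display_name = claims.get(display_name_claim) if display_name_claim else None
    return user, display_name

assert user_and_display_name({"sub": "alice", "name": "Alice"}, "sub", "name") == (
    "alice",
    "Alice",
)
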
@@ -14,7 +14,7 @@

import logging
from itertools import chain
from typing import TYPE_CHECKING, Dict, List, Mapping, Optional, Set, Tuple
from typing import TYPE_CHECKING, AbstractSet, Dict, List, Mapping, Optional, Set, Tuple

from prometheus_client import Histogram
from typing_extensions import assert_never
@@ -49,6 +49,7 @@ from synapse.types import (
    Requester,
    SlidingSyncStreamToken,
    StateMap,
    StrCollection,
    StreamKeyType,
    StreamToken,
)
@@ -293,7 +294,6 @@ class SlidingSyncHandler:
            # to record rooms as having updates even if there might not actually
            # be anything new for the user (e.g. due to event filters, events
            # having happened after the user left, etc).
            unsent_room_ids = []
            if from_token:
                # The set of rooms that the client (may) care about, but aren't
                # in any list range (or subscribed to).
@@ -305,15 +305,24 @@
                # TODO: Replace this with something faster. When we land the
                # sliding sync tables that record the most recent event
                # positions we can use that.
                missing_event_map_by_room = (
                    await self.store.get_room_events_stream_for_rooms(
                        room_ids=missing_rooms,
                        from_key=to_token.room_key,
                        to_key=from_token.stream_token.room_key,
                        limit=1,
                unsent_room_ids: StrCollection
                if await self.store.have_finished_sliding_sync_background_jobs():
                    unsent_room_ids = await (
                        self.store.get_rooms_that_have_updates_since_sliding_sync_table(
                            room_ids=missing_rooms,
                            from_key=from_token.stream_token.room_key,
                        )
                    )
                )
                unsent_room_ids = list(missing_event_map_by_room)
                else:
                    missing_event_map_by_room = (
                        await self.store.get_room_events_stream_for_rooms(
                            room_ids=missing_rooms,
                            from_key=to_token.room_key,
                            to_key=from_token.stream_token.room_key,
                            limit=1,
                        )
                    )
                    unsent_room_ids = list(missing_event_map_by_room)

                new_connection_state.rooms.record_unsent_rooms(
                    unsent_room_ids, from_token.stream_token.room_key
@@ -443,13 +452,11 @@ class SlidingSyncHandler:
            to_token=to_token,
        )

        event_map = await self.store.get_events(list(state_ids.values()))
        events = await self.store.get_events_as_list(list(state_ids.values()))

        state_map = {}
        for key, event_id in state_ids.items():
            event = event_map.get(event_id)
            if event:
                state_map[key] = event
        for event in events:
            state_map[(event.type, event.state_key)] = event

        return state_map

@@ -513,6 +520,8 @@ class SlidingSyncHandler:

            state_reset_out_of_room = True

        prev_room_sync_config = previous_connection_state.room_configs.get(room_id)

        # Determine whether we should limit the timeline to the token range.
        #
        # We should return historical messages (before token range) in the
@@ -541,7 +550,6 @@ class SlidingSyncHandler:
        # or `limited` mean for clients that interpret them correctly. In future this
        # behavior is almost certainly going to change.
        #
        # TODO: Also handle changes to `required_state`
        from_bound = None
        initial = True
        ignore_timeline_bound = False
@@ -562,7 +570,6 @@ class SlidingSyncHandler:

        log_kv({"sliding_sync.room_status": room_status})

        prev_room_sync_config = previous_connection_state.room_configs.get(room_id)
        if prev_room_sync_config is not None:
            # Check if the timeline limit has increased, if so ignore the
            # timeline bound and record the change (see "XXX: Odd behavior"
@@ -573,8 +580,6 @@ class SlidingSyncHandler:
            ):
                ignore_timeline_bound = True

        # TODO: Check for changes in `required_state`

        log_kv(
            {
                "sliding_sync.from_bound": from_bound,
@@ -988,6 +993,10 @@ class SlidingSyncHandler:
            include_others=required_state_filter.include_others,
        )

        # The required state map to store in the room sync config, if it has
        # changed.
        changed_required_state_map: Optional[Mapping[str, AbstractSet[str]]] = None

        # We can return all of the state that was requested if this was the first
        # time we've sent the room down this connection.
        room_state: StateMap[EventBase] = {}
@@ -1001,6 +1010,29 @@ class SlidingSyncHandler:
        else:
            assert from_bound is not None

            if prev_room_sync_config is not None:
                # Check if there are any changes to the required state config
                # that we need to handle.
                changed_required_state_map, added_state_filter = (
                    _required_state_changes(
                        user.to_string(),
                        previous_room_config=prev_room_sync_config,
                        room_sync_config=room_sync_config,
                        state_deltas=room_state_delta_id_map,
                    )
                )

                if added_state_filter:
                    # Some state entries got added, so we pull out the current
                    # state for them. If we don't do this we'd only send down new deltas.
                    state_ids = await self.get_current_state_ids_at(
                        room_id=room_id,
                        room_membership_for_user_at_to_token=room_membership_for_user_at_to_token,
                        state_filter=added_state_filter,
                        to_token=to_token,
                    )
                    room_state_delta_id_map.update(state_ids)

            events = await self.store.get_events(
                state_filter.filter_state(room_state_delta_id_map).values()
            )
@@ -1048,25 +1080,64 @@ class SlidingSyncHandler:
            )
        )

        # Figure out the last bump event in the room
        #
        # By default, just choose the membership event position for any non-join membership
        bump_stamp = room_membership_for_user_at_to_token.event_pos.stream
        # Figure out the last bump event in the room. If the bump stamp hasn't
        # changed we omit it from the response.
        bump_stamp = None

        always_return_bump_stamp = (
            # We use the membership event position for any non-join
            room_membership_for_user_at_to_token.membership != Membership.JOIN
            # We didn't fetch any timeline events but we should still check for
            # a bump_stamp that might be somewhere
            or limited is None
            # There might be a bump event somewhere before the timeline events
            # that we fetched, that we didn't previously send down
            or limited is True
            # Always give the client some frame of reference if this is the
            # first time they are seeing the room down the connection
            or initial
        )

        # If we're joined to the room, we need to find the last bump event before the
        # `to_token`
        if room_membership_for_user_at_to_token.membership == Membership.JOIN:
            # Try and get a bump stamp, if not we just fall back to the
            # membership token.
            # Try and get a bump stamp
            new_bump_stamp = await self._get_bump_stamp(
                room_id, to_token, timeline_events
                room_id,
                to_token,
                timeline_events,
                check_outside_timeline=always_return_bump_stamp,
            )
            if new_bump_stamp is not None:
                bump_stamp = new_bump_stamp

        unstable_expanded_timeline = False
        prev_room_sync_config = previous_connection_state.room_configs.get(room_id)
        if bump_stamp is None and always_return_bump_stamp:
            # By default, just choose the membership event position for any non-join membership
            bump_stamp = room_membership_for_user_at_to_token.event_pos.stream

        if bump_stamp is not None and bump_stamp < 0:
            # We never want to send down negative stream orderings, as you can't
            # sensibly compare positive and negative stream orderings (they have
            # different meanings).
            #
            # A negative bump stamp here can only happen if the stream ordering
            # of the membership event is negative (and there are no further bump
            # stamps), which can happen if the server leaves and deletes a room,
            # and then rejoins it.
            #
            # To deal with this, we just set the bump stamp to zero, which will
            # shove this room to the bottom of the list. This is OK as the
            # moment a new message happens in the room it will get put into a
            # sensible order again.
            bump_stamp = 0

        room_sync_required_state_map_to_persist = room_sync_config.required_state_map
        if changed_required_state_map:
            room_sync_required_state_map_to_persist = changed_required_state_map

        # Record the `room_sync_config` if we're `ignore_timeline_bound` (which means
        # that the `timeline_limit` has increased)
        unstable_expanded_timeline = False
        if ignore_timeline_bound:
            # FIXME: We signal the fact that we're sending down more events to
            # the client by setting `unstable_expanded_timeline` to true (see
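For illustration only (not part of the diff): the fallback-and-clamp rule from the hunk above, reduced to a standalone function; the name is hypothetical.

# Sketch: resolve the final bump stamp — fall back to the membership
# position when required, then clamp negatives to zero, since negative
# stream orderings can't be compared against positive ones.
from typing import Optional

def resolve_bump_stamp(
    candidate: Optional[int], membership_pos: int, always_return: bool
) -> Optional[int]:
    bump_stamp = candidate
    if bump_stamp is None and always_return:
        bump_stamp = membership_pos
    if bump_stamp is not None and bump_stamp < 0:
        bump_stamp = 0
    return bump_stamp

assert resolve_bump_stamp(None, 42, always_return=True) == 42
assert resolve_bump_stamp(-7, 42, always_return=False) == 0
assert resolve_bump_stamp(None, 42, always_return=False) is None
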
@@ -1075,7 +1146,7 @@ class SlidingSyncHandler:

            new_connection_state.room_configs[room_id] = RoomSyncConfig(
                timeline_limit=room_sync_config.timeline_limit,
                required_state_map=room_sync_config.required_state_map,
                required_state_map=room_sync_required_state_map_to_persist,
            )
        elif prev_room_sync_config is not None:
            # If the result is `limited` then we need to record that the
@@ -1104,10 +1175,14 @@ class SlidingSyncHandler:
            ):
                new_connection_state.room_configs[room_id] = RoomSyncConfig(
                    timeline_limit=room_sync_config.timeline_limit,
                    required_state_map=room_sync_config.required_state_map,
                    required_state_map=room_sync_required_state_map_to_persist,
                )

            # TODO: Record changes in required_state.
            elif changed_required_state_map is not None:
                new_connection_state.room_configs[room_id] = RoomSyncConfig(
                    timeline_limit=room_sync_config.timeline_limit,
                    required_state_map=room_sync_required_state_map_to_persist,
                )

        else:
            new_connection_state.room_configs[room_id] = room_sync_config
@@ -1140,14 +1215,23 @@ class SlidingSyncHandler:

    @trace
    async def _get_bump_stamp(
        self, room_id: str, to_token: StreamToken, timeline: List[EventBase]
        self,
        room_id: str,
        to_token: StreamToken,
        timeline: List[EventBase],
        check_outside_timeline: bool,
    ) -> Optional[int]:
        """Get a bump stamp for the room, if we have a bump event
        """Get a bump stamp for the room, if we have a bump event and it has
        changed.

        Args:
            room_id
            to_token: The upper bound of token to return
            timeline: The list of events we have fetched.
            limited: If the timeline was limited.
            check_outside_timeline: Whether we need to check for bump stamp for
                events before the timeline if we didn't find a bump stamp in
                the timeline events.
        """

        # First check the timeline events we're returning to see if one of
@@ -1167,6 +1251,11 @@ class SlidingSyncHandler:
                if new_bump_stamp > 0:
                    return new_bump_stamp

        if not check_outside_timeline:
            # If we are not a limited sync, then we know the bump stamp can't
            # have changed.
            return None

        # We can quickly query for the latest bump event in the room using the
        # sliding sync tables.
        latest_room_bump_stamp = await self.store.get_latest_bump_stamp_for_room(
@@ -1226,3 +1315,185 @@ class SlidingSyncHandler:
            return new_bump_event_pos.stream

        return None


def _required_state_changes(
    user_id: str,
    *,
    previous_room_config: "RoomSyncConfig",
    room_sync_config: RoomSyncConfig,
    state_deltas: StateMap[str],
) -> Tuple[Optional[Mapping[str, AbstractSet[str]]], StateFilter]:
    """Calculates the changes between the required state room config from the
    previous requests compared with the current request.

    This does two things. First, it calculates if we need to update the room
    config due to changes to required state. Secondly, it works out which state
    entries we need to pull from current state and return due to the state entry
    now appearing in the required state when it previously wasn't (on top of the
    state deltas).

    This function tries to handle the case where a state entry is
    added, removed and then added again to the required state. In that case we
    only want to re-send that entry down sync if it has changed.

    Returns:
        A 2-tuple of updated required state config (or None if there is no update)
        and the state filter to use to fetch extra current state that we need to
        return.
    """

    prev_required_state_map = previous_room_config.required_state_map
    request_required_state_map = room_sync_config.required_state_map

    if prev_required_state_map == request_required_state_map:
        # There has been no change. Return immediately.
        return None, StateFilter.none()

    prev_wildcard = prev_required_state_map.get(StateValues.WILDCARD, set())
    request_wildcard = request_required_state_map.get(StateValues.WILDCARD, set())

    # If we were previously fetching everything ("*", "*"), always update the effective
    # room required state config to match the request. And since we were previously
    # already fetching everything, we don't have to fetch anything now that they've
    # narrowed.
    if StateValues.WILDCARD in prev_wildcard:
        return request_required_state_map, StateFilter.none()

    # If an event type wildcard has been added or removed we don't try and do
    # anything fancy, and instead always update the effective room required
    # state config to match the request.
    if request_wildcard - prev_wildcard:
        # Some keys were added, so we need to fetch everything
        return request_required_state_map, StateFilter.all()
    if prev_wildcard - request_wildcard:
        # Keys were only removed, so we don't have to fetch everything.
        return request_required_state_map, StateFilter.none()

    # Contains updates to the required state map compared with the previous room
    # config. This has the same format as `RoomSyncConfig.required_state`
    changes: Dict[str, AbstractSet[str]] = {}

    # The set of types/state keys that we need to fetch and return to the
    # client. Passed to `StateFilter.from_types(...)`
    added: List[Tuple[str, Optional[str]]] = []

    # First we calculate what, if anything, has been *added*.
    for event_type in (
        prev_required_state_map.keys() | request_required_state_map.keys()
    ):
        old_state_keys = prev_required_state_map.get(event_type, set())
        request_state_keys = request_required_state_map.get(event_type, set())

        if old_state_keys == request_state_keys:
            # No change to this type
            continue

        if not request_state_keys - old_state_keys:
            # Nothing *added*, so we skip. Removals happen below.
            continue

        # Always update changes to include the newly added keys
        changes[event_type] = request_state_keys

        if StateValues.WILDCARD in old_state_keys:
            # We were previously fetching everything for this type, so we don't need to
            # fetch anything new.
            continue

        # Record the new state keys to fetch for this type.
        if StateValues.WILDCARD in request_state_keys:
            # If we have added a wildcard then we always just fetch everything.
            added.append((event_type, None))
        else:
            for state_key in request_state_keys - old_state_keys:
                if state_key == StateValues.ME:
                    added.append((event_type, user_id))
                elif state_key == StateValues.LAZY:
                    # We handle lazy loading separately (outside this function),
                    # so don't need to explicitly add anything here.
                    #
                    # LAZY values should also be ignored for event types that are
                    # not membership.
                    pass
                else:
                    added.append((event_type, state_key))

    added_state_filter = StateFilter.from_types(added)

    # Convert the list of state deltas to map from type to state_keys that have
    # changed.
    changed_types_to_state_keys: Dict[str, Set[str]] = {}
    for event_type, state_key in state_deltas:
        changed_types_to_state_keys.setdefault(event_type, set()).add(state_key)

    # Figure out what changes we need to apply to the effective required state
    # config.
    for event_type, changed_state_keys in changed_types_to_state_keys.items():
        old_state_keys = prev_required_state_map.get(event_type, set())
        request_state_keys = request_required_state_map.get(event_type, set())

        if old_state_keys == request_state_keys:
            # No change.
            continue

        if request_state_keys - old_state_keys:
            # We've expanded the set of state keys, so we just clobber the
            # current set with the new set.
            #
            # We could also ensure that we keep entries where the state hasn't
            # changed, but are no longer in the requested required state, but
            # that's a sufficiently rare edge case that we can ignore (as it's only a
            # performance optimization).
            changes[event_type] = request_state_keys
            continue

        old_state_key_wildcard = StateValues.WILDCARD in old_state_keys
        request_state_key_wildcard = StateValues.WILDCARD in request_state_keys

        if old_state_key_wildcard != request_state_key_wildcard:
            # If a state_key wildcard has been added or removed, we always update the
            # effective room required state config to match the request.
            changes[event_type] = request_state_keys
            continue

        if event_type == EventTypes.Member:
            old_state_key_lazy = StateValues.LAZY in old_state_keys
            request_state_key_lazy = StateValues.LAZY in request_state_keys

            if old_state_key_lazy != request_state_key_lazy:
                # If a "$LAZY" has been added or removed we always update the effective room
                # required state config to match the request.
                changes[event_type] = request_state_keys
                continue

        # Handle "$ME" values by adding "$ME" if the state key matches the user
        # ID.
        if user_id in changed_state_keys:
            changed_state_keys.add(StateValues.ME)

        # At this point there are no wildcards and no additions to the set of
        # state keys requested, only deletions.
        #
        # We only remove state keys from the effective state if they've been
        # removed from the request *and* the state has changed. This ensures
        # that if a client removes and then re-adds a state key, we only send
        # down the associated current state event if it's changed (rather than
        # sending down the same event twice).
        invalidated = (old_state_keys - request_state_keys) & changed_state_keys
        if invalidated:
            changes[event_type] = old_state_keys - invalidated

    if changes:
        # Update the required state config based on the changes.
        new_required_state_map = dict(prev_required_state_map)
        for event_type, state_keys in changes.items():
            if state_keys:
                new_required_state_map[event_type] = state_keys
            else:
                # Remove entries with empty state keys.
                new_required_state_map.pop(event_type, None)

        return new_required_state_map, added_state_filter
    else:
        return None, added_state_filter

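For illustration only (not part of the diff): the "what was added" half of the bookkeeping above, reduced to plain dict-of-sets configs where "*" stands in for StateValues.WILDCARD; names are hypothetical.

# Sketch: compute the (type, state_key) pairs that newly appear in the
# requested required-state config and therefore need fetching.
from typing import Dict, List, Optional, Set, Tuple

def added_entries(
    prev: Dict[str, Set[str]], request: Dict[str, Set[str]]
) -> List[Tuple[str, Optional[str]]]:
    added: List[Tuple[str, Optional[str]]] = []
    for event_type in prev.keys() | request.keys():
        old = prev.get(event_type, set())
        new = request.get(event_type, set())
        if not new - old or "*" in old:
            continue  # nothing newly requested for this type
        if "*" in new:
            added.append((event_type, None))  # None means "all state keys"
        else:
            added.extend((event_type, sk) for sk in sorted(new - old))
    return added

assert added_entries(
    {"m.room.member": {"@a:x"}}, {"m.room.member": {"@a:x", "@b:x"}}
) == [("m.room.member", "@b:x")]
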
@@ -572,7 +572,8 @@ class SlidingSyncExtensionHandler:

        # Now record which rooms are now up to date, and which rooms have
        # pending updates to send.
        new_connection_state.account_data.record_sent_rooms(relevant_room_ids)
        new_connection_state.account_data.record_sent_rooms(previously_rooms.keys())
        new_connection_state.account_data.record_sent_rooms(initial_rooms)
        missing_updates = (
            all_updates_since_the_from_token.keys() - relevant_room_ids
        )
@@ -763,9 +764,10 @@

            room_id_to_receipt_map[room_id] = {"type": type, "content": content}

        # Now we update the per-connection state to track which receipts we have
        # and haven't sent down.
        new_connection_state.receipts.record_sent_rooms(relevant_room_ids)
        # Update the per-connection state to track which rooms we have sent
        # all the receipts for.
        new_connection_state.receipts.record_sent_rooms(previously_rooms.keys())
        new_connection_state.receipts.record_sent_rooms(initial_rooms)

        if from_token:
            # Now find the set of rooms that may have receipts that we're not sending

@@ -123,6 +123,19 @@ class SlidingSyncInterestedRooms:
    newly_left_rooms: AbstractSet[str]
    dm_room_ids: AbstractSet[str]

    @staticmethod
    def empty() -> "SlidingSyncInterestedRooms":
        return SlidingSyncInterestedRooms(
            lists={},
            relevant_room_map={},
            relevant_rooms_to_send_map={},
            all_rooms=set(),
            room_membership_for_user_map={},
            newly_joined_rooms=set(),
            newly_left_rooms=set(),
            dm_room_ids=set(),
        )


def filter_membership_for_sync(
    *,
@@ -181,6 +194,14 @@ class SlidingSyncRoomLists:
        from_token: Optional[StreamToken],
    ) -> SlidingSyncInterestedRooms:
        """Fetch the set of rooms that match the request"""
        has_lists = sync_config.lists is not None and len(sync_config.lists) > 0
        has_room_subscriptions = (
            sync_config.room_subscriptions is not None
            and len(sync_config.room_subscriptions) > 0
        )

        if not has_lists and not has_room_subscriptions:
            return SlidingSyncInterestedRooms.empty()

        if await self.store.have_finished_sliding_sync_background_jobs():
            return await self._compute_interested_rooms_new_tables(
@@ -479,6 +500,16 @@
            # depending on the `required_state` requested (see below).
            partial_state_rooms = await self.store.get_partial_rooms()

            # Fetch any rooms that we have not already fetched from the database.
            subscription_sliding_sync_rooms = (
                await self.store.get_sliding_sync_room_for_user_batch(
                    user_id,
                    sync_config.room_subscriptions.keys()
                    - room_membership_for_user_map.keys(),
                )
            )
            room_membership_for_user_map.update(subscription_sliding_sync_rooms)

            for (
                room_id,
                room_subscription,
@@ -486,17 +517,11 @@
                # Check if we have a membership for the room, but didn't pull it out
                # above. This could be e.g. a leave that we don't pull out by
                # default.
                current_room_entry = (
                    await self.store.get_sliding_sync_room_for_user(
                        user_id, room_id
                    )
                )
                current_room_entry = room_membership_for_user_map.get(room_id)
                if not current_room_entry:
                    # TODO: Handle rooms the user isn't in.
                    continue

                room_membership_for_user_map[room_id] = current_room_entry

                all_rooms.add(room_id)

                # Take the superset of the `RoomSyncConfig` for each room.

@@ -1039,7 +1039,7 @@ class _MultipartParserProtocol(protocol.Protocol):
        self.deferred = deferred
        self.boundary = boundary
        self.max_length = max_length
        self.parser = None
        self.parser: Optional[multipart.MultipartParser] = None
        self.multipart_response = MultipartResponse()
        self.has_redirect = False
        self.in_json = False
@@ -1097,7 +1097,7 @@ class _MultipartParserProtocol(protocol.Protocol):
                    self.deferred.errback()
                self.file_length += end - start

        callbacks = {
        callbacks: "multipart.multipart.MultipartCallbacks" = {
            "on_header_field": on_header_field,
            "on_header_value": on_header_value,
            "on_part_data": on_part_data,
@@ -1113,7 +1113,7 @@ class _MultipartParserProtocol(protocol.Protocol):
            self.transport.abortConnection()

        try:
            self.parser.write(incoming_data)  # type: ignore[attr-defined]
            self.parser.write(incoming_data)
        except Exception as e:
            logger.warning(f"Exception writing to multipart parser: {e}")
            self.deferred.errback()

@@ -19,7 +19,6 @@
#
#
import abc
import cgi
import codecs
import logging
import random
@@ -792,7 +791,7 @@ class MatrixFederationHttpClient:
                    url_str,
                    _flatten_response_never_received(e),
                )
                body = None
                body = b""

            exc = HttpResponseException(
                response.code, response_phrase, body
@@ -1813,8 +1812,9 @@ def check_content_type_is(headers: Headers, expected_content_type: str) -> None:
        )

    c_type = content_type_headers[0].decode("ascii")  # only the first header
    val, options = cgi.parse_header(c_type)
    if val != expected_content_type:
    # Extract the 'essence' of the mimetype, removing any parameter
    c_type_parsed = c_type.split(";", 1)[0].strip()
    if c_type_parsed != expected_content_type:
        raise RequestSendFailed(
            RuntimeError(
                f"Remote server sent Content-Type header of '{c_type}', not '{expected_content_type}'",

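For illustration only (not part of the diff): the cgi-free check the hunk above switches to. cgi.parse_header() went away with the cgi module's removal in Python 3.13 (PEP 594), so the mimetype "essence" is now taken by splitting off any ";"-separated parameters; the helper name is hypothetical.

# Sketch: compare only the mimetype essence, ignoring parameters.
def content_type_matches(header_value: str, expected: str) -> bool:
    essence = header_value.split(";", 1)[0].strip()
    return essence == expected

assert content_type_matches("application/json; charset=utf-8", "application/json")
assert not content_type_matches("text/html", "application/json")
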
@@ -206,7 +206,7 @@ class Thumbnailer:
    def _encode_image(self, output_image: Image.Image, output_type: str) -> BytesIO:
        output_bytes_io = BytesIO()
        fmt = self.FORMATS[output_type]
        if fmt == "JPEG":
        if fmt == "JPEG" or fmt == "PNG" and output_image.mode == "CMYK":
            output_image = output_image.convert("RGB")
        output_image.save(output_bytes_io, fmt, quality=80)
        return output_bytes_io

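For illustration only (not part of the diff): the new condition leans on Python's operator precedence ("and" binds tighter than "or"), so it converts to RGB for JPEG output, or for PNG output when the source image is CMYK (PNG has no CMYK mode); the helper name is hypothetical.

# Sketch: the predicate, with the implicit grouping spelled out.
def needs_rgb_conversion(fmt: str, mode: str) -> bool:
    return fmt == "JPEG" or (fmt == "PNG" and mode == "CMYK")

assert needs_rgb_conversion("JPEG", "CMYK")
assert needs_rgb_conversion("PNG", "CMYK")
assert not needs_rgb_conversion("PNG", "RGB")
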
@@ -41,6 +41,7 @@ import attr
from prometheus_client import Counter

from twisted.internet import defer
from twisted.internet.defer import Deferred

from synapse.api.constants import EduTypes, EventTypes, HistoryVisibility, Membership
from synapse.api.errors import AuthError
@@ -52,6 +53,7 @@ from synapse.logging.opentracing import log_kv, start_active_span
from synapse.metrics import LaterGauge
from synapse.streams.config import PaginationConfig
from synapse.types import (
    ISynapseReactor,
    JsonDict,
    MultiWriterStreamToken,
    PersistedEventPosition,
@@ -61,8 +63,11 @@ from synapse.types import (
    StreamToken,
    UserID,
)
from synapse.util.async_helpers import ObservableDeferred, timeout_deferred
from synapse.util.async_helpers import (
    timeout_deferred,
)
from synapse.util.metrics import Measure
from synapse.util.stringutils import shortstr
from synapse.visibility import filter_events_for_client

if TYPE_CHECKING:
@@ -89,18 +94,6 @@ def count(func: Callable[[T], bool], it: Iterable[T]) -> int:
    return n


class _NotificationListener:
    """This represents a single client connection to the events stream.
    The events stream handler will have yielded to the deferred, so to
    notify the handler it is sufficient to resolve the deferred.
    """

    __slots__ = ["deferred"]

    def __init__(self, deferred: "defer.Deferred"):
        self.deferred = deferred


class _NotifierUserStream:
    """This represents a user connected to the event stream.
    It tracks the most recent stream token for that user.
@@ -113,59 +106,49 @@ class _NotifierUserStream:

    def __init__(
        self,
        reactor: ISynapseReactor,
        user_id: str,
        rooms: StrCollection,
        current_token: StreamToken,
        time_now_ms: int,
    ):
        self.reactor = reactor
        self.user_id = user_id
        self.rooms = set(rooms)
        self.current_token = current_token

        # The last token for which we should wake up any streams that have a
        # token that comes before it. This gets updated every time we get poked.
        # We start it at the current token since if we get any streams
        # that have a token from before we have no idea whether they should be
        # woken up or not, so let's just wake them up.
        self.last_notified_token = current_token
        self.current_token = current_token
        self.last_notified_ms = time_now_ms

        self.notify_deferred: ObservableDeferred[StreamToken] = ObservableDeferred(
            defer.Deferred()
        )
        # Set of listeners that we need to wake up when there has been a change.
        self.listeners: Set[Deferred[StreamToken]] = set()

    def notify(
    def update_and_fetch_deferreds(
        self,
        stream_key: StreamKeyType,
        stream_id: Union[int, RoomStreamToken, MultiWriterStreamToken],
        current_token: StreamToken,
        time_now_ms: int,
    ) -> None:
        """Notify any listeners for this user of a new event from an
        event source.
    ) -> Collection["Deferred[StreamToken]"]:
        """Update the stream for this user because of a new event from an
        event source, and return the set of deferreds to wake up.

        Args:
            stream_key: The stream the event came from.
            stream_id: The new id for the stream the event came from.
            current_token: The new current token.
            time_now_ms: The current time in milliseconds.

        Returns:
            The set of deferreds that need to be called.
        """
        self.current_token = self.current_token.copy_and_advance(stream_key, stream_id)
        self.last_notified_token = self.current_token
        self.current_token = current_token
        self.last_notified_ms = time_now_ms
        notify_deferred = self.notify_deferred

        log_kv(
            {
                "notify": self.user_id,
                "stream": stream_key,
                "stream_id": stream_id,
                "listeners": self.count_listeners(),
            }
        )
        listeners = self.listeners
        self.listeners = set()

        users_woken_by_stream_counter.labels(stream_key).inc()

        with PreserveLoggingContext():
            self.notify_deferred = ObservableDeferred(defer.Deferred())
            notify_deferred.callback(self.current_token)
        return listeners

    def remove(self, notifier: "Notifier") -> None:
        """Remove this listener from all the indexes in the Notifier
@@ -179,9 +162,9 @@ class _NotifierUserStream:
        notifier.user_to_user_stream.pop(self.user_id)

    def count_listeners(self) -> int:
        return len(self.notify_deferred.observers())
        return len(self.listeners)

    def new_listener(self, token: StreamToken) -> _NotificationListener:
    def new_listener(self, token: StreamToken) -> "Deferred[StreamToken]":
        """Returns a deferred that is resolved when there is a new token
        greater than the given token.

@@ -191,10 +174,17 @@
        """
        # Immediately wake up the stream if something has already happened
        # since their last token.
        if self.last_notified_token != token:
            return _NotificationListener(defer.succeed(self.current_token))
        else:
            return _NotificationListener(self.notify_deferred.observe())
        if token != self.current_token:
            return defer.succeed(self.current_token)

        # Create a new deferred and add it to the set of listeners. We add a
        # cancel handler to remove it from the set again, to handle timeouts.
        deferred: "Deferred[StreamToken]" = Deferred(
            canceller=lambda d: self.listeners.discard(d)
        )
        self.listeners.add(deferred)

        return deferred


@attr.s(slots=True, frozen=True, auto_attribs=True)
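For illustration only (not part of the diff): the waiter pattern the refactor moves to — a plain set of Deferreds with a canceller that drops a timed-out waiter, instead of one shared ObservableDeferred; class and method names are hypothetical.

# Sketch: wake a set of per-request Deferreds in one pass.
from twisted.internet.defer import Deferred

class Waiters:
    def __init__(self) -> None:
        self.listeners: set = set()

    def new_listener(self) -> "Deferred":
        # The canceller keeps the set tidy when a waiter times out.
        d: "Deferred" = Deferred(canceller=lambda d: self.listeners.discard(d))
        self.listeners.add(d)
        return d

    def wake_all(self, token: object) -> None:
        listeners, self.listeners = self.listeners, set()
        for d in listeners:
            d.callback(token)
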
@@ -247,6 +237,7 @@ class Notifier:
        # List of callbacks to be notified when a lock is released
        self._lock_released_callback: List[Callable[[str, str, str], None]] = []

        self.reactor = hs.get_reactor()
        self.clock = hs.get_clock()
        self.appservice_handler = hs.get_application_service_handler()
        self._pusher_pool = hs.get_pusherpool()
@@ -342,14 +333,25 @@
        # Wake up all related user stream notifiers
        user_streams = self.room_to_user_streams.get(room_id, set())
        time_now_ms = self.clock.time_msec()
        current_token = self.event_sources.get_current_token()

        listeners: List["Deferred[StreamToken]"] = []
        for user_stream in user_streams:
            try:
                user_stream.notify(
                    StreamKeyType.UN_PARTIAL_STATED_ROOMS, new_token, time_now_ms
                listeners.extend(
                    user_stream.update_and_fetch_deferreds(current_token, time_now_ms)
                )
            except Exception:
                logger.exception("Failed to notify listener")

        with PreserveLoggingContext():
            for listener in listeners:
                listener.callback(current_token)

        users_woken_by_stream_counter.labels(StreamKeyType.UN_PARTIAL_STATED_ROOMS).inc(
            len(user_streams)
        )

        # Poke the replication so that other workers also see the write to
        # the un-partial-stated rooms stream.
        self.notify_replication()
@@ -519,12 +521,16 @@
        rooms = rooms or []

        with Measure(self.clock, "on_new_event"):
            user_streams = set()
            user_streams: Set[_NotifierUserStream] = set()

            log_kv(
                {
                    "waking_up_explicit_users": len(users),
                    "waking_up_explicit_rooms": len(rooms),
                    "users": shortstr(users),
                    "rooms": shortstr(rooms),
                    "stream": stream_key,
                    "stream_id": new_token,
                }
            )

@@ -544,12 +550,27 @@
            )

            time_now_ms = self.clock.time_msec()
            current_token = self.event_sources.get_current_token()
            listeners: List["Deferred[StreamToken]"] = []
            for user_stream in user_streams:
                try:
                    user_stream.notify(stream_key, new_token, time_now_ms)
                    listeners.extend(
                        user_stream.update_and_fetch_deferreds(
                            current_token, time_now_ms
                        )
                    )
                except Exception:
                    logger.exception("Failed to notify listener")

            # We resolve all these deferreds in one go so that we only need to
            # call `PreserveLoggingContext` once, as it has a bunch of overhead
            # (to calculate performance stats)
            with PreserveLoggingContext():
                for listener in listeners:
                    listener.callback(current_token)

            users_woken_by_stream_counter.labels(stream_key).inc(len(user_streams))

            self.notify_replication()

            # Notify appservices.
@@ -586,6 +607,7 @@
        if room_ids is None:
            room_ids = await self.store.get_rooms_for_user(user_id)
        user_stream = _NotifierUserStream(
            reactor=self.reactor,
            user_id=user_id,
            rooms=room_ids,
            current_token=current_token,
@@ -608,8 +630,8 @@
                    # Now we wait for the _NotifierUserStream to be told there
                    # is a new token.
                    listener = user_stream.new_listener(prev_token)
                    listener.deferred = timeout_deferred(
                        listener.deferred,
                    listener = timeout_deferred(
                        listener,
                        (end_time - now) / 1000.0,
                        self.hs.get_reactor(),
                    )
@@ -622,7 +644,7 @@
                    )

                    with PreserveLoggingContext():
                        await listener.deferred
                        await listener

                    log_kv(
                        {

@@ -436,6 +436,7 @@ class BulkPushRuleEvaluator:
            self._related_event_match_enabled,
            event.room_version.msc3931_push_features,
            self.hs.config.experimental.msc1767_enabled,  # MSC3931 flag
            self.hs.config.experimental.msc4210_enabled,
        )

        for uid, rules in rules_by_user.items():

@@ -1,7 +1,7 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2023 New Vector, Ltd
# Copyright (C) 2023-2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as

@@ -1,3 +1,17 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#

import logging
from typing import TYPE_CHECKING, Dict, Optional, Tuple


@@ -48,7 +48,7 @@ class ReplicationRemovePusherRestServlet(ReplicationEndpoint):

    """

    NAME = "add_user_account_data"
    NAME = "remove_pusher"
    PATH_ARGS = ("user_id",)
    CACHE = False


@@ -27,7 +27,7 @@ from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union

import attr

from synapse._pydantic_compat import StrictBool
from synapse._pydantic_compat import StrictBool, StrictInt, StrictStr
from synapse.api.constants import Direction, UserTypes
from synapse.api.errors import Codes, NotFoundError, SynapseError
from synapse.http.servlet import (
@@ -1421,40 +1421,39 @@ class RedactUser(RestServlet):
        self._store = hs.get_datastores().main
        self.admin_handler = hs.get_admin_handler()

    class PostBody(RequestBodyModel):
        rooms: List[StrictStr]
        reason: Optional[StrictStr]
        limit: Optional[StrictInt]

    async def on_POST(
        self, request: SynapseRequest, user_id: str
    ) -> Tuple[int, JsonDict]:
        requester = await self._auth.get_user_by_req(request)
        await assert_user_is_admin(self._auth, requester)

        body = parse_json_object_from_request(request, allow_empty_body=True)
        rooms = body.get("rooms")
        if rooms is None:
        # parse provided user id to check that it is valid
        UserID.from_string(user_id)

        body = parse_and_validate_json_object_from_request(request, self.PostBody)

        limit = body.limit
        if limit and limit <= 0:
            raise SynapseError(
                HTTPStatus.BAD_REQUEST, "Must provide a value for rooms."
                HTTPStatus.BAD_REQUEST,
                "If limit is provided it must be a non-negative integer greater than 0.",
            )

        reason = body.get("reason")
        if reason:
            if not isinstance(reason, str):
                raise SynapseError(
                    HTTPStatus.BAD_REQUEST,
                    "If a reason is provided it must be a string.",
                )

        limit = body.get("limit")
        if limit:
            if not isinstance(limit, int) or limit <= 0:
                raise SynapseError(
                    HTTPStatus.BAD_REQUEST,
                    "If limit is provided it must be a non-negative integer greater than 0.",
                )

        rooms = body.rooms
        if not rooms:
            rooms = await self._store.get_rooms_for_user(user_id)
            current_rooms = list(await self._store.get_rooms_for_user(user_id))
            banned_rooms = list(
                await self._store.get_rooms_user_currently_banned_from(user_id)
            )
            rooms = current_rooms + banned_rooms

        redact_id = await self.admin_handler.start_redact_events(
            user_id, list(rooms), requester.serialize(), reason, limit
            user_id, rooms, requester.serialize(), body.reason, limit
        )

        return HTTPStatus.OK, {"redact_id": redact_id}

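For illustration only (not part of the diff): the validation style the endpoint moves to — a typed request-body model instead of hand-rolled isinstance() checks. Plain pydantic stands in here for synapse's RequestBodyModel and _pydantic_compat shims; the model name is hypothetical.

# Sketch: declarative request-body validation with strict field types.
from typing import List, Optional
from pydantic import BaseModel, StrictInt, StrictStr

class RedactPostBody(BaseModel):
    rooms: List[StrictStr]
    reason: Optional[StrictStr] = None
    limit: Optional[StrictInt] = None

body = RedactPostBody.parse_obj({"rooms": [], "limit": 1000})
assert body.rooms == [] and body.limit == 1000 and body.reason is None
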
@@ -1,3 +1,17 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#

# This module contains REST servlets to do with delayed events: /delayed_events/<paths>

import logging

@@ -363,6 +363,7 @@ class LoginRestServlet(RestServlet):
        login_submission: JsonDict,
        callback: Optional[Callable[[LoginResponse], Awaitable[None]]] = None,
        create_non_existent_users: bool = False,
        default_display_name: Optional[str] = None,
        ratelimit: bool = True,
        auth_provider_id: Optional[str] = None,
        should_issue_refresh_token: bool = False,
@@ -410,7 +411,8 @@ class LoginRestServlet(RestServlet):
            canonical_uid = await self.auth_handler.check_user_exists(user_id)
            if not canonical_uid:
                canonical_uid = await self.registration_handler.register_user(
                    localpart=UserID.from_string(user_id).localpart
                    localpart=UserID.from_string(user_id).localpart,
                    default_display_name=default_display_name,
                )
            user_id = canonical_uid

@@ -546,11 +548,14 @@ class LoginRestServlet(RestServlet):
        Returns:
            The body of the JSON response.
        """
        user_id = self.hs.get_jwt_handler().validate_login(login_submission)
        user_id, default_display_name = self.hs.get_jwt_handler().validate_login(
            login_submission
        )
        return await self._complete_login(
            user_id,
            login_submission,
            create_non_existent_users=True,
            default_display_name=default_display_name,
            should_issue_refresh_token=should_issue_refresh_token,
            request_info=request_info,
        )

@@ -1010,11 +1010,13 @@ class SlidingSyncRestServlet(RestServlet):
        serialized_rooms: Dict[str, JsonDict] = {}
        for room_id, room_result in rooms.items():
            serialized_rooms[room_id] = {
                "bump_stamp": room_result.bump_stamp,
                "notification_count": room_result.notification_count,
                "highlight_count": room_result.highlight_count,
            }

            if room_result.bump_stamp is not None:
                serialized_rooms[room_id]["bump_stamp"] = room_result.bump_stamp

            if room_result.joined_count is not None:
                serialized_rooms[room_id]["joined_count"] = room_result.joined_count


@@ -172,7 +172,7 @@ class VersionsRestServlet(RestServlet):
                    )
                ),
                # MSC4140: Delayed events
                "org.matrix.msc4140": True,
                "org.matrix.msc4140": bool(self.config.server.max_event_delay_ms),
                # MSC4151: Report room API (Client-Server API)
                "org.matrix.msc4151": self.config.experimental.msc4151_enabled,
                # Simplified sliding sync

@@ -1,3 +1,17 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#

import logging
from typing import List, NewType, Optional, Tuple


@@ -1863,10 +1863,10 @@ class PersistEventsStore:
            txn.execute_batch(
                f"""
                INSERT INTO sliding_sync_membership_snapshots
                (room_id, user_id, sender, membership_event_id, membership, event_stream_ordering, event_instance_name
                (room_id, user_id, sender, membership_event_id, membership, forgotten, event_stream_ordering, event_instance_name
                {("," + ", ".join(sliding_sync_snapshot_keys)) if sliding_sync_snapshot_keys else ""})
                VALUES (
                    ?, ?, ?, ?, ?,
                    ?, ?, ?, ?, ?, ?,
                    (SELECT stream_ordering FROM events WHERE event_id = ?),
                    (SELECT COALESCE(instance_name, 'master') FROM events WHERE event_id = ?)
                    {("," + ", ".join("?" for _ in sliding_sync_snapshot_values)) if sliding_sync_snapshot_values else ""}
@@ -1876,6 +1876,7 @@ class PersistEventsStore:
                    sender = EXCLUDED.sender,
                    membership_event_id = EXCLUDED.membership_event_id,
                    membership = EXCLUDED.membership,
                    forgotten = EXCLUDED.forgotten,
                    event_stream_ordering = EXCLUDED.event_stream_ordering
                    {("," + ", ".join(f"{key} = EXCLUDED.{key}" for key in sliding_sync_snapshot_keys)) if sliding_sync_snapshot_keys else ""}
                """,
@@ -1886,6 +1887,9 @@ class PersistEventsStore:
                        membership_info.sender,
                        membership_info.membership_event_id,
                        membership_info.membership,
                        # Since this is a new membership, it isn't forgotten anymore (which
                        # matches how Synapse currently thinks about the forgotten status)
                        0,
                        # XXX: We do not use `membership_info.membership_event_stream_ordering` here
                        # because it is an unreliable value. See XXX note above.
                        membership_info.membership_event_id,
@@ -2901,6 +2905,9 @@ class PersistEventsStore:
                "sender": event.sender,
                "membership_event_id": event.event_id,
                "membership": event.membership,
                # Since this is a new membership, it isn't forgotten anymore (which
                # matches how Synapse currently thinks about the forgotten status)
                "forgotten": 0,
                "event_stream_ordering": event.internal_metadata.stream_ordering,
                "event_instance_name": event.internal_metadata.instance_name,
            }

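For illustration only (not part of the diff): the INSERT ... ON CONFLICT DO UPDATE shape used above, replayed on a toy table so the EXCLUDED.* references are easy to see in isolation; the table and column names here are made up.

# Sketch: an upsert where a fresh membership row resets the flag.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE snapshots (room_id TEXT, user_id TEXT, forgotten INT,"
    " PRIMARY KEY (room_id, user_id))"
)
upsert = (
    "INSERT INTO snapshots (room_id, user_id, forgotten) VALUES (?, ?, ?)"
    " ON CONFLICT (room_id, user_id)"
    " DO UPDATE SET forgotten = EXCLUDED.forgotten"
)
conn.execute(upsert, ("!r:x", "@u:x", 1))
conn.execute(upsert, ("!r:x", "@u:x", 0))  # new membership clears the flag
assert conn.execute("SELECT forgotten FROM snapshots").fetchone() == (0,)
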
@@ -304,6 +304,12 @@ class EventsBackgroundUpdatesStore(StreamWorkerStore, StateDeltasStore, SQLBaseS
            _BackgroundUpdates.SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_BG_UPDATE,
            self._sliding_sync_membership_snapshots_bg_update,
        )
        # Add a background update to fix a data integrity issue in the
        # `sliding_sync_membership_snapshots` -> `forgotten` column
        self.db_pool.updates.register_background_update_handler(
            _BackgroundUpdates.SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_FIX_FORGOTTEN_COLUMN_BG_UPDATE,
            self._sliding_sync_membership_snapshots_fix_forgotten_column_bg_update,
        )

        # We want this to run on the main database at startup before we start processing
        # events.
@@ -2429,6 +2435,118 @@ class EventsBackgroundUpdatesStore(StreamWorkerStore, StateDeltasStore, SQLBaseS

        return len(memberships_to_update_rows)

    async def _sliding_sync_membership_snapshots_fix_forgotten_column_bg_update(
        self, progress: JsonDict, batch_size: int
    ) -> int:
        """
        Background update to update the `sliding_sync_membership_snapshots` ->
        `forgotten` column to be in sync with the `room_memberships` table.

        Because of previously flawed code (now fixed), for any room that someone has
        forgotten and subsequently re-joined or had any new membership in, we need to go
        and update the column to match the `room_memberships` table, as it has fallen out
        of sync.
        """
        last_event_stream_ordering = progress.get(
            "last_event_stream_ordering", -(1 << 31)
        )

        def _txn(
            txn: LoggingTransaction,
        ) -> int:
            """
            Returns:
                The number of rows updated.
            """

            # To simplify things, we can just recheck any row in
            # `sliding_sync_membership_snapshots` with `forgotten=1`
            txn.execute(
                """
                SELECT
                    s.room_id,
                    s.user_id,
                    s.membership_event_id,
                    s.event_stream_ordering,
                    m.forgotten
                FROM sliding_sync_membership_snapshots AS s
                INNER JOIN room_memberships AS m ON (s.membership_event_id = m.event_id)
                WHERE s.event_stream_ordering > ?
                    AND s.forgotten = 1
                ORDER BY s.event_stream_ordering ASC
                LIMIT ?
                """,
                (last_event_stream_ordering, batch_size),
            )

            memberships_to_update_rows = cast(
                List[Tuple[str, str, str, int, int]],
                txn.fetchall(),
            )
            if not memberships_to_update_rows:
                return 0

            # Assemble the values to update
            #
            # (room_id, user_id)
            key_values: List[Tuple[str, str]] = []
            # (forgotten,)
            value_values: List[Tuple[int]] = []
            for (
                room_id,
                user_id,
                _membership_event_id,
                _event_stream_ordering,
                forgotten,
            ) in memberships_to_update_rows:
                key_values.append(
                    (
                        room_id,
                        user_id,
                    )
                )
                value_values.append((forgotten,))

            # Update all of the rows in one go
            self.db_pool.simple_update_many_txn(
                txn,
                table="sliding_sync_membership_snapshots",
                key_names=("room_id", "user_id"),
                key_values=key_values,
                value_names=("forgotten",),
                value_values=value_values,
            )

            # Update the progress
            (
                _room_id,
                _user_id,
                _membership_event_id,
                event_stream_ordering,
                _forgotten,
            ) = memberships_to_update_rows[-1]
            self.db_pool.updates._background_update_progress_txn(
                txn,
                _BackgroundUpdates.SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_FIX_FORGOTTEN_COLUMN_BG_UPDATE,
                {
                    "last_event_stream_ordering": event_stream_ordering,
                },
            )

            return len(memberships_to_update_rows)

        num_rows = await self.db_pool.runInteraction(
            "_sliding_sync_membership_snapshots_fix_forgotten_column_bg_update",
            _txn,
        )

        if not num_rows:
            await self.db_pool.updates._end_background_update(
                _BackgroundUpdates.SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_FIX_FORGOTTEN_COLUMN_BG_UPDATE
            )

        return num_rows


def _resolve_stale_data_in_sliding_sync_tables(
    txn: LoggingTransaction,

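For illustration only (not part of the diff): the keyset-pagination shape of the background update above — resume from the last-seen stream ordering, process one batch, persist the resume point, and stop once a batch comes back empty; names are hypothetical.

# Sketch: a generic batched-update loop keyed on a monotonic ordering.
from typing import Callable, List, Tuple

def run_batched_update(
    fetch_batch: Callable[[int, int], List[Tuple[int, str]]],
    apply_batch: Callable[[List[Tuple[int, str]]], None],
    batch_size: int,
) -> None:
    last_ordering = -(1 << 31)  # start before any real stream ordering
    while True:
        rows = fetch_batch(last_ordering, batch_size)
        if not rows:
            return  # caught up; the real code ends the background update here
        apply_batch(rows)
        last_ordering = rows[-1][0]  # resume point for the next batch
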
@@ -61,7 +61,13 @@ from synapse.logging.context import (
|
||||
current_context,
|
||||
make_deferred_yieldable,
|
||||
)
|
||||
from synapse.logging.opentracing import start_active_span, tag_args, trace
|
||||
from synapse.logging.opentracing import (
|
||||
SynapseTags,
|
||||
set_tag,
|
||||
start_active_span,
|
||||
tag_args,
|
||||
trace,
|
||||
)
|
||||
from synapse.metrics.background_process_metrics import (
|
||||
run_as_background_process,
|
||||
wrap_as_background_process,
|
||||
@@ -525,6 +531,7 @@ class EventsWorkerStore(SQLBaseStore):
|
||||
|
||||
return event
|
||||
|
||||
@trace
|
||||
async def get_events(
|
||||
self,
|
||||
event_ids: Collection[str],
|
||||
@@ -556,6 +563,11 @@ class EventsWorkerStore(SQLBaseStore):
|
||||
Returns:
|
||||
A mapping from event_id to event.
|
||||
"""
|
||||
set_tag(
|
||||
SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
|
||||
str(len(event_ids)),
|
||||
)
|
||||
|
||||
events = await self.get_events_as_list(
|
||||
event_ids,
|
||||
redact_behaviour=redact_behaviour,
|
||||
@@ -603,6 +615,10 @@ class EventsWorkerStore(SQLBaseStore):
|
||||
Note that the returned list may be smaller than the list of event
|
||||
IDs if not all events could be fetched.
|
||||
"""
|
||||
set_tag(
|
||||
SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
|
||||
str(len(event_ids)),
|
||||
)
|
||||
|
||||
if not event_ids:
|
||||
return []
|
||||
@@ -723,10 +739,11 @@ class EventsWorkerStore(SQLBaseStore):
|
||||
|
||||
return events
|
||||
|
||||
@trace
|
||||
@cancellable
|
||||
async def get_unredacted_events_from_cache_or_db(
|
||||
self,
|
||||
event_ids: Iterable[str],
|
||||
event_ids: Collection[str],
|
||||
allow_rejected: bool = False,
|
||||
) -> Dict[str, EventCacheEntry]:
|
||||
"""Fetch a bunch of events from the cache or the database.
|
||||
@@ -748,6 +765,11 @@ class EventsWorkerStore(SQLBaseStore):
|
||||
Returns:
|
||||
map from event id to result
|
||||
"""
|
||||
set_tag(
|
||||
SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
|
||||
str(len(event_ids)),
|
||||
)
|
||||
|
||||
# Shortcut: check if we have any events in the *in memory* cache - this function
|
||||
# may be called repeatedly for the same event so at this point we cannot reach
|
||||
# out to any external cache for performance reasons. The external cache is
|
||||
@@ -936,7 +958,7 @@ class EventsWorkerStore(SQLBaseStore):
|
||||
events, update_metrics=update_metrics
|
||||
)
|
||||
|
||||
missing_event_ids = (e for e in events if e not in event_map)
|
||||
missing_event_ids = [e for e in events if e not in event_map]
|
||||
event_map.update(
|
||||
await self._get_events_from_external_cache(
|
||||
events=missing_event_ids,
|
||||
@@ -946,8 +968,9 @@ class EventsWorkerStore(SQLBaseStore):
|
||||
|
||||
return event_map
|
||||
|
||||
@trace
|
||||
async def _get_events_from_external_cache(
|
||||
self, events: Iterable[str], update_metrics: bool = True
|
||||
self, events: Collection[str], update_metrics: bool = True
|
||||
) -> Dict[str, EventCacheEntry]:
|
||||
"""Fetch events from any configured external cache.
|
||||
|
||||
@@ -957,6 +980,10 @@ class EventsWorkerStore(SQLBaseStore):
             events: list of event_ids to fetch
             update_metrics: Whether to update the cache hit ratio metrics
         """
+        set_tag(
+            SynapseTags.FUNC_ARG_PREFIX + "events.length",
+            str(len(events)),
+        )
         event_map = {}

         for event_id in events:
@@ -1222,6 +1249,7 @@ class EventsWorkerStore(SQLBaseStore):
             with PreserveLoggingContext():
                 self.hs.get_reactor().callFromThread(fire_errback, e)

+    @trace
     async def _get_events_from_db(
         self, event_ids: Collection[str]
     ) -> Dict[str, EventCacheEntry]:
@@ -1240,6 +1268,11 @@ class EventsWorkerStore(SQLBaseStore):
             map from event id to result. May return extra events which
             weren't asked for.
         """
+        set_tag(
+            SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
+            str(len(event_ids)),
+        )
+
         fetched_event_ids: Set[str] = set()
         fetched_events: Dict[str, _EventRow] = {}

@@ -109,6 +109,7 @@ def _load_rules(
         msc3664_enabled=experimental_config.msc3664_enabled,
         msc3381_polls_enabled=experimental_config.msc3381_polls_enabled,
         msc4028_push_encrypted_events=experimental_config.msc4028_push_encrypted_events,
+        msc4210_enabled=experimental_config.msc4210_enabled,
     )

     return filtered_rules

@@ -711,6 +711,27 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):

         return {row[0] for row in txn}

+    async def get_rooms_user_currently_banned_from(
+        self, user_id: str
+    ) -> FrozenSet[str]:
+        """Returns a set of room_ids the user is currently banned from.
+
+        If the user is remote, this only returns rooms that this server is
+        currently participating in.
+        """
+        room_ids = await self.db_pool.simple_select_onecol(
+            table="current_state_events",
+            keyvalues={
+                "type": EventTypes.Member,
+                "membership": Membership.BAN,
+                "state_key": user_id,
+            },
+            retcol="room_id",
+            desc="get_rooms_user_currently_banned_from",
+        )
+
+        return frozenset(room_ids)
+
     @cached(max_entries=500000, iterable=True)
     async def get_rooms_for_user(self, user_id: str) -> FrozenSet[str]:
         """Returns a set of room_ids the user is currently joined to.
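A hypothetical caller of the new method, to show the intended shape (`store` stands for any object exposing it; the filtering helper is illustrative, not Synapse API):

from typing import FrozenSet, List

async def drop_banned_rooms(store, user_id: str, room_ids: List[str]) -> List[str]:
    banned: FrozenSet[str] = await store.get_rooms_user_currently_banned_from(user_id)
    # frozenset membership tests are O(1), so this scales with room_ids only.
    return [room_id for room_id in room_ids if room_id not in banned]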
@@ -1354,6 +1375,7 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
             keyvalues={"user_id": user_id, "room_id": room_id},
             updatevalues={"forgotten": 1},
         )
+        # Handle updating the `sliding_sync_membership_snapshots` table
         self.db_pool.simple_update_txn(
             txn,
             table="sliding_sync_membership_snapshots",
@@ -1499,6 +1521,57 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
             "get_sliding_sync_room_for_user", get_sliding_sync_room_for_user_txn
         )

+    async def get_sliding_sync_room_for_user_batch(
+        self, user_id: str, room_ids: StrCollection
+    ) -> Dict[str, RoomsForUserSlidingSync]:
+        """Get the sliding sync room entry for the given user and rooms."""
+
+        if not room_ids:
+            return {}
+
+        def get_sliding_sync_room_for_user_batch_txn(
+            txn: LoggingTransaction,
+        ) -> Dict[str, RoomsForUserSlidingSync]:
+            clause, args = make_in_list_sql_clause(
+                self.database_engine, "m.room_id", room_ids
+            )
+            sql = f"""
+                SELECT m.room_id, m.sender, m.membership, m.membership_event_id,
+                    r.room_version,
+                    m.event_instance_name, m.event_stream_ordering,
+                    m.has_known_state,
+                    COALESCE(j.room_type, m.room_type),
+                    COALESCE(j.is_encrypted, m.is_encrypted)
+                FROM sliding_sync_membership_snapshots AS m
+                INNER JOIN rooms AS r USING (room_id)
+                LEFT JOIN sliding_sync_joined_rooms AS j ON (j.room_id = m.room_id AND m.membership = 'join')
+                WHERE m.forgotten = 0
+                    AND {clause}
+                    AND user_id = ?
+            """
+            args.append(user_id)
+            txn.execute(sql, args)
+
+            return {
+                row[0]: RoomsForUserSlidingSync(
+                    room_id=row[0],
+                    sender=row[1],
+                    membership=row[2],
+                    event_id=row[3],
+                    room_version_id=row[4],
+                    event_pos=PersistedEventPosition(row[5], row[6]),
+                    has_known_state=bool(row[7]),
+                    room_type=row[8],
+                    is_encrypted=row[9],
+                )
+                for row in txn
+            }
+
+        return await self.db_pool.runInteraction(
+            "get_sliding_sync_room_for_user_batch",
+            get_sliding_sync_room_for_user_batch_txn,
+        )
+

 class RoomMemberBackgroundUpdateStore(SQLBaseStore):
     def __init__(
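The batch variant replaces N per-room lookups with one query built around an `IN (...)` clause. Roughly what `make_in_list_sql_clause` produces on the SQLite engine (a simplified sketch; the real helper also emits an array-based `column = ANY(?)` form on PostgreSQL):

from typing import Iterable, List, Tuple

def make_in_list_sql_clause_sketch(column: str, values: Iterable[str]) -> Tuple[str, List[str]]:
    args = list(values)
    placeholders = ", ".join("?" for _ in args)
    return f"{column} IN ({placeholders})", args

clause, args = make_in_list_sql_clause_sketch("m.room_id", ["!a:hs", "!b:hs"])
assert clause == "m.room_id IN (?, ?)"
args.append("@user:hs")  # the trailing `user_id = ?` parameter, as in the hunk above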
@@ -386,8 +386,8 @@ class SlidingSyncStore(SQLBaseStore):
         required_state_map: Dict[int, Dict[str, Set[str]]] = {}
         for row in rows:
             state = required_state_map[row[0]] = {}
-            for event_type, state_keys in db_to_json(row[1]):
-                state[event_type] = set(state_keys)
+            for event_type, state_key in db_to_json(row[1]):
+                state.setdefault(event_type, set()).add(state_key)

         # Get all the room configs, looking up the required state from the map
         # above.
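Assuming the stored JSON is a flat list of `(event_type, state_key)` pairs, as the fixed loop implies, the old loop went wrong in two ways: unpacking a single state key as `state_keys` meant `set(state_keys)` exploded a string into characters, and a repeated event type overwrote the previous entry instead of accumulating. A demonstration:

pairs = [("m.room.member", "@alice:hs"), ("m.room.member", "@bob:hs")]

# Old loop shape: set("@bob:hs") is a set of characters, and the second
# pair clobbers the first.
state_old = {}
for event_type, state_keys in pairs:
    state_old[event_type] = set(state_keys)
assert "@" in state_old["m.room.member"]  # characters, not state keys

# Fixed loop: one pair at a time, accumulated per event type.
state_new = {}
for event_type, state_key in pairs:
    state_new.setdefault(event_type, set()).add(state_key)
assert state_new == {"m.room.member": {"@alice:hs", "@bob:hs"}}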
@@ -751,6 +751,48 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
             if self._events_stream_cache.has_entity_changed(room_id, from_id)
         }

+    async def get_rooms_that_have_updates_since_sliding_sync_table(
+        self,
+        room_ids: StrCollection,
+        from_key: RoomStreamToken,
+    ) -> StrCollection:
+        """Return the rooms that probably have had updates since the given
+        token (changes that are > `from_key`)."""
+        # If the stream change cache is valid for the stream token, we can just
+        # use the result of that.
+        if from_key.stream >= self._events_stream_cache.get_earliest_known_position():
+            return self._events_stream_cache.get_entities_changed(
+                room_ids, from_key.stream
+            )
+
+        def get_rooms_that_have_updates_since_sliding_sync_table_txn(
+            txn: LoggingTransaction,
+        ) -> StrCollection:
+            sql = """
+                SELECT room_id
+                FROM sliding_sync_joined_rooms
+                WHERE {clause}
+                    AND event_stream_ordering > ?
+            """

+            results: Set[str] = set()
+            for batch in batch_iter(room_ids, 1000):
+                clause, args = make_in_list_sql_clause(
+                    self.database_engine, "room_id", batch
+                )
+
+                args.append(from_key.stream)
+                txn.execute(sql.format(clause=clause), args)
+
+                results.update(row[0] for row in txn)
+
+            return results
+
+        return await self.db_pool.runInteraction(
+            "get_rooms_that_have_updates_since_sliding_sync_table",
+            get_rooms_that_have_updates_since_sliding_sync_table_txn,
+        )
+
     async def paginate_room_events_by_stream_ordering(
         self,
         *,
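Two details of the new method are worth noting: it answers from the in-memory stream change cache whenever the token is recent enough for the cache to be authoritative, and on the database fallback it chunks `room_ids` with `batch_iter(..., 1000)` to keep each `IN` clause bounded. A minimal equivalent of `batch_iter` (the real one lives in `synapse.util.iterutils`):

from itertools import islice
from typing import Iterable, Iterator, Tuple, TypeVar

T = TypeVar("T")

def batch_iter_sketch(iterable: Iterable[T], size: int) -> Iterator[Tuple[T, ...]]:
    # Yield fixed-size tuples until the input runs out.
    it = iter(iterable)
    while batch := tuple(islice(it, size)):
        yield batch

assert list(batch_iter_sketch(range(5), 2)) == [(0, 1), (2, 3), (4,)]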
@@ -153,6 +153,8 @@ Changes in SCHEMA_VERSION = 87

 Changes in SCHEMA_VERSION = 88
     - MSC4140: Add `delayed_events` table that keeps track of events that are to
       be posted in response to a resettable timeout or an on-demand action.
+    - Add background update to fix data integrity issue in the
+      `sliding_sync_membership_snapshots` -> `forgotten` column
 """

@@ -1,3 +1,16 @@
+--
+-- This file is licensed under the Affero General Public License (AGPL) version 3.
+--
+-- Copyright (C) 2024 New Vector, Ltd
+--
+-- This program is free software: you can redistribute it and/or modify
+-- it under the terms of the GNU Affero General Public License as
+-- published by the Free Software Foundation, either version 3 of the
+-- License, or (at your option) any later version.
+--
+-- See the GNU Affero General Public License for more details:
+-- <https://www.gnu.org/licenses/agpl-3.0.html>.
+
 CREATE TABLE delayed_events (
     delay_id TEXT NOT NULL,
     user_localpart TEXT NOT NULL,
@@ -0,0 +1,21 @@
+--
+-- This file is licensed under the Affero General Public License (AGPL) version 3.
+--
+-- Copyright (C) 2024 New Vector, Ltd
+--
+-- This program is free software: you can redistribute it and/or modify
+-- it under the terms of the GNU Affero General Public License as
+-- published by the Free Software Foundation, either version 3 of the
+-- License, or (at your option) any later version.
+--
+-- See the GNU Affero General Public License for more details:
+-- <https://www.gnu.org/licenses/agpl-3.0.html>.
+
+-- Add a background update to update the `sliding_sync_membership_snapshots` ->
+-- `forgotten` column to be in sync with the `room_memberships` table.
+--
+-- For any room that someone has forgotten and subsequently re-joined or had any new
+-- membership on, we need to go and update the column to match the `room_memberships`
+-- table as it has fallen out of sync.
+INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
+  (8802, 'sliding_sync_membership_snapshots_fix_forgotten_column_bg_update', '{}');
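The seeded row only takes effect because its `update_name` string matches the constant the storage layer registers a Python handler under; on startup the updater scans `background_updates` in `ordering` order and runs each handler until it reports completion, ending with the `_end_background_update` call shown at the top of this diff. A toy model of that lookup, for orientation only (not Synapse's actual updater):

UPDATE_NAME = "sliding_sync_membership_snapshots_fix_forgotten_column_bg_update"

def pending_updates(rows):
    # Mirror of the updater's scan: lowest `ordering` first.
    return [r["update_name"] for r in sorted(rows, key=lambda r: r["ordering"])]

rows = [{"ordering": 8802, "update_name": UPDATE_NAME, "progress_json": "{}"}]
assert pending_updates(rows) == [UPDATE_NAME]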
@@ -48,6 +48,7 @@ class FilteredPushRules:
         msc3381_polls_enabled: bool,
         msc3664_enabled: bool,
         msc4028_push_encrypted_events: bool,
+        msc4210_enabled: bool,
     ): ...
     def rules(self) -> Collection[Tuple[PushRule, bool]]: ...

@@ -65,6 +66,7 @@ class PushRuleEvaluator:
         related_event_match_enabled: bool,
         room_version_feature_flags: Tuple[str, ...],
         msc3931_enabled: bool,
+        msc4210_enabled: bool,
     ): ...
     def run(
         self,
@@ -158,6 +158,7 @@ class SlidingSyncResult:
             name changes to mark the room as unread and bump it to the top. For
             encrypted rooms, we just have to consider any activity as a bump because we
             can't see the content and the client has to figure it out for themselves.
+            This may not be included if there hasn't been a change.
         joined_count: The number of users with membership of join, including the client's
             own user ID. (same as sync `v2 m.joined_member_count`)
         invited_count: The number of users with membership of invite. (same as sync v2
@@ -193,7 +194,7 @@ class SlidingSyncResult:
     limited: Optional[bool]
     # Only optional because it won't be included for invite/knock rooms with `stripped_state`
     num_live: Optional[int]
-    bump_stamp: int
+    bump_stamp: Optional[int]
     joined_count: Optional[int]
     invited_count: Optional[int]
     notification_count: int
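Since `bump_stamp` can now be `None`, consumers that order rooms by it need an explicit fallback; a hypothetical client-side sketch (not Synapse API):

from typing import Optional

def bump_sort_key(bump_stamp: Optional[int]) -> int:
    # Treat a missing bump_stamp as "no recent activity".
    return bump_stamp if bump_stamp is not None else 0

assert sorted([3, None, 7], key=bump_sort_key, reverse=True) == [7, 3, None]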
@@ -616,6 +616,13 @@ class StateFilter:

         return False

+    def __bool__(self) -> bool:
+        """Returns true if this state filter will match any state, or false if
+        this is the empty filter"""
+        if self.include_others:
+            return True
+        return bool(self.types)
+

 _ALL_STATE_FILTER = StateFilter(types=immutabledict(), include_others=True)
 _ALL_NON_MEMBER_STATE_FILTER = StateFilter(
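With `__bool__` defined, `if state_filter:` reads as "does this filter match anything at all?". The rule in isolation, on a standalone stand-in class (not Synapse's actual `StateFilter`):

from dataclasses import dataclass, field
from typing import Dict, FrozenSet, Optional

@dataclass(frozen=True)
class StateFilterSketch:
    types: Dict[str, Optional[FrozenSet[str]]] = field(default_factory=dict)
    include_others: bool = False

    def __bool__(self) -> bool:
        # Truthy iff the filter can match at least some state.
        if self.include_others:
            return True
        return bool(self.types)

assert bool(StateFilterSketch(include_others=True))            # the "all" filter
assert not StateFilterSketch()                                 # the empty filter
assert bool(StateFilterSketch(types={"m.room.member": None}))  # member-only filter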
Some files were not shown because too many files have changed in this diff.