Mirror of https://github.com/element-hq/synapse.git (synced 2025-12-09 01:30:18 +00:00)

Compare commits: erikj/sync...s7evink/fi (123 commits)
Commit SHA1s in this range, in the order listed (the compare view's author and date columns were lost in extraction):

3306a02e70, 61d24a34c0, 92b38c1afd, a8e313836d, 7c9684b5dc, f1e8d2d15a, 10428046e4, 6eb98a4f1c,
950ba844f7, 8b8d74d12f, 261e746281, 993644ded0, a5d25bb623, f162c92f2a, 9ce489be5e, fae75b0376,
f77bfbfa30, 1892ba5f67, a51daffba5, b05b2e14bb, a308d99f30, a9fc1fd112, 6a11bdf01d, 8fea190a1f,
81c19c4cd2, aaa3c36420, 3e7eb45eb1, bab37dfc6f, 9f9ec92526, ff7b27013e, e1f5f0fbb8, 8c9f2743bc,
b076941a36, 8bbe65f3c0, b7faf01f26, 4f7f6ee9a0, a640b318df, 34b7586446, 70b0e38603, f31360e34b,
3ad38b644d, f7bca60920, 9a3cb5a245, 208630bbb0, 44ac2aa3b6, 11db575218, 30e9f6e469, eb62d12063,
ceb3686dcd, 1dfa59b238, bef6568537, 244a255065, 932cb0a928, 2dad718265, 5d8446298c, d845e939a9,
23727869c7, c270355349, e3db7b2d81, 2b620e0a15, 39731bb205, 1d6186265a, 46de0ee16b, b221f0b84b,
b2c55bd049, ed583d9c81, f76dc9923c, 7e997fb8b1, dbc2290cbe, 2f6b86e79a, 37f9876ccf, 8b449a8ce6,
53db8a914e, e4868f8a1e, dcad81082c, c56b070e6f, be726724a8, 62ae56a4ac, 808dab0699, 34306be5aa,
be4a16ff44, 568051c0f0, ebbabfe782, 69ac4b6a6e, 729026e604, bdf37ad4c4, 8bbc98e66d, 4b9f4c2abf,
e8ee784c75, 48c1307911, d225b6b3eb, 1daae43f3a, a9ee832e48, 13a99fba1b, e3a0681ecf, de05a64246,
d221512498, ed0face8ad, 73529d3732, 1648337775, dc8ddc6472, d3f9afd8d9, 43c865f7c9, 71d83477cb,
6a01af59e1, f583f1dce4, a574de0062, 3fee32ed6b, 5884f0a956, 79924aebef, 574aa53126, 83894180b2,
429ecb7564, 9e1acea051, 899d33f2ba, df11af14db, d88ba45db9, 14f2b1eb00, 2af729a193, 0de0689ae8,
4c44020838, 4f6194492a, ab62aa09da
.github/workflows/docker.yml (vendored, 2 changed lines)

@@ -30,7 +30,7 @@ jobs:
        run: docker buildx inspect

      - name: Install Cosign
-       uses: sigstore/cosign-installer@v3.5.0
+       uses: sigstore/cosign-installer@v3.6.0

      - name: Checkout repository
        uses: actions/checkout@v4
.github/workflows/tests.yml (vendored, 4 changed lines)

@@ -305,7 +305,7 @@ jobs:
      - lint-readme
    runs-on: ubuntu-latest
    steps:
-     - uses: matrix-org/done-action@v2
+     - uses: matrix-org/done-action@v3
        with:
          needs: ${{ toJSON(needs) }}

@@ -737,7 +737,7 @@ jobs:
      - linting-done
    runs-on: ubuntu-latest
    steps:
-     - uses: matrix-org/done-action@v2
+     - uses: matrix-org/done-action@v3
        with:
          needs: ${{ toJSON(needs) }}
CHANGES.md (216 changed lines)

@@ -1,3 +1,219 @@

# Synapse 1.114.0rc1 (2024-08-20)

### Features

- Add a flag to `/versions`, `org.matrix.simplified_msc3575`, to indicate whether experimental sliding sync support has been enabled. ([\#17571](https://github.com/element-hq/synapse/issues/17571))
- Handle changes in `timeline_limit` in experimental sliding sync. ([\#17579](https://github.com/element-hq/synapse/issues/17579))
- Correctly track read receipts that should be sent down in experimental sliding sync. ([\#17575](https://github.com/element-hq/synapse/issues/17575), [\#17589](https://github.com/element-hq/synapse/issues/17589), [\#17592](https://github.com/element-hq/synapse/issues/17592))

### Bugfixes

- Start handlers for new media endpoints when media resource configured. ([\#17483](https://github.com/element-hq/synapse/issues/17483))
- Fix timeline ordering (using `stream_ordering` instead of topological ordering) in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17510](https://github.com/element-hq/synapse/issues/17510))
- Fix experimental sliding sync implementation to remember any updates in rooms that were not sent down immediately. ([\#17535](https://github.com/element-hq/synapse/issues/17535))
- Better exclude partially stated rooms if we must await full state in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17538](https://github.com/element-hq/synapse/issues/17538))
- Handle lower-case http headers in `_Mulitpart_Parser_Protocol`. ([\#17545](https://github.com/element-hq/synapse/issues/17545))
- Fix fetching federation signing keys from servers that omit `old_verify_keys`. Contributed by @tulir @ Beeper. ([\#17568](https://github.com/element-hq/synapse/issues/17568))
- Fix bug where we would respond with an error when a remote server asked for media that had a length of 0, using the new multipart federation media endpoint. ([\#17570](https://github.com/element-hq/synapse/issues/17570))

### Improved Documentation

- Clarify default behaviour of the [`auto_accept_invites.worker_to_run_on`](https://element-hq.github.io/synapse/develop/usage/configuration/config_documentation.html#auto-accept-invites) option. ([\#17515](https://github.com/element-hq/synapse/issues/17515))
- Improve docstrings for profile methods. ([\#17559](https://github.com/element-hq/synapse/issues/17559))

### Internal Changes

- Add more tracing to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17514](https://github.com/element-hq/synapse/issues/17514))
- Fixup comment in sliding sync implementation. ([\#17531](https://github.com/element-hq/synapse/issues/17531))
- Replace override of deprecated method `HTTPAdapter.get_connection` with `get_connection_with_tls_context`. ([\#17536](https://github.com/element-hq/synapse/issues/17536))
- Fix performance of device lists in `/key/changes` and sliding sync. ([\#17537](https://github.com/element-hq/synapse/issues/17537), [\#17548](https://github.com/element-hq/synapse/issues/17548))
- Bump setuptools from 67.6.0 to 72.1.0. ([\#17542](https://github.com/element-hq/synapse/issues/17542))
- Add a utility function for generating random event IDs. ([\#17557](https://github.com/element-hq/synapse/issues/17557))
- Speed up responding to media requests. ([\#17558](https://github.com/element-hq/synapse/issues/17558), [\#17561](https://github.com/element-hq/synapse/issues/17561), [\#17564](https://github.com/element-hq/synapse/issues/17564), [\#17566](https://github.com/element-hq/synapse/issues/17566), [\#17567](https://github.com/element-hq/synapse/issues/17567), [\#17569](https://github.com/element-hq/synapse/issues/17569))
- Test github token before running release script steps. ([\#17562](https://github.com/element-hq/synapse/issues/17562))
- Reduce log spam of multipart files. ([\#17563](https://github.com/element-hq/synapse/issues/17563))
- Refactor per-connection state in experimental sliding sync handler. ([\#17574](https://github.com/element-hq/synapse/issues/17574))
- Add histogram metrics for sliding sync processing time. ([\#17593](https://github.com/element-hq/synapse/issues/17593))

### Updates to locked dependencies

* Bump bytes from 1.6.1 to 1.7.1. ([\#17526](https://github.com/element-hq/synapse/issues/17526))
* Bump lxml from 5.2.2 to 5.3.0. ([\#17550](https://github.com/element-hq/synapse/issues/17550))
* Bump phonenumbers from 8.13.42 to 8.13.43. ([\#17551](https://github.com/element-hq/synapse/issues/17551))
* Bump regex from 1.10.5 to 1.10.6. ([\#17527](https://github.com/element-hq/synapse/issues/17527))
* Bump sentry-sdk from 2.10.0 to 2.12.0. ([\#17553](https://github.com/element-hq/synapse/issues/17553))
* Bump serde from 1.0.204 to 1.0.206. ([\#17556](https://github.com/element-hq/synapse/issues/17556))
* Bump serde_json from 1.0.122 to 1.0.124. ([\#17555](https://github.com/element-hq/synapse/issues/17555))
* Bump sigstore/cosign-installer from 3.5.0 to 3.6.0. ([\#17549](https://github.com/element-hq/synapse/issues/17549))
* Bump types-pyyaml from 6.0.12.20240311 to 6.0.12.20240808. ([\#17552](https://github.com/element-hq/synapse/issues/17552))
* Bump types-requests from 2.31.0.20240406 to 2.32.0.20240712. ([\#17524](https://github.com/element-hq/synapse/issues/17524))

# Synapse 1.113.0 (2024-08-13)

No significant changes since 1.113.0rc1.

# Synapse 1.113.0rc1 (2024-08-06)

### Features

- Track which rooms have been sent to clients in the experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17447](https://github.com/element-hq/synapse/issues/17447))
- Add Account Data extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17477](https://github.com/element-hq/synapse/issues/17477))
- Add receipts extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17489](https://github.com/element-hq/synapse/issues/17489))
- Add typing notification extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17505](https://github.com/element-hq/synapse/issues/17505))

### Bugfixes

- Update experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint to handle invite/knock rooms when filtering. ([\#17450](https://github.com/element-hq/synapse/issues/17450))
- Fix a bug introduced in v1.110.0 which caused `/keys/query` to return incomplete results, leading to high network activity and CPU usage on Matrix clients. ([\#17499](https://github.com/element-hq/synapse/issues/17499))

### Improved Documentation

- Update the [`allowed_local_3pids`](https://element-hq.github.io/synapse/v1.112/usage/configuration/config_documentation.html#allowed_local_3pids) config option's msisdn address to a working example. ([\#17476](https://github.com/element-hq/synapse/issues/17476))

### Internal Changes

- Change sliding sync to use their own token format in preparation for storing per-connection state. ([\#17452](https://github.com/element-hq/synapse/issues/17452))
- Ensure we don't send down negative `bump_stamp` in experimental sliding sync endpoint. ([\#17478](https://github.com/element-hq/synapse/issues/17478))
- Do not send down empty room entries down experimental sliding sync endpoint. ([\#17479](https://github.com/element-hq/synapse/issues/17479))
- Refactor Sliding Sync tests to better utilize the `SlidingSyncBase`. ([\#17481](https://github.com/element-hq/synapse/issues/17481), [\#17482](https://github.com/element-hq/synapse/issues/17482))
- Add some opentracing tags and logging to the experimental sliding sync implementation. ([\#17501](https://github.com/element-hq/synapse/issues/17501))
- Split and move Sliding Sync tests so we have some more sane test file sizes. ([\#17504](https://github.com/element-hq/synapse/issues/17504))
- Update the `limited` field description in the Sliding Sync response to accurately describe what it actually represents. ([\#17507](https://github.com/element-hq/synapse/issues/17507))
- Easier to understand `timeline` assertions in Sliding Sync tests. ([\#17511](https://github.com/element-hq/synapse/issues/17511))
- Reset the sliding sync connection if we don't recognize the per-connection state position. ([\#17529](https://github.com/element-hq/synapse/issues/17529))

### Updates to locked dependencies

* Bump bcrypt from 4.1.3 to 4.2.0. ([\#17495](https://github.com/element-hq/synapse/issues/17495))
* Bump black from 24.4.2 to 24.8.0. ([\#17522](https://github.com/element-hq/synapse/issues/17522))
* Bump phonenumbers from 8.13.39 to 8.13.42. ([\#17521](https://github.com/element-hq/synapse/issues/17521))
* Bump ruff from 0.5.4 to 0.5.5. ([\#17494](https://github.com/element-hq/synapse/issues/17494))
* Bump serde_json from 1.0.120 to 1.0.121. ([\#17493](https://github.com/element-hq/synapse/issues/17493))
* Bump serde_json from 1.0.121 to 1.0.122. ([\#17525](https://github.com/element-hq/synapse/issues/17525))
* Bump towncrier from 23.11.0 to 24.7.1. ([\#17523](https://github.com/element-hq/synapse/issues/17523))
* Bump types-pyopenssl from 24.1.0.20240425 to 24.1.0.20240722. ([\#17496](https://github.com/element-hq/synapse/issues/17496))
* Bump types-setuptools from 70.1.0.20240627 to 71.1.0.20240726. ([\#17497](https://github.com/element-hq/synapse/issues/17497))

# Synapse 1.112.0 (2024-07-30)

This security release is to update our locked dependency on Twisted to 24.7.0rc1, which includes a security fix for [CVE-2024-41671 / GHSA-c8m8-j448-xjx7: Disordered HTTP pipeline response in twisted.web, again](https://github.com/twisted/twisted/security/advisories/GHSA-c8m8-j448-xjx7).

Note that this security fix is also available as **Synapse 1.111.1**, which does not include the rest of the changes in Synapse 1.112.0.

This issue means that, if multiple HTTP requests are pipelined in the same TCP connection, Synapse can send responses to the wrong HTTP request.
If a reverse proxy was configured to use HTTP pipelining, this could result in responses being sent to the wrong user, severely harming confidentiality.

With that said, despite being a high severity issue, **we consider it unlikely that Synapse installations will be affected**.
The use of HTTP pipelining in this fashion would cause worse performance for clients (request-response latencies would be increased as users' responses would be artificially blocked behind other users' slow requests). Further, Nginx and Haproxy, two common reverse proxies, do not appear to support configuring their upstreams to use HTTP pipelining and thus would not be affected. For both of these reasons, we consider it unlikely that a Synapse deployment would be set up in such a configuration.

Despite that, we cannot rule out that some installations may exist with this unusual setup and so we are releasing this security update today.

**pip users:** Note that by default, upgrading Synapse using pip will not automatically upgrade Twisted. **Please manually install the new version of Twisted** using `pip install Twisted==24.7.0rc1`. Note also that even the `--upgrade-strategy=eager` flag to `pip install -U matrix-synapse` will not upgrade Twisted to a patched version because it is only a release candidate at this time.

### Internal Changes

- Upgrade locked dependency on Twisted to 24.7.0rc1. ([\#17502](https://github.com/element-hq/synapse/issues/17502))

# Synapse 1.112.0rc1 (2024-07-23)

Please note that this release candidate does not include the security dependency update
included in version 1.111.1 as this version was released before 1.111.1.
The same security fix can be found in the full release of 1.112.0.

### Features

- Add to-device extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17416](https://github.com/element-hq/synapse/issues/17416))
- Populate `name`/`avatar` fields in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17418](https://github.com/element-hq/synapse/issues/17418))
- Populate `heroes` and room summary fields (`joined_count`, `invited_count`) in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17419](https://github.com/element-hq/synapse/issues/17419))
- Populate `is_dm` room field in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17429](https://github.com/element-hq/synapse/issues/17429))
- Add room subscriptions to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17432](https://github.com/element-hq/synapse/issues/17432))
- Prepare for authenticated media freeze. ([\#17433](https://github.com/element-hq/synapse/issues/17433))
- Add E2EE extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17454](https://github.com/element-hq/synapse/issues/17454))

### Bugfixes

- Add configurable option to always include offline users in presence sync results. Contributed by @Michael-Hollister. ([\#17231](https://github.com/element-hq/synapse/issues/17231))
- Fix bug in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint when using room type filters and the user has one or more remote invites. ([\#17434](https://github.com/element-hq/synapse/issues/17434))
- Order `heroes` by `stream_ordering` as the Matrix specification states (applies to `/sync`). ([\#17435](https://github.com/element-hq/synapse/issues/17435))
- Fix rare bug where `/sync` would break for a user when using workers with multiple stream writers. ([\#17438](https://github.com/element-hq/synapse/issues/17438))

### Improved Documentation

- Update the readme image to have a white background, so that it is readable in dark mode. ([\#17387](https://github.com/element-hq/synapse/issues/17387))
- Add Red Hat Enterprise Linux and Rocky Linux 8 and 9 installation instructions. ([\#17423](https://github.com/element-hq/synapse/issues/17423))
- Improve documentation for the [`default_power_level_content_override`](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#default_power_level_content_override) config option. ([\#17451](https://github.com/element-hq/synapse/issues/17451))

### Internal Changes

- Make sure we always use the right logic for enabling the media repo. ([\#17424](https://github.com/element-hq/synapse/issues/17424))
- Fix argument documentation for method `RateLimiter.record_action`. ([\#17426](https://github.com/element-hq/synapse/issues/17426))
- Reduce volume of 'Waiting for current token' logs, which were introduced in v1.109.0. ([\#17428](https://github.com/element-hq/synapse/issues/17428))
- Limit concurrent remote downloads to 6 per IP address, and decrement remote downloads without a content-length from the ratelimiter after the download is complete. ([\#17439](https://github.com/element-hq/synapse/issues/17439))
- Remove unnecessary call to resume producing in fake channel. ([\#17449](https://github.com/element-hq/synapse/issues/17449))
- Update experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint to bump room when it is created. ([\#17453](https://github.com/element-hq/synapse/issues/17453))
- Speed up generating sliding sync responses. ([\#17458](https://github.com/element-hq/synapse/issues/17458))
- Add cache to `get_rooms_for_local_user_where_membership_is` to speed up sliding sync. ([\#17460](https://github.com/element-hq/synapse/issues/17460))
- Speed up fetching room keys from backup. ([\#17461](https://github.com/element-hq/synapse/issues/17461))
- Speed up sorting of the room list in sliding sync. ([\#17468](https://github.com/element-hq/synapse/issues/17468))
- Implement handling of `$ME` as a state key in sliding sync. ([\#17469](https://github.com/element-hq/synapse/issues/17469))

### Updates to locked dependencies

* Bump bytes from 1.6.0 to 1.6.1. ([\#17441](https://github.com/element-hq/synapse/issues/17441))
* Bump hiredis from 2.3.2 to 3.0.0. ([\#17464](https://github.com/element-hq/synapse/issues/17464))
* Bump jsonschema from 4.22.0 to 4.23.0. ([\#17444](https://github.com/element-hq/synapse/issues/17444))
* Bump matrix-org/done-action from 2 to 3. ([\#17440](https://github.com/element-hq/synapse/issues/17440))
* Bump mypy from 1.9.0 to 1.10.1. ([\#17445](https://github.com/element-hq/synapse/issues/17445))
* Bump pyopenssl from 24.1.0 to 24.2.1. ([\#17465](https://github.com/element-hq/synapse/issues/17465))
* Bump ruff from 0.5.0 to 0.5.4. ([\#17466](https://github.com/element-hq/synapse/issues/17466))
* Bump sentry-sdk from 2.6.0 to 2.8.0. ([\#17456](https://github.com/element-hq/synapse/issues/17456))
* Bump sentry-sdk from 2.8.0 to 2.10.0. ([\#17467](https://github.com/element-hq/synapse/issues/17467))
* Bump setuptools from 67.6.0 to 70.0.0. ([\#17448](https://github.com/element-hq/synapse/issues/17448))
* Bump twine from 5.1.0 to 5.1.1. ([\#17443](https://github.com/element-hq/synapse/issues/17443))
* Bump types-jsonschema from 4.22.0.20240610 to 4.23.0.20240712. ([\#17446](https://github.com/element-hq/synapse/issues/17446))
* Bump ulid from 1.1.2 to 1.1.3. ([\#17442](https://github.com/element-hq/synapse/issues/17442))
* Bump zipp from 3.15.0 to 3.19.1. ([\#17427](https://github.com/element-hq/synapse/issues/17427))

# Synapse 1.111.1 (2024-07-30)

This security release is to update our locked dependency on Twisted to 24.7.0rc1, which includes a security fix for [CVE-2024-41671 / GHSA-c8m8-j448-xjx7: Disordered HTTP pipeline response in twisted.web, again](https://github.com/twisted/twisted/security/advisories/GHSA-c8m8-j448-xjx7).

This issue means that, if multiple HTTP requests are pipelined in the same TCP connection, Synapse can send responses to the wrong HTTP request.
If a reverse proxy was configured to use HTTP pipelining, this could result in responses being sent to the wrong user, severely harming confidentiality.

With that said, despite being a high severity issue, **we consider it unlikely that Synapse installations will be affected**.
The use of HTTP pipelining in this fashion would cause worse performance for clients (request-response latencies would be increased as users' responses would be artificially blocked behind other users' slow requests). Further, Nginx and Haproxy, two common reverse proxies, do not appear to support configuring their upstreams to use HTTP pipelining and thus would not be affected. For both of these reasons, we consider it unlikely that a Synapse deployment would be set up in such a configuration.

Despite that, we cannot rule out that some installations may exist with this unusual setup and so we are releasing this security update today.

**pip users:** Note that by default, upgrading Synapse using pip will not automatically upgrade Twisted. **Please manually install the new version of Twisted** using `pip install Twisted==24.7.0rc1`. Note also that even the `--upgrade-strategy=eager` flag to `pip install -U matrix-synapse` will not upgrade Twisted to a patched version because it is only a release candidate at this time.

### Internal Changes

- Upgrade locked dependency on Twisted to 24.7.0rc1. ([\#17502](https://github.com/element-hq/synapse/issues/17502))

# Synapse 1.111.0 (2024-07-16)

No significant changes since 1.111.0rc2.

# Synapse 1.111.0rc2 (2024-07-10)

### Bugfixes
Cargo.lock (generated, 25 changed lines)

@@ -67,9 +67,9 @@ checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c"

[[package]]
name = "bytes"
-version = "1.6.0"
+version = "1.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "514de17de45fdb8dc022b1a7975556c53c86f9f0aa5f534b98977b171857c2c9"
+checksum = "8318a53db07bb3f8dca91a600466bdb3f2eaadeedfdbcf02e1accbad9271ba50"

[[package]]
name = "cfg-if"

@@ -444,9 +444,9 @@ dependencies = [

[[package]]
name = "regex"
-version = "1.10.5"
+version = "1.10.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b91213439dad192326a0d7c6ee3955910425f441d7038e0d6933b0aec5c4517f"
+checksum = "4219d74c6b67a3654a9fbebc4b419e22126d13d2f3c4a07ee0cb61ff79a79619"
dependencies = [
 "aho-corasick",
 "memchr",

@@ -485,18 +485,18 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"

[[package]]
name = "serde"
-version = "1.0.204"
+version = "1.0.206"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bc76f558e0cbb2a839d37354c575f1dc3fdc6546b5be373ba43d95f231bf7c12"
+checksum = "5b3e4cd94123dd520a128bcd11e34d9e9e423e7e3e50425cb1b4b1e3549d0284"
dependencies = [
 "serde_derive",
]

[[package]]
name = "serde_derive"
-version = "1.0.204"
+version = "1.0.206"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e0cd7e117be63d3c3678776753929474f3b04a43a080c744d6b0ae2a8c28e222"
+checksum = "fabfb6138d2383ea8208cf98ccf69cdfb1aff4088460681d84189aa259762f97"
dependencies = [
 "proc-macro2",
 "quote",

@@ -505,11 +505,12 @@ dependencies = [

[[package]]
name = "serde_json"
-version = "1.0.120"
+version = "1.0.124"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4e0d21c9a8cae1235ad58a00c11cb40d4b1e5c784f1ef2c537876ed6ffd8b7c5"
+checksum = "66ad62847a56b3dba58cc891acd13884b9c61138d330c0d7b6181713d4fce38d"
dependencies = [
 "itoa",
+ "memchr",
 "ryu",
 "serde",
]

@@ -597,9 +598,9 @@ checksum = "42ff0bf0c66b8238c6f3b578df37d0b7848e55df8577b3f74f92a69acceeb825"

[[package]]
name = "ulid"
-version = "1.1.2"
+version = "1.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "34778c17965aa2a08913b57e1f34db9b4a63f5de31768b55bf20d2795f921259"
+checksum = "04f903f293d11f31c0c29e4148f6dc0d033a7f80cebc0282bea147611667d289"
dependencies = [
 "getrandom",
 "rand",
Deleted single-line changelog files (file names lost in extraction):

@@ -1 +0,0 @@
-Update the readme image to have a white background, so that it is readable in dark mode.
@@ -1 +0,0 @@
-Add to-device extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.
@@ -1 +0,0 @@
-Populate `name`/`avatar` fields in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.
@@ -1 +0,0 @@
-Populate `heroes` and room summary fields (`joined_count`, `invited_count`) in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.
@@ -1 +0,0 @@
-Add Red Hat Enterprise Linux and Rocky Linux 8 and 9 installation instructions.
@@ -1 +0,0 @@
-Fix documentation on `RateLimiter#record_action`.
@@ -1 +0,0 @@
-Populate `is_dm` room field in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.
@@ -1 +0,0 @@
-Fix bug in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint when using room type filters and the user has one or more remote invites.
@@ -1 +0,0 @@
-Fix rare bug where `/sync` would break for a user when using workers with multiple stream writers.
changelog.d/17543.bugfix (new file, 1 line)
@@ -0,0 +1 @@
+Fix authenticated media responses using a wrong limit when following redirects over federation.

changelog.d/17595.misc (new file, 1 line)
@@ -0,0 +1 @@
+Refactor sliding sync class into multiple files.
debian/changelog (vendored, 42 changed lines)

@@ -1,3 +1,45 @@
matrix-synapse-py3 (1.114.0~rc1) stable; urgency=medium

  * New synapse release 1.114.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 20 Aug 2024 12:55:28 +0000

matrix-synapse-py3 (1.113.0) stable; urgency=medium

  * New Synapse release 1.113.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 13 Aug 2024 14:36:56 +0100

matrix-synapse-py3 (1.113.0~rc1) stable; urgency=medium

  * New Synapse release 1.113.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 06 Aug 2024 12:23:23 +0100

matrix-synapse-py3 (1.112.0) stable; urgency=medium

  * New Synapse release 1.112.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 30 Jul 2024 17:15:48 +0100

matrix-synapse-py3 (1.112.0~rc1) stable; urgency=medium

  * New Synapse release 1.112.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 23 Jul 2024 08:58:55 -0600

matrix-synapse-py3 (1.111.1) stable; urgency=medium

  * New Synapse release 1.111.1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 30 Jul 2024 16:13:52 +0100

matrix-synapse-py3 (1.111.0) stable; urgency=medium

  * New Synapse release 1.111.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 16 Jul 2024 12:42:46 +0200

matrix-synapse-py3 (1.111.0~rc2) stable; urgency=medium

  * New synapse release 1.111.0rc2.
debian/templates (vendored, 2 changed lines)

@@ -5,7 +5,7 @@ _Description: Name of the server:
 servers via federation. This is normally the public hostname of the
 server running synapse, but can be different if you set up delegation.
 Please refer to the delegation documentation in this case:
- https://github.com/element-hq/synapse/blob/master/docs/delegate.md.
+ https://element-hq.github.io/synapse/latest/delegate.html.

Template: matrix-synapse/report-stats
Type: boolean
(development docs on event ordering; file header lost in extraction)

@@ -21,8 +21,10 @@ incrementing integer, but backfilled events start with `stream_ordering=-1` and

---

-- `/sync` returns things in the order they arrive at the server (`stream_ordering`).
-- `/messages` (and `/backfill` in the federation API) return them in the order determined by the event graph `(topological_ordering, stream_ordering)`.
+- Incremental `/sync?since=xxx` returns things in the order they arrive at the server
+  (`stream_ordering`).
+- Initial `/sync`, `/messages` (and `/backfill` in the federation API) return them in
+  the order determined by the event graph `(topological_ordering, stream_ordering)`.

The general idea is that, if you're following a room in real-time (i.e.
`/sync`), you probably want to see the messages as they arrive at your server,
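To make the two orderings concrete, here is a small standalone sketch (my illustration with made-up event values, not Synapse code) that sorts the same events the way an incremental `/sync` does versus the way `/messages` does:

```python
# Toy events carrying the two ordering keys discussed above.
events = [
    {"event_id": "$backfilled", "topological_ordering": 3, "stream_ordering": -2},
    {"event_id": "$live_1", "topological_ordering": 10, "stream_ordering": 5},
    {"event_id": "$live_2", "topological_ordering": 9, "stream_ordering": 6},
]

# Incremental /sync: order of arrival at this server.
sync_order = sorted(events, key=lambda e: e["stream_ordering"])

# Initial /sync, /messages, /backfill: order determined by the event graph.
messages_order = sorted(
    events, key=lambda e: (e["topological_ordering"], e["stream_ordering"])
)

print([e["event_id"] for e in sync_order])      # ['$backfilled', '$live_1', '$live_2']
print([e["event_id"] for e in messages_order])  # ['$backfilled', '$live_2', '$live_1']
```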
(configuration documentation; file header lost in extraction)

@@ -246,6 +246,7 @@ Example configuration:
```yaml
presence:
  enabled: false
  include_offline_users_on_sync: false
```

`enabled` can also be set to a special value of "untracked" which ignores updates

@@ -254,6 +255,10 @@ received via clients and federation, while still accepting updates from the

*The "untracked" option was added in Synapse 1.96.0.*

When clients perform an initial or `full_state` sync, presence results for offline users are
not included by default. Setting `include_offline_users_on_sync` to `true` will always include
offline users in the results. Defaults to false.

---
### `require_auth_for_profile_requests`
@@ -1863,6 +1868,18 @@ federation_rr_transactions_per_room_per_second: 40
## Media Store
Config options related to Synapse's media store.

---
### `enable_authenticated_media`

When set to true, all subsequent media uploads will be marked as authenticated, and will not be available over legacy
unauthenticated media endpoints (`/_matrix/media/(r0|v3|v1)/download` and `/_matrix/media/(r0|v3|v1)/thumbnail`) - requests for authenticated media over these endpoints will result in a 404. All media, including authenticated media, will be available over the authenticated media endpoints `_matrix/client/v1/media/download` and `_matrix/client/v1/media/thumbnail`. Media uploaded prior to setting this option to true will still be available over the legacy endpoints. Note if the setting is switched to false
after enabling, media marked as authenticated will be available over legacy endpoints. Defaults to false, but
this will change to true in a future Synapse release.

Example configuration:
```yaml
enable_authenticated_media: true
```
---
### `enable_media_repo`
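As an illustration of the authenticated endpoints named above, a minimal client-side download sketch; the homeserver URL, media origin/ID and access token are placeholders, and the use of `requests` is my assumption rather than anything from the Synapse docs:

```python
import requests

# Placeholders - substitute a real homeserver, media origin/ID and access token.
HOMESERVER = "https://matrix.example.org"
SERVER_NAME, MEDIA_ID = "example.org", "abcDEF123"
ACCESS_TOKEN = "syt_..."

# Authenticated media endpoint: requires the usual Authorization header, unlike
# the legacy /_matrix/media/(r0|v3|v1)/download endpoint described above.
resp = requests.get(
    f"{HOMESERVER}/_matrix/client/v1/media/download/{SERVER_NAME}/{MEDIA_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
with open("downloaded_media", "wb") as f:
    f.write(resp.content)
```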
@@ -2369,7 +2386,7 @@ enable_registration_without_verification: true
---
### `registrations_require_3pid`

-If this is set, users must provide all of the specified types of 3PID when registering an account.
+If this is set, users must provide all of the specified types of [3PID](https://spec.matrix.org/latest/appendices/#3pid-types) when registering an account.

Note that [`enable_registration`](#enable_registration) must also be set to allow account registration.
@@ -2394,6 +2411,9 @@ disable_msisdn_registration: true

Mandate that users are only allowed to associate certain formats of
3PIDs with accounts on this server, as specified by the `medium` and `pattern` sub-options.
`pattern` is a [Perl-like regular expression](https://docs.python.org/3/library/re.html#module-re).

More information about 3PIDs, allowed `medium` types and their `address` syntax can be found [in the Matrix spec](https://spec.matrix.org/latest/appendices/#3pid-types).

Example configuration:
```yaml

@@ -2403,7 +2423,7 @@ allowed_local_3pids:
  - medium: email
    pattern: '^[^@]+@vector\.im$'
  - medium: msisdn
-   pattern: '\+44'
+   pattern: '^44\d{10}$'
```
---
### `enable_3pid_lookup`
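A quick standalone check (my illustration) of why the msisdn example changed: assuming the 3PID address is stored as digits without a leading `+`, as the "working example" changelog entry suggests, the old pattern never matches while the new one does:

```python
import re

old_pattern = r"\+44"
new_pattern = r"^44\d{10}$"

# Assumed for illustration: msisdn addresses are phone numbers stored in
# E.164 form without the leading "+", e.g. "447700900000".
address = "447700900000"

print(bool(re.match(old_pattern, address)))  # False - the literal "+44" never matches
print(bool(re.match(new_pattern, address)))  # True  - "44" followed by exactly ten digits
```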
@@ -4134,6 +4154,38 @@ default_power_level_content_override:
  trusted_private_chat: null
  public_chat: null
```

The default power levels for each preset are:
```yaml
"m.room.name": 50
"m.room.power_levels": 100
"m.room.history_visibility": 100
"m.room.canonical_alias": 50
"m.room.avatar": 50
"m.room.tombstone": 100
"m.room.server_acl": 100
"m.room.encryption": 100
```

So a complete example where the default power-levels for a preset are maintained
but the power level for a new key is set is:
```yaml
default_power_level_content_override:
  private_chat:
    events:
      "com.example.foo": 0
      "m.room.name": 50
      "m.room.power_levels": 100
      "m.room.history_visibility": 100
      "m.room.canonical_alias": 50
      "m.room.avatar": 50
      "m.room.tombstone": 100
      "m.room.server_acl": 100
      "m.room.encryption": 100
  trusted_private_chat: null
  public_chat: null
```

---
### `forget_rooms_on_leave`
@@ -4633,7 +4685,9 @@ This setting has the following sub-options:
* `only_for_direct_messages`: Whether invites should be automatically accepted for all room types, or only
  for direct messages. Defaults to false.
* `only_from_local_users`: Whether to only automatically accept invites from users on this homeserver. Defaults to false.
-* `worker_to_run_on`: Which worker to run this module on. This must match the "worker_name".
+* `worker_to_run_on`: Which worker to run this module on. This must match
+  the "worker_name". If not set or `null`, invites will be accepted on the
+  main process.

NOTE: Care should be taken not to enable this setting if the `synapse_auto_accept_invite` module is enabled and installed.
The two modules will compete to perform the same task and may result in undesired behaviour. For example, multiple join
poetry.lock (generated, 855 changed lines): file diff suppressed because it is too large.
(pyproject; file header lost in extraction)

@@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"

[tool.poetry]
name = "matrix-synapse"
-version = "1.111.0rc2"
+version = "1.114.0rc1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"

@@ -201,8 +201,8 @@ netaddr = ">=0.7.18"
# add a lower bound to the Jinja2 dependency.
Jinja2 = ">=3.0"
bleach = ">=1.4.3"
-# We use `Self`, which were added in `typing-extensions` 4.0.
-typing-extensions = ">=4.0"
+# We use `assert_never`, which were added in `typing-extensions` 4.1.
+typing-extensions = ">=4.1"
# We enforce that we have a `cryptography` version that bundles an `openssl`
# with the latest security patches.
cryptography = ">=3.4.7"

@@ -322,7 +322,7 @@ all = [
# This helps prevents merge conflicts when running a batch of dependabot updates.
isort = ">=5.10.1"
black = ">=22.7.0"
-ruff = "0.5.0"
+ruff = "0.5.5"
# Type checking only works with the pydantic.v1 compat module from pydantic v2
pydantic = "^2"
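The `typing-extensions` lower bound is raised because the codebase now uses `assert_never`; a small standalone illustration of that helper (not taken from Synapse):

```python
from enum import Enum

from typing_extensions import assert_never


class Color(Enum):
    RED = "red"
    BLUE = "blue"


def describe(color: Color) -> str:
    if color is Color.RED:
        return "warm"
    elif color is Color.BLUE:
        return "cool"
    else:
        # Static type checkers flag this branch if a new Color member is added
        # without being handled; at runtime it raises AssertionError if reached.
        assert_never(color)


print(describe(Color.RED))  # warm
```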
(federation client dev script; file header lost in extraction)

@@ -43,7 +43,7 @@ import argparse
import base64
import json
import sys
-from typing import Any, Dict, Optional, Tuple
+from typing import Any, Dict, Mapping, Optional, Tuple, Union
from urllib import parse as urlparse

import requests

@@ -75,7 +75,7 @@ def encode_canonical_json(value: object) -> bytes:
        value,
        # Encode code-points outside of ASCII as UTF-8 rather than \u escapes
        ensure_ascii=False,
-       # Remove unecessary white space.
+       # Remove unnecessary white space.
        separators=(",", ":"),
        # Sort the keys of dictionaries.
        sort_keys=True,

@@ -298,12 +298,23 @@ class MatrixConnectionAdapter(HTTPAdapter):

        return super().send(request, *args, **kwargs)

-   def get_connection(
-       self, url: str, proxies: Optional[Dict[str, str]] = None
+   def get_connection_with_tls_context(
+       self,
+       request: PreparedRequest,
+       verify: Optional[Union[bool, str]],
+       proxies: Optional[Mapping[str, str]] = None,
+       cert: Optional[Union[Tuple[str, str], str]] = None,
    ) -> HTTPConnectionPool:
-       # overrides the get_connection() method in the base class
-       parsed = urlparse.urlsplit(url)
-       (host, port, ssl_server_name) = self._lookup(parsed.netloc)
+       # overrides the get_connection_with_tls_context() method in the base class
+       parsed = urlparse.urlsplit(request.url)
+
+       # Extract the server name from the request URL, and ensure it's a str.
+       hostname = parsed.netloc
+       if isinstance(hostname, bytes):
+           hostname = hostname.decode("utf-8")
+       assert isinstance(hostname, str)
+
+       (host, port, ssl_server_name) = self._lookup(hostname)
        print(
            f"Connecting to {host}:{port} with SNI {ssl_server_name}", file=sys.stderr
        )
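For context, `get_connection_with_tls_context` is the hook that recent `requests` releases offer in place of the deprecated `HTTPAdapter.get_connection`. A minimal standalone sketch of overriding it on a custom adapter, separate from the Synapse script above; the logging is purely illustrative:

```python
from typing import Mapping, Optional, Tuple, Union

import requests
from requests.adapters import HTTPAdapter
from requests.models import PreparedRequest
from urllib3.connectionpool import HTTPConnectionPool


class LoggingAdapter(HTTPAdapter):
    """Toy adapter that logs which URL each request will connect for."""

    def get_connection_with_tls_context(
        self,
        request: PreparedRequest,
        verify: Optional[Union[bool, str]],
        proxies: Optional[Mapping[str, str]] = None,
        cert: Optional[Union[Tuple[str, str], str]] = None,
    ) -> HTTPConnectionPool:
        print(f"connecting for {request.url}")
        return super().get_connection_with_tls_context(
            request, verify, proxies=proxies, cert=cert
        )


session = requests.Session()
session.mount("https://", LoggingAdapter())
# session.get("https://example.org/")  # would print "connecting for https://example.org/"
```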
(release script; file header lost in extraction)

@@ -324,6 +324,11 @@ def tag(gh_token: Optional[str]) -> None:
def _tag(gh_token: Optional[str]) -> None:
    """Tags the release and generates a draft GitHub release"""

    if gh_token:
        # Test that the GH Token is valid before continuing.
        gh = Github(gh_token)
        gh.get_user()

    # Make sure we're in a git repo.
    repo = get_repo_and_check_clean_checkout()

@@ -418,6 +423,11 @@ def publish(gh_token: str) -> None:
def _publish(gh_token: str) -> None:
    """Publish release on GitHub."""

    if gh_token:
        # Test that the GH Token is valid before continuing.
        gh = Github(gh_token)
        gh.get_user()

    # Make sure we're in a git repo.
    get_repo_and_check_clean_checkout()

@@ -460,6 +470,11 @@ def upload(gh_token: Optional[str]) -> None:
def _upload(gh_token: Optional[str]) -> None:
    """Upload release to pypi."""

    if gh_token:
        # Test that the GH Token is valid before continuing.
        gh = Github(gh_token)
        gh.get_user()

    current_version = get_package_version()
    tag_name = f"v{current_version}"

@@ -555,6 +570,11 @@ def wait_for_actions(gh_token: Optional[str]) -> None:

def _wait_for_actions(gh_token: Optional[str]) -> None:
    if gh_token:
        # Test that the GH Token is valid before continuing.
        gh = Github(gh_token)
        gh.get_user()

    # Find out the version and tag name.
    current_version = get_package_version()
    tag_name = f"v{current_version}"

@@ -711,6 +731,11 @@ Ask the designated people to do the blog and tweets."""
@cli.command()
@click.option("--gh-token", envvar=["GH_TOKEN", "GITHUB_TOKEN"], required=True)
def full(gh_token: str) -> None:
    if gh_token:
        # Test that the GH Token is valid before continuing.
        gh = Github(gh_token)
        gh.get_user()

    click.echo("1. If this is a security release, read the security wiki page.")
    click.echo("2. Check for any release blockers before proceeding.")
    click.echo(" https://github.com/element-hq/synapse/labels/X-Release-Blocker")
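The repeated `Github(gh_token)` / `gh.get_user()` calls above make a bad token fail early, before any release steps run. A standalone sketch of the same idea with explicit error handling; the use of PyGithub's `BadCredentialsException` here is my illustration, not part of the release script:

```python
from github import Github
from github.GithubException import BadCredentialsException


def check_github_token(token: str) -> bool:
    """Return True if the token can authenticate against the GitHub API."""
    try:
        # Accessing .login forces an authenticated API call, so an invalid
        # token fails here rather than later in the release process.
        Github(token).get_user().login
        return True
    except BadCredentialsException:
        return False


# Example (requires a real token):
# print(check_github_token("ghp_..."))
```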
(synapse_port_db script; file header lost in extraction)

@@ -119,18 +119,19 @@ BOOLEAN_COLUMNS = {
    "e2e_room_keys": ["is_verified"],
    "event_edges": ["is_state"],
    "events": ["processed", "outlier", "contains_url"],
-   "local_media_repository": ["safe_from_quarantine"],
+   "local_media_repository": ["safe_from_quarantine", "authenticated"],
+   "per_user_experimental_features": ["enabled"],
    "presence_list": ["accepted"],
    "presence_stream": ["currently_active"],
    "public_room_list_stream": ["visibility"],
    "pushers": ["enabled"],
    "redactions": ["have_censored"],
+   "remote_media_cache": ["authenticated"],
    "room_stats_state": ["is_federatable"],
    "rooms": ["is_public", "has_auth_chain_index"],
    "users": ["shadow_banned", "approved", "locked", "suspended"],
    "un_partial_stated_event_stream": ["rejection_status_changed"],
    "users_who_share_rooms": ["share_private"],
-   "per_user_experimental_features": ["enabled"],
}
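For context, `BOOLEAN_COLUMNS` tells the SQLite-to-PostgreSQL port script which integer-valued columns should become real booleans. A simplified sketch of that conversion (my illustration, not the script's actual implementation):

```python
from typing import Any, Dict, List

BOOLEAN_COLUMNS: Dict[str, List[str]] = {
    "local_media_repository": ["safe_from_quarantine", "authenticated"],
    "remote_media_cache": ["authenticated"],
}


def convert_row(table: str, row: Dict[str, Any]) -> Dict[str, Any]:
    """Convert SQLite's 0/1 integers to booleans for columns PostgreSQL types as BOOLEAN."""
    bool_cols = BOOLEAN_COLUMNS.get(table, [])
    return {
        col: (bool(val) if col in bool_cols and val is not None else val)
        for col, val in row.items()
    }


print(convert_row("remote_media_cache", {"media_id": "abc", "authenticated": 1}))
# {'media_id': 'abc', 'authenticated': True}
```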
(API constants module; file header lost in extraction)

@@ -50,7 +50,7 @@ class Membership:
    KNOCK: Final = "knock"
    LEAVE: Final = "leave"
    BAN: Final = "ban"
-   LIST: Final = {INVITE, JOIN, KNOCK, LEAVE, BAN}
+   LIST: Final = frozenset((INVITE, JOIN, KNOCK, LEAVE, BAN))


class PresenceState:

@@ -225,6 +225,11 @@ class EventContentFields:
    # This is deprecated in MSC2175.
    ROOM_CREATOR: Final = "creator"

    # The version of the room for `m.room.create` events.
    ROOM_VERSION: Final = "room_version"

    ROOM_NAME: Final = "name"

    # Used in m.room.guest_access events.
    GUEST_ACCESS: Final = "guest_access"

@@ -237,6 +242,9 @@ class EventContentFields:
    # an unspecced field added to to-device messages to identify them uniquely-ish
    TO_DEVICE_MSGID: Final = "org.matrix.msgid"

    # `m.room.encryption`` algorithm field
    ENCRYPTION_ALGORITHM: Final = "algorithm"


class EventUnsignedContentFields:
    """Fields found inside the 'unsigned' data on events"""
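A tiny illustration (mine, not from the codebase) of why a `frozenset` suits a module-level constant like `Membership.LIST`: lookups behave exactly like a `set`, but the value cannot be mutated by accident:

```python
from typing import Final

MEMBERSHIP_LIST: Final = frozenset(("invite", "join", "knock", "leave", "ban"))

print("join" in MEMBERSHIP_LIST)   # True - membership tests work as with a set

try:
    MEMBERSHIP_LIST.add("banned")  # type: ignore[attr-defined]
except AttributeError as e:
    print(e)                       # 'frozenset' object has no attribute 'add'
```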
(API errors module; file header lost in extraction)

@@ -128,6 +128,10 @@ class Codes(str, Enum):
    # MSC2677
    DUPLICATE_ANNOTATION = "M_DUPLICATE_ANNOTATION"

    # MSC3575 we are telling the client they need to expire their sliding sync
    # connection.
    UNKNOWN_POS = "M_UNKNOWN_POS"


class CodeMessageException(RuntimeError):
    """An exception with integer code, a message string attributes and optional headers.

@@ -847,3 +851,17 @@ class PartialStateConflictError(SynapseError):
            msg=PartialStateConflictError.message(),
            errcode=Codes.UNKNOWN,
        )


class SlidingSyncUnknownPosition(SynapseError):
    """An error that Synapse can return to signal to the client to expire their
    sliding sync connection (i.e. send a new request without a `?since=`
    param).
    """

    def __init__(self) -> None:
        super().__init__(
            HTTPStatus.BAD_REQUEST,
            msg="Unknown position",
            errcode=Codes.UNKNOWN_POS,
        )
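A hypothetical client-side sketch of reacting to `M_UNKNOWN_POS`: drop the stored position and restart the sliding sync connection without a `since` parameter. The endpoint URL, request shape and header values are assumptions for illustration only; the real API is experimental:

```python
from typing import Any, Dict, Optional

import requests

# Assumed placeholders for illustration only.
SYNC_URL = "https://matrix.example.org/_matrix/client/unstable/org.matrix.simplified_msc3575/sync"
HEADERS = {"Authorization": "Bearer syt_..."}


def sliding_sync_once(body: Dict[str, Any], since: Optional[str]) -> Dict[str, Any]:
    params = {"since": since} if since else {}
    resp = requests.post(SYNC_URL, params=params, headers=HEADERS, json=body, timeout=60)
    if resp.status_code == 400 and resp.json().get("errcode") == "M_UNKNOWN_POS":
        # The server no longer recognises our per-connection position: expire
        # the connection and start again without a position token.
        resp = requests.post(SYNC_URL, headers=HEADERS, json=body, timeout=60)
    resp.raise_for_status()
    return resp.json()
```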
(generic worker app; file header lost in extraction)

@@ -206,6 +206,21 @@ class GenericWorkerServer(HomeServer):
                "/_synapse/admin": admin_resource,
            }
        )

        if "federation" not in res.names:
            # Only load the federation media resource separately if federation
            # resource is not specified since federation resource includes media
            # resource.
            resources[FEDERATION_PREFIX] = TransportLayerServer(
                self, servlet_groups=["media"]
            )
        if "client" not in res.names:
            # Only load the client media resource separately if client
            # resource is not specified since client resource includes media
            # resource.
            resources[CLIENT_API_PREFIX] = ClientRestResource(
                self, servlet_groups=["media"]
            )
    else:
        logger.warning(
            "A 'media' listener is configured but the media"
(homeserver app; file header lost in extraction)

@@ -101,6 +101,12 @@ class SynapseHomeServer(HomeServer):
                # Skip loading openid resource if federation is defined
                # since federation resource will include openid
                continue
            if name == "media" and (
                "federation" in res.names or "client" in res.names
            ):
                # Skip loading media resource if federation or client are defined
                # since federation & client resources will include media
                continue
            if name == "health":
                # Skip loading, health resource is always included
                continue

@@ -217,7 +223,7 @@ class SynapseHomeServer(HomeServer):
            )

        if name in ["media", "federation", "client"]:
-           if self.config.server.enable_media_repo:
+           if self.config.media.can_load_media_repo:
                media_repo = self.get_media_repository_resource()
                resources.update(
                    {

@@ -231,6 +237,14 @@ class SynapseHomeServer(HomeServer):
                    "'media' resource conflicts with enable_media_repo=False"
                )

            if name == "media":
                resources[FEDERATION_PREFIX] = TransportLayerServer(
                    self, servlet_groups=["media"]
                )
                resources[CLIENT_API_PREFIX] = ClientRestResource(
                    self, servlet_groups=["media"]
                )

        if name in ["keys", "federation"]:
            resources[SERVER_KEY_PREFIX] = KeyResource(self)
(media repository config; file header lost in extraction)

@@ -126,7 +126,7 @@ class ContentRepositoryConfig(Config):
        # Only enable the media repo if either the media repo is enabled or the
        # current worker app is the media repo.
        if (
-           self.root.server.enable_media_repo is False
+           config.get("enable_media_repo", True) is False
            and config.get("worker_app") != "synapse.app.media_repository"
        ):
            self.can_load_media_repo = False

@@ -272,6 +272,10 @@ class ContentRepositoryConfig(Config):
            remote_media_lifetime
        )

        self.enable_authenticated_media = config.get(
            "enable_authenticated_media", False
        )

    def generate_config_section(self, data_dir_path: str, **kwargs: Any) -> str:
        assert data_dir_path is not None
        media_store = os.path.join(data_dir_path, "media_store")
(server config; file header lost in extraction)

@@ -384,6 +384,11 @@ class ServerConfig(Config):
        # Whether to internally track presence, requires that presence is enabled,
        self.track_presence = self.presence_enabled and presence_enabled != "untracked"

        # Determines if presence results for offline users are included on initial/full sync
        self.presence_include_offline_users_on_sync = presence_config.get(
            "include_offline_users_on_sync", False
        )

        # Custom presence router module
        # This is the legacy way of configuring it (the config should now be put in the modules section)
        self.presence_router_module_class = None

@@ -395,12 +400,6 @@ class ServerConfig(Config):
            self.presence_router_config,
        ) = load_module(presence_router_config, ("presence", "presence_router"))

-       # whether to enable the media repository endpoints. This should be set
-       # to false if the media repository is running as a separate endpoint;
-       # doing so ensures that we will not run cache cleanup jobs on the
-       # master, potentially causing inconsistency.
-       self.enable_media_repo = config.get("enable_media_repo", True)

        # Whether to require authentication to retrieve profile data (avatars,
        # display names) of other users through the client API.
        self.require_auth_for_profile_requests = config.get(
(keyring; file header lost in extraction)

@@ -589,7 +589,7 @@ class BaseV2KeyFetcher(KeyFetcher):
                % (server_name,)
            )

-       for key_id, key_data in response_json["old_verify_keys"].items():
+       for key_id, key_data in response_json.get("old_verify_keys", {}).items():
            if is_signing_algorithm_supported(key_id):
                key_base64 = key_data["key"]
                key_bytes = decode_base64(key_base64)
(events module; file header lost in extraction)

@@ -554,3 +554,22 @@ def relation_from_event(event: EventBase) -> Optional[_EventRelation]:
        aggregation_key = None

    return _EventRelation(parent_id, rel_type, aggregation_key)


@attr.s(slots=True, frozen=True, auto_attribs=True)
class StrippedStateEvent:
    """
    A stripped down state event. Usually used for remote invite/knocks so the user can
    make an informed decision on whether they want to join.

    Attributes:
        type: Event `type`
        state_key: Event `state_key`
        sender: Event `sender`
        content: Event `content`
    """

    type: str
    state_key: str
    sender: str
    content: Dict[str, Any]
(events utils; file header lost in extraction)

@@ -49,7 +49,7 @@ from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersion
from synapse.types import JsonDict, Requester

-from . import EventBase, make_event_from_dict
+from . import EventBase, StrippedStateEvent, make_event_from_dict

if TYPE_CHECKING:
    from synapse.handlers.relations import BundledAggregations

@@ -854,3 +854,30 @@ def strip_event(event: EventBase) -> JsonDict:
        "content": event.content,
        "sender": event.sender,
    }


def parse_stripped_state_event(raw_stripped_event: Any) -> Optional[StrippedStateEvent]:
    """
    Given a raw value from an event's `unsigned` field, attempt to parse it into a
    `StrippedStateEvent`.
    """
    if isinstance(raw_stripped_event, dict):
        # All of these fields are required
        type = raw_stripped_event.get("type")
        state_key = raw_stripped_event.get("state_key")
        sender = raw_stripped_event.get("sender")
        content = raw_stripped_event.get("content")
        if (
            isinstance(type, str)
            and isinstance(state_key, str)
            and isinstance(sender, str)
            and isinstance(content, dict)
        ):
            return StrippedStateEvent(
                type=type,
                state_key=state_key,
                sender=sender,
                content=content,
            )

    return None
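A quick illustration of what `parse_stripped_state_event` accepts and rejects, assuming the function and the `StrippedStateEvent` class defined above are importable; the event dict is made up:

```python
raw = {
    "type": "m.room.name",
    "state_key": "",
    "sender": "@alice:example.org",
    "content": {"name": "Project X"},
}

event = parse_stripped_state_event(raw)
assert event is not None and event.content["name"] == "Project X"

# Anything that is not a dict with the four required string/dict fields parses to None.
assert parse_stripped_state_event("not-a-dict") is None
assert parse_stripped_state_event({"type": "m.room.name"}) is None
```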
(federation transport server; file header lost in extraction)

@@ -271,6 +271,10 @@ SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
    "federation": FEDERATION_SERVLET_CLASSES,
    "room_list": (PublicRoomList,),
    "openid": (OpenIdUserInfo,),
+   "media": (
+       FederationMediaDownloadServlet,
+       FederationMediaThumbnailServlet,
+   ),
}

@@ -912,6 +912,4 @@ FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
    FederationV1SendKnockServlet,
    FederationMakeKnockServlet,
    FederationAccountStatusServlet,
-   FederationMediaDownloadServlet,
-   FederationMediaThumbnailServlet,
)
(admin handler; file header lost in extraction)

@@ -197,8 +197,14 @@ class AdminHandler:
        # events that we have and then filtering, this isn't the most
        # efficient method perhaps but it does guarantee we get everything.
        while True:
-           events, _ = await self._store.paginate_room_events(
-               room_id, from_key, to_key, limit=100, direction=Direction.FORWARDS
+           events, _ = (
+               await self._store.paginate_room_events_by_topological_ordering(
+                   room_id=room_id,
+                   from_key=from_key,
+                   to_key=to_key,
+                   limit=100,
+                   direction=Direction.FORWARDS,
+               )
            )
            if not events:
                break
(device list handler; file header lost in extraction; where a line was changed, the old and new versions appear consecutively)

@@ -20,10 +20,20 @@
#
#
import logging
from typing import TYPE_CHECKING, Dict, Iterable, List, Mapping, Optional, Set, Tuple
from typing import (
    TYPE_CHECKING,
    AbstractSet,
    Dict,
    Iterable,
    List,
    Mapping,
    Optional,
    Set,
    Tuple,
)

from synapse.api import errors
from synapse.api.constants import EduTypes, EventTypes
from synapse.api.constants import EduTypes, EventTypes, Membership
from synapse.api.errors import (
    Codes,
    FederationDeniedError,

@@ -38,7 +48,9 @@ from synapse.metrics.background_process_metrics import (
    wrap_as_background_process,
)
from synapse.storage.databases.main.client_ips import DeviceLastConnectionInfo
from synapse.storage.databases.main.state_deltas import StateDelta
from synapse.types import (
    DeviceListUpdates,
    JsonDict,
    JsonMapping,
    ScheduledTask,

@@ -214,138 +226,210 @@ class DeviceWorkerHandler:
    @cancellable
    async def get_user_ids_changed(
        self, user_id: str, from_token: StreamToken
    ) -> JsonDict:
    ) -> DeviceListUpdates:
        """Get list of users that have had the devices updated, or have newly
        joined a room, that `user_id` may be interested in.
        """

        set_tag("user_id", user_id)
        set_tag("from_token", str(from_token))
        now_room_key = self.store.get_room_max_token()

        room_ids = await self.store.get_rooms_for_user(user_id)
        now_token = self._event_sources.get_current_token()

        changed = await self.get_device_changes_in_shared_rooms(
            user_id, room_ids, from_token
        # We need to work out all the different membership changes for the user
        # and user they share a room with, to pass to
        # `generate_sync_entry_for_device_list`. See its docstring for details
        # on the data required.

        joined_room_ids = await self.store.get_rooms_for_user(user_id)

        # Get the set of rooms that the user has joined/left
        membership_changes = (
            await self.store.get_current_state_delta_membership_changes_for_user(
                user_id, from_key=from_token.room_key, to_key=now_token.room_key
            )
        )

        # Then work out if any users have since joined
        rooms_changed = self.store.get_rooms_that_changed(room_ids, from_token.room_key)
        # Check for newly joined or left rooms. We need to make sure that we add
        # to newly joined in the case membership goes from join -> leave -> join
        # again.
        newly_joined_rooms: Set[str] = set()
        newly_left_rooms: Set[str] = set()
        for change in membership_changes:
            # We check for changes in "joinedness", i.e. if the membership has
            # changed to or from JOIN.
            if change.membership == Membership.JOIN:
                if change.prev_membership != Membership.JOIN:
                    newly_joined_rooms.add(change.room_id)
                    newly_left_rooms.discard(change.room_id)
            elif change.prev_membership == Membership.JOIN:
                newly_joined_rooms.discard(change.room_id)
                newly_left_rooms.add(change.room_id)

        member_events = await self.store.get_membership_changes_for_user(
            user_id, from_token.room_key, now_room_key
        # We now work out if any other users have since joined or left the rooms
        # the user is currently in.

        # List of membership changes per room
        room_to_deltas: Dict[str, List[StateDelta]] = {}
        # The set of event IDs of membership events (so we can fetch their
        # associated membership).
        memberships_to_fetch: Set[str] = set()

        # TODO: Only pull out membership events?
        state_changes = await self.store.get_current_state_deltas_for_rooms(
            joined_room_ids, from_token=from_token.room_key, to_token=now_token.room_key
        )
        rooms_changed.update(event.room_id for event in member_events)

        stream_ordering = from_token.room_key.stream

        possibly_changed = set(changed)
        possibly_left = set()
        for room_id in rooms_changed:
            # Check if the forward extremities have changed. If not then we know
            # the current state won't have changed, and so we can skip this room.
            try:
                if not await self.store.have_room_forward_extremities_changed_since(
                    room_id, stream_ordering
                ):
                    continue
            except errors.StoreError:
                pass

            current_state_ids = await self._state_storage.get_current_state_ids(
                room_id, await_full_state=False
            )

            # The user may have left the room
            # TODO: Check if they actually did or if we were just invited.
            if room_id not in room_ids:
                for etype, state_key in current_state_ids.keys():
                    if etype != EventTypes.Member:
                        continue
                    possibly_left.add(state_key)
        for delta in state_changes:
            if delta.event_type != EventTypes.Member:
                continue

            # Fetch the current state at the time.
            try:
                event_ids = await self.store.get_forward_extremities_for_room_at_stream_ordering(
                    room_id, stream_ordering=stream_ordering
                )
            except errors.StoreError:
|
||||
# we have purged the stream_ordering index since the stream
|
||||
# ordering: treat it the same as a new room
|
||||
event_ids = []
|
||||
room_to_deltas.setdefault(delta.room_id, []).append(delta)
|
||||
if delta.event_id:
|
||||
memberships_to_fetch.add(delta.event_id)
|
||||
if delta.prev_event_id:
|
||||
memberships_to_fetch.add(delta.prev_event_id)
|
||||
|
||||
# special-case for an empty prev state: include all members
|
||||
# in the changed list
|
||||
if not event_ids:
|
||||
log_kv(
|
||||
{"event": "encountered empty previous state", "room_id": room_id}
|
||||
)
|
||||
for etype, state_key in current_state_ids.keys():
|
||||
if etype != EventTypes.Member:
|
||||
continue
|
||||
possibly_changed.add(state_key)
|
||||
continue
|
||||
# Fetch all the memberships for the membership events
|
||||
event_id_to_memberships = await self.store.get_membership_from_event_ids(
|
||||
memberships_to_fetch
|
||||
)
|
||||
|
||||
current_member_id = current_state_ids.get((EventTypes.Member, user_id))
|
||||
if not current_member_id:
|
||||
continue
|
||||
joined_invited_knocked = (
|
||||
Membership.JOIN,
|
||||
Membership.INVITE,
|
||||
Membership.KNOCK,
|
||||
)
|
||||
|
||||
# mapping from event_id -> state_dict
|
||||
prev_state_ids = await self._state_storage.get_state_ids_for_events(
|
||||
event_ids,
|
||||
await_full_state=False,
|
||||
)
|
||||
# We now want to find any user that have newly joined/invited/knocked,
|
||||
# or newly left, similarly to above.
|
||||
newly_joined_or_invited_or_knocked_users: Set[str] = set()
|
||||
newly_left_users: Set[str] = set()
|
||||
for _, deltas in room_to_deltas.items():
|
||||
for delta in deltas:
|
||||
# Get the prev/new memberships for the delta
|
||||
new_membership = None
|
||||
prev_membership = None
|
||||
if delta.event_id:
|
||||
m = event_id_to_memberships.get(delta.event_id)
|
||||
if m is not None:
|
||||
new_membership = m.membership
|
||||
if delta.prev_event_id:
|
||||
m = event_id_to_memberships.get(delta.prev_event_id)
|
||||
if m is not None:
|
||||
prev_membership = m.membership
|
||||
|
||||
# Check if we've joined the room? If so we just blindly add all the users to
|
||||
# the "possibly changed" users.
|
||||
for state_dict in prev_state_ids.values():
|
||||
member_event = state_dict.get((EventTypes.Member, user_id), None)
|
||||
if not member_event or member_event != current_member_id:
|
||||
for etype, state_key in current_state_ids.keys():
|
||||
if etype != EventTypes.Member:
|
||||
continue
|
||||
possibly_changed.add(state_key)
|
||||
break
|
||||
# Check if a user has newly joined/invited/knocked, or left.
|
||||
if new_membership in joined_invited_knocked:
|
||||
if prev_membership not in joined_invited_knocked:
|
||||
newly_joined_or_invited_or_knocked_users.add(delta.state_key)
|
||||
newly_left_users.discard(delta.state_key)
|
||||
elif prev_membership in joined_invited_knocked:
|
||||
newly_joined_or_invited_or_knocked_users.discard(delta.state_key)
|
||||
newly_left_users.add(delta.state_key)
|
||||
|
||||
# If there has been any change in membership, include them in the
|
||||
# possibly changed list. We'll check if they are joined below,
|
||||
# and we're not toooo worried about spuriously adding users.
|
||||
for key, event_id in current_state_ids.items():
|
||||
etype, state_key = key
|
||||
if etype != EventTypes.Member:
|
||||
continue
|
||||
# Now we actually calculate the device list entry with the information
|
||||
# calculated above.
|
||||
device_list_updates = await self.generate_sync_entry_for_device_list(
|
||||
user_id=user_id,
|
||||
since_token=from_token,
|
||||
now_token=now_token,
|
||||
joined_room_ids=joined_room_ids,
|
||||
newly_joined_rooms=newly_joined_rooms,
|
||||
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
|
||||
newly_left_rooms=newly_left_rooms,
|
||||
newly_left_users=newly_left_users,
|
||||
)
|
||||
|
||||
# check if this member has changed since any of the extremities
|
||||
# at the stream_ordering, and add them to the list if so.
|
||||
for state_dict in prev_state_ids.values():
|
||||
prev_event_id = state_dict.get(key, None)
|
||||
if not prev_event_id or prev_event_id != event_id:
|
||||
if state_key != user_id:
|
||||
possibly_changed.add(state_key)
|
||||
break
|
||||
log_kv(
|
||||
{
|
||||
"changed": device_list_updates.changed,
|
||||
"left": device_list_updates.left,
|
||||
}
|
||||
)
|
||||
|
||||
if possibly_changed or possibly_left:
|
||||
possibly_joined = possibly_changed
|
||||
possibly_left = possibly_changed | possibly_left
|
||||
return device_list_updates
|
||||
|
||||
# Double check if we still share rooms with the given user.
|
||||
users_rooms = await self.store.get_rooms_for_users(possibly_left)
|
||||
for changed_user_id, entries in users_rooms.items():
|
||||
if any(rid in room_ids for rid in entries):
|
||||
possibly_left.discard(changed_user_id)
|
||||
else:
|
||||
possibly_joined.discard(changed_user_id)
|
||||
@measure_func("_generate_sync_entry_for_device_list")
|
||||
async def generate_sync_entry_for_device_list(
|
||||
self,
|
||||
user_id: str,
|
||||
since_token: StreamToken,
|
||||
now_token: StreamToken,
|
||||
joined_room_ids: AbstractSet[str],
|
||||
newly_joined_rooms: AbstractSet[str],
|
||||
newly_joined_or_invited_or_knocked_users: AbstractSet[str],
|
||||
newly_left_rooms: AbstractSet[str],
|
||||
newly_left_users: AbstractSet[str],
|
||||
) -> DeviceListUpdates:
|
||||
"""Generate the DeviceListUpdates section of sync
|
||||
|
||||
else:
|
||||
possibly_joined = set()
|
||||
possibly_left = set()
|
||||
Args:
|
||||
sync_result_builder
|
||||
newly_joined_rooms: Set of rooms user has joined since previous sync
|
||||
newly_joined_or_invited_or_knocked_users: Set of users that have joined,
|
||||
been invited to a room or are knocking on a room since
|
||||
previous sync.
|
||||
newly_left_rooms: Set of rooms user has left since previous sync
|
||||
newly_left_users: Set of users that have left a room we're in since
|
||||
previous sync
|
||||
"""
|
||||
# Take a copy since these fields will be mutated later.
|
||||
newly_joined_or_invited_or_knocked_users = set(
|
||||
newly_joined_or_invited_or_knocked_users
|
||||
)
|
||||
newly_left_users = set(newly_left_users)
|
||||
|
||||
result = {"changed": list(possibly_joined), "left": list(possibly_left)}
|
||||
# We want to figure out what user IDs the client should refetch
|
||||
# device keys for, and which users we aren't going to track changes
|
||||
# for anymore.
|
||||
#
|
||||
# For the first step we check:
|
||||
# a. if any users we share a room with have updated their devices,
|
||||
# and
|
||||
# b. we also check if we've joined any new rooms, or if a user has
|
||||
# joined a room we're in.
|
||||
#
|
||||
# For the second step we just find any users we no longer share a
|
||||
# room with by looking at all users that have left a room plus users
|
||||
# that were in a room we've left.
|
||||
|
||||
log_kv(result)
|
||||
users_that_have_changed = set()
|
||||
|
||||
return result
|
||||
# Step 1a, check for changes in devices of users we share a room
|
||||
# with
|
||||
users_that_have_changed = await self.get_device_changes_in_shared_rooms(
|
||||
user_id,
|
||||
joined_room_ids,
|
||||
from_token=since_token,
|
||||
now_token=now_token,
|
||||
)
|
||||
|
||||
# Step 1b, check for newly joined rooms
|
||||
for room_id in newly_joined_rooms:
|
||||
joined_users = await self.store.get_users_in_room(room_id)
|
||||
newly_joined_or_invited_or_knocked_users.update(joined_users)
|
||||
|
||||
# TODO: Check that these users are actually new, i.e. either they
|
||||
# weren't in the previous sync *or* they left and rejoined.
|
||||
users_that_have_changed.update(newly_joined_or_invited_or_knocked_users)
|
||||
|
||||
user_signatures_changed = await self.store.get_users_whose_signatures_changed(
|
||||
user_id, since_token.device_list_key
|
||||
)
|
||||
users_that_have_changed.update(user_signatures_changed)
|
||||
|
||||
# Now find users that we no longer track
|
||||
for room_id in newly_left_rooms:
|
||||
left_users = await self.store.get_users_in_room(room_id)
|
||||
newly_left_users.update(left_users)
|
||||
|
||||
# Remove any users that we still share a room with.
|
||||
left_users_rooms = await self.store.get_rooms_for_users(newly_left_users)
|
||||
for user_id, entries in left_users_rooms.items():
|
||||
if any(rid in joined_room_ids for rid in entries):
|
||||
newly_left_users.discard(user_id)
|
||||
|
||||
return DeviceListUpdates(changed=users_that_have_changed, left=newly_left_users)
|
||||
|
||||
async def on_federation_query_user_devices(self, user_id: str) -> JsonDict:
|
||||
if not self.hs.is_mine(UserID.from_string(user_id)):
|
||||
|
||||
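The rewritten `get_user_ids_changed` above derives newly joined and newly left rooms from membership deltas, taking care that a join, leave, join sequence still counts as newly joined. The following standalone sketch mirrors that set bookkeeping with plain strings standing in for Synapse's `Membership` constants and `StateDelta` objects.

```python
from typing import List, NamedTuple, Optional, Set, Tuple


class MembershipChange(NamedTuple):
    room_id: str
    membership: str                  # membership after the change, e.g. "join"/"leave"
    prev_membership: Optional[str]   # membership before the change, or None


def newly_joined_and_left(changes: List[MembershipChange]) -> Tuple[Set[str], Set[str]]:
    """Track changes in "joinedness": only transitions to or from "join" matter."""
    newly_joined: Set[str] = set()
    newly_left: Set[str] = set()
    for change in changes:
        if change.membership == "join":
            if change.prev_membership != "join":
                newly_joined.add(change.room_id)
                newly_left.discard(change.room_id)
        elif change.prev_membership == "join":
            newly_joined.discard(change.room_id)
            newly_left.add(change.room_id)
    return newly_joined, newly_left


# join -> leave -> join ends up in `newly_joined`, not `newly_left`.
changes = [
    MembershipChange("!a:x", "join", None),
    MembershipChange("!a:x", "leave", "join"),
    MembershipChange("!a:x", "join", "leave"),
    MembershipChange("!b:x", "leave", "join"),
]
assert newly_joined_and_left(changes) == ({"!a:x"}, {"!b:x"})
```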
@@ -291,13 +291,20 @@ class E2eKeysHandler:

        # Only try and fetch keys for destinations that are not marked as
        # down.
        filtered_destinations = await filter_destinations_by_retry_limiter(
            remote_queries_not_in_cache.keys(),
            self.clock,
            self.store,
            # Let's give an arbitrary grace period for those hosts that are
            # only recently down
            retry_due_within_ms=60 * 1000,
        unfiltered_destinations = remote_queries_not_in_cache.keys()
        filtered_destinations = set(
            await filter_destinations_by_retry_limiter(
                unfiltered_destinations,
                self.clock,
                self.store,
                # Let's give an arbitrary grace period for those hosts that are
                # only recently down
                retry_due_within_ms=60 * 1000,
            )
        )
        failures.update(
            (dest, _NOT_READY_FOR_RETRY_FAILURE)
            for dest in (unfiltered_destinations - filtered_destinations)
        )

        await concurrently_execute(
@@ -1641,6 +1648,9 @@ def _check_device_signature(
        raise SynapseError(400, "Invalid signature", Codes.INVALID_SIGNATURE)


_NOT_READY_FOR_RETRY_FAILURE = {"status": 503, "message": "Not ready for retry"}


def _exception_to_failure(e: Exception) -> JsonDict:
    if isinstance(e, SynapseError):
        return {"status": e.code, "errcode": e.errcode, "message": str(e)}
@@ -1649,7 +1659,7 @@ def _exception_to_failure(e: Exception) -> JsonDict:
        return {"status": e.code, "message": str(e)}

    if isinstance(e, NotRetryingDestination):
        return {"status": 503, "message": "Not ready for retry"}
        return _NOT_READY_FOR_RETRY_FAILURE

    # include ConnectionRefused and other errors
    #
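The change above pre-computes the set of destinations filtered out by the retry limiter and records a shared 503 "Not ready for retry" failure for each of them, rather than letting every query to a down host fail individually. A small sketch of that set-difference bookkeeping, independent of Synapse's retry machinery:

```python
from typing import Callable, Dict, Iterable, Set, Tuple

NOT_READY_FOR_RETRY = {"status": 503, "message": "Not ready for retry"}


def split_ready_destinations(
    destinations: Iterable[str], is_ready: Callable[[str], bool]
) -> Tuple[Set[str], Dict[str, dict]]:
    """Return (destinations to query, canned failures for everything else)."""
    unfiltered: Set[str] = set(destinations)
    ready = {dest for dest in unfiltered if is_ready(dest)}
    failures = {dest: NOT_READY_FOR_RETRY for dest in unfiltered - ready}
    return ready, failures


ready, failures = split_ready_destinations(
    ["matrix.org", "down.example.org"], lambda dest: dest != "down.example.org"
)
assert ready == {"matrix.org"} and "down.example.org" in failures
```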
@@ -34,7 +34,7 @@ from synapse.api.errors import (
from synapse.logging.opentracing import log_kv, trace
from synapse.storage.databases.main.e2e_room_keys import RoomKey
from synapse.types import JsonDict
from synapse.util.async_helpers import Linearizer
from synapse.util.async_helpers import ReadWriteLock

if TYPE_CHECKING:
    from synapse.server import HomeServer
@@ -58,7 +58,7 @@ class E2eRoomKeysHandler:
        # clients belonging to a user will receive and try to upload a new session at
        # roughly the same time. Also used to lock out uploads when the key is being
        # changed.
        self._upload_linearizer = Linearizer("upload_room_keys_lock")
        self._upload_lock = ReadWriteLock()

    @trace
    async def get_room_keys(
@@ -89,7 +89,7 @@ class E2eRoomKeysHandler:

        # we deliberately take the lock to get keys so that changing the version
        # works atomically
        async with self._upload_linearizer.queue(user_id):
        async with self._upload_lock.read(user_id):
            # make sure the backup version exists
            try:
                await self.store.get_e2e_room_keys_version_info(user_id, version)
@@ -132,7 +132,7 @@ class E2eRoomKeysHandler:
        """

        # lock for consistency with uploading
        async with self._upload_linearizer.queue(user_id):
        async with self._upload_lock.write(user_id):
            # make sure the backup version exists
            try:
                version_info = await self.store.get_e2e_room_keys_version_info(
@@ -193,7 +193,7 @@ class E2eRoomKeysHandler:
        # TODO: Validate the JSON to make sure it has the right keys.

        # XXX: perhaps we should use a finer grained lock here?
        async with self._upload_linearizer.queue(user_id):
        async with self._upload_lock.write(user_id):
            # Check that the version we're trying to upload is the current version
            try:
                version_info = await self.store.get_e2e_room_keys_version_info(user_id)
@@ -355,7 +355,7 @@ class E2eRoomKeysHandler:
        # TODO: Validate the JSON to make sure it has the right keys.

        # lock everyone out until we've switched version
        async with self._upload_linearizer.queue(user_id):
        async with self._upload_lock.write(user_id):
            new_version = await self.store.create_e2e_room_keys_version(
                user_id, version_info
            )
@@ -382,7 +382,7 @@ class E2eRoomKeysHandler:
        }
        """

        async with self._upload_linearizer.queue(user_id):
        async with self._upload_lock.read(user_id):
            try:
                res = await self.store.get_e2e_room_keys_version_info(user_id, version)
            except StoreError as e:
@@ -407,7 +407,7 @@ class E2eRoomKeysHandler:
            NotFoundError: if this backup version doesn't exist
        """

        async with self._upload_linearizer.queue(user_id):
        async with self._upload_lock.write(user_id):
            try:
                await self.store.delete_e2e_room_keys_version(user_id, version)
            except StoreError as e:
@@ -437,7 +437,7 @@ class E2eRoomKeysHandler:
            raise SynapseError(
                400, "Version in body does not match", Codes.INVALID_PARAM
            )
        async with self._upload_linearizer.queue(user_id):
        async with self._upload_lock.write(user_id):
            try:
                old_info = await self.store.get_e2e_room_keys_version_info(
                    user_id, version
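The hunks above replace the per-user `Linearizer` with a `ReadWriteLock`, so key lookups can run concurrently while uploads, deletions and version changes remain exclusive. As a rough illustration of why that helps, here is a minimal asyncio reader/writer lock (writer-exclusive, no writer priority); it is an independent sketch, not Synapse's `ReadWriteLock` implementation.

```python
import asyncio
from contextlib import asynccontextmanager


class SimpleReadWriteLock:
    """Allow many concurrent readers, or exactly one writer at a time."""

    def __init__(self) -> None:
        self._cond = asyncio.Condition()
        self._readers = 0
        self._writing = False

    @asynccontextmanager
    async def read(self):
        async with self._cond:
            await self._cond.wait_for(lambda: not self._writing)
            self._readers += 1
        try:
            yield
        finally:
            async with self._cond:
                self._readers -= 1
                self._cond.notify_all()

    @asynccontextmanager
    async def write(self):
        async with self._cond:
            await self._cond.wait_for(lambda: not self._writing and self._readers == 0)
            self._writing = True
        try:
            yield
        finally:
            async with self._cond:
                self._writing = False
                self._cond.notify_all()


async def main() -> None:
    lock = SimpleReadWriteLock()

    async def reader() -> None:
        async with lock.read():
            await asyncio.sleep(0.01)  # readers may overlap with each other

    async def writer() -> None:
        async with lock.write():
            await asyncio.sleep(0.01)  # writers run alone

    await asyncio.gather(reader(), reader(), writer())


asyncio.run(main())
```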
@@ -507,13 +507,15 @@ class PaginationHandler:

            # Initially fetch the events from the database. With any luck, we can return
            # these without blocking on backfill (handled below).
            events, next_key = await self.store.paginate_room_events(
                room_id=room_id,
                from_key=from_token.room_key,
                to_key=to_room_key,
                direction=pagin_config.direction,
                limit=pagin_config.limit,
                event_filter=event_filter,
            events, next_key = (
                await self.store.paginate_room_events_by_topological_ordering(
                    room_id=room_id,
                    from_key=from_token.room_key,
                    to_key=to_room_key,
                    direction=pagin_config.direction,
                    limit=pagin_config.limit,
                    event_filter=event_filter,
                )
            )

            if pagin_config.direction == Direction.BACKWARDS:
@@ -582,13 +584,15 @@ class PaginationHandler:
            # If we did backfill something, refetch the events from the database to
            # catch anything new that might have been added since we last fetched.
            if did_backfill:
                events, next_key = await self.store.paginate_room_events(
                    room_id=room_id,
                    from_key=from_token.room_key,
                    to_key=to_room_key,
                    direction=pagin_config.direction,
                    limit=pagin_config.limit,
                    event_filter=event_filter,
                events, next_key = (
                    await self.store.paginate_room_events_by_topological_ordering(
                        room_id=room_id,
                        from_key=from_token.room_key,
                        to_key=to_room_key,
                        direction=pagin_config.direction,
                        limit=pagin_config.limit,
                        event_filter=event_filter,
                    )
                )
            else:
                # Otherwise, we can backfill in the background for eventual
@@ -74,6 +74,17 @@ class ProfileHandler:
|
||||
self._third_party_rules = hs.get_module_api_callbacks().third_party_event_rules
|
||||
|
||||
async def get_profile(self, user_id: str, ignore_backoff: bool = True) -> JsonDict:
|
||||
"""
|
||||
Get a user's profile as a JSON dictionary.
|
||||
|
||||
Args:
|
||||
user_id: The user to fetch the profile of.
|
||||
ignore_backoff: True to ignore backoff when fetching over federation.
|
||||
|
||||
Returns:
|
||||
A JSON dictionary. For local queries this will include the displayname and avatar_url
|
||||
fields. For remote queries it may contain arbitrary information.
|
||||
"""
|
||||
target_user = UserID.from_string(user_id)
|
||||
|
||||
if self.hs.is_mine(target_user):
|
||||
@@ -107,6 +118,15 @@ class ProfileHandler:
|
||||
raise e.to_synapse_error()
|
||||
|
||||
async def get_displayname(self, target_user: UserID) -> Optional[str]:
|
||||
"""
|
||||
Fetch a user's display name from their profile.
|
||||
|
||||
Args:
|
||||
target_user: The user to fetch the display name of.
|
||||
|
||||
Returns:
|
||||
The user's display name or None if unset.
|
||||
"""
|
||||
if self.hs.is_mine(target_user):
|
||||
try:
|
||||
displayname = await self.store.get_profile_displayname(target_user)
|
||||
@@ -203,6 +223,15 @@ class ProfileHandler:
|
||||
await self._update_join_states(requester, target_user)
|
||||
|
||||
async def get_avatar_url(self, target_user: UserID) -> Optional[str]:
|
||||
"""
|
||||
Fetch a user's avatar URL from their profile.
|
||||
|
||||
Args:
|
||||
target_user: The user to fetch the avatar URL of.
|
||||
|
||||
Returns:
|
||||
The user's avatar URL or None if unset.
|
||||
"""
|
||||
if self.hs.is_mine(target_user):
|
||||
try:
|
||||
avatar_url = await self.store.get_profile_avatar_url(target_user)
|
||||
@@ -403,6 +432,12 @@ class ProfileHandler:
|
||||
async def _update_join_states(
|
||||
self, requester: Requester, target_user: UserID
|
||||
) -> None:
|
||||
"""
|
||||
Update the membership events of each room the user is joined to with the
|
||||
new profile information.
|
||||
|
||||
Note that this stomps over any custom display name or avatar URL in member events.
|
||||
"""
|
||||
if not self.hs.is_mine(target_user):
|
||||
return
|
||||
|
||||
|
||||
@@ -286,8 +286,14 @@ class ReceiptEventSource(EventSource[MultiWriterStreamToken, JsonMapping]):
        room_ids: Iterable[str],
        is_guest: bool,
        explicit_room_id: Optional[str] = None,
        to_key: Optional[MultiWriterStreamToken] = None,
    ) -> Tuple[List[JsonMapping], MultiWriterStreamToken]:
        to_key = self.get_current_key()
        """
        Find read receipts for given rooms (> `from_token` and <= `to_token`)
        """

        if to_key is None:
            to_key = self.get_current_key()

        if from_key == to_key:
            return [], to_key
@@ -1188,6 +1188,8 @@ class RoomCreationHandler:
            )
            events_to_send.append((power_event, power_context))
        else:
            # Please update the docs for `default_power_level_content_override` when
            # updating the `events` dict below
            power_level_content: JsonDict = {
                "users": {creator_id: 100},
                "users_default": 0,
@@ -1748,7 +1750,7 @@ class RoomEventSource(EventSource[RoomStreamToken, EventBase]):
                from_key=from_key,
                to_key=to_key,
                limit=limit or 10,
                order="ASC",
                direction=Direction.FORWARDS,
            )

            events = list(room_events)
File diff suppressed because it is too large (Load Diff)
2326  synapse/handlers/sliding_sync/__init__.py  (new file)
File diff suppressed because it is too large (Load Diff)
660  synapse/handlers/sliding_sync/extensions.py  (new file)
@@ -0,0 +1,660 @@
|
||||
#
|
||||
# This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
#
|
||||
# Copyright (C) 2023 New Vector, Ltd
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as
|
||||
# published by the Free Software Foundation, either version 3 of the
|
||||
# License, or (at your option) any later version.
|
||||
#
|
||||
# See the GNU Affero General Public License for more details:
|
||||
# <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
#
|
||||
|
||||
import logging
|
||||
from typing import TYPE_CHECKING, Dict, List, Mapping, Optional, Sequence, Set
|
||||
|
||||
from typing_extensions import assert_never
|
||||
|
||||
from synapse.api.constants import AccountDataTypes
|
||||
from synapse.handlers.receipts import ReceiptEventSource
|
||||
from synapse.handlers.sliding_sync.types import (
|
||||
HaveSentRoomFlag,
|
||||
MutablePerConnectionState,
|
||||
PerConnectionState,
|
||||
)
|
||||
from synapse.logging.opentracing import trace
|
||||
from synapse.types import (
|
||||
DeviceListUpdates,
|
||||
JsonMapping,
|
||||
MultiWriterStreamToken,
|
||||
SlidingSyncStreamToken,
|
||||
StreamToken,
|
||||
)
|
||||
from synapse.types.handlers import OperationType, SlidingSyncConfig, SlidingSyncResult
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class SlidingSyncExtensionHandler:
|
||||
"""Handles the extensions to sliding sync."""
|
||||
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
self.store = hs.get_datastores().main
|
||||
self.event_sources = hs.get_event_sources()
|
||||
self.device_handler = hs.get_device_handler()
|
||||
self.push_rules_handler = hs.get_push_rules_handler()
|
||||
|
||||
@trace
|
||||
async def get_extensions_response(
|
||||
self,
|
||||
sync_config: SlidingSyncConfig,
|
||||
previous_connection_state: "PerConnectionState",
|
||||
new_connection_state: "MutablePerConnectionState",
|
||||
actual_lists: Dict[str, SlidingSyncResult.SlidingWindowList],
|
||||
actual_room_ids: Set[str],
|
||||
actual_room_response_map: Dict[str, SlidingSyncResult.RoomResult],
|
||||
to_token: StreamToken,
|
||||
from_token: Optional[SlidingSyncStreamToken],
|
||||
) -> SlidingSyncResult.Extensions:
|
||||
"""Handle extension requests.
|
||||
|
||||
Args:
|
||||
sync_config: Sync configuration
|
||||
previous_connection_state: Snapshot of the per-connection state from the previous request
new_connection_state: A mutable copy of the per-connection
state, used to record updates to the state during this request.
|
||||
actual_lists: Sliding window API. A map of list key to list results in the
|
||||
Sliding Sync response.
|
||||
actual_room_ids: The actual room IDs in the Sliding Sync response.
actual_room_response_map: A map of room ID to room results in the
|
||||
Sliding Sync response.
|
||||
to_token: The point in the stream to sync up to.
|
||||
from_token: The point in the stream to sync from.
|
||||
"""
|
||||
|
||||
if sync_config.extensions is None:
|
||||
return SlidingSyncResult.Extensions()
|
||||
|
||||
to_device_response = None
|
||||
if sync_config.extensions.to_device is not None:
|
||||
to_device_response = await self.get_to_device_extension_response(
|
||||
sync_config=sync_config,
|
||||
to_device_request=sync_config.extensions.to_device,
|
||||
to_token=to_token,
|
||||
)
|
||||
|
||||
e2ee_response = None
|
||||
if sync_config.extensions.e2ee is not None:
|
||||
e2ee_response = await self.get_e2ee_extension_response(
|
||||
sync_config=sync_config,
|
||||
e2ee_request=sync_config.extensions.e2ee,
|
||||
to_token=to_token,
|
||||
from_token=from_token,
|
||||
)
|
||||
|
||||
account_data_response = None
|
||||
if sync_config.extensions.account_data is not None:
|
||||
account_data_response = await self.get_account_data_extension_response(
|
||||
sync_config=sync_config,
|
||||
actual_lists=actual_lists,
|
||||
actual_room_ids=actual_room_ids,
|
||||
account_data_request=sync_config.extensions.account_data,
|
||||
to_token=to_token,
|
||||
from_token=from_token,
|
||||
)
|
||||
|
||||
receipts_response = None
|
||||
if sync_config.extensions.receipts is not None:
|
||||
receipts_response = await self.get_receipts_extension_response(
|
||||
sync_config=sync_config,
|
||||
previous_connection_state=previous_connection_state,
|
||||
new_connection_state=new_connection_state,
|
||||
actual_lists=actual_lists,
|
||||
actual_room_ids=actual_room_ids,
|
||||
actual_room_response_map=actual_room_response_map,
|
||||
receipts_request=sync_config.extensions.receipts,
|
||||
to_token=to_token,
|
||||
from_token=from_token,
|
||||
)
|
||||
|
||||
typing_response = None
|
||||
if sync_config.extensions.typing is not None:
|
||||
typing_response = await self.get_typing_extension_response(
|
||||
sync_config=sync_config,
|
||||
actual_lists=actual_lists,
|
||||
actual_room_ids=actual_room_ids,
|
||||
actual_room_response_map=actual_room_response_map,
|
||||
typing_request=sync_config.extensions.typing,
|
||||
to_token=to_token,
|
||||
from_token=from_token,
|
||||
)
|
||||
|
||||
return SlidingSyncResult.Extensions(
|
||||
to_device=to_device_response,
|
||||
e2ee=e2ee_response,
|
||||
account_data=account_data_response,
|
||||
receipts=receipts_response,
|
||||
typing=typing_response,
|
||||
)
|
||||
|
||||
def find_relevant_room_ids_for_extension(
|
||||
self,
|
||||
requested_lists: Optional[List[str]],
|
||||
requested_room_ids: Optional[List[str]],
|
||||
actual_lists: Dict[str, SlidingSyncResult.SlidingWindowList],
|
||||
actual_room_ids: Set[str],
|
||||
) -> Set[str]:
|
||||
"""
|
||||
Handle the reserved `lists`/`rooms` keys for extensions. Extensions should only
|
||||
return results for rooms in the Sliding Sync response. This matches up the
|
||||
requested rooms/lists with the actual lists/rooms in the Sliding Sync response.
|
||||
|
||||
{"lists": []} // Do not process any lists.
|
||||
{"lists": ["rooms", "dms"]} // Process only a subset of lists.
|
||||
{"lists": ["*"]} // Process all lists defined in the Sliding Window API. (This is the default.)
|
||||
|
||||
{"rooms": []} // Do not process any specific rooms.
|
||||
{"rooms": ["!a:b", "!c:d"]} // Process only a subset of room subscriptions.
|
||||
{"rooms": ["*"]} // Process all room subscriptions defined in the Room Subscription API. (This is the default.)
|
||||
|
||||
Args:
|
||||
requested_lists: The `lists` from the extension request.
|
||||
requested_room_ids: The `rooms` from the extension request.
|
||||
actual_lists: The actual lists from the Sliding Sync response.
|
||||
actual_room_ids: The actual room subscriptions from the Sliding Sync request.
|
||||
"""
|
||||
|
||||
# We only want to include account data for rooms that are already in the sliding
|
||||
# sync response AND that were requested in the account data request.
|
||||
relevant_room_ids: Set[str] = set()
|
||||
|
||||
# See what rooms from the room subscriptions we should get account data for
|
||||
if requested_room_ids is not None:
|
||||
for room_id in requested_room_ids:
|
||||
# A wildcard means we process all rooms from the room subscriptions
|
||||
if room_id == "*":
|
||||
relevant_room_ids.update(actual_room_ids)
|
||||
break
|
||||
|
||||
if room_id in actual_room_ids:
|
||||
relevant_room_ids.add(room_id)
|
||||
|
||||
# See what rooms from the sliding window lists we should get account data for
|
||||
if requested_lists is not None:
|
||||
for list_key in requested_lists:
|
||||
# Just some typing because we share the variable name in multiple places
|
||||
actual_list: Optional[SlidingSyncResult.SlidingWindowList] = None
|
||||
|
||||
# A wildcard means we process rooms from all lists
|
||||
if list_key == "*":
|
||||
for actual_list in actual_lists.values():
|
||||
# We only expect a single SYNC operation for any list
|
||||
assert len(actual_list.ops) == 1
|
||||
sync_op = actual_list.ops[0]
|
||||
assert sync_op.op == OperationType.SYNC
|
||||
|
||||
relevant_room_ids.update(sync_op.room_ids)
|
||||
|
||||
break
|
||||
|
||||
actual_list = actual_lists.get(list_key)
|
||||
if actual_list is not None:
|
||||
# We only expect a single SYNC operation for any list
|
||||
assert len(actual_list.ops) == 1
|
||||
sync_op = actual_list.ops[0]
|
||||
assert sync_op.op == OperationType.SYNC
|
||||
|
||||
relevant_room_ids.update(sync_op.room_ids)
|
||||
|
||||
return relevant_room_ids
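The docstring above describes how the reserved `lists`/`rooms` keys are matched against what is actually in the Sliding Sync response, with "*" acting as a wildcard. A condensed, standalone version of that matching rule, using plain sets of room IDs rather than sliding-window ops:

```python
from typing import Dict, List, Optional, Set


def relevant_rooms(
    requested_lists: Optional[List[str]],
    requested_room_ids: Optional[List[str]],
    actual_lists: Dict[str, Set[str]],  # list key -> room IDs in that list
    actual_room_ids: Set[str],          # room subscriptions in the response
) -> Set[str]:
    relevant: Set[str] = set()

    # Room subscriptions: "*" selects every subscribed room, otherwise intersect.
    if requested_room_ids is not None:
        for room_id in requested_room_ids:
            if room_id == "*":
                relevant.update(actual_room_ids)
                break
            if room_id in actual_room_ids:
                relevant.add(room_id)

    # Sliding window lists: "*" selects rooms from every list in the response.
    if requested_lists is not None:
        for list_key in requested_lists:
            if list_key == "*":
                for rooms in actual_lists.values():
                    relevant.update(rooms)
                break
            relevant.update(actual_lists.get(list_key, set()))

    return relevant


assert relevant_rooms(
    ["dms"], ["*"], {"dms": {"!d:x"}, "all": {"!a:x"}}, {"!s:x"}
) == {"!d:x", "!s:x"}
```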
|
||||
|
||||
@trace
|
||||
async def get_to_device_extension_response(
|
||||
self,
|
||||
sync_config: SlidingSyncConfig,
|
||||
to_device_request: SlidingSyncConfig.Extensions.ToDeviceExtension,
|
||||
to_token: StreamToken,
|
||||
) -> Optional[SlidingSyncResult.Extensions.ToDeviceExtension]:
|
||||
"""Handle to-device extension (MSC3885)
|
||||
|
||||
Args:
|
||||
sync_config: Sync configuration
|
||||
to_device_request: The to-device extension from the request
|
||||
to_token: The point in the stream to sync up to.
|
||||
"""
|
||||
user_id = sync_config.user.to_string()
|
||||
device_id = sync_config.requester.device_id
|
||||
|
||||
# Skip if the extension is not enabled
|
||||
if not to_device_request.enabled:
|
||||
return None
|
||||
|
||||
# Check that this request has a valid device ID (not all requests have
|
||||
# to belong to a device, and so device_id is None)
|
||||
if device_id is None:
|
||||
return SlidingSyncResult.Extensions.ToDeviceExtension(
|
||||
next_batch=f"{to_token.to_device_key}",
|
||||
events=[],
|
||||
)
|
||||
|
||||
since_stream_id = 0
|
||||
if to_device_request.since is not None:
|
||||
# We've already validated this is an int.
|
||||
since_stream_id = int(to_device_request.since)
|
||||
|
||||
if to_token.to_device_key < since_stream_id:
|
||||
# The since token is ahead of our current token, so we return an
|
||||
# empty response.
|
||||
logger.warning(
|
||||
"Got to-device.since from the future. since token: %r is ahead of our current to_device stream position: %r",
|
||||
since_stream_id,
|
||||
to_token.to_device_key,
|
||||
)
|
||||
return SlidingSyncResult.Extensions.ToDeviceExtension(
|
||||
next_batch=to_device_request.since,
|
||||
events=[],
|
||||
)
|
||||
|
||||
# Delete everything before the given since token, as we know the
|
||||
# device must have received them.
|
||||
deleted = await self.store.delete_messages_for_device(
|
||||
user_id=user_id,
|
||||
device_id=device_id,
|
||||
up_to_stream_id=since_stream_id,
|
||||
)
|
||||
|
||||
logger.debug(
|
||||
"Deleted %d to-device messages up to %d for %s",
|
||||
deleted,
|
||||
since_stream_id,
|
||||
user_id,
|
||||
)
|
||||
|
||||
messages, stream_id = await self.store.get_messages_for_device(
|
||||
user_id=user_id,
|
||||
device_id=device_id,
|
||||
from_stream_id=since_stream_id,
|
||||
to_stream_id=to_token.to_device_key,
|
||||
limit=min(to_device_request.limit, 100), # Limit to at most 100 events
|
||||
)
|
||||
|
||||
return SlidingSyncResult.Extensions.ToDeviceExtension(
|
||||
next_batch=f"{stream_id}",
|
||||
events=messages,
|
||||
)
|
||||
|
||||
@trace
|
||||
async def get_e2ee_extension_response(
|
||||
self,
|
||||
sync_config: SlidingSyncConfig,
|
||||
e2ee_request: SlidingSyncConfig.Extensions.E2eeExtension,
|
||||
to_token: StreamToken,
|
||||
from_token: Optional[SlidingSyncStreamToken],
|
||||
) -> Optional[SlidingSyncResult.Extensions.E2eeExtension]:
|
||||
"""Handle E2EE device extension (MSC3884)
|
||||
|
||||
Args:
|
||||
sync_config: Sync configuration
|
||||
e2ee_request: The e2ee extension from the request
|
||||
to_token: The point in the stream to sync up to.
|
||||
from_token: The point in the stream to sync from.
|
||||
"""
|
||||
user_id = sync_config.user.to_string()
|
||||
device_id = sync_config.requester.device_id
|
||||
|
||||
# Skip if the extension is not enabled
|
||||
if not e2ee_request.enabled:
|
||||
return None
|
||||
|
||||
device_list_updates: Optional[DeviceListUpdates] = None
|
||||
if from_token is not None:
|
||||
# TODO: This should take into account the `from_token` and `to_token`
|
||||
device_list_updates = await self.device_handler.get_user_ids_changed(
|
||||
user_id=user_id,
|
||||
from_token=from_token.stream_token,
|
||||
)
|
||||
|
||||
device_one_time_keys_count: Mapping[str, int] = {}
|
||||
device_unused_fallback_key_types: Sequence[str] = []
|
||||
if device_id:
|
||||
# TODO: We should have a way to let clients differentiate between the states of:
|
||||
# * no change in OTK count since the provided since token
|
||||
# * the server has zero OTKs left for this device
|
||||
# Spec issue: https://github.com/matrix-org/matrix-doc/issues/3298
|
||||
device_one_time_keys_count = await self.store.count_e2e_one_time_keys(
|
||||
user_id, device_id
|
||||
)
|
||||
device_unused_fallback_key_types = (
|
||||
await self.store.get_e2e_unused_fallback_key_types(user_id, device_id)
|
||||
)
|
||||
|
||||
return SlidingSyncResult.Extensions.E2eeExtension(
|
||||
device_list_updates=device_list_updates,
|
||||
device_one_time_keys_count=device_one_time_keys_count,
|
||||
device_unused_fallback_key_types=device_unused_fallback_key_types,
|
||||
)
|
||||
|
||||
@trace
|
||||
async def get_account_data_extension_response(
|
||||
self,
|
||||
sync_config: SlidingSyncConfig,
|
||||
actual_lists: Dict[str, SlidingSyncResult.SlidingWindowList],
|
||||
actual_room_ids: Set[str],
|
||||
account_data_request: SlidingSyncConfig.Extensions.AccountDataExtension,
|
||||
to_token: StreamToken,
|
||||
from_token: Optional[SlidingSyncStreamToken],
|
||||
) -> Optional[SlidingSyncResult.Extensions.AccountDataExtension]:
|
||||
"""Handle Account Data extension (MSC3959)
|
||||
|
||||
Args:
|
||||
sync_config: Sync configuration
|
||||
actual_lists: Sliding window API. A map of list key to list results in the
|
||||
Sliding Sync response.
|
||||
actual_room_ids: The actual room IDs in the Sliding Sync response.
|
||||
account_data_request: The account_data extension from the request
|
||||
to_token: The point in the stream to sync up to.
|
||||
from_token: The point in the stream to sync from.
|
||||
"""
|
||||
user_id = sync_config.user.to_string()
|
||||
|
||||
# Skip if the extension is not enabled
|
||||
if not account_data_request.enabled:
|
||||
return None
|
||||
|
||||
global_account_data_map: Mapping[str, JsonMapping] = {}
|
||||
if from_token is not None:
|
||||
# TODO: This should take into account the `from_token` and `to_token`
|
||||
global_account_data_map = (
|
||||
await self.store.get_updated_global_account_data_for_user(
|
||||
user_id, from_token.stream_token.account_data_key
|
||||
)
|
||||
)
|
||||
|
||||
have_push_rules_changed = await self.store.have_push_rules_changed_for_user(
|
||||
user_id, from_token.stream_token.push_rules_key
|
||||
)
|
||||
if have_push_rules_changed:
|
||||
global_account_data_map = dict(global_account_data_map)
|
||||
# TODO: This should take into account the `from_token` and `to_token`
|
||||
global_account_data_map[AccountDataTypes.PUSH_RULES] = (
|
||||
await self.push_rules_handler.push_rules_for_user(sync_config.user)
|
||||
)
|
||||
else:
|
||||
# TODO: This should take into account the `to_token`
|
||||
all_global_account_data = await self.store.get_global_account_data_for_user(
|
||||
user_id
|
||||
)
|
||||
|
||||
global_account_data_map = dict(all_global_account_data)
|
||||
# TODO: This should take into account the `to_token`
|
||||
global_account_data_map[AccountDataTypes.PUSH_RULES] = (
|
||||
await self.push_rules_handler.push_rules_for_user(sync_config.user)
|
||||
)
|
||||
|
||||
# Fetch room account data
|
||||
account_data_by_room_map: Mapping[str, Mapping[str, JsonMapping]] = {}
|
||||
relevant_room_ids = self.find_relevant_room_ids_for_extension(
|
||||
requested_lists=account_data_request.lists,
|
||||
requested_room_ids=account_data_request.rooms,
|
||||
actual_lists=actual_lists,
|
||||
actual_room_ids=actual_room_ids,
|
||||
)
|
||||
if len(relevant_room_ids) > 0:
|
||||
if from_token is not None:
|
||||
# TODO: This should take into account the `from_token` and `to_token`
|
||||
account_data_by_room_map = (
|
||||
await self.store.get_updated_room_account_data_for_user(
|
||||
user_id, from_token.stream_token.account_data_key
|
||||
)
|
||||
)
|
||||
else:
|
||||
# TODO: This should take into account the `to_token`
|
||||
account_data_by_room_map = (
|
||||
await self.store.get_room_account_data_for_user(user_id)
|
||||
)
|
||||
|
||||
# Filter down to the relevant rooms
|
||||
account_data_by_room_map = {
|
||||
room_id: account_data_map
|
||||
for room_id, account_data_map in account_data_by_room_map.items()
|
||||
if room_id in relevant_room_ids
|
||||
}
|
||||
|
||||
return SlidingSyncResult.Extensions.AccountDataExtension(
|
||||
global_account_data_map=global_account_data_map,
|
||||
account_data_by_room_map=account_data_by_room_map,
|
||||
)
|
||||
|
||||
@trace
|
||||
async def get_receipts_extension_response(
|
||||
self,
|
||||
sync_config: SlidingSyncConfig,
|
||||
previous_connection_state: "PerConnectionState",
|
||||
new_connection_state: "MutablePerConnectionState",
|
||||
actual_lists: Dict[str, SlidingSyncResult.SlidingWindowList],
|
||||
actual_room_ids: Set[str],
|
||||
actual_room_response_map: Dict[str, SlidingSyncResult.RoomResult],
|
||||
receipts_request: SlidingSyncConfig.Extensions.ReceiptsExtension,
|
||||
to_token: StreamToken,
|
||||
from_token: Optional[SlidingSyncStreamToken],
|
||||
) -> Optional[SlidingSyncResult.Extensions.ReceiptsExtension]:
|
||||
"""Handle Receipts extension (MSC3960)
|
||||
|
||||
Args:
|
||||
sync_config: Sync configuration
|
||||
previous_connection_state: The current per-connection state
|
||||
new_connection_state: A mutable copy of the per-connection
|
||||
state, used to record updates to the state.
|
||||
actual_lists: Sliding window API. A map of list key to list results in the
|
||||
Sliding Sync response.
|
||||
actual_room_ids: The actual room IDs in the Sliding Sync response.
actual_room_response_map: A map of room ID to room results in the
Sliding Sync response.
receipts_request: The receipts extension from the request
|
||||
to_token: The point in the stream to sync up to.
|
||||
from_token: The point in the stream to sync from.
|
||||
"""
|
||||
# Skip if the extension is not enabled
|
||||
if not receipts_request.enabled:
|
||||
return None
|
||||
|
||||
relevant_room_ids = self.find_relevant_room_ids_for_extension(
|
||||
requested_lists=receipts_request.lists,
|
||||
requested_room_ids=receipts_request.rooms,
|
||||
actual_lists=actual_lists,
|
||||
actual_room_ids=actual_room_ids,
|
||||
)
|
||||
|
||||
room_id_to_receipt_map: Dict[str, JsonMapping] = {}
|
||||
if len(relevant_room_ids) > 0:
|
||||
# We need to handle the different cases depending on if we have sent
|
||||
# down receipts previously or not, so we split the relevant rooms
|
||||
# up into different collections based on status.
|
||||
live_rooms = set()
|
||||
previously_rooms: Dict[str, MultiWriterStreamToken] = {}
|
||||
initial_rooms = set()
|
||||
|
||||
for room_id in relevant_room_ids:
|
||||
if not from_token:
|
||||
initial_rooms.add(room_id)
|
||||
continue
|
||||
|
||||
# If we're sending down the room from scratch again for some reason, we
|
||||
# should always resend the receipts as well (regardless of if
|
||||
# we've sent them down before). This is to mimic the behaviour
|
||||
# of what happens on initial sync, where you get a chunk of
|
||||
# timeline with all of the corresponding receipts for the events in the timeline.
|
||||
room_result = actual_room_response_map.get(room_id)
|
||||
if room_result is not None and room_result.initial:
|
||||
initial_rooms.add(room_id)
|
||||
continue
|
||||
|
||||
room_status = previous_connection_state.receipts.have_sent_room(room_id)
|
||||
if room_status.status == HaveSentRoomFlag.LIVE:
|
||||
live_rooms.add(room_id)
|
||||
elif room_status.status == HaveSentRoomFlag.PREVIOUSLY:
|
||||
assert room_status.last_token is not None
|
||||
previously_rooms[room_id] = room_status.last_token
|
||||
elif room_status.status == HaveSentRoomFlag.NEVER:
|
||||
initial_rooms.add(room_id)
|
||||
else:
|
||||
assert_never(room_status.status)
|
||||
|
||||
# The set of receipts that we fetched. Private receipts need to be
|
||||
# filtered out before returning.
|
||||
fetched_receipts = []
|
||||
|
||||
# For live rooms we just fetch all receipts in those rooms since the
|
||||
# `since` token.
|
||||
if live_rooms:
|
||||
assert from_token is not None
|
||||
receipts = await self.store.get_linearized_receipts_for_rooms(
|
||||
room_ids=live_rooms,
|
||||
from_key=from_token.stream_token.receipt_key,
|
||||
to_key=to_token.receipt_key,
|
||||
)
|
||||
fetched_receipts.extend(receipts)
|
||||
|
||||
# For rooms we've previously sent down, but aren't up to date, we
|
||||
# need to use the from token from the room status.
|
||||
if previously_rooms:
|
||||
for room_id, receipt_token in previously_rooms.items():
|
||||
# TODO: Limit the number of receipts we're about to send down
|
||||
# for the room; if it's too many we should TODO
|
||||
previously_receipts = (
|
||||
await self.store.get_linearized_receipts_for_room(
|
||||
room_id=room_id,
|
||||
from_key=receipt_token,
|
||||
to_key=to_token.receipt_key,
|
||||
)
|
||||
)
|
||||
fetched_receipts.extend(previously_receipts)
|
||||
|
||||
# For rooms we haven't previously sent down, we could send all receipts
|
||||
# from that room but we only want to include receipts for events
|
||||
# in the timeline to avoid bloating and blowing up the sync response
|
||||
# as the number of users in the room increases. (this behavior is part of the spec)
|
||||
initial_rooms_and_event_ids = [
|
||||
(room_id, event.event_id)
|
||||
for room_id in initial_rooms
|
||||
if room_id in actual_room_response_map
|
||||
for event in actual_room_response_map[room_id].timeline_events
|
||||
]
|
||||
if initial_rooms_and_event_ids:
|
||||
initial_receipts = await self.store.get_linearized_receipts_for_events(
|
||||
room_and_event_ids=initial_rooms_and_event_ids,
|
||||
)
|
||||
fetched_receipts.extend(initial_receipts)
|
||||
|
||||
fetched_receipts = ReceiptEventSource.filter_out_private_receipts(
|
||||
fetched_receipts, sync_config.user.to_string()
|
||||
)
|
||||
|
||||
for receipt in fetched_receipts:
|
||||
# These fields should exist for every receipt
|
||||
room_id = receipt["room_id"]
|
||||
type = receipt["type"]
|
||||
content = receipt["content"]
|
||||
|
||||
room_id_to_receipt_map[room_id] = {"type": type, "content": content}
|
||||
|
||||
# Now we update the per-connection state to track which receipts we have
|
||||
# and haven't sent down.
|
||||
new_connection_state.receipts.record_sent_rooms(relevant_room_ids)
|
||||
|
||||
if from_token:
|
||||
# Now find the set of rooms that may have receipts that we're not sending
|
||||
# down. We only need to check rooms that we have previously returned
|
||||
# receipts for (in `previous_connection_state`) because we only care about
|
||||
# updating `LIVE` rooms to `PREVIOUSLY`. The `PREVIOUSLY` rooms will just
|
||||
# stay pointing at their previous position so we don't need to waste time
|
||||
# checking those and since we default to `NEVER`, rooms that were `NEVER`
|
||||
# sent before don't need to be recorded as we'll handle them correctly when
|
||||
# they come into range for the first time.
|
||||
rooms_no_receipts = [
|
||||
room_id
|
||||
for room_id, room_status in previous_connection_state.receipts._statuses.items()
|
||||
if room_status.status == HaveSentRoomFlag.LIVE
|
||||
and room_id not in relevant_room_ids
|
||||
]
|
||||
changed_rooms = await self.store.get_rooms_with_receipts_between(
|
||||
rooms_no_receipts,
|
||||
from_key=from_token.stream_token.receipt_key,
|
||||
to_key=to_token.receipt_key,
|
||||
)
|
||||
new_connection_state.receipts.record_unsent_rooms(
|
||||
changed_rooms, from_token.stream_token.receipt_key
|
||||
)
|
||||
|
||||
return SlidingSyncResult.Extensions.ReceiptsExtension(
|
||||
room_id_to_receipt_map=room_id_to_receipt_map,
|
||||
)
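The receipts logic above splits the relevant rooms into three buckets, depending on whether receipts for the room are already up to date on this connection (LIVE), were last sent at some earlier position (PREVIOUSLY), or have never been sent or the room is being resent from scratch (NEVER/initial). A simplified sketch of that bucketing, with strings standing in for `HaveSentRoomFlag` and integers for stream tokens:

```python
from typing import Dict, Optional, Set, Tuple


def bucket_rooms(
    relevant_room_ids: Set[str],
    have_from_token: bool,
    initial_in_response: Set[str],                    # rooms sent from scratch this time
    statuses: Dict[str, Tuple[str, Optional[int]]],   # room -> (flag, last_token)
) -> Tuple[Set[str], Dict[str, int], Set[str]]:
    live: Set[str] = set()
    previously: Dict[str, int] = {}
    initial: Set[str] = set()

    for room_id in relevant_room_ids:
        # No prior sync position, or the room is being resent in full:
        # treat it like an initial room and send timeline-scoped receipts.
        if not have_from_token or room_id in initial_in_response:
            initial.add(room_id)
            continue

        flag, last_token = statuses.get(room_id, ("never", None))
        if flag == "live":
            live.add(room_id)                  # fetch receipts since the sync token
        elif flag == "previously":
            assert last_token is not None
            previously[room_id] = last_token   # fetch since the recorded position
        else:
            initial.add(room_id)

    return live, previously, initial


live, prev, init = bucket_rooms(
    {"!a:x", "!b:x", "!c:x"},
    have_from_token=True,
    initial_in_response={"!c:x"},
    statuses={"!a:x": ("live", None), "!b:x": ("previously", 7)},
)
assert live == {"!a:x"} and prev == {"!b:x": 7} and init == {"!c:x"}
```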
|
||||
|
||||
async def get_typing_extension_response(
|
||||
self,
|
||||
sync_config: SlidingSyncConfig,
|
||||
actual_lists: Dict[str, SlidingSyncResult.SlidingWindowList],
|
||||
actual_room_ids: Set[str],
|
||||
actual_room_response_map: Dict[str, SlidingSyncResult.RoomResult],
|
||||
typing_request: SlidingSyncConfig.Extensions.TypingExtension,
|
||||
to_token: StreamToken,
|
||||
from_token: Optional[SlidingSyncStreamToken],
|
||||
) -> Optional[SlidingSyncResult.Extensions.TypingExtension]:
|
||||
"""Handle Typing Notification extension (MSC3961)
|
||||
|
||||
Args:
|
||||
sync_config: Sync configuration
|
||||
actual_lists: Sliding window API. A map of list key to list results in the
|
||||
Sliding Sync response.
|
||||
actual_room_ids: The actual room IDs in the Sliding Sync response.
actual_room_response_map: A map of room ID to room results in the
Sliding Sync response.
typing_request: The typing extension from the request
|
||||
to_token: The point in the stream to sync up to.
|
||||
from_token: The point in the stream to sync from.
|
||||
"""
|
||||
# Skip if the extension is not enabled
|
||||
if not typing_request.enabled:
|
||||
return None
|
||||
|
||||
relevant_room_ids = self.find_relevant_room_ids_for_extension(
|
||||
requested_lists=typing_request.lists,
|
||||
requested_room_ids=typing_request.rooms,
|
||||
actual_lists=actual_lists,
|
||||
actual_room_ids=actual_room_ids,
|
||||
)
|
||||
|
||||
room_id_to_typing_map: Dict[str, JsonMapping] = {}
|
||||
if len(relevant_room_ids) > 0:
|
||||
# Note: We don't need to take connection tracking into account for typing
|
||||
# notifications because they'll get anything that is still relevant and hasn't timed
|
||||
# out when the room comes into range. We consider the gap where the room
|
||||
# fell out of range, as long enough for any typing notifications to have
|
||||
# timed out (it's not worth the 30 seconds of data we may have missed).
|
||||
typing_source = self.event_sources.sources.typing
|
||||
typing_notifications, _ = await typing_source.get_new_events(
|
||||
user=sync_config.user,
|
||||
from_key=(from_token.stream_token.typing_key if from_token else 0),
|
||||
to_key=to_token.typing_key,
|
||||
# This is a dummy value and isn't used in the function
|
||||
limit=0,
|
||||
room_ids=relevant_room_ids,
|
||||
is_guest=False,
|
||||
)
|
||||
|
||||
for typing_notification in typing_notifications:
|
||||
# These fields should exist for every typing notification
|
||||
room_id = typing_notification["room_id"]
|
||||
type = typing_notification["type"]
|
||||
content = typing_notification["content"]
|
||||
|
||||
room_id_to_typing_map[room_id] = {"type": type, "content": content}
|
||||
|
||||
return SlidingSyncResult.Extensions.TypingExtension(
|
||||
room_id_to_typing_map=room_id_to_typing_map,
|
||||
)
|
||||
200  synapse/handlers/sliding_sync/store.py  (new file)
@@ -0,0 +1,200 @@
|
||||
#
|
||||
# This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
#
|
||||
# Copyright (C) 2023 New Vector, Ltd
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as
|
||||
# published by the Free Software Foundation, either version 3 of the
|
||||
# License, or (at your option) any later version.
|
||||
#
|
||||
# See the GNU Affero General Public License for more details:
|
||||
# <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
#
|
||||
|
||||
import logging
|
||||
from typing import TYPE_CHECKING, Dict, Optional, Tuple
|
||||
|
||||
import attr
|
||||
|
||||
from synapse.api.errors import SlidingSyncUnknownPosition
|
||||
from synapse.handlers.sliding_sync.types import (
|
||||
MutablePerConnectionState,
|
||||
PerConnectionState,
|
||||
)
|
||||
from synapse.logging.opentracing import trace
|
||||
from synapse.types import SlidingSyncStreamToken
|
||||
from synapse.types.handlers import SlidingSyncConfig
|
||||
|
||||
if TYPE_CHECKING:
|
||||
pass
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@attr.s(auto_attribs=True)
|
||||
class SlidingSyncConnectionStore:
|
||||
"""In-memory store of per-connection state, including what rooms we have
|
||||
previously sent down a sliding sync connection.
|
||||
|
||||
Note: This is NOT safe to run in a worker setup because connection positions will
|
||||
point to different sets of rooms on different workers. e.g. for the same connection,
|
||||
a connection position of 5 might have totally different states on worker A and
|
||||
worker B.
|
||||
|
||||
One complication that we need to deal with here is needing to handle requests being
|
||||
resent, i.e. if we sent down a room in a response that the client received, we must
|
||||
consider the room *not* sent when we get the request again.
|
||||
|
||||
This is handled by using an integer "token", which is returned to the client
|
||||
as part of the sync token. For each connection we store a mapping from
|
||||
tokens to the room states, and create a new entry when we send down new
|
||||
rooms.
|
||||
|
||||
Note that for any given sliding sync connection we will only store a maximum
|
||||
of two different tokens: the previous token from the request and a new token
|
||||
sent in the response. When we receive a request with a given token, we then
|
||||
clear out all other entries with a different token.
|
||||
|
||||
Attributes:
|
||||
_connections: Mapping from `(user_id, conn_id)` to mapping of `token`
|
||||
to mapping of room ID to `HaveSentRoom`.
|
||||
"""
|
||||
|
||||
# `(user_id, conn_id)` -> `connection_position` -> `PerConnectionState`
|
||||
_connections: Dict[Tuple[str, str], Dict[int, PerConnectionState]] = attr.Factory(
|
||||
dict
|
||||
)
|
||||
|
||||
async def is_valid_token(
|
||||
self, sync_config: SlidingSyncConfig, connection_token: int
|
||||
) -> bool:
|
||||
"""Return whether the connection token is valid/recognized"""
|
||||
if connection_token == 0:
|
||||
return True
|
||||
|
||||
conn_key = self._get_connection_key(sync_config)
|
||||
return connection_token in self._connections.get(conn_key, {})
|
||||
|
||||
async def get_per_connection_state(
|
||||
self,
|
||||
sync_config: SlidingSyncConfig,
|
||||
from_token: Optional[SlidingSyncStreamToken],
|
||||
) -> PerConnectionState:
|
||||
"""Fetch the per-connection state for the token.
|
||||
|
||||
Raises:
|
||||
SlidingSyncUnknownPosition if the connection_token is unknown
|
||||
"""
|
||||
if from_token is None:
|
||||
return PerConnectionState()
|
||||
|
||||
connection_position = from_token.connection_position
|
||||
if connection_position == 0:
|
||||
# Initial sync (request without a `from_token`) starts at `0` so
|
||||
# there is no existing per-connection state
|
||||
return PerConnectionState()
|
||||
|
||||
conn_key = self._get_connection_key(sync_config)
|
||||
sync_statuses = self._connections.get(conn_key, {})
|
||||
connection_state = sync_statuses.get(connection_position)
|
||||
|
||||
if connection_state is None:
|
||||
raise SlidingSyncUnknownPosition()
|
||||
|
||||
return connection_state
|
||||
|
||||
@trace
|
||||
async def record_new_state(
|
||||
self,
|
||||
sync_config: SlidingSyncConfig,
|
||||
from_token: Optional[SlidingSyncStreamToken],
|
||||
new_connection_state: MutablePerConnectionState,
|
||||
) -> int:
|
||||
"""Record updated per-connection state, returning the connection
|
||||
position associated with the new state.
|
||||
If there are no changes to the state this may return the same token as
|
||||
the existing per-connection state.
|
||||
"""
|
||||
prev_connection_token = 0
|
||||
if from_token is not None:
|
||||
prev_connection_token = from_token.connection_position
|
||||
|
||||
if not new_connection_state.has_updates():
|
||||
return prev_connection_token
|
||||
|
||||
conn_key = self._get_connection_key(sync_config)
|
||||
sync_statuses = self._connections.setdefault(conn_key, {})
|
||||
|
||||
# Generate a new token, removing any existing entries in that token
|
||||
# (which can happen if requests get resent).
|
||||
new_store_token = prev_connection_token + 1
|
||||
sync_statuses.pop(new_store_token, None)
|
||||
|
||||
# We copy the `MutablePerConnectionState` so that the inner `ChainMap`s
|
||||
# don't grow forever.
|
||||
sync_statuses[new_store_token] = new_connection_state.copy()
|
||||
|
||||
return new_store_token
|
||||
|
||||
@trace
|
||||
async def mark_token_seen(
|
||||
self,
|
||||
sync_config: SlidingSyncConfig,
|
||||
from_token: Optional[SlidingSyncStreamToken],
|
||||
) -> None:
|
||||
"""We have received a request with the given token, so we can clear out
|
||||
any other tokens associated with the connection.
|
||||
|
||||
If there is no from token then we have started afresh, and so we delete
|
||||
all tokens associated with the device.
|
||||
"""
|
||||
# Clear out any tokens for the connection that doesn't match the one
|
||||
# from the request.
|
||||
|
||||
conn_key = self._get_connection_key(sync_config)
|
||||
sync_statuses = self._connections.pop(conn_key, {})
|
||||
if from_token is None:
|
||||
return
|
||||
|
||||
sync_statuses = {
|
||||
connection_token: room_statuses
|
||||
for connection_token, room_statuses in sync_statuses.items()
|
||||
if connection_token == from_token.connection_position
|
||||
}
|
||||
if sync_statuses:
|
||||
self._connections[conn_key] = sync_statuses
|
||||
|
||||
@staticmethod
|
||||
def _get_connection_key(sync_config: SlidingSyncConfig) -> Tuple[str, str]:
|
||||
"""Return a unique identifier for this connection.
|
||||
|
||||
The first part is simply the user ID.
|
||||
|
||||
The second part is generally a combination of device ID and conn_id.
|
||||
However, both of these are optional (e.g. puppet access tokens don't
|
||||
have device IDs), so this handles those edge cases.
|
||||
|
||||
We use this over the raw `conn_id` to avoid clashes between different
|
||||
clients that use the same `conn_id`. Imagine a user uses a web client
|
||||
that uses `conn_id: main_sync_loop` and an Android client that also has
|
||||
a `conn_id: main_sync_loop`.
|
||||
"""
|
||||
|
||||
user_id = sync_config.user.to_string()
|
||||
|
||||
# Only one sliding sync connection is allowed per given conn_id (empty
|
||||
# or not).
|
||||
conn_id = sync_config.conn_id or ""
|
||||
|
||||
if sync_config.requester.device_id:
|
||||
return (user_id, f"D/{sync_config.requester.device_id}/{conn_id}")
|
||||
|
||||
if sync_config.requester.access_token_id:
|
||||
# If we don't have a device, then the access token ID should be a
|
||||
# stable ID.
|
||||
return (user_id, f"A/{sync_config.requester.access_token_id}/{conn_id}")
|
||||
|
||||
# If we have neither then it's likely an AS or some weird token. Either
|
||||
# way we can just fail here.
|
||||
raise Exception("Cannot use sliding sync with access token type")
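# Hedged illustration (not part of the upstream diff): how the key format
# above keeps two clients that reuse the same `conn_id` on separate
# connections. The user ID, device IDs and conn_id below are invented.
def _example_connection_key(user_id: str, device_id: str, conn_id: str) -> tuple:
    # Mirrors the "D/<device_id>/<conn_id>" branch taken when the requester
    # has a device ID.
    return (user_id, f"D/{device_id}/{conn_id}")

web_key = _example_connection_key("@alice:example.com", "WEBDEVICE", "main_sync_loop")
android_key = _example_connection_key("@alice:example.com", "ANDROIDDEV", "main_sync_loop")
# Same conn_id, different devices: the keys (and hence the stored
# per-connection state) never clash.
assert web_key != android_key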
|
||||
506
synapse/handlers/sliding_sync/types.py
Normal file
@@ -0,0 +1,506 @@
|
||||
#
|
||||
# This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
#
|
||||
# Copyright (C) 2024 New Vector, Ltd
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as
|
||||
# published by the Free Software Foundation, either version 3 of the
|
||||
# License, or (at your option) any later version.
|
||||
#
|
||||
# See the GNU Affero General Public License for more details:
|
||||
# <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
#
|
||||
|
||||
import logging
|
||||
import typing
|
||||
from collections import ChainMap
|
||||
from enum import Enum
|
||||
from typing import (
|
||||
TYPE_CHECKING,
|
||||
Callable,
|
||||
Dict,
|
||||
Final,
|
||||
Generic,
|
||||
Mapping,
|
||||
MutableMapping,
|
||||
Optional,
|
||||
Set,
|
||||
TypeVar,
|
||||
cast,
|
||||
)
|
||||
|
||||
import attr
|
||||
|
||||
from synapse.api.constants import EventTypes
|
||||
from synapse.types import MultiWriterStreamToken, RoomStreamToken, StrCollection, UserID
|
||||
from synapse.types.handlers import SlidingSyncConfig
|
||||
|
||||
if TYPE_CHECKING:
|
||||
pass
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class StateValues:
|
||||
"""
|
||||
Understood values of the (type, state_key) tuple in `required_state`.
|
||||
"""
|
||||
|
||||
# Include all state events of the given type
|
||||
WILDCARD: Final = "*"
|
||||
# Lazy-load room membership events (include room membership events for any event
|
||||
# `sender` in the timeline). We only give special meaning to this value when it's a
|
||||
# `state_key`.
|
||||
LAZY: Final = "$LAZY"
|
||||
# Substitute with the requester's user ID. Typically used by clients to get
|
||||
# the user's membership.
|
||||
ME: Final = "$ME"
|
||||
|
||||
|
||||
# We can't freeze this class because we want to update it in place with the
|
||||
# de-duplicated data.
|
||||
@attr.s(slots=True, auto_attribs=True)
|
||||
class RoomSyncConfig:
|
||||
"""
|
||||
Holds the config for what data we should fetch for a room in the sync response.
|
||||
|
||||
Attributes:
|
||||
timeline_limit: The maximum number of events to return in the timeline.
|
||||
|
||||
required_state_map: Map from state event type to state_keys requested for the
|
||||
room. The values are close to `StateKey` but actually use a syntax where you
|
||||
can provide `*` wildcard and `$LAZY` for lazy-loading room members.
|
||||
"""
|
||||
|
||||
timeline_limit: int
|
||||
required_state_map: Dict[str, Set[str]]
|
||||
|
||||
@classmethod
|
||||
def from_room_config(
|
||||
cls,
|
||||
room_params: SlidingSyncConfig.CommonRoomParameters,
|
||||
) -> "RoomSyncConfig":
|
||||
"""
|
||||
Create a `RoomSyncConfig` from a `SlidingSyncList`/`RoomSubscription` config.
|
||||
|
||||
Args:
|
||||
room_params: `SlidingSyncConfig.SlidingSyncList` or `SlidingSyncConfig.RoomSubscription`
|
||||
"""
|
||||
required_state_map: Dict[str, Set[str]] = {}
|
||||
for (
|
||||
state_type,
|
||||
state_key,
|
||||
) in room_params.required_state:
|
||||
# If we already have a wildcard for this specific `state_key`, we don't need
|
||||
# to add it since the wildcard already covers it.
|
||||
if state_key in required_state_map.get(StateValues.WILDCARD, set()):
|
||||
continue
|
||||
|
||||
# If we already have a wildcard `state_key` for this `state_type`, we don't need
|
||||
# to add anything else
|
||||
if StateValues.WILDCARD in required_state_map.get(state_type, set()):
|
||||
continue
|
||||
|
||||
# If we're getting wildcards for the `state_type` and `state_key`, that's
|
||||
# all that matters so get rid of any other entries
|
||||
if state_type == StateValues.WILDCARD and state_key == StateValues.WILDCARD:
|
||||
required_state_map = {StateValues.WILDCARD: {StateValues.WILDCARD}}
|
||||
# We can break, since we don't need to add anything else
|
||||
break
|
||||
|
||||
# If we're getting a wildcard for the `state_type`, get rid of any other
|
||||
# entries with the same `state_key`, since the wildcard will cover it already.
|
||||
elif state_type == StateValues.WILDCARD:
|
||||
# Get rid of any entries that match the `state_key`
|
||||
#
|
||||
# Make a copy so we don't run into an error: `dictionary changed size
|
||||
# during iteration`, when we remove items
|
||||
for (
|
||||
existing_state_type,
|
||||
existing_state_key_set,
|
||||
) in list(required_state_map.items()):
|
||||
# Make a copy so we don't run into an error: `Set changed size during
|
||||
# iteration`, when we filter out and remove items
|
||||
for existing_state_key in existing_state_key_set.copy():
|
||||
if existing_state_key == state_key:
|
||||
existing_state_key_set.remove(state_key)
|
||||
|
||||
# If we've left the `set()` empty, remove it from the map
|
||||
if existing_state_key_set == set():
|
||||
required_state_map.pop(existing_state_type, None)
|
||||
|
||||
# If we're getting a wildcard `state_key`, get rid of any other state_keys
|
||||
# for this `state_type` since the wildcard will cover it already.
|
||||
if state_key == StateValues.WILDCARD:
|
||||
required_state_map[state_type] = {state_key}
|
||||
# Otherwise, just add it to the set
|
||||
else:
|
||||
if required_state_map.get(state_type) is None:
|
||||
required_state_map[state_type] = {state_key}
|
||||
else:
|
||||
required_state_map[state_type].add(state_key)
|
||||
|
||||
return cls(
|
||||
timeline_limit=room_params.timeline_limit,
|
||||
required_state_map=required_state_map,
|
||||
)
|
||||
|
||||
def deep_copy(self) -> "RoomSyncConfig":
|
||||
required_state_map: Dict[str, Set[str]] = {
|
||||
state_type: state_key_set.copy()
|
||||
for state_type, state_key_set in self.required_state_map.items()
|
||||
}
|
||||
|
||||
return RoomSyncConfig(
|
||||
timeline_limit=self.timeline_limit,
|
||||
required_state_map=required_state_map,
|
||||
)
|
||||
|
||||
def combine_room_sync_config(
|
||||
self, other_room_sync_config: "RoomSyncConfig"
|
||||
) -> None:
|
||||
"""
|
||||
Combine this `RoomSyncConfig` with another `RoomSyncConfig` and take the
|
||||
superset union of the two.
|
||||
"""
|
||||
# Take the highest timeline limit
|
||||
if self.timeline_limit < other_room_sync_config.timeline_limit:
|
||||
self.timeline_limit = other_room_sync_config.timeline_limit
|
||||
|
||||
# Union the required state
|
||||
for (
|
||||
state_type,
|
||||
state_key_set,
|
||||
) in other_room_sync_config.required_state_map.items():
|
||||
# If we already have a wildcard for everything, we don't need to add
|
||||
# anything else
|
||||
if StateValues.WILDCARD in self.required_state_map.get(
|
||||
StateValues.WILDCARD, set()
|
||||
):
|
||||
break
|
||||
|
||||
# If we already have a wildcard `state_key` for this `state_type`, we don't need
|
||||
# to add anything else
|
||||
if StateValues.WILDCARD in self.required_state_map.get(state_type, set()):
|
||||
continue
|
||||
|
||||
# If we're getting wildcards for the `state_type` and `state_key`, that's
|
||||
# all that matters so get rid of any other entries
|
||||
if (
|
||||
state_type == StateValues.WILDCARD
|
||||
and StateValues.WILDCARD in state_key_set
|
||||
):
|
||||
self.required_state_map = {state_type: {StateValues.WILDCARD}}
|
||||
# We can break, since we don't need to add anything else
|
||||
break
|
||||
|
||||
for state_key in state_key_set:
|
||||
# If we already have a wildcard for this specific `state_key`, we don't need
|
||||
# to add it since the wildcard already covers it.
|
||||
if state_key in self.required_state_map.get(
|
||||
StateValues.WILDCARD, set()
|
||||
):
|
||||
continue
|
||||
|
||||
# If we're getting a wildcard for the `state_type`, get rid of any other
|
||||
# entries with the same `state_key`, since the wildcard will cover it already.
|
||||
if state_type == StateValues.WILDCARD:
|
||||
# Get rid of any entries that match the `state_key`
|
||||
#
|
||||
# Make a copy so we don't run into an error: `dictionary changed size
|
||||
# during iteration`, when we remove items
|
||||
for existing_state_type, existing_state_key_set in list(
|
||||
self.required_state_map.items()
|
||||
):
|
||||
# Make a copy so we don't run into an error: `Set changed size during
|
||||
# iteration`, when we filter out and remove items
|
||||
for existing_state_key in existing_state_key_set.copy():
|
||||
if existing_state_key == state_key:
|
||||
existing_state_key_set.remove(state_key)
|
||||
|
||||
# If we've left the `set()` empty, remove it from the map
|
||||
if existing_state_key_set == set():
|
||||
self.required_state_map.pop(existing_state_type, None)
|
||||
|
||||
# If we're getting a wildcard `state_key`, get rid of any other state_keys
|
||||
# for this `state_type` since the wildcard will cover it already.
|
||||
if state_key == StateValues.WILDCARD:
|
||||
self.required_state_map[state_type] = {state_key}
|
||||
break
|
||||
# Otherwise, just add it to the set
|
||||
else:
|
||||
if self.required_state_map.get(state_type) is None:
|
||||
self.required_state_map[state_type] = {state_key}
|
||||
else:
|
||||
self.required_state_map[state_type].add(state_key)
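# Hedged usage sketch (not part of the upstream diff): combining two configs
# takes the superset, assuming the class lands in
# synapse/handlers/sliding_sync/types.py as added above. The room parameters
# below are invented.
from synapse.handlers.sliding_sync.types import RoomSyncConfig, StateValues

list_config = RoomSyncConfig(
    timeline_limit=10,
    required_state_map={"m.room.member": {"@alice:example.com"}},
)
subscription_config = RoomSyncConfig(
    timeline_limit=50,
    required_state_map={"m.room.member": {StateValues.WILDCARD}},
)

list_config.combine_room_sync_config(subscription_config)

# The higher timeline limit wins, and the wildcard state_key subsumes the
# individual membership entry.
assert list_config.timeline_limit == 50
assert list_config.required_state_map == {"m.room.member": {StateValues.WILDCARD}}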
|
||||
|
||||
def must_await_full_state(
|
||||
self,
|
||||
is_mine_id: Callable[[str], bool],
|
||||
) -> bool:
|
||||
"""
|
||||
Check whether we're only requesting `required_state` which is completely
|
||||
satisfied even with partial state; if so, we don't need to `await_full_state` before
|
||||
we can return it.
|
||||
|
||||
Also see `StateFilter.must_await_full_state(...)` for comparison
|
||||
|
||||
Partially-stated rooms should have all state events except for remote membership
|
||||
events so if we require a remote membership event anywhere, then we need to
|
||||
return `True` (requires full state).
|
||||
|
||||
Args:
|
||||
is_mine_id: a callable which confirms if a given state_key matches a mxid
|
||||
of a local user
|
||||
"""
|
||||
wildcard_state_keys = self.required_state_map.get(StateValues.WILDCARD)
|
||||
# Requesting *all* state in the room so we have to wait
|
||||
if (
|
||||
wildcard_state_keys is not None
|
||||
and StateValues.WILDCARD in wildcard_state_keys
|
||||
):
|
||||
return True
|
||||
|
||||
# If the wildcards don't refer to remote user IDs, then we don't need to wait
|
||||
# for full state.
|
||||
if wildcard_state_keys is not None:
|
||||
for possible_user_id in wildcard_state_keys:
|
||||
if not possible_user_id[0].startswith(UserID.SIGIL):
|
||||
# Not a user ID
|
||||
continue
|
||||
|
||||
localpart_hostname = possible_user_id.split(":", 1)
|
||||
if len(localpart_hostname) < 2:
|
||||
# Not a user ID
|
||||
continue
|
||||
|
||||
if not is_mine_id(possible_user_id):
|
||||
return True
|
||||
|
||||
membership_state_keys = self.required_state_map.get(EventTypes.Member)
|
||||
# We aren't requesting any membership events at all so the partial state will
|
||||
# cover us.
|
||||
if membership_state_keys is None:
|
||||
return False
|
||||
|
||||
# If we're requesting entirely local users, the partial state will cover us.
|
||||
for user_id in membership_state_keys:
|
||||
if user_id == StateValues.ME:
|
||||
continue
|
||||
# We're lazy-loading membership so we can just return the state we have.
|
||||
# Lazy-loading means we include membership for any event `sender` in the
|
||||
# timeline but since we had to auth those timeline events, we will have the
|
||||
# membership state for them (including from remote senders).
|
||||
elif user_id == StateValues.LAZY:
|
||||
continue
|
||||
elif user_id == StateValues.WILDCARD:
|
||||
return False
|
||||
elif not is_mine_id(user_id):
|
||||
return True
|
||||
|
||||
# Local users only so the partial state will cover us.
|
||||
return False
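# Hedged sketch (not part of the upstream diff) of how the check above is
# expected to behave, assuming the class lands in
# synapse/handlers/sliding_sync/types.py. The server name and user IDs are
# invented.
from synapse.handlers.sliding_sync.types import RoomSyncConfig, StateValues

def _is_mine_id(user_id: str) -> bool:
    return user_id.endswith(":myserver.example")

# Only local members plus lazy-loaded membership: partial state is enough.
local_only = RoomSyncConfig(
    timeline_limit=5,
    required_state_map={"m.room.member": {"@alice:myserver.example", StateValues.LAZY}},
)
assert local_only.must_await_full_state(_is_mine_id) is False

# An explicitly requested remote member forces us to wait for full state.
with_remote = RoomSyncConfig(
    timeline_limit=5,
    required_state_map={"m.room.member": {"@bob:other.example"}},
)
assert with_remote.must_await_full_state(_is_mine_id) is True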
|
||||
|
||||
|
||||
class HaveSentRoomFlag(Enum):
|
||||
"""Flag for whether we have sent the room down a sliding sync connection.
|
||||
|
||||
The valid state changes here are:
|
||||
NEVER -> LIVE
|
||||
LIVE -> PREVIOUSLY
|
||||
PREVIOUSLY -> LIVE
|
||||
"""
|
||||
|
||||
# The room has never been sent down (or we have forgotten we have sent it
|
||||
# down).
|
||||
NEVER = "never"
|
||||
|
||||
# We have previously sent the room down, but there are updates that we
|
||||
# haven't sent down.
|
||||
PREVIOUSLY = "previously"
|
||||
|
||||
# We have sent the room down and the client has received all updates.
|
||||
LIVE = "live"
|
||||
|
||||
|
||||
T = TypeVar("T")
|
||||
|
||||
|
||||
@attr.s(auto_attribs=True, slots=True, frozen=True)
|
||||
class HaveSentRoom(Generic[T]):
|
||||
"""Whether we have sent the room data down a sliding sync connection.
|
||||
|
||||
We are generic over the type of token used, e.g. `RoomStreamToken` or
|
||||
`MultiWriterStreamToken`.
|
||||
|
||||
Attributes:
|
||||
status: Flag for whether we have sent the room down
|
||||
last_token: If the flag is `PREVIOUSLY` then this is non-null and
|
||||
contains the last stream token of the last updates we sent down
|
||||
the room, i.e. we still need to send everything since then to the
|
||||
client.
|
||||
"""
|
||||
|
||||
status: HaveSentRoomFlag
|
||||
last_token: Optional[T]
|
||||
|
||||
@staticmethod
|
||||
def live() -> "HaveSentRoom[T]":
|
||||
return HaveSentRoom(HaveSentRoomFlag.LIVE, None)
|
||||
|
||||
@staticmethod
|
||||
def previously(last_token: T) -> "HaveSentRoom[T]":
|
||||
"""Constructor for `PREVIOUSLY` flag."""
|
||||
return HaveSentRoom(HaveSentRoomFlag.PREVIOUSLY, last_token)
|
||||
|
||||
@staticmethod
|
||||
def never() -> "HaveSentRoom[T]":
|
||||
return HaveSentRoom(HaveSentRoomFlag.NEVER, None)
|
||||
|
||||
|
||||
@attr.s(auto_attribs=True, slots=True, frozen=True)
|
||||
class RoomStatusMap(Generic[T]):
|
||||
"""For a given stream, e.g. events, records what we have or have not sent
|
||||
down for that stream in a given room."""
|
||||
|
||||
# `room_id` -> `HaveSentRoom`
|
||||
_statuses: Mapping[str, HaveSentRoom[T]] = attr.Factory(dict)
|
||||
|
||||
def have_sent_room(self, room_id: str) -> HaveSentRoom[T]:
|
||||
"""Return whether we have previously sent the room down"""
|
||||
return self._statuses.get(room_id, HaveSentRoom.never())
|
||||
|
||||
def get_mutable(self) -> "MutableRoomStatusMap[T]":
|
||||
"""Get a mutable copy of this state."""
|
||||
return MutableRoomStatusMap(
|
||||
statuses=self._statuses,
|
||||
)
|
||||
|
||||
def copy(self) -> "RoomStatusMap[T]":
|
||||
"""Make a copy of the class. Useful for converting from a mutable to
|
||||
immutable version."""
|
||||
|
||||
return RoomStatusMap(statuses=dict(self._statuses))
|
||||
|
||||
|
||||
class MutableRoomStatusMap(RoomStatusMap[T]):
|
||||
"""A mutable version of `RoomStatusMap`"""
|
||||
|
||||
# We use a ChainMap here so that we can easily track what has been updated
|
||||
# and what hasn't. Note that when we persist the per connection state this
|
||||
# will get flattened to a normal dict (via calling `.copy()`)
|
||||
_statuses: typing.ChainMap[str, HaveSentRoom[T]]
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
statuses: Mapping[str, HaveSentRoom[T]],
|
||||
) -> None:
|
||||
# ChainMap requires a mutable mapping, but we're not actually going to
|
||||
# mutate it.
|
||||
statuses = cast(MutableMapping, statuses)
|
||||
|
||||
super().__init__(
|
||||
statuses=ChainMap({}, statuses),
|
||||
)
|
||||
|
||||
def get_updates(self) -> Mapping[str, HaveSentRoom[T]]:
|
||||
"""Return only the changes that were made"""
|
||||
return self._statuses.maps[0]
|
||||
|
||||
def record_sent_rooms(self, room_ids: StrCollection) -> None:
|
||||
"""Record that we have sent these rooms in the response"""
|
||||
for room_id in room_ids:
|
||||
current_status = self._statuses.get(room_id, HaveSentRoom.never())
|
||||
if current_status.status == HaveSentRoomFlag.LIVE:
|
||||
continue
|
||||
|
||||
self._statuses[room_id] = HaveSentRoom.live()
|
||||
|
||||
def record_unsent_rooms(self, room_ids: StrCollection, from_token: T) -> None:
|
||||
"""Record that we have not sent these rooms in the response, but there
|
||||
have been updates.
|
||||
"""
|
||||
# Whether we add/update the entries for unsent rooms depends on the
|
||||
# existing entry:
|
||||
# - LIVE: We have previously sent down everything up to
|
||||
# `last_room_token`, so we update the entry to be `PREVIOUSLY` with
|
||||
# `last_room_token`.
|
||||
# - PREVIOUSLY: We have previously sent down everything up to *a*
|
||||
# given token, so we don't need to update the entry.
|
||||
# - NEVER: We have never previously sent down the room, and we haven't
|
||||
# sent anything down this time either so we leave it as NEVER.
|
||||
|
||||
for room_id in room_ids:
|
||||
current_status = self._statuses.get(room_id, HaveSentRoom.never())
|
||||
if current_status.status != HaveSentRoomFlag.LIVE:
|
||||
continue
|
||||
|
||||
self._statuses[room_id] = HaveSentRoom.previously(from_token)
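# Hedged usage sketch (not part of the upstream diff): the NEVER -> LIVE ->
# PREVIOUSLY transitions described above, assuming the types land in
# synapse/handlers/sliding_sync/types.py. The room ID and token value are
# invented; an int stands in for the stream token type.
from synapse.handlers.sliding_sync.types import HaveSentRoomFlag, RoomStatusMap

statuses: "RoomStatusMap[int]" = RoomStatusMap()
mutable = statuses.get_mutable()

# Never sent the room before.
assert mutable.have_sent_room("!room:example.com").status == HaveSentRoomFlag.NEVER

# This response includes the room, so it becomes LIVE.
mutable.record_sent_rooms(["!room:example.com"])
assert mutable.have_sent_room("!room:example.com").status == HaveSentRoomFlag.LIVE

# The next response has updates for the room but doesn't send them down.
mutable.record_unsent_rooms(["!room:example.com"], from_token=42)
assert mutable.have_sent_room("!room:example.com").status == HaveSentRoomFlag.PREVIOUSLY

# Only the entries touched this time round show up as updates to persist.
assert set(mutable.get_updates()) == {"!room:example.com"}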
|
||||
|
||||
|
||||
@attr.s(auto_attribs=True)
|
||||
class PerConnectionState:
|
||||
"""The per-connection state. A snapshot of what we've sent down the
|
||||
connection before.
|
||||
|
||||
Currently, we track whether we've sent down various aspects of a given room
|
||||
before.
|
||||
|
||||
We use the `rooms` field to store the position in the events stream for each
|
||||
room that we've previously sent to the client. On the next request
|
||||
that includes the room, we can then send only what's changed since that
|
||||
recorded position.
|
||||
|
||||
Same goes for the `receipts` field so we only need to send the new receipts
|
||||
since the last time you made a sync request.
|
||||
|
||||
Attributes:
|
||||
rooms: The status of each room for the events stream.
|
||||
receipts: The status of each room for the receipts stream.
|
||||
room_configs: Map from room_id to the `RoomSyncConfig` of all
|
||||
rooms that we have previously sent down.
|
||||
"""
|
||||
|
||||
rooms: RoomStatusMap[RoomStreamToken] = attr.Factory(RoomStatusMap)
|
||||
receipts: RoomStatusMap[MultiWriterStreamToken] = attr.Factory(RoomStatusMap)
|
||||
|
||||
room_configs: Mapping[str, RoomSyncConfig] = attr.Factory(dict)
|
||||
|
||||
def get_mutable(self) -> "MutablePerConnectionState":
|
||||
"""Get a mutable copy of this state."""
|
||||
room_configs = cast(MutableMapping[str, RoomSyncConfig], self.room_configs)
|
||||
|
||||
return MutablePerConnectionState(
|
||||
rooms=self.rooms.get_mutable(),
|
||||
receipts=self.receipts.get_mutable(),
|
||||
room_configs=ChainMap({}, room_configs),
|
||||
)
|
||||
|
||||
def copy(self) -> "PerConnectionState":
|
||||
return PerConnectionState(
|
||||
rooms=self.rooms.copy(),
|
||||
receipts=self.receipts.copy(),
|
||||
room_configs=dict(self.room_configs),
|
||||
)
|
||||
|
||||
|
||||
@attr.s(auto_attribs=True)
|
||||
class MutablePerConnectionState(PerConnectionState):
|
||||
"""A mutable version of `PerConnectionState`"""
|
||||
|
||||
rooms: MutableRoomStatusMap[RoomStreamToken]
|
||||
receipts: MutableRoomStatusMap[MultiWriterStreamToken]
|
||||
|
||||
room_configs: typing.ChainMap[str, RoomSyncConfig]
|
||||
|
||||
def has_updates(self) -> bool:
|
||||
return (
|
||||
bool(self.rooms.get_updates())
|
||||
or bool(self.receipts.get_updates())
|
||||
or bool(self.get_room_config_updates())
|
||||
)
|
||||
|
||||
def get_room_config_updates(self) -> Mapping[str, RoomSyncConfig]:
|
||||
"""Get updates to the room sync config"""
|
||||
return self.room_configs.maps[0]
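# Hedged sketch (not part of the upstream diff): the copy-on-write flow the
# connection store above relies on, assuming the types land in
# synapse/handlers/sliding_sync/types.py. The room ID is invented.
from synapse.handlers.sliding_sync.types import PerConnectionState

previous = PerConnectionState()      # snapshot from the last request (empty here)
mutable = previous.get_mutable()     # changes are layered on via ChainMaps...
assert not mutable.has_updates()     # ...so nothing counts as an update yet

mutable.rooms.record_sent_rooms(["!room:example.com"])
assert mutable.has_updates()

# `copy()` flattens the ChainMaps into plain dicts, which is what
# `record_new_state` stores against the new connection position.
snapshot = mutable.copy()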
|
||||
@@ -293,7 +293,9 @@ class StatsHandler:
|
||||
"history_visibility"
|
||||
)
|
||||
elif delta.event_type == EventTypes.RoomEncryption:
|
||||
room_state["encryption"] = event_content.get("algorithm")
|
||||
room_state["encryption"] = event_content.get(
|
||||
EventContentFields.ENCRYPTION_ALGORITHM
|
||||
)
|
||||
elif delta.event_type == EventTypes.Name:
|
||||
room_state["name"] = event_content.get("name")
|
||||
elif delta.event_type == EventTypes.Topic:
|
||||
|
||||
@@ -43,6 +43,7 @@ from prometheus_client import Counter
|
||||
|
||||
from synapse.api.constants import (
|
||||
AccountDataTypes,
|
||||
Direction,
|
||||
EventContentFields,
|
||||
EventTypes,
|
||||
JoinRules,
|
||||
@@ -64,6 +65,7 @@ from synapse.logging.opentracing import (
|
||||
)
|
||||
from synapse.storage.databases.main.event_push_actions import RoomNotifCounts
|
||||
from synapse.storage.databases.main.roommember import extract_heroes_from_room_summary
|
||||
from synapse.storage.databases.main.stream import PaginateFunction
|
||||
from synapse.storage.roommember import MemberSummary
|
||||
from synapse.types import (
|
||||
DeviceListUpdates,
|
||||
@@ -84,7 +86,7 @@ from synapse.util.async_helpers import concurrently_execute
|
||||
from synapse.util.caches.expiringcache import ExpiringCache
|
||||
from synapse.util.caches.lrucache import LruCache
|
||||
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
|
||||
from synapse.util.metrics import Measure, measure_func
|
||||
from synapse.util.metrics import Measure
|
||||
from synapse.visibility import filter_events_for_client
|
||||
|
||||
if TYPE_CHECKING:
|
||||
@@ -879,22 +881,49 @@ class SyncHandler:
|
||||
since_key = since_token.room_key
|
||||
|
||||
while limited and len(recents) < timeline_limit and max_repeat:
|
||||
# If we have a since_key then we are trying to get any events
|
||||
# that have happened since `since_key` up to `end_key`, so we
|
||||
# can just use `get_room_events_stream_for_room`.
|
||||
# Otherwise, we want to return the last N events in the room
|
||||
# in topological ordering.
|
||||
if since_key:
|
||||
events, end_key = await self.store.get_room_events_stream_for_room(
|
||||
room_id,
|
||||
limit=load_limit + 1,
|
||||
from_key=since_key,
|
||||
to_key=end_key,
|
||||
)
|
||||
else:
|
||||
events, end_key = await self.store.get_recent_events_for_room(
|
||||
room_id, limit=load_limit + 1, end_token=end_key
|
||||
)
|
||||
# For initial `/sync`, we want to view a historical section of the
|
||||
# timeline, so we fetch events by `topological_ordering` (the best
|
||||
# representation of the room DAG as others were seeing it at the time).
|
||||
# This also aligns with the order that `/messages` returns events in.
|
||||
#
|
||||
# For incremental `/sync`, we want to get all updates for rooms since
|
||||
# the last `/sync` (regardless of whether those updates arrived late or happened
|
||||
# a while ago in the past), so we fetch events by `stream_ordering` (in the
|
||||
# order they were received by the server).
|
||||
#
|
||||
# Relevant spec issue: https://github.com/matrix-org/matrix-spec/issues/1917
|
||||
#
|
||||
# FIXME: Using workaround for mypy,
|
||||
# https://github.com/python/mypy/issues/10740#issuecomment-1997047277 and
|
||||
# https://github.com/python/mypy/issues/17479
|
||||
paginate_room_events_by_topological_ordering: PaginateFunction = (
|
||||
self.store.paginate_room_events_by_topological_ordering
|
||||
)
|
||||
paginate_room_events_by_stream_ordering: PaginateFunction = (
|
||||
self.store.paginate_room_events_by_stream_ordering
|
||||
)
|
||||
pagination_method: PaginateFunction = (
|
||||
# Use `topological_ordering` for historical events
|
||||
paginate_room_events_by_topological_ordering
|
||||
if since_key is None
|
||||
# Use `stream_ordering` for updates
|
||||
else paginate_room_events_by_stream_ordering
|
||||
)
|
||||
events, end_key = await pagination_method(
|
||||
room_id=room_id,
|
||||
# The bounds are reversed so we can paginate backwards
|
||||
# (from newer to older events) starting at to_bound.
|
||||
# This ensures we fill the `limit` with the newest events first,
|
||||
from_key=end_key,
|
||||
to_key=since_key,
|
||||
direction=Direction.BACKWARDS,
|
||||
# We add one so we can determine if there are enough events to saturate
|
||||
# the limit or not (see `limited`)
|
||||
limit=load_limit + 1,
|
||||
)
|
||||
# We want to return the events in ascending order (the last event is the
|
||||
# most recent).
|
||||
events.reverse()
|
||||
|
||||
log_kv({"loaded_recents": len(events)})
|
||||
|
||||
@@ -1750,8 +1779,15 @@ class SyncHandler:
|
||||
)
|
||||
|
||||
if include_device_list_updates:
|
||||
device_lists = await self._generate_sync_entry_for_device_list(
|
||||
sync_result_builder,
|
||||
# include_device_list_updates can only be True if we have a
|
||||
# since token.
|
||||
assert since_token is not None
|
||||
|
||||
device_lists = await self._device_handler.generate_sync_entry_for_device_list(
|
||||
user_id=user_id,
|
||||
since_token=since_token,
|
||||
now_token=sync_result_builder.now_token,
|
||||
joined_room_ids=sync_result_builder.joined_room_ids,
|
||||
newly_joined_rooms=newly_joined_rooms,
|
||||
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
|
||||
newly_left_rooms=newly_left_rooms,
|
||||
@@ -1863,8 +1899,14 @@ class SyncHandler:
|
||||
newly_left_users,
|
||||
) = sync_result_builder.calculate_user_changes()
|
||||
|
||||
device_lists = await self._generate_sync_entry_for_device_list(
|
||||
sync_result_builder,
|
||||
# include_device_list_updates can only be True if we have a
|
||||
# since token.
|
||||
assert since_token is not None
|
||||
device_lists = await self._device_handler.generate_sync_entry_for_device_list(
|
||||
user_id=user_id,
|
||||
since_token=since_token,
|
||||
now_token=sync_result_builder.now_token,
|
||||
joined_room_ids=sync_result_builder.joined_room_ids,
|
||||
newly_joined_rooms=newly_joined_rooms,
|
||||
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
|
||||
newly_left_rooms=newly_left_rooms,
|
||||
@@ -2041,94 +2083,6 @@ class SyncHandler:
|
||||
|
||||
return sync_result_builder
|
||||
|
||||
@measure_func("_generate_sync_entry_for_device_list")
|
||||
async def _generate_sync_entry_for_device_list(
|
||||
self,
|
||||
sync_result_builder: "SyncResultBuilder",
|
||||
newly_joined_rooms: AbstractSet[str],
|
||||
newly_joined_or_invited_or_knocked_users: AbstractSet[str],
|
||||
newly_left_rooms: AbstractSet[str],
|
||||
newly_left_users: AbstractSet[str],
|
||||
) -> DeviceListUpdates:
|
||||
"""Generate the DeviceListUpdates section of sync
|
||||
|
||||
Args:
|
||||
sync_result_builder
|
||||
newly_joined_rooms: Set of rooms user has joined since previous sync
|
||||
newly_joined_or_invited_or_knocked_users: Set of users that have joined,
|
||||
been invited to a room or are knocking on a room since
|
||||
previous sync.
|
||||
newly_left_rooms: Set of rooms user has left since previous sync
|
||||
newly_left_users: Set of users that have left a room we're in since
|
||||
previous sync
|
||||
"""
|
||||
|
||||
user_id = sync_result_builder.sync_config.user.to_string()
|
||||
since_token = sync_result_builder.since_token
|
||||
assert since_token is not None
|
||||
|
||||
# Take a copy since these fields will be mutated later.
|
||||
newly_joined_or_invited_or_knocked_users = set(
|
||||
newly_joined_or_invited_or_knocked_users
|
||||
)
|
||||
newly_left_users = set(newly_left_users)
|
||||
|
||||
# We want to figure out what user IDs the client should refetch
|
||||
# device keys for, and which users we aren't going to track changes
|
||||
# for anymore.
|
||||
#
|
||||
# For the first step we check:
|
||||
# a. if any users we share a room with have updated their devices,
|
||||
# and
|
||||
# b. we also check if we've joined any new rooms, or if a user has
|
||||
# joined a room we're in.
|
||||
#
|
||||
# For the second step we just find any users we no longer share a
|
||||
# room with by looking at all users that have left a room plus users
|
||||
# that were in a room we've left.
|
||||
|
||||
users_that_have_changed = set()
|
||||
|
||||
joined_room_ids = sync_result_builder.joined_room_ids
|
||||
|
||||
# Step 1a, check for changes in devices of users we share a room
|
||||
# with
|
||||
users_that_have_changed = (
|
||||
await self._device_handler.get_device_changes_in_shared_rooms(
|
||||
user_id,
|
||||
joined_room_ids,
|
||||
from_token=since_token,
|
||||
now_token=sync_result_builder.now_token,
|
||||
)
|
||||
)
|
||||
|
||||
# Step 1b, check for newly joined rooms
|
||||
for room_id in newly_joined_rooms:
|
||||
joined_users = await self.store.get_users_in_room(room_id)
|
||||
newly_joined_or_invited_or_knocked_users.update(joined_users)
|
||||
|
||||
# TODO: Check that these users are actually new, i.e. either they
|
||||
# weren't in the previous sync *or* they left and rejoined.
|
||||
users_that_have_changed.update(newly_joined_or_invited_or_knocked_users)
|
||||
|
||||
user_signatures_changed = await self.store.get_users_whose_signatures_changed(
|
||||
user_id, since_token.device_list_key
|
||||
)
|
||||
users_that_have_changed.update(user_signatures_changed)
|
||||
|
||||
# Now find users that we no longer track
|
||||
for room_id in newly_left_rooms:
|
||||
left_users = await self.store.get_users_in_room(room_id)
|
||||
newly_left_users.update(left_users)
|
||||
|
||||
# Remove any users that we still share a room with.
|
||||
left_users_rooms = await self.store.get_rooms_for_users(newly_left_users)
|
||||
for user_id, entries in left_users_rooms.items():
|
||||
if any(rid in joined_room_ids for rid in entries):
|
||||
newly_left_users.discard(user_id)
|
||||
|
||||
return DeviceListUpdates(changed=users_that_have_changed, left=newly_left_users)
|
||||
|
||||
@trace
|
||||
async def _generate_sync_entry_for_to_device(
|
||||
self, sync_result_builder: "SyncResultBuilder"
|
||||
@@ -2270,7 +2224,11 @@ class SyncHandler:
|
||||
user=user,
|
||||
from_key=presence_key,
|
||||
is_guest=sync_config.is_guest,
|
||||
include_offline=include_offline,
|
||||
include_offline=(
|
||||
True
|
||||
if self.hs_config.server.presence_include_offline_users_on_sync
|
||||
else include_offline
|
||||
),
|
||||
)
|
||||
assert presence_key
|
||||
sync_result_builder.now_token = now_token.copy_and_replace(
|
||||
@@ -2637,9 +2595,10 @@ class SyncHandler:
|
||||
# a "gap" in the timeline, as described by the spec for /sync.
|
||||
room_to_events = await self.store.get_room_events_stream_for_rooms(
|
||||
room_ids=sync_result_builder.joined_room_ids,
|
||||
from_key=since_token.room_key,
|
||||
to_key=now_token.room_key,
|
||||
from_key=now_token.room_key,
|
||||
to_key=since_token.room_key,
|
||||
limit=timeline_limit + 1,
|
||||
direction=Direction.BACKWARDS,
|
||||
)
|
||||
|
||||
# We loop through all room ids, even if there are no new events, in case
|
||||
@@ -2650,6 +2609,9 @@ class SyncHandler:
|
||||
newly_joined = room_id in newly_joined_rooms
|
||||
if room_entry:
|
||||
events, start_key = room_entry
|
||||
# We want to return the events in ascending order (the last event is the
|
||||
# most recent).
|
||||
events.reverse()
|
||||
|
||||
prev_batch_token = now_token.copy_and_replace(
|
||||
StreamKeyType.ROOM, start_key
|
||||
|
||||
@@ -565,7 +565,12 @@ class TypingNotificationEventSource(EventSource[int, JsonMapping]):
|
||||
room_ids: Iterable[str],
|
||||
is_guest: bool,
|
||||
explicit_room_id: Optional[str] = None,
|
||||
to_key: Optional[int] = None,
|
||||
) -> Tuple[List[JsonMapping], int]:
|
||||
"""
|
||||
Find typing notifications for given rooms (> `from_token` and <= `to_token`)
|
||||
"""
|
||||
|
||||
with Measure(self.clock, "typing.get_new_events"):
|
||||
from_key = int(from_key)
|
||||
handler = self.get_typing_handler()
|
||||
@@ -574,7 +579,9 @@ class TypingNotificationEventSource(EventSource[int, JsonMapping]):
|
||||
for room_id in room_ids:
|
||||
if room_id not in handler._room_serials:
|
||||
continue
|
||||
if handler._room_serials[room_id] <= from_key:
|
||||
if handler._room_serials[room_id] <= from_key or (
|
||||
to_key is not None and handler._room_serials[room_id] > to_key
|
||||
):
|
||||
continue
|
||||
|
||||
events.append(self._make_event_for(room_id))
|
||||
|
||||
@@ -1057,11 +1057,11 @@ class _MultipartParserProtocol(protocol.Protocol):
|
||||
if not self.parser:
|
||||
|
||||
def on_header_field(data: bytes, start: int, end: int) -> None:
|
||||
if data[start:end] == b"Location":
|
||||
if data[start:end].lower() == b"location":
|
||||
self.has_redirect = True
|
||||
if data[start:end] == b"Content-Disposition":
|
||||
if data[start:end].lower() == b"content-disposition":
|
||||
self.in_disposition = True
|
||||
if data[start:end] == b"Content-Type":
|
||||
if data[start:end].lower() == b"content-type":
|
||||
self.in_content_type = True
|
||||
|
||||
def on_header_value(data: bytes, start: int, end: int) -> None:
|
||||
@@ -1088,7 +1088,6 @@ class _MultipartParserProtocol(protocol.Protocol):
|
||||
return
|
||||
# otherwise we are in the file part
|
||||
else:
|
||||
logger.info("Writing multipart file data to stream")
|
||||
try:
|
||||
self.stream.write(data[start:end])
|
||||
except Exception as e:
|
||||
|
||||
@@ -90,7 +90,7 @@ from synapse.logging.context import make_deferred_yieldable, run_in_background
|
||||
from synapse.logging.opentracing import set_tag, start_active_span, tags
|
||||
from synapse.types import JsonDict
|
||||
from synapse.util import json_decoder
|
||||
from synapse.util.async_helpers import AwakenableSleeper, timeout_deferred
|
||||
from synapse.util.async_helpers import AwakenableSleeper, Linearizer, timeout_deferred
|
||||
from synapse.util.metrics import Measure
|
||||
from synapse.util.stringutils import parse_and_validate_server_name
|
||||
|
||||
@@ -464,6 +464,8 @@ class MatrixFederationHttpClient:
|
||||
self.max_long_retries = hs.config.federation.max_long_retries
|
||||
self.max_short_retries = hs.config.federation.max_short_retries
|
||||
|
||||
self.max_download_size = hs.config.media.max_upload_size
|
||||
|
||||
self._cooperator = Cooperator(scheduler=_make_scheduler(self.reactor))
|
||||
|
||||
self._sleeper = AwakenableSleeper(self.reactor)
|
||||
@@ -475,6 +477,8 @@ class MatrixFederationHttpClient:
|
||||
use_proxy=True,
|
||||
)
|
||||
|
||||
self.remote_download_linearizer = Linearizer("remote_download_linearizer", 6)
|
||||
|
||||
def wake_destination(self, destination: str) -> None:
|
||||
"""Called when the remote server may have come back online."""
|
||||
|
||||
@@ -1486,35 +1490,44 @@ class MatrixFederationHttpClient:
|
||||
)
|
||||
|
||||
headers = dict(response.headers.getAllRawHeaders())
|
||||
|
||||
expected_size = response.length
|
||||
# if we don't get an expected length then use the max length
|
||||
|
||||
if expected_size == UNKNOWN_LENGTH:
|
||||
expected_size = max_size
|
||||
logger.debug(
|
||||
f"File size unknown, assuming file is max allowable size: {max_size}"
|
||||
)
|
||||
else:
|
||||
if int(expected_size) > max_size:
|
||||
msg = "Requested file is too large > %r bytes" % (max_size,)
|
||||
logger.warning(
|
||||
"{%s} [%s] %s",
|
||||
request.txn_id,
|
||||
request.destination,
|
||||
msg,
|
||||
)
|
||||
raise SynapseError(HTTPStatus.BAD_GATEWAY, msg, Codes.TOO_LARGE)
|
||||
|
||||
read_body, _ = await download_ratelimiter.can_do_action(
|
||||
requester=None,
|
||||
key=ip_address,
|
||||
n_actions=expected_size,
|
||||
)
|
||||
if not read_body:
|
||||
msg = "Requested file size exceeds ratelimits"
|
||||
logger.warning(
|
||||
"{%s} [%s] %s",
|
||||
request.txn_id,
|
||||
request.destination,
|
||||
msg,
|
||||
read_body, _ = await download_ratelimiter.can_do_action(
|
||||
requester=None,
|
||||
key=ip_address,
|
||||
n_actions=expected_size,
|
||||
)
|
||||
raise SynapseError(HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED)
|
||||
if not read_body:
|
||||
msg = "Requested file size exceeds ratelimits"
|
||||
logger.warning(
|
||||
"{%s} [%s] %s",
|
||||
request.txn_id,
|
||||
request.destination,
|
||||
msg,
|
||||
)
|
||||
raise SynapseError(
|
||||
HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED
|
||||
)
|
||||
|
||||
try:
|
||||
# add a byte of headroom to max size as function errs at >=
|
||||
d = read_body_with_max_size(response, output_stream, expected_size + 1)
|
||||
d.addTimeout(self.default_timeout_seconds, self.reactor)
|
||||
length = await make_deferred_yieldable(d)
|
||||
async with self.remote_download_linearizer.queue(ip_address):
|
||||
# add a byte of headroom to max size as function errs at >=
|
||||
d = read_body_with_max_size(response, output_stream, expected_size + 1)
|
||||
d.addTimeout(self.default_timeout_seconds, self.reactor)
|
||||
length = await make_deferred_yieldable(d)
|
||||
except BodyExceededMaxSize:
|
||||
msg = "Requested file is too large > %r bytes" % (expected_size,)
|
||||
logger.warning(
|
||||
@@ -1560,6 +1573,13 @@ class MatrixFederationHttpClient:
|
||||
request.method,
|
||||
request.uri.decode("ascii"),
|
||||
)
|
||||
|
||||
# if we didn't know the length upfront, decrement the actual size from ratelimiter
|
||||
if response.length == UNKNOWN_LENGTH:
|
||||
download_ratelimiter.record_action(
|
||||
requester=None, key=ip_address, n_actions=length
|
||||
)
|
||||
|
||||
return length, headers
|
||||
|
||||
async def federation_get_file(
|
||||
@@ -1630,29 +1650,37 @@ class MatrixFederationHttpClient:
|
||||
)
|
||||
|
||||
headers = dict(response.headers.getAllRawHeaders())
|
||||
|
||||
expected_size = response.length
|
||||
# if we don't get an expected length then use the max length
|
||||
|
||||
if expected_size == UNKNOWN_LENGTH:
|
||||
expected_size = max_size
|
||||
logger.debug(
|
||||
f"File size unknown, assuming file is max allowable size: {max_size}"
|
||||
)
|
||||
else:
|
||||
if int(expected_size) > max_size:
|
||||
msg = "Requested file is too large > %r bytes" % (max_size,)
|
||||
logger.warning(
|
||||
"{%s} [%s] %s",
|
||||
request.txn_id,
|
||||
request.destination,
|
||||
msg,
|
||||
)
|
||||
raise SynapseError(HTTPStatus.BAD_GATEWAY, msg, Codes.TOO_LARGE)
|
||||
|
||||
read_body, _ = await download_ratelimiter.can_do_action(
|
||||
requester=None,
|
||||
key=ip_address,
|
||||
n_actions=expected_size,
|
||||
)
|
||||
if not read_body:
|
||||
msg = "Requested file size exceeds ratelimits"
|
||||
logger.warning(
|
||||
"{%s} [%s] %s",
|
||||
request.txn_id,
|
||||
request.destination,
|
||||
msg,
|
||||
read_body, _ = await download_ratelimiter.can_do_action(
|
||||
requester=None,
|
||||
key=ip_address,
|
||||
n_actions=expected_size,
|
||||
)
|
||||
raise SynapseError(HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED)
|
||||
if not read_body:
|
||||
msg = "Requested file size exceeds ratelimits"
|
||||
logger.warning(
|
||||
"{%s} [%s] %s",
|
||||
request.txn_id,
|
||||
request.destination,
|
||||
msg,
|
||||
)
|
||||
raise SynapseError(
|
||||
HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED
|
||||
)
|
||||
|
||||
# this should be a multipart/mixed response with the boundary string in the header
|
||||
try:
|
||||
@@ -1672,11 +1700,12 @@ class MatrixFederationHttpClient:
|
||||
raise SynapseError(HTTPStatus.BAD_GATEWAY, msg)
|
||||
|
||||
try:
|
||||
# add a byte of headroom to max size as `_MultipartParserProtocol.dataReceived` errs at >=
|
||||
deferred = read_multipart_response(
|
||||
response, output_stream, boundary, expected_size + 1
|
||||
)
|
||||
deferred.addTimeout(self.default_timeout_seconds, self.reactor)
|
||||
async with self.remote_download_linearizer.queue(ip_address):
|
||||
# add a byte of headroom to max size as `_MultipartParserProtocol.dataReceived` errs at >=
|
||||
deferred = read_multipart_response(
|
||||
response, output_stream, boundary, expected_size + 1
|
||||
)
|
||||
deferred.addTimeout(self.default_timeout_seconds, self.reactor)
|
||||
except BodyExceededMaxSize:
|
||||
msg = "Requested file is too large > %r bytes" % (expected_size,)
|
||||
logger.warning(
|
||||
@@ -1729,8 +1758,10 @@ class MatrixFederationHttpClient:
|
||||
request.destination,
|
||||
str_url,
|
||||
)
|
||||
# We don't know how large the response will be upfront, so limit it to
|
||||
# the `max_upload_size` config value.
|
||||
length, headers, _, _ = await self._simple_http_client.get_file(
|
||||
str_url, output_stream, expected_size
|
||||
str_url, output_stream, self.max_download_size
|
||||
)
|
||||
|
||||
logger.info(
|
||||
@@ -1743,6 +1774,13 @@ class MatrixFederationHttpClient:
|
||||
request.method,
|
||||
request.uri.decode("ascii"),
|
||||
)
|
||||
|
||||
# if we didn't know the length upfront, decrement the actual size from ratelimiter
|
||||
if response.length == UNKNOWN_LENGTH:
|
||||
download_ratelimiter.record_action(
|
||||
requester=None, key=ip_address, n_actions=length
|
||||
)
|
||||
|
||||
return length, headers, multipart_response.json
|
||||
|
||||
|
||||
|
||||
@@ -62,6 +62,15 @@ HOP_BY_HOP_HEADERS = {
|
||||
"Upgrade",
|
||||
}
|
||||
|
||||
if hasattr(Headers, "_canonicalNameCaps"):
|
||||
# Twisted < 24.7.0rc1
|
||||
_canonicalHeaderName = Headers()._canonicalNameCaps # type: ignore[attr-defined]
|
||||
else:
|
||||
# Twisted >= 24.7.0rc1
|
||||
# But note that `_encodeName` still exists on prior versions,
|
||||
# it just encodes differently
|
||||
_canonicalHeaderName = Headers()._encodeName
|
||||
|
||||
|
||||
def parse_connection_header_value(
|
||||
connection_header_value: Optional[bytes],
|
||||
@@ -85,11 +94,10 @@ def parse_connection_header_value(
|
||||
The set of header names that should not be copied over from the remote response.
|
||||
The keys are capitalized in canonical capitalization.
|
||||
"""
|
||||
headers = Headers()
|
||||
extra_headers_to_remove: Set[str] = set()
|
||||
if connection_header_value:
|
||||
extra_headers_to_remove = {
|
||||
headers._canonicalNameCaps(connection_option.strip()).decode("ascii")
|
||||
_canonicalHeaderName(connection_option.strip()).decode("ascii")
|
||||
for connection_option in connection_header_value.split(b",")
|
||||
}
|
||||
|
||||
|
||||
@@ -658,7 +658,7 @@ class SynapseSite(ProxySite):
|
||||
)
|
||||
|
||||
self.site_tag = site_tag
|
||||
self.reactor = reactor
|
||||
self.reactor: ISynapseReactor = reactor
|
||||
|
||||
assert config.http_options is not None
|
||||
proxied = config.http_options.x_forwarded
|
||||
@@ -683,7 +683,7 @@ class SynapseSite(ProxySite):
|
||||
self.access_logger = logging.getLogger(logger_name)
|
||||
self.server_version_string = server_version_string.encode("ascii")
|
||||
|
||||
def log(self, request: SynapseRequest) -> None:
|
||||
def log(self, request: SynapseRequest) -> None: # type: ignore[override]
|
||||
pass
|
||||
|
||||
|
||||
|
||||
@@ -28,6 +28,7 @@ from types import TracebackType
|
||||
from typing import (
|
||||
TYPE_CHECKING,
|
||||
Awaitable,
|
||||
BinaryIO,
|
||||
Dict,
|
||||
Generator,
|
||||
List,
|
||||
@@ -37,19 +38,28 @@ from typing import (
|
||||
)
|
||||
|
||||
import attr
|
||||
from zope.interface import implementer
|
||||
|
||||
from twisted.internet import interfaces
|
||||
from twisted.internet.defer import Deferred
|
||||
from twisted.internet.interfaces import IConsumer
|
||||
from twisted.protocols.basic import FileSender
|
||||
from twisted.python.failure import Failure
|
||||
from twisted.web.server import Request
|
||||
|
||||
from synapse.api.errors import Codes, cs_error
|
||||
from synapse.http.server import finish_request, respond_with_json
|
||||
from synapse.http.site import SynapseRequest
|
||||
from synapse.logging.context import make_deferred_yieldable
|
||||
from synapse.logging.context import (
|
||||
defer_to_threadpool,
|
||||
make_deferred_yieldable,
|
||||
run_in_background,
|
||||
)
|
||||
from synapse.util import Clock
|
||||
from synapse.util.async_helpers import DeferredEvent
|
||||
from synapse.util.stringutils import is_ascii
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
from synapse.storage.databases.main.media_repository import LocalMedia
|
||||
|
||||
|
||||
@@ -122,6 +132,7 @@ def respond_404(request: SynapseRequest) -> None:
|
||||
|
||||
|
||||
async def respond_with_file(
|
||||
hs: "HomeServer",
|
||||
request: SynapseRequest,
|
||||
media_type: str,
|
||||
file_path: str,
|
||||
@@ -138,7 +149,7 @@ async def respond_with_file(
|
||||
add_file_headers(request, media_type, file_size, upload_name)
|
||||
|
||||
with open(file_path, "rb") as f:
|
||||
await make_deferred_yieldable(FileSender().beginFileTransfer(f, request))
|
||||
await ThreadedFileSender(hs).beginFileTransfer(f, request)
|
||||
|
||||
finish_request(request)
|
||||
else:
|
||||
@@ -601,3 +612,151 @@ def _parseparam(s: bytes) -> Generator[bytes, None, None]:
|
||||
f = s[:end]
|
||||
yield f.strip()
|
||||
s = s[end:]
|
||||
|
||||
|
||||
@implementer(interfaces.IPushProducer)
|
||||
class ThreadedFileSender:
|
||||
"""
|
||||
A producer that sends the contents of a file to a consumer, reading from the
|
||||
file on a thread.
|
||||
|
||||
This works by having a loop in a threadpool repeatedly reading from the
|
||||
file, until the consumer pauses the producer. There is then a loop in the
|
||||
main thread that waits until the consumer resumes the producer and then
|
||||
starts reading in the threadpool again.
|
||||
|
||||
This is done to ensure that we're never waiting in the threadpool, as
|
||||
otherwise it's easy to starve it of threads.
|
||||
"""
|
||||
|
||||
# How much data to read in one go.
|
||||
CHUNK_SIZE = 2**14
|
||||
|
||||
# How long we wait for the consumer to be ready again before aborting the
|
||||
# read.
|
||||
TIMEOUT_SECONDS = 90.0
|
||||
|
||||
def __init__(self, hs: "HomeServer") -> None:
|
||||
self.reactor = hs.get_reactor()
|
||||
self.thread_pool = hs.get_media_sender_thread_pool()
|
||||
|
||||
self.file: Optional[BinaryIO] = None
|
||||
self.deferred: "Deferred[None]" = Deferred()
|
||||
self.consumer: Optional[interfaces.IConsumer] = None
|
||||
|
||||
# Signals if the thread should keep reading/sending data. Set means
|
||||
# continue, clear means pause.
|
||||
self.wakeup_event = DeferredEvent(self.reactor)
|
||||
|
||||
# Signals if the thread should terminate, e.g. because the consumer has
|
||||
# gone away.
|
||||
self.stop_writing = False
|
||||
|
||||
def beginFileTransfer(
|
||||
self, file: BinaryIO, consumer: interfaces.IConsumer
|
||||
) -> "Deferred[None]":
|
||||
"""
|
||||
Begin transferring a file
|
||||
"""
|
||||
self.file = file
|
||||
self.consumer = consumer
|
||||
|
||||
self.consumer.registerProducer(self, True)
|
||||
|
||||
# We set the wakeup signal as we should start producing immediately.
|
||||
self.wakeup_event.set()
|
||||
run_in_background(self.start_read_loop)
|
||||
|
||||
return make_deferred_yieldable(self.deferred)
|
||||
|
||||
def resumeProducing(self) -> None:
|
||||
"""interfaces.IPushProducer"""
|
||||
self.wakeup_event.set()
|
||||
|
||||
def pauseProducing(self) -> None:
|
||||
"""interfaces.IPushProducer"""
|
||||
self.wakeup_event.clear()
|
||||
|
||||
def stopProducing(self) -> None:
|
||||
"""interfaces.IPushProducer"""
|
||||
|
||||
# Unregister the consumer so we don't try and interact with it again.
|
||||
if self.consumer:
|
||||
self.consumer.unregisterProducer()
|
||||
|
||||
self.consumer = None
|
||||
|
||||
# Terminate the loop.
|
||||
self.stop_writing = True
|
||||
self.wakeup_event.set()
|
||||
|
||||
if not self.deferred.called:
|
||||
self.deferred.errback(Exception("Consumer asked us to stop producing"))
|
||||
|
||||
async def start_read_loop(self) -> None:
|
||||
"""This is the loop that drives reading/writing"""
|
||||
try:
|
||||
while not self.stop_writing:
|
||||
# Start the loop in the threadpool to read data.
|
||||
more_data = await defer_to_threadpool(
|
||||
self.reactor, self.thread_pool, self._on_thread_read_loop
|
||||
)
|
||||
if not more_data:
|
||||
# Reached EOF, we can just return.
|
||||
return
|
||||
|
||||
if not self.wakeup_event.is_set():
|
||||
ret = await self.wakeup_event.wait(self.TIMEOUT_SECONDS)
|
||||
if not ret:
|
||||
raise Exception("Timed out waiting to resume")
|
||||
except Exception:
|
||||
self._error(Failure())
|
||||
finally:
|
||||
self._finish()
|
||||
|
||||
def _on_thread_read_loop(self) -> bool:
|
||||
"""This is the loop that happens on a thread.
|
||||
|
||||
Returns:
|
||||
Whether there is more data to send.
|
||||
"""
|
||||
|
||||
while not self.stop_writing and self.wakeup_event.is_set():
|
||||
# The file should always have been set before we get here.
|
||||
assert self.file is not None
|
||||
|
||||
chunk = self.file.read(self.CHUNK_SIZE)
|
||||
if not chunk:
|
||||
return False
|
||||
|
||||
self.reactor.callFromThread(self._write, chunk)
|
||||
|
||||
return True
|
||||
|
||||
def _write(self, chunk: bytes) -> None:
|
||||
"""Called from the thread to write a chunk of data"""
|
||||
if self.consumer:
|
||||
self.consumer.write(chunk)
|
||||
|
||||
def _error(self, failure: Failure) -> None:
|
||||
"""Called when there was a fatal error"""
|
||||
if self.consumer:
|
||||
self.consumer.unregisterProducer()
|
||||
self.consumer = None
|
||||
|
||||
if not self.deferred.called:
|
||||
self.deferred.errback(failure)
|
||||
|
||||
def _finish(self) -> None:
|
||||
"""Called when we have finished writing (either on success or
|
||||
failure)."""
|
||||
if self.file:
|
||||
self.file.close()
|
||||
self.file = None
|
||||
|
||||
if self.consumer:
|
||||
self.consumer.unregisterProducer()
|
||||
self.consumer = None
|
||||
|
||||
if not self.deferred.called:
|
||||
self.deferred.callback(None)
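# Hedged usage sketch (not part of the upstream diff): streaming a local file
# to a request with the producer above, mirroring how `respond_with_file` now
# uses it. `hs` is assumed to be a HomeServer and `request` a SynapseRequest
# supplied by the caller; the import path matches where this class is added.
from synapse.media._base import ThreadedFileSender

async def stream_file_example(hs, request, file_path: str) -> None:
    with open(file_path, "rb") as f:
        # Registers as a push producer on the request, reads CHUNK_SIZE blocks
        # on the media-sender threadpool, and resolves on EOF (or errors if the
        # consumer stops us, or we time out waiting for it to resume).
        await ThreadedFileSender(hs).beginFileTransfer(f, request)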
|
||||
|
||||
@@ -430,6 +430,7 @@ class MediaRepository:
|
||||
media_id: str,
|
||||
name: Optional[str],
|
||||
max_timeout_ms: int,
|
||||
allow_authenticated: bool = True,
|
||||
federation: bool = False,
|
||||
) -> None:
|
||||
"""Responds to requests for local media, if exists, or returns 404.
|
||||
@@ -442,6 +443,7 @@ class MediaRepository:
|
||||
the filename in the Content-Disposition header of the response.
|
||||
max_timeout_ms: the maximum number of milliseconds to wait for the
|
||||
media to be uploaded.
|
||||
allow_authenticated: whether media marked as authenticated may be served to this request
|
||||
federation: whether the local media being fetched is for a federation request
|
||||
|
||||
Returns:
|
||||
@@ -451,6 +453,10 @@ class MediaRepository:
|
||||
if not media_info:
|
||||
return
|
||||
|
||||
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
|
||||
if media_info.authenticated:
|
||||
raise NotFoundError()
|
||||
|
||||
self.mark_recently_accessed(None, media_id)
|
||||
|
||||
media_type = media_info.media_type
|
||||
@@ -481,6 +487,7 @@ class MediaRepository:
|
||||
max_timeout_ms: int,
|
||||
ip_address: str,
|
||||
use_federation_endpoint: bool,
|
||||
allow_authenticated: bool = True,
|
||||
) -> None:
|
||||
"""Respond to requests for remote media.
|
||||
|
||||
@@ -495,6 +502,8 @@ class MediaRepository:
|
||||
ip_address: the IP address of the requester
|
||||
use_federation_endpoint: whether to request the remote media over the new
|
||||
federation `/download` endpoint
|
||||
allow_authenticated: whether media marked as authenticated may be served to this
|
||||
request
|
||||
|
||||
Returns:
|
||||
Resolves once a response has successfully been written to request
|
||||
@@ -526,6 +535,7 @@ class MediaRepository:
|
||||
self.download_ratelimiter,
|
||||
ip_address,
|
||||
use_federation_endpoint,
|
||||
allow_authenticated,
|
||||
)
|
||||
|
||||
# We deliberately stream the file outside the lock
|
||||
@@ -548,6 +558,7 @@ class MediaRepository:
|
||||
max_timeout_ms: int,
|
||||
ip_address: str,
|
||||
use_federation: bool,
|
||||
allow_authenticated: bool,
|
||||
) -> RemoteMedia:
|
||||
"""Gets the media info associated with the remote file, downloading
|
||||
if necessary.
|
||||
@@ -560,6 +571,8 @@ class MediaRepository:
|
||||
ip_address: IP address of the requester
|
||||
use_federation: if a download is necessary, whether to request the remote file
|
||||
over the federation `/download` endpoint
|
||||
allow_authenticated: whether media marked as authenticated may be served to this
|
||||
request
|
||||
|
||||
Returns:
|
||||
The media info of the file
|
||||
@@ -581,6 +594,7 @@ class MediaRepository:
|
||||
self.download_ratelimiter,
|
||||
ip_address,
|
||||
use_federation,
|
||||
allow_authenticated,
|
||||
)
|
||||
|
||||
# Ensure we actually use the responder so that it releases resources
|
||||
@@ -598,6 +612,7 @@ class MediaRepository:
|
||||
download_ratelimiter: Ratelimiter,
|
||||
ip_address: str,
|
||||
use_federation_endpoint: bool,
|
||||
allow_authenticated: bool,
|
||||
) -> Tuple[Optional[Responder], RemoteMedia]:
|
||||
"""Looks for media in local cache, if not there then attempt to
|
||||
download from remote server.
|
||||
@@ -619,6 +634,11 @@ class MediaRepository:
|
||||
"""
|
||||
media_info = await self.store.get_cached_remote_media(server_name, media_id)
|
||||
|
||||
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
|
||||
# if it isn't cached then don't fetch it or if it's authenticated then don't serve it
|
||||
if not media_info or media_info.authenticated:
|
||||
raise NotFoundError()
|
||||
|
||||
# file_id is the ID we use to track the file locally. If we've already
|
||||
# seen the file then reuse the existing ID, otherwise generate a new
|
||||
# one.
|
||||
@@ -792,6 +812,11 @@ class MediaRepository:
|
||||
|
||||
logger.info("Stored remote media in file %r", fname)
|
||||
|
||||
if self.hs.config.media.enable_authenticated_media:
|
||||
authenticated = True
|
||||
else:
|
||||
authenticated = False
|
||||
|
||||
return RemoteMedia(
|
||||
media_origin=server_name,
|
||||
media_id=media_id,
|
||||
@@ -802,6 +827,7 @@ class MediaRepository:
|
||||
filesystem_id=file_id,
|
||||
last_access_ts=time_now_ms,
|
||||
quarantined_by=None,
|
||||
authenticated=authenticated,
|
||||
)
|
||||
|
||||
async def _federation_download_remote_file(
|
||||
@@ -915,6 +941,11 @@ class MediaRepository:
|
||||
|
||||
logger.debug("Stored remote media in file %r", fname)
|
||||
|
||||
if self.hs.config.media.enable_authenticated_media:
|
||||
authenticated = True
|
||||
else:
|
||||
authenticated = False
|
||||
|
||||
return RemoteMedia(
|
||||
media_origin=server_name,
|
||||
media_id=media_id,
|
||||
@@ -925,6 +956,7 @@ class MediaRepository:
|
||||
filesystem_id=file_id,
|
||||
last_access_ts=time_now_ms,
|
||||
quarantined_by=None,
|
||||
authenticated=authenticated,
|
||||
)
|
||||
|
||||
def _get_thumbnail_requirements(
|
||||
@@ -1030,7 +1062,12 @@ class MediaRepository:
|
||||
t_len = os.path.getsize(output_path)
|
||||
|
||||
await self.store.store_local_thumbnail(
|
||||
media_id, t_width, t_height, t_type, t_method, t_len
|
||||
media_id,
|
||||
t_width,
|
||||
t_height,
|
||||
t_type,
|
||||
t_method,
|
||||
t_len,
|
||||
)
|
||||
|
||||
return output_path
|
||||
|
||||
@@ -49,15 +49,11 @@ from zope.interface import implementer
|
||||
from twisted.internet import interfaces
|
||||
from twisted.internet.defer import Deferred
|
||||
from twisted.internet.interfaces import IConsumer
|
||||
from twisted.protocols.basic import FileSender
|
||||
|
||||
from synapse.api.errors import NotFoundError
|
||||
from synapse.logging.context import (
|
||||
defer_to_thread,
|
||||
make_deferred_yieldable,
|
||||
run_in_background,
|
||||
)
|
||||
from synapse.logging.context import defer_to_thread, run_in_background
|
||||
from synapse.logging.opentracing import start_active_span, trace, trace_with_opname
|
||||
from synapse.media._base import ThreadedFileSender
|
||||
from synapse.util import Clock
|
||||
from synapse.util.file_consumer import BackgroundFileConsumer
|
||||
|
||||
@@ -213,7 +209,7 @@ class MediaStorage:
|
||||
local_path = os.path.join(self.local_media_directory, path)
|
||||
if os.path.exists(local_path):
|
||||
logger.debug("responding with local file %s", local_path)
|
||||
return FileResponder(open(local_path, "rb"))
|
||||
return FileResponder(self.hs, open(local_path, "rb"))
|
||||
logger.debug("local file %s did not exist", local_path)
|
||||
|
||||
for provider in self.storage_providers:
|
||||
@@ -336,13 +332,12 @@ class FileResponder(Responder):
    is closed when finished streaming.
    """

    def __init__(self, open_file: IO):
    def __init__(self, hs: "HomeServer", open_file: BinaryIO):
        self.hs = hs
        self.open_file = open_file

    def write_to_consumer(self, consumer: IConsumer) -> Deferred:
        return make_deferred_yieldable(
            FileSender().beginFileTransfer(self.open_file, consumer)
        )
        return ThreadedFileSender(self.hs).beginFileTransfer(self.open_file, consumer)

    def __exit__(
        self,
|
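The change above swaps Twisted's reactor-thread FileSender for a threadpool-backed sender, so large file reads no longer block the reactor. A rough sketch of the idea using only stock Twisted APIs (this approximates the intent, it is not the ThreadedFileSender implementation, which also handles producer registration and pausing):

from typing import BinaryIO

from twisted.internet import reactor
from twisted.internet.defer import Deferred
from twisted.internet.interfaces import IConsumer
from twisted.internet.threads import deferToThreadPool

CHUNK_SIZE = 64 * 1024


def _read_and_write(open_file: BinaryIO, consumer: IConsumer) -> None:
    # Runs in a worker thread: read the file in chunks and hand each chunk
    # back to the consumer on the reactor thread.
    while True:
        chunk = open_file.read(CHUNK_SIZE)
        if not chunk:
            break
        reactor.callFromThread(consumer.write, chunk)


def begin_file_transfer(open_file: BinaryIO, consumer: IConsumer) -> Deferred:
    # The blocking reads happen on a threadpool rather than the reactor
    # thread, which is the property the ThreadedFileSender change adds.
    return deferToThreadPool(
        reactor, reactor.getThreadPool(), _read_and_write, open_file, consumer
    )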
||||
@@ -549,7 +544,7 @@ class MultipartFileConsumer:
        Calculate the content length of the multipart response
        in bytes.
        """
        if not self.length:
        if self.length is None:
            return None
        # calculate length of json field and content-type, disposition headers
        json_field = json.dumps(self.json_field)
|
||||
|
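The one-line fix above matters because `not self.length` also fires when the known length is 0, whereas only an unknown (None) length should make the method give up on computing a content length. A small illustration of the difference:

from typing import Optional


def content_length_buggy(length: Optional[int]) -> Optional[int]:
    # Wrong: a legitimate zero-byte payload is treated as "unknown length".
    if not length:
        return None
    return length


def content_length_fixed(length: Optional[int]) -> Optional[int]:
    # Right: only an unknown length short-circuits.
    if length is None:
        return None
    return length


assert content_length_buggy(0) is None   # silently drops the Content-Length
assert content_length_fixed(0) == 0      # zero is a valid Content-Length
assert content_length_fixed(None) is None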
||||
@@ -145,6 +145,7 @@ class FileStorageProviderBackend(StorageProvider):
|
||||
|
||||
def __init__(self, hs: "HomeServer", config: str):
|
||||
self.hs = hs
|
||||
self.reactor = hs.get_reactor()
|
||||
self.cache_directory = hs.config.media.media_store_path
|
||||
self.base_directory = config
|
||||
|
||||
@@ -165,7 +166,7 @@ class FileStorageProviderBackend(StorageProvider):
|
||||
shutil_copyfile: Callable[[str, str], str] = shutil.copyfile
|
||||
with start_active_span("shutil_copyfile"):
|
||||
await defer_to_thread(
|
||||
self.hs.get_reactor(),
|
||||
self.reactor,
|
||||
shutil_copyfile,
|
||||
primary_fname,
|
||||
backup_fname,
|
||||
@@ -177,7 +178,7 @@ class FileStorageProviderBackend(StorageProvider):
|
||||
|
||||
backup_fname = os.path.join(self.base_directory, path)
|
||||
if os.path.isfile(backup_fname):
|
||||
return FileResponder(open(backup_fname, "rb"))
|
||||
return FileResponder(self.hs, open(backup_fname, "rb"))
|
||||
|
||||
return None
|
||||
|
||||
|
||||
@@ -26,7 +26,7 @@ from typing import TYPE_CHECKING, List, Optional, Tuple, Type
|
||||
|
||||
from PIL import Image
|
||||
|
||||
from synapse.api.errors import Codes, SynapseError, cs_error
|
||||
from synapse.api.errors import Codes, NotFoundError, SynapseError, cs_error
|
||||
from synapse.config.repository import THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP
|
||||
from synapse.http.server import respond_with_json
|
||||
from synapse.http.site import SynapseRequest
|
||||
@@ -259,6 +259,7 @@ class ThumbnailProvider:
|
||||
media_storage: MediaStorage,
|
||||
):
|
||||
self.hs = hs
|
||||
self.reactor = hs.get_reactor()
|
||||
self.media_repo = media_repo
|
||||
self.media_storage = media_storage
|
||||
self.store = hs.get_datastores().main
|
||||
@@ -274,6 +275,7 @@ class ThumbnailProvider:
|
||||
m_type: str,
|
||||
max_timeout_ms: int,
|
||||
for_federation: bool,
|
||||
allow_authenticated: bool = True,
|
||||
) -> None:
|
||||
media_info = await self.media_repo.get_local_media_info(
|
||||
request, media_id, max_timeout_ms
|
||||
@@ -281,6 +283,12 @@ class ThumbnailProvider:
|
||||
if not media_info:
|
||||
return
|
||||
|
||||
# if the media the thumbnail is generated from is authenticated, don't serve the
|
||||
# thumbnail over an unauthenticated endpoint
|
||||
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
|
||||
if media_info.authenticated:
|
||||
raise NotFoundError()
|
||||
|
||||
thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
|
||||
await self._select_and_respond_with_thumbnail(
|
||||
request,
|
||||
@@ -307,14 +315,20 @@ class ThumbnailProvider:
|
||||
desired_type: str,
|
||||
max_timeout_ms: int,
|
||||
for_federation: bool,
|
||||
allow_authenticated: bool = True,
|
||||
) -> None:
|
||||
media_info = await self.media_repo.get_local_media_info(
|
||||
request, media_id, max_timeout_ms
|
||||
)
|
||||
|
||||
if not media_info:
|
||||
return
|
||||
|
||||
# if the media the thumbnail is generated from is authenticated, don't serve the
|
||||
# thumbnail over an unauthenticated endpoint
|
||||
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
|
||||
if media_info.authenticated:
|
||||
raise NotFoundError()
|
||||
|
||||
thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
|
||||
for info in thumbnail_infos:
|
||||
t_w = info.width == desired_width
|
||||
@@ -360,11 +374,11 @@ class ThumbnailProvider:
|
||||
await respond_with_multipart_responder(
|
||||
self.hs.get_clock(),
|
||||
request,
|
||||
FileResponder(open(file_path, "rb")),
|
||||
FileResponder(self.hs, open(file_path, "rb")),
|
||||
media_info,
|
||||
)
|
||||
else:
|
||||
await respond_with_file(request, desired_type, file_path)
|
||||
await respond_with_file(self.hs, request, desired_type, file_path)
|
||||
else:
|
||||
logger.warning("Failed to generate thumbnail")
|
||||
raise SynapseError(400, "Failed to generate thumbnail.")
|
||||
@@ -381,14 +395,27 @@ class ThumbnailProvider:
|
||||
max_timeout_ms: int,
|
||||
ip_address: str,
|
||||
use_federation: bool,
|
||||
allow_authenticated: bool = True,
|
||||
) -> None:
|
||||
media_info = await self.media_repo.get_remote_media_info(
|
||||
server_name, media_id, max_timeout_ms, ip_address, use_federation
|
||||
server_name,
|
||||
media_id,
|
||||
max_timeout_ms,
|
||||
ip_address,
|
||||
use_federation,
|
||||
allow_authenticated,
|
||||
)
|
||||
if not media_info:
|
||||
respond_404(request)
|
||||
return
|
||||
|
||||
# if the media the thumbnail is generated from is authenticated, don't serve the
|
||||
# thumbnail over an unauthenticated endpoint
|
||||
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
|
||||
if media_info.authenticated:
|
||||
respond_404(request)
|
||||
return
|
||||
|
||||
thumbnail_infos = await self.store.get_remote_media_thumbnails(
|
||||
server_name, media_id
|
||||
)
|
||||
@@ -429,7 +456,7 @@ class ThumbnailProvider:
|
||||
)
|
||||
|
||||
if file_path:
|
||||
await respond_with_file(request, desired_type, file_path)
|
||||
await respond_with_file(self.hs, request, desired_type, file_path)
|
||||
else:
|
||||
logger.warning("Failed to generate thumbnail")
|
||||
raise SynapseError(400, "Failed to generate thumbnail.")
|
||||
@@ -446,16 +473,28 @@ class ThumbnailProvider:
|
||||
max_timeout_ms: int,
|
||||
ip_address: str,
|
||||
use_federation: bool,
|
||||
allow_authenticated: bool = True,
|
||||
) -> None:
|
||||
# TODO: Don't download the whole remote file
|
||||
# We should proxy the thumbnail from the remote server instead of
|
||||
# downloading the remote file and generating our own thumbnails.
|
||||
media_info = await self.media_repo.get_remote_media_info(
|
||||
server_name, media_id, max_timeout_ms, ip_address, use_federation
|
||||
server_name,
|
||||
media_id,
|
||||
max_timeout_ms,
|
||||
ip_address,
|
||||
use_federation,
|
||||
allow_authenticated,
|
||||
)
|
||||
if not media_info:
|
||||
return
|
||||
|
||||
# if the media the thumbnail is generated from is authenticated, don't serve the
|
||||
# thumbnail over an unauthenticated endpoint
|
||||
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
|
||||
if media_info.authenticated:
|
||||
raise NotFoundError()
|
||||
|
||||
thumbnail_infos = await self.store.get_remote_media_thumbnails(
|
||||
server_name, media_id
|
||||
)
|
||||
@@ -485,8 +524,8 @@ class ThumbnailProvider:
|
||||
file_id: str,
|
||||
url_cache: bool,
|
||||
for_federation: bool,
|
||||
server_name: Optional[str] = None,
|
||||
media_info: Optional[LocalMedia] = None,
|
||||
server_name: Optional[str] = None,
|
||||
) -> None:
|
||||
"""
|
||||
Respond to a request with an appropriate thumbnail from the previously generated thumbnails.
|
||||
|
||||
@@ -773,6 +773,7 @@ class Notifier:
        stream_token = await self.event_sources.bound_future_token(stream_token)

        start = self.clock.time_msec()
        logged = False
        while True:
            current_token = self.event_sources.get_current_token()
            if stream_token.is_before_or_eq(current_token):
@@ -783,11 +784,13 @@
            if now - start > 10_000:
                return False

            logger.info(
                "Waiting for current token to reach %s; currently at %s",
                stream_token,
                current_token,
            )
            if not logged:
                logger.info(
                    "Waiting for current token to reach %s; currently at %s",
                    stream_token,
                    current_token,
                )
                logged = True

            # TODO: be better
            await self.clock.sleep(0.5)
|
||||
|
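The hunk above changes the wait loop so the "waiting for current token" message is logged once per wait rather than on every 0.5 second poll. A condensed, self-contained sketch of the same pattern (synchronous and with illustrative names, not the Notifier API):

import logging
import time
from typing import Callable

logger = logging.getLogger(__name__)


def wait_for_position(
    get_current: Callable[[], int], target: int, timeout_ms: int = 10_000
) -> bool:
    """Poll until get_current() reaches target, logging the wait only once."""
    start = time.monotonic() * 1000
    logged = False
    while True:
        current = get_current()
        if target <= current:
            return True

        now = time.monotonic() * 1000
        if now - start > timeout_ms:
            return False

        if not logged:
            # Log once per wait instead of spamming on every iteration.
            logger.info(
                "Waiting for current token to reach %s; currently at %s", target, current
            )
            logged = True

        time.sleep(0.5)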
||||
@@ -18,7 +18,8 @@
|
||||
# [This file includes modifications made by New Vector Limited]
|
||||
#
|
||||
#
|
||||
from typing import TYPE_CHECKING, Callable
|
||||
import logging
|
||||
from typing import TYPE_CHECKING, Callable, Dict, Iterable, List, Optional, Tuple
|
||||
|
||||
from synapse.http.server import HttpServer, JsonResource
|
||||
from synapse.rest import admin
|
||||
@@ -67,11 +68,64 @@ from synapse.rest.client import (
|
||||
voip,
|
||||
)
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
|
||||
RegisterServletsFunc = Callable[["HomeServer", HttpServer], None]
|
||||
|
||||
CLIENT_SERVLET_FUNCTIONS: Tuple[RegisterServletsFunc, ...] = (
|
||||
versions.register_servlets,
|
||||
initial_sync.register_servlets,
|
||||
room.register_deprecated_servlets,
|
||||
events.register_servlets,
|
||||
room.register_servlets,
|
||||
login.register_servlets,
|
||||
profile.register_servlets,
|
||||
presence.register_servlets,
|
||||
directory.register_servlets,
|
||||
voip.register_servlets,
|
||||
pusher.register_servlets,
|
||||
push_rule.register_servlets,
|
||||
logout.register_servlets,
|
||||
sync.register_servlets,
|
||||
filter.register_servlets,
|
||||
account.register_servlets,
|
||||
register.register_servlets,
|
||||
auth.register_servlets,
|
||||
receipts.register_servlets,
|
||||
read_marker.register_servlets,
|
||||
room_keys.register_servlets,
|
||||
keys.register_servlets,
|
||||
tokenrefresh.register_servlets,
|
||||
tags.register_servlets,
|
||||
account_data.register_servlets,
|
||||
reporting.register_servlets,
|
||||
openid.register_servlets,
|
||||
notifications.register_servlets,
|
||||
devices.register_servlets,
|
||||
thirdparty.register_servlets,
|
||||
sendtodevice.register_servlets,
|
||||
user_directory.register_servlets,
|
||||
room_upgrade_rest_servlet.register_servlets,
|
||||
capabilities.register_servlets,
|
||||
account_validity.register_servlets,
|
||||
relations.register_servlets,
|
||||
password_policy.register_servlets,
|
||||
knock.register_servlets,
|
||||
appservice_ping.register_servlets,
|
||||
admin.register_servlets_for_client_rest_resource,
|
||||
mutual_rooms.register_servlets,
|
||||
login_token_request.register_servlets,
|
||||
rendezvous.register_servlets,
|
||||
auth_issuer.register_servlets,
|
||||
)
|
||||
|
||||
SERVLET_GROUPS: Dict[str, Iterable[RegisterServletsFunc]] = {
|
||||
"client": CLIENT_SERVLET_FUNCTIONS,
|
||||
}
|
||||
|
||||
|
||||
class ClientRestResource(JsonResource):
|
||||
"""Matrix Client API REST resource.
|
||||
@@ -83,80 +137,56 @@ class ClientRestResource(JsonResource):
|
||||
* etc
|
||||
"""
|
||||
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
def __init__(self, hs: "HomeServer", servlet_groups: Optional[List[str]] = None):
|
||||
JsonResource.__init__(self, hs, canonical_json=False)
|
||||
self.register_servlets(self, hs)
|
||||
if hs.config.media.can_load_media_repo:
|
||||
# This import is here to prevent a circular import failure
|
||||
from synapse.rest.client import media
|
||||
|
||||
SERVLET_GROUPS["media"] = (media.register_servlets,)
|
||||
self.register_servlets(self, hs, servlet_groups)
|
||||
|
||||
@staticmethod
|
||||
def register_servlets(client_resource: HttpServer, hs: "HomeServer") -> None:
|
||||
def register_servlets(
|
||||
client_resource: HttpServer,
|
||||
hs: "HomeServer",
|
||||
servlet_groups: Optional[Iterable[str]] = None,
|
||||
) -> None:
|
||||
# Some servlets are only registered on the main process (and not worker
|
||||
# processes).
|
||||
is_main_process = hs.config.worker.worker_app is None
|
||||
|
||||
versions.register_servlets(hs, client_resource)
|
||||
if not servlet_groups:
|
||||
servlet_groups = SERVLET_GROUPS.keys()
|
||||
|
||||
# Deprecated in r0
|
||||
initial_sync.register_servlets(hs, client_resource)
|
||||
room.register_deprecated_servlets(hs, client_resource)
|
||||
for servlet_group in servlet_groups:
|
||||
# Fail on unknown servlet groups.
|
||||
if servlet_group not in SERVLET_GROUPS:
|
||||
if servlet_group == "media":
|
||||
logger.warn(
|
||||
"media.can_load_media_repo needs to be configured for the media servlet to be available"
|
||||
)
|
||||
raise RuntimeError(
|
||||
f"Attempting to register unknown client servlet: '{servlet_group}'"
|
||||
)
|
||||
|
||||
# Partially deprecated in r0
|
||||
events.register_servlets(hs, client_resource)
|
||||
for servletfunc in SERVLET_GROUPS[servlet_group]:
|
||||
if not is_main_process and servletfunc in [
|
||||
pusher.register_servlets,
|
||||
logout.register_servlets,
|
||||
auth.register_servlets,
|
||||
tokenrefresh.register_servlets,
|
||||
reporting.register_servlets,
|
||||
openid.register_servlets,
|
||||
thirdparty.register_servlets,
|
||||
room_upgrade_rest_servlet.register_servlets,
|
||||
account_validity.register_servlets,
|
||||
admin.register_servlets_for_client_rest_resource,
|
||||
mutual_rooms.register_servlets,
|
||||
login_token_request.register_servlets,
|
||||
rendezvous.register_servlets,
|
||||
auth_issuer.register_servlets,
|
||||
]:
|
||||
continue
|
||||
|
||||
room.register_servlets(hs, client_resource)
|
||||
login.register_servlets(hs, client_resource)
|
||||
profile.register_servlets(hs, client_resource)
|
||||
presence.register_servlets(hs, client_resource)
|
||||
directory.register_servlets(hs, client_resource)
|
||||
voip.register_servlets(hs, client_resource)
|
||||
if is_main_process:
|
||||
pusher.register_servlets(hs, client_resource)
|
||||
push_rule.register_servlets(hs, client_resource)
|
||||
if is_main_process:
|
||||
logout.register_servlets(hs, client_resource)
|
||||
sync.register_servlets(hs, client_resource)
|
||||
filter.register_servlets(hs, client_resource)
|
||||
account.register_servlets(hs, client_resource)
|
||||
register.register_servlets(hs, client_resource)
|
||||
if is_main_process:
|
||||
auth.register_servlets(hs, client_resource)
|
||||
receipts.register_servlets(hs, client_resource)
|
||||
read_marker.register_servlets(hs, client_resource)
|
||||
room_keys.register_servlets(hs, client_resource)
|
||||
keys.register_servlets(hs, client_resource)
|
||||
if is_main_process:
|
||||
tokenrefresh.register_servlets(hs, client_resource)
|
||||
tags.register_servlets(hs, client_resource)
|
||||
account_data.register_servlets(hs, client_resource)
|
||||
if is_main_process:
|
||||
reporting.register_servlets(hs, client_resource)
|
||||
openid.register_servlets(hs, client_resource)
|
||||
notifications.register_servlets(hs, client_resource)
|
||||
devices.register_servlets(hs, client_resource)
|
||||
if is_main_process:
|
||||
thirdparty.register_servlets(hs, client_resource)
|
||||
sendtodevice.register_servlets(hs, client_resource)
|
||||
user_directory.register_servlets(hs, client_resource)
|
||||
if is_main_process:
|
||||
room_upgrade_rest_servlet.register_servlets(hs, client_resource)
|
||||
capabilities.register_servlets(hs, client_resource)
|
||||
if is_main_process:
|
||||
account_validity.register_servlets(hs, client_resource)
|
||||
relations.register_servlets(hs, client_resource)
|
||||
password_policy.register_servlets(hs, client_resource)
|
||||
knock.register_servlets(hs, client_resource)
|
||||
appservice_ping.register_servlets(hs, client_resource)
|
||||
if hs.config.media.can_load_media_repo:
|
||||
from synapse.rest.client import media
|
||||
|
||||
media.register_servlets(hs, client_resource)
|
||||
|
||||
# moving to /_synapse/admin
|
||||
if is_main_process:
|
||||
admin.register_servlets_for_client_rest_resource(hs, client_resource)
|
||||
|
||||
# unstable
|
||||
if is_main_process:
|
||||
mutual_rooms.register_servlets(hs, client_resource)
|
||||
login_token_request.register_servlets(hs, client_resource)
|
||||
rendezvous.register_servlets(hs, client_resource)
|
||||
auth_issuer.register_servlets(hs, client_resource)
|
||||
servletfunc(hs, client_resource)
|
||||
|
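The rewrite above replaces a long run of inline register calls with a SERVLET_GROUPS table plus a set of main-process-only register functions that are skipped on workers, and it fails loudly on unknown group names. A stripped-down sketch of that dispatch pattern (toy register functions standing in for the real HomeServer/HttpServer machinery):

from typing import Callable, Dict, Iterable, List, Optional, Tuple

RegisterFunc = Callable[[list], None]


def register_login(resource: list) -> None:
    resource.append("login")


def register_pusher(resource: list) -> None:
    resource.append("pusher")


SERVLET_GROUPS: Dict[str, Tuple[RegisterFunc, ...]] = {
    "client": (register_login, register_pusher),
}

# Servlets that only run on the main process, mirroring the is_main_process checks.
MAIN_PROCESS_ONLY = {register_pusher}


def register_servlets(
    resource: list,
    is_main_process: bool,
    servlet_groups: Optional[Iterable[str]] = None,
) -> None:
    if not servlet_groups:
        servlet_groups = SERVLET_GROUPS.keys()

    for group in servlet_groups:
        if group not in SERVLET_GROUPS:
            # Unknown group names fail loudly instead of being silently ignored.
            raise RuntimeError(f"Attempting to register unknown client servlet: '{group}'")
        for func in SERVLET_GROUPS[group]:
            if not is_main_process and func in MAIN_PROCESS_ONLY:
                continue
            func(resource)


worker_resource: List[str] = []
register_servlets(worker_resource, is_main_process=False)
assert worker_resource == ["login"]  # pusher is skipped on worker processes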
||||
@@ -256,9 +256,15 @@ class KeyChangesServlet(RestServlet):
|
||||
|
||||
user_id = requester.user.to_string()
|
||||
|
||||
results = await self.device_handler.get_user_ids_changed(user_id, from_token)
|
||||
device_list_updates = await self.device_handler.get_user_ids_changed(
|
||||
user_id, from_token
|
||||
)
|
||||
|
||||
return 200, results
|
||||
response: JsonDict = {}
|
||||
response["changed"] = list(device_list_updates.changed)
|
||||
response["left"] = list(device_list_updates.left)
|
||||
|
||||
return 200, response
|
||||
|
||||
|
||||
class OneTimeKeyServlet(RestServlet):
|
||||
|
||||
@@ -67,7 +67,8 @@ from synapse.streams.config import PaginationConfig
|
||||
from synapse.types import JsonDict, Requester, StreamToken, ThirdPartyInstanceID, UserID
|
||||
from synapse.types.state import StateFilter
|
||||
from synapse.util.cancellation import cancellable
|
||||
from synapse.util.stringutils import parse_and_validate_server_name, random_string
|
||||
from synapse.util.events import generate_fake_event_id
|
||||
from synapse.util.stringutils import parse_and_validate_server_name
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
@@ -325,7 +326,7 @@ class RoomStateEventRestServlet(RestServlet):
|
||||
)
|
||||
event_id = event.event_id
|
||||
except ShadowBanError:
|
||||
event_id = "$" + random_string(43)
|
||||
event_id = generate_fake_event_id()
|
||||
|
||||
set_tag("event_id", event_id)
|
||||
ret = {"event_id": event_id}
|
||||
@@ -377,7 +378,7 @@ class RoomSendEventRestServlet(TransactionRestServlet):
|
||||
)
|
||||
event_id = event.event_id
|
||||
except ShadowBanError:
|
||||
event_id = "$" + random_string(43)
|
||||
event_id = generate_fake_event_id()
|
||||
|
||||
set_tag("event_id", event_id)
|
||||
return 200, {"event_id": event_id}
|
||||
@@ -1193,7 +1194,7 @@ class RoomRedactEventRestServlet(TransactionRestServlet):
|
||||
|
||||
event_id = event.event_id
|
||||
except ShadowBanError:
|
||||
event_id = "$" + random_string(43)
|
||||
event_id = generate_fake_event_id()
|
||||
|
||||
set_tag("event_id", event_id)
|
||||
return 200, {"event_id": event_id}
|
||||
|
||||
@@ -52,9 +52,9 @@ from synapse.http.servlet import (
|
||||
parse_string,
|
||||
)
|
||||
from synapse.http.site import SynapseRequest
|
||||
from synapse.logging.opentracing import trace_with_opname
|
||||
from synapse.logging.opentracing import log_kv, set_tag, trace_with_opname
|
||||
from synapse.rest.admin.experimental_features import ExperimentalFeature
|
||||
from synapse.types import JsonDict, Requester, StreamToken
|
||||
from synapse.types import JsonDict, Requester, SlidingSyncStreamToken, StreamToken
|
||||
from synapse.types.rest.client import SlidingSyncBody
|
||||
from synapse.util import json_decoder
|
||||
from synapse.util.caches.lrucache import LruCache
|
||||
@@ -881,7 +881,6 @@ class SlidingSyncRestServlet(RestServlet):
|
||||
)
|
||||
|
||||
user = requester.user
|
||||
device_id = requester.device_id
|
||||
|
||||
timeout = parse_integer(request, "timeout", default=0)
|
||||
# Position in the stream
|
||||
@@ -889,22 +888,50 @@ class SlidingSyncRestServlet(RestServlet):
|
||||
|
||||
from_token = None
|
||||
if from_token_string is not None:
|
||||
from_token = await StreamToken.from_string(self.store, from_token_string)
|
||||
from_token = await SlidingSyncStreamToken.from_string(
|
||||
self.store, from_token_string
|
||||
)
|
||||
|
||||
# TODO: We currently don't know whether we're going to use sticky params or
|
||||
# maybe some filters like sync v2 where they are built up once and referenced
|
||||
# by filter ID. For now, we will just prototype with always passing everything
|
||||
# in.
|
||||
body = parse_and_validate_json_object_from_request(request, SlidingSyncBody)
|
||||
logger.info("Sliding sync request: %r", body)
|
||||
|
||||
# Tag and log useful data to differentiate requests.
|
||||
set_tag(
|
||||
"sliding_sync.sync_type", "initial" if from_token is None else "incremental"
|
||||
)
|
||||
set_tag("sliding_sync.conn_id", body.conn_id or "")
|
||||
log_kv(
|
||||
{
|
||||
"sliding_sync.lists": {
|
||||
list_name: {
|
||||
"ranges": list_config.ranges,
|
||||
"timeline_limit": list_config.timeline_limit,
|
||||
}
|
||||
for list_name, list_config in (body.lists or {}).items()
|
||||
},
|
||||
"sliding_sync.room_subscriptions": list(
|
||||
(body.room_subscriptions or {}).keys()
|
||||
),
|
||||
# We also include the number of room subscriptions because logs are
|
||||
# limited to 1024 characters and the large room ID list above can be cut
|
||||
# off.
|
||||
"sliding_sync.num_room_subscriptions": len(
|
||||
(body.room_subscriptions or {}).keys()
|
||||
),
|
||||
}
|
||||
)
|
||||
|
||||
sync_config = SlidingSyncConfig(
|
||||
user=user,
|
||||
device_id=device_id,
|
||||
requester=requester,
|
||||
# FIXME: Currently, we're just manually copying the fields from the
|
||||
# `SlidingSyncBody` into the config. How can we gurantee into the future
|
||||
# `SlidingSyncBody` into the config. How can we guarantee into the future
|
||||
# that we don't forget any? I would like something more structured like
|
||||
# `copy_attributes(from=body, to=config)`
|
||||
conn_id=body.conn_id,
|
||||
lists=body.lists,
|
||||
room_subscriptions=body.room_subscriptions,
|
||||
extensions=body.extensions,
|
||||
@@ -927,7 +954,6 @@ class SlidingSyncRestServlet(RestServlet):
|
||||
|
||||
return 200, response_content
|
||||
|
||||
# TODO: Is there a better way to encode things?
|
||||
async def encode_response(
|
||||
self,
|
||||
requester: Requester,
|
||||
@@ -1018,6 +1044,11 @@ class SlidingSyncRestServlet(RestServlet):
|
||||
if room_result.initial:
|
||||
serialized_rooms[room_id]["initial"] = room_result.initial
|
||||
|
||||
if room_result.unstable_expanded_timeline:
|
||||
serialized_rooms[room_id][
|
||||
"unstable_expanded_timeline"
|
||||
] = room_result.unstable_expanded_timeline
|
||||
|
||||
# This will be omitted for invite/knock rooms with `stripped_state`
|
||||
if (
|
||||
room_result.required_state is not None
|
||||
@@ -1081,15 +1112,69 @@ class SlidingSyncRestServlet(RestServlet):
|
||||
async def encode_extensions(
|
||||
self, requester: Requester, extensions: SlidingSyncResult.Extensions
|
||||
) -> JsonDict:
|
||||
result = {}
|
||||
serialized_extensions: JsonDict = {}
|
||||
|
||||
if extensions.to_device is not None:
|
||||
result["to_device"] = {
|
||||
serialized_extensions["to_device"] = {
|
||||
"next_batch": extensions.to_device.next_batch,
|
||||
"events": extensions.to_device.events,
|
||||
}
|
||||
|
||||
return result
|
||||
if extensions.e2ee is not None:
|
||||
serialized_extensions["e2ee"] = {
|
||||
# We always include this because
|
||||
# https://github.com/vector-im/element-android/issues/3725. The spec
|
||||
# isn't terribly clear on when this can be omitted and how a client
|
||||
# would tell the difference between "no keys present" and "nothing
|
||||
# changed" in terms of whole field absent / individual key type entry
|
||||
# absent Corresponding synapse issue:
|
||||
# https://github.com/matrix-org/synapse/issues/10456
|
||||
"device_one_time_keys_count": extensions.e2ee.device_one_time_keys_count,
|
||||
# https://github.com/matrix-org/matrix-doc/blob/54255851f642f84a4f1aaf7bc063eebe3d76752b/proposals/2732-olm-fallback-keys.md
|
||||
# states that this field should always be included, as long as the
|
||||
# server supports the feature.
|
||||
"device_unused_fallback_key_types": extensions.e2ee.device_unused_fallback_key_types,
|
||||
}
|
||||
|
||||
if extensions.e2ee.device_list_updates is not None:
|
||||
serialized_extensions["e2ee"]["device_lists"] = {}
|
||||
|
||||
serialized_extensions["e2ee"]["device_lists"]["changed"] = list(
|
||||
extensions.e2ee.device_list_updates.changed
|
||||
)
|
||||
serialized_extensions["e2ee"]["device_lists"]["left"] = list(
|
||||
extensions.e2ee.device_list_updates.left
|
||||
)
|
||||
|
||||
if extensions.account_data is not None:
|
||||
serialized_extensions["account_data"] = {
|
||||
# Same as the top-level `account_data.events` field in Sync v2.
|
||||
"global": [
|
||||
{"type": account_data_type, "content": content}
|
||||
for account_data_type, content in extensions.account_data.global_account_data_map.items()
|
||||
],
|
||||
# Same as the joined room's account_data field in Sync v2, e.g. the path
|
||||
# `rooms.join["!foo:bar"].account_data.events`.
|
||||
"rooms": {
|
||||
room_id: [
|
||||
{"type": account_data_type, "content": content}
|
||||
for account_data_type, content in event_map.items()
|
||||
]
|
||||
for room_id, event_map in extensions.account_data.account_data_by_room_map.items()
|
||||
},
|
||||
}
|
||||
|
||||
if extensions.receipts is not None:
|
||||
serialized_extensions["receipts"] = {
|
||||
"rooms": extensions.receipts.room_id_to_receipt_map,
|
||||
}
|
||||
|
||||
if extensions.typing is not None:
|
||||
serialized_extensions["typing"] = {
|
||||
"rooms": extensions.typing.room_id_to_typing_map,
|
||||
}
|
||||
|
||||
return serialized_extensions
|
||||
|
||||
|
||||
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
|
||||
|
||||
@@ -64,6 +64,7 @@ class VersionsRestServlet(RestServlet):
|
||||
|
||||
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
|
||||
msc3881_enabled = self.config.experimental.msc3881_enabled
|
||||
msc3575_enabled = self.config.experimental.msc3575_enabled
|
||||
|
||||
if self.auth.has_access_token(request):
|
||||
requester = await self.auth.get_user_by_req(
|
||||
@@ -77,6 +78,9 @@ class VersionsRestServlet(RestServlet):
|
||||
msc3881_enabled = await self.store.is_feature_enabled(
|
||||
user_id, ExperimentalFeature.MSC3881
|
||||
)
|
||||
msc3575_enabled = await self.store.is_feature_enabled(
|
||||
user_id, ExperimentalFeature.MSC3575
|
||||
)
|
||||
|
||||
return (
|
||||
200,
|
||||
@@ -169,6 +173,8 @@ class VersionsRestServlet(RestServlet):
|
||||
),
|
||||
# MSC4151: Report room API (Client-Server API)
|
||||
"org.matrix.msc4151": self.config.experimental.msc4151_enabled,
|
||||
# Simplified sliding sync
|
||||
"org.matrix.simplified_msc3575": msc3575_enabled,
|
||||
},
|
||||
},
|
||||
)
|
||||
|
||||
@@ -84,7 +84,7 @@ class DownloadResource(RestServlet):
|
||||
|
||||
if self._is_mine_server_name(server_name):
|
||||
await self.media_repo.get_local_media(
|
||||
request, media_id, file_name, max_timeout_ms
|
||||
request, media_id, file_name, max_timeout_ms, allow_authenticated=False
|
||||
)
|
||||
else:
|
||||
allow_remote = parse_boolean(request, "allow_remote", default=True)
|
||||
@@ -106,4 +106,5 @@ class DownloadResource(RestServlet):
|
||||
max_timeout_ms,
|
||||
ip_address,
|
||||
False,
|
||||
allow_authenticated=False,
|
||||
)
|
||||
|
||||
@@ -96,6 +96,7 @@ class ThumbnailResource(RestServlet):
|
||||
m_type,
|
||||
max_timeout_ms,
|
||||
False,
|
||||
allow_authenticated=False,
|
||||
)
|
||||
else:
|
||||
await self.thumbnail_provider.respond_local_thumbnail(
|
||||
@@ -107,6 +108,7 @@ class ThumbnailResource(RestServlet):
|
||||
m_type,
|
||||
max_timeout_ms,
|
||||
False,
|
||||
allow_authenticated=False,
|
||||
)
|
||||
self.media_repo.mark_recently_accessed(None, media_id)
|
||||
else:
|
||||
@@ -134,6 +136,7 @@ class ThumbnailResource(RestServlet):
|
||||
m_type,
|
||||
max_timeout_ms,
|
||||
ip_address,
|
||||
False,
|
||||
use_federation=False,
|
||||
allow_authenticated=False,
|
||||
)
|
||||
self.media_repo.mark_recently_accessed(server_name, media_id)
|
||||
|
||||
@@ -34,6 +34,7 @@ from typing_extensions import TypeAlias
|
||||
|
||||
from twisted.internet.interfaces import IOpenSSLContextFactory
|
||||
from twisted.internet.tcp import Port
|
||||
from twisted.python.threadpool import ThreadPool
|
||||
from twisted.web.iweb import IPolicyForHTTPS
|
||||
from twisted.web.resource import Resource
|
||||
|
||||
@@ -123,6 +124,7 @@ from synapse.http.client import (
|
||||
)
|
||||
from synapse.http.matrixfederationclient import MatrixFederationHttpClient
|
||||
from synapse.media.media_repository import MediaRepository
|
||||
from synapse.metrics import register_threadpool
|
||||
from synapse.metrics.common_usage_metrics import CommonUsageMetricsManager
|
||||
from synapse.module_api import ModuleApi
|
||||
from synapse.module_api.callbacks import ModuleApiCallbacks
|
||||
@@ -559,6 +561,7 @@ class HomeServer(metaclass=abc.ABCMeta):
|
||||
def get_sync_handler(self) -> SyncHandler:
|
||||
return SyncHandler(self)
|
||||
|
||||
@cache_in_self
|
||||
def get_sliding_sync_handler(self) -> SlidingSyncHandler:
|
||||
return SlidingSyncHandler(self)
|
||||
|
||||
@@ -940,3 +943,24 @@ class HomeServer(metaclass=abc.ABCMeta):
|
||||
@cache_in_self
|
||||
def get_task_scheduler(self) -> TaskScheduler:
|
||||
return TaskScheduler(self)
|
||||
|
||||
@cache_in_self
|
||||
def get_media_sender_thread_pool(self) -> ThreadPool:
|
||||
"""Fetch the threadpool used to read files when responding to media
|
||||
download requests."""
|
||||
|
||||
# We can choose a large threadpool size as these threads predominately
|
||||
# do IO rather than CPU work.
|
||||
media_threadpool = ThreadPool(
|
||||
name="media_threadpool", minthreads=1, maxthreads=50
|
||||
)
|
||||
|
||||
media_threadpool.start()
|
||||
self.get_reactor().addSystemEventTrigger(
|
||||
"during", "shutdown", media_threadpool.stop
|
||||
)
|
||||
|
||||
# Register the threadpool with our metrics.
|
||||
register_threadpool("media", media_threadpool)
|
||||
|
||||
return media_threadpool
|
||||
|
||||
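The new get_media_sender_thread_pool accessor starts a dedicated, generously sized Twisted ThreadPool for media file reads (IO-bound work, so many threads are cheap) and stops it on reactor shutdown. A minimal standalone version of the same setup, without the Synapse metrics registration:

from twisted.internet import reactor
from twisted.python.threadpool import ThreadPool


def make_media_threadpool() -> ThreadPool:
    # A large maximum is fine: these threads spend their time blocked on
    # file IO rather than using the CPU.
    media_threadpool = ThreadPool(name="media_threadpool", minthreads=1, maxthreads=50)
    media_threadpool.start()

    # Tear the pool down cleanly when the reactor shuts down.
    reactor.addSystemEventTrigger("during", "shutdown", media_threadpool.stop)

    return media_threadpool


pool = make_media_threadpool()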
@@ -120,10 +120,15 @@ class SQLBaseStore(metaclass=ABCMeta):
|
||||
"get_user_in_room_with_profile", (room_id, user_id)
|
||||
)
|
||||
self._attempt_to_invalidate_cache("get_rooms_for_user", (user_id,))
|
||||
self._attempt_to_invalidate_cache(
|
||||
"_get_rooms_for_local_user_where_membership_is_inner", (user_id,)
|
||||
)
|
||||
|
||||
# Purge other caches based on room state.
|
||||
self._attempt_to_invalidate_cache("get_room_summary", (room_id,))
|
||||
self._attempt_to_invalidate_cache("get_partial_current_state_ids", (room_id,))
|
||||
self._attempt_to_invalidate_cache("get_room_type", (room_id,))
|
||||
self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))
|
||||
|
||||
def _invalidate_state_caches_all(self, room_id: str) -> None:
|
||||
"""Invalidates caches that are based on the current state, but does
|
||||
@@ -146,7 +151,12 @@ class SQLBaseStore(metaclass=ABCMeta):
|
||||
self._attempt_to_invalidate_cache("does_pair_of_users_share_a_room", None)
|
||||
self._attempt_to_invalidate_cache("get_user_in_room_with_profile", None)
|
||||
self._attempt_to_invalidate_cache("get_rooms_for_user", None)
|
||||
self._attempt_to_invalidate_cache(
|
||||
"_get_rooms_for_local_user_where_membership_is_inner", None
|
||||
)
|
||||
self._attempt_to_invalidate_cache("get_room_summary", (room_id,))
|
||||
self._attempt_to_invalidate_cache("get_room_type", (room_id,))
|
||||
self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))
|
||||
|
||||
def _attempt_to_invalidate_cache(
|
||||
self, cache_name: str, key: Optional[Collection[Any]]
|
||||
|
||||
@@ -268,13 +268,23 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
|
||||
self._curr_state_delta_stream_cache.entity_has_changed(data.room_id, token) # type: ignore[attr-defined]
|
||||
|
||||
if data.type == EventTypes.Member:
|
||||
self.get_rooms_for_user.invalidate((data.state_key,)) # type: ignore[attr-defined]
|
||||
self._attempt_to_invalidate_cache(
|
||||
"get_rooms_for_user", (data.state_key,)
|
||||
)
|
||||
elif data.type == EventTypes.RoomEncryption:
|
||||
self._attempt_to_invalidate_cache(
|
||||
"get_room_encryption", (data.room_id,)
|
||||
)
|
||||
elif data.type == EventTypes.Create:
|
||||
self._attempt_to_invalidate_cache("get_room_type", (data.room_id,))
|
||||
elif row.type == EventsStreamAllStateRow.TypeId:
|
||||
assert isinstance(data, EventsStreamAllStateRow)
|
||||
# Similar to the above, but the entire caches are invalidated. This is
|
||||
# unfortunate for the membership caches, but should recover quickly.
|
||||
self._curr_state_delta_stream_cache.entity_has_changed(data.room_id, token) # type: ignore[attr-defined]
|
||||
self.get_rooms_for_user.invalidate_all() # type: ignore[attr-defined]
|
||||
self._attempt_to_invalidate_cache("get_rooms_for_user", None)
|
||||
self._attempt_to_invalidate_cache("get_room_type", (data.room_id,))
|
||||
self._attempt_to_invalidate_cache("get_room_encryption", (data.room_id,))
|
||||
else:
|
||||
raise Exception("Unknown events stream row type %s" % (row.type,))
|
||||
|
||||
@@ -331,6 +341,9 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
|
||||
"get_invited_rooms_for_local_user", (state_key,)
|
||||
)
|
||||
self._attempt_to_invalidate_cache("get_rooms_for_user", (state_key,))
|
||||
self._attempt_to_invalidate_cache(
|
||||
"_get_rooms_for_local_user_where_membership_is_inner", (state_key,)
|
||||
)
|
||||
|
||||
self._attempt_to_invalidate_cache(
|
||||
"did_forget",
|
||||
@@ -342,6 +355,10 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
|
||||
self._attempt_to_invalidate_cache(
|
||||
"get_forgotten_rooms_for_user", (state_key,)
|
||||
)
|
||||
elif etype == EventTypes.Create:
|
||||
self._attempt_to_invalidate_cache("get_room_type", (room_id,))
|
||||
elif etype == EventTypes.RoomEncryption:
|
||||
self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))
|
||||
|
||||
if relates_to:
|
||||
self._attempt_to_invalidate_cache(
|
||||
@@ -393,12 +410,17 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
|
||||
self._attempt_to_invalidate_cache("get_thread_id_for_receipts", None)
|
||||
self._attempt_to_invalidate_cache("get_invited_rooms_for_local_user", None)
|
||||
self._attempt_to_invalidate_cache("get_rooms_for_user", None)
|
||||
self._attempt_to_invalidate_cache(
|
||||
"_get_rooms_for_local_user_where_membership_is_inner", None
|
||||
)
|
||||
self._attempt_to_invalidate_cache("did_forget", None)
|
||||
self._attempt_to_invalidate_cache("get_forgotten_rooms_for_user", None)
|
||||
self._attempt_to_invalidate_cache("get_references_for_event", None)
|
||||
self._attempt_to_invalidate_cache("get_thread_summary", None)
|
||||
self._attempt_to_invalidate_cache("get_thread_participated", None)
|
||||
self._attempt_to_invalidate_cache("get_threads", (room_id,))
|
||||
self._attempt_to_invalidate_cache("get_room_type", (room_id,))
|
||||
self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))
|
||||
|
||||
self._attempt_to_invalidate_cache("_get_state_group_for_event", None)
|
||||
|
||||
@@ -451,6 +473,8 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
|
||||
self._attempt_to_invalidate_cache("get_forgotten_rooms_for_user", None)
|
||||
self._attempt_to_invalidate_cache("_get_membership_from_event_id", None)
|
||||
self._attempt_to_invalidate_cache("get_room_version_id", (room_id,))
|
||||
self._attempt_to_invalidate_cache("get_room_type", (room_id,))
|
||||
self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))
|
||||
|
||||
# And delete state caches.
|
||||
|
||||
|
||||
@@ -1313,6 +1313,11 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
|
||||
# We want to make the cache more effective, so we clamp to the last
|
||||
# change before the given ordering.
|
||||
last_change = self._events_stream_cache.get_max_pos_of_last_change(room_id) # type: ignore[attr-defined]
|
||||
if last_change is None:
|
||||
# If the room isn't in the cache we know that the last change was
|
||||
# somewhere before the earliest known position of the cache, so we
|
||||
# can clamp to that.
|
||||
last_change = self._events_stream_cache.get_earliest_known_position() # type: ignore[attr-defined]
|
||||
|
||||
# We don't always have a full stream_to_exterm_id table, e.g. after
|
||||
# the upgrade that introduced it, so we make sure we never ask for a
|
||||
|
||||
@@ -64,6 +64,7 @@ class LocalMedia:
|
||||
quarantined_by: Optional[str]
|
||||
safe_from_quarantine: bool
|
||||
user_id: Optional[str]
|
||||
authenticated: Optional[bool]
|
||||
|
||||
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
@@ -77,6 +78,7 @@ class RemoteMedia:
|
||||
created_ts: int
|
||||
last_access_ts: int
|
||||
quarantined_by: Optional[str]
|
||||
authenticated: Optional[bool]
|
||||
|
||||
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
@@ -218,6 +220,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
|
||||
"last_access_ts",
|
||||
"safe_from_quarantine",
|
||||
"user_id",
|
||||
"authenticated",
|
||||
),
|
||||
allow_none=True,
|
||||
desc="get_local_media",
|
||||
@@ -235,6 +238,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
|
||||
last_access_ts=row[6],
|
||||
safe_from_quarantine=row[7],
|
||||
user_id=row[8],
|
||||
authenticated=row[9],
|
||||
)
|
||||
|
||||
async def get_local_media_by_user_paginate(
|
||||
@@ -290,7 +294,8 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
|
||||
last_access_ts,
|
||||
quarantined_by,
|
||||
safe_from_quarantine,
|
||||
user_id
|
||||
user_id,
|
||||
authenticated
|
||||
FROM local_media_repository
|
||||
WHERE user_id = ?
|
||||
ORDER BY {order_by_column} {order}, media_id ASC
|
||||
@@ -314,6 +319,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
|
||||
quarantined_by=row[7],
|
||||
safe_from_quarantine=bool(row[8]),
|
||||
user_id=row[9],
|
||||
authenticated=row[10],
|
||||
)
|
||||
for row in txn
|
||||
]
|
||||
@@ -417,12 +423,18 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
|
||||
time_now_ms: int,
|
||||
user_id: UserID,
|
||||
) -> None:
|
||||
if self.hs.config.media.enable_authenticated_media:
|
||||
authenticated = True
|
||||
else:
|
||||
authenticated = False
|
||||
|
||||
await self.db_pool.simple_insert(
|
||||
"local_media_repository",
|
||||
{
|
||||
"media_id": media_id,
|
||||
"created_ts": time_now_ms,
|
||||
"user_id": user_id.to_string(),
|
||||
"authenticated": authenticated,
|
||||
},
|
||||
desc="store_local_media_id",
|
||||
)
|
||||
@@ -438,6 +450,11 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
|
||||
user_id: UserID,
|
||||
url_cache: Optional[str] = None,
|
||||
) -> None:
|
||||
if self.hs.config.media.enable_authenticated_media:
|
||||
authenticated = True
|
||||
else:
|
||||
authenticated = False
|
||||
|
||||
await self.db_pool.simple_insert(
|
||||
"local_media_repository",
|
||||
{
|
||||
@@ -448,6 +465,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
|
||||
"media_length": media_length,
|
||||
"user_id": user_id.to_string(),
|
||||
"url_cache": url_cache,
|
||||
"authenticated": authenticated,
|
||||
},
|
||||
desc="store_local_media",
|
||||
)
|
||||
@@ -638,6 +656,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
|
||||
"filesystem_id",
|
||||
"last_access_ts",
|
||||
"quarantined_by",
|
||||
"authenticated",
|
||||
),
|
||||
allow_none=True,
|
||||
desc="get_cached_remote_media",
|
||||
@@ -654,6 +673,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
|
||||
filesystem_id=row[4],
|
||||
last_access_ts=row[5],
|
||||
quarantined_by=row[6],
|
||||
authenticated=row[7],
|
||||
)
|
||||
|
||||
async def store_cached_remote_media(
|
||||
@@ -666,6 +686,11 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
|
||||
upload_name: Optional[str],
|
||||
filesystem_id: str,
|
||||
) -> None:
|
||||
if self.hs.config.media.enable_authenticated_media:
|
||||
authenticated = True
|
||||
else:
|
||||
authenticated = False
|
||||
|
||||
await self.db_pool.simple_insert(
|
||||
"remote_media_cache",
|
||||
{
|
||||
@@ -677,6 +702,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
|
||||
"upload_name": upload_name,
|
||||
"filesystem_id": filesystem_id,
|
||||
"last_access_ts": time_now_ms,
|
||||
"authenticated": authenticated,
|
||||
},
|
||||
desc="store_cached_remote_media",
|
||||
)
|
||||
|
||||
@@ -144,6 +144,16 @@ class ProfileWorkerStore(SQLBaseStore):
|
||||
return 50
|
||||
|
||||
async def get_profileinfo(self, user_id: UserID) -> ProfileInfo:
|
||||
"""
|
||||
Fetch the display name and avatar URL of a user.
|
||||
|
||||
Args:
|
||||
user_id: The user ID to fetch the profile for.
|
||||
|
||||
Returns:
|
||||
The user's display name and avatar URL. Values may be null if unset
|
||||
or if the user doesn't exist.
|
||||
"""
|
||||
profile = await self.db_pool.simple_select_one(
|
||||
table="profiles",
|
||||
keyvalues={"full_user_id": user_id.to_string()},
|
||||
@@ -158,6 +168,15 @@ class ProfileWorkerStore(SQLBaseStore):
|
||||
return ProfileInfo(avatar_url=profile[1], display_name=profile[0])
|
||||
|
||||
async def get_profile_displayname(self, user_id: UserID) -> Optional[str]:
|
||||
"""
|
||||
Fetch the display name of a user.
|
||||
|
||||
Args:
|
||||
user_id: The user to get the display name for.
|
||||
|
||||
Raises:
|
||||
404 if the user does not exist.
|
||||
"""
|
||||
return await self.db_pool.simple_select_one_onecol(
|
||||
table="profiles",
|
||||
keyvalues={"full_user_id": user_id.to_string()},
|
||||
@@ -166,6 +185,15 @@ class ProfileWorkerStore(SQLBaseStore):
|
||||
)
|
||||
|
||||
async def get_profile_avatar_url(self, user_id: UserID) -> Optional[str]:
|
||||
"""
|
||||
Fetch the avatar URL of a user.
|
||||
|
||||
Args:
|
||||
user_id: The user to get the avatar URL for.
|
||||
|
||||
Raises:
|
||||
404 if the user does not exist.
|
||||
"""
|
||||
return await self.db_pool.simple_select_one_onecol(
|
||||
table="profiles",
|
||||
keyvalues={"full_user_id": user_id.to_string()},
|
||||
@@ -174,6 +202,12 @@ class ProfileWorkerStore(SQLBaseStore):
|
||||
)
|
||||
|
||||
async def create_profile(self, user_id: UserID) -> None:
|
||||
"""
|
||||
Create a blank profile for a user.
|
||||
|
||||
Args:
|
||||
user_id: The user to create the profile for.
|
||||
"""
|
||||
user_localpart = user_id.localpart
|
||||
await self.db_pool.simple_insert(
|
||||
table="profiles",
|
||||
|
||||
@@ -43,6 +43,7 @@ from synapse.storage.database import (
|
||||
DatabasePool,
|
||||
LoggingDatabaseConnection,
|
||||
LoggingTransaction,
|
||||
make_tuple_in_list_sql_clause,
|
||||
)
|
||||
from synapse.storage.engines._base import IsolationLevel
|
||||
from synapse.storage.util.id_generators import MultiWriterIdGenerator
|
||||
@@ -51,10 +52,12 @@ from synapse.types import (
|
||||
JsonMapping,
|
||||
MultiWriterStreamToken,
|
||||
PersistedPosition,
|
||||
StrCollection,
|
||||
)
|
||||
from synapse.util import json_encoder
|
||||
from synapse.util.caches.descriptors import cached, cachedList
|
||||
from synapse.util.caches.stream_change_cache import StreamChangeCache
|
||||
from synapse.util.iterutils import batch_iter
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
@@ -479,6 +482,83 @@ class ReceiptsWorkerStore(SQLBaseStore):
|
||||
}
|
||||
return results
|
||||
|
||||
async def get_linearized_receipts_for_events(
|
||||
self,
|
||||
room_and_event_ids: Collection[Tuple[str, str]],
|
||||
) -> Sequence[JsonMapping]:
|
||||
"""Get all receipts for the given set of events.
|
||||
|
||||
Arguments:
|
||||
room_and_event_ids: A collection of 2-tuples of room ID and
|
||||
event IDs to fetch receipts for
|
||||
|
||||
Returns:
|
||||
A list of receipts, one per room.
|
||||
"""
|
||||
|
||||
def get_linearized_receipts_for_events_txn(
|
||||
txn: LoggingTransaction,
|
||||
room_id_event_id_tuples: Collection[Tuple[str, str]],
|
||||
) -> List[Tuple[str, str, str, str, Optional[str], str]]:
|
||||
clause, args = make_tuple_in_list_sql_clause(
|
||||
self.database_engine, ("room_id", "event_id"), room_id_event_id_tuples
|
||||
)
|
||||
|
||||
sql = f"""
|
||||
SELECT room_id, receipt_type, user_id, event_id, thread_id, data
|
||||
FROM receipts_linearized
|
||||
WHERE {clause}
|
||||
"""
|
||||
|
||||
txn.execute(sql, args)
|
||||
|
||||
return txn.fetchall()
|
||||
|
||||
# room_id -> event_id -> receipt_type -> user_id -> receipt data
|
||||
room_to_content: Dict[str, Dict[str, Dict[str, Dict[str, JsonMapping]]]] = {}
|
||||
for batch in batch_iter(room_and_event_ids, 1000):
|
||||
batch_results = await self.db_pool.runInteraction(
|
||||
"get_linearized_receipts_for_events",
|
||||
get_linearized_receipts_for_events_txn,
|
||||
batch,
|
||||
)
|
||||
|
||||
for (
|
||||
room_id,
|
||||
receipt_type,
|
||||
user_id,
|
||||
event_id,
|
||||
thread_id,
|
||||
data,
|
||||
) in batch_results:
|
||||
content = room_to_content.setdefault(room_id, {})
|
||||
user_receipts = content.setdefault(event_id, {}).setdefault(
|
||||
receipt_type, {}
|
||||
)
|
||||
|
||||
receipt_data = db_to_json(data)
|
||||
if thread_id is not None:
|
||||
receipt_data["thread_id"] = thread_id
|
||||
|
||||
# MSC4102: always replace threaded receipts with unthreaded ones
|
||||
# if there is a clash. Specifically:
|
||||
# - if there is no existing receipt, great, set the data.
|
||||
# - if there is an existing receipt, is it threaded (thread_id
|
||||
# present)? YES: replace if this receipt has no thread id.
|
||||
# NO: do not replace. This means we will drop some receipts, but
|
||||
# MSC4102 is designed to drop semantically meaningless receipts,
|
||||
# so this is okay. Previously, we would drop meaningful data!
|
||||
if user_id in user_receipts:
|
||||
if "thread_id" in user_receipts[user_id] and not thread_id:
|
||||
user_receipts[user_id] = receipt_data
|
||||
else:
|
||||
user_receipts[user_id] = receipt_data
|
||||
|
||||
return [
|
||||
{"type": EduTypes.RECEIPT, "room_id": room_id, "content": content}
|
||||
for room_id, content in room_to_content.items()
|
||||
]
|
||||
|
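The comment block above encodes the MSC4102 rule: when a threaded and an unthreaded receipt land on the same (user, event) pair, the unthreaded one wins. A compact sketch of just that merge rule:

from typing import Dict


def merge_receipt(
    user_receipts: Dict[str, dict], user_id: str, receipt_data: dict
) -> None:
    """Apply MSC4102 precedence: unthreaded receipts replace threaded ones."""
    existing = user_receipts.get(user_id)
    if existing is not None:
        # Only replace an existing receipt if it is threaded and the new one
        # is unthreaded; otherwise keep what we already have.
        if "thread_id" in existing and "thread_id" not in receipt_data:
            user_receipts[user_id] = receipt_data
    else:
        user_receipts[user_id] = receipt_data


receipts: Dict[str, dict] = {}
merge_receipt(receipts, "@alice:example.org", {"ts": 1, "thread_id": "$root"})
merge_receipt(receipts, "@alice:example.org", {"ts": 2})  # unthreaded wins
assert "thread_id" not in receipts["@alice:example.org"]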
||||
@cached(
|
||||
num_args=2,
|
||||
)
|
||||
@@ -550,6 +630,46 @@ class ReceiptsWorkerStore(SQLBaseStore):
|
||||
|
||||
return results
|
||||
|
||||
async def get_rooms_with_receipts_between(
|
||||
self,
|
||||
room_ids: StrCollection,
|
||||
from_key: MultiWriterStreamToken,
|
||||
to_key: MultiWriterStreamToken,
|
||||
) -> StrCollection:
|
||||
"""Given a set of room_ids, find out which ones (may) have receipts
|
||||
between the two tokens (> `from_token` and <= `to_token`)."""
|
||||
|
||||
room_ids = self._receipts_stream_cache.get_entities_changed(
|
||||
room_ids, from_key.stream
|
||||
)
|
||||
if not room_ids:
|
||||
return []
|
||||
|
||||
def f(txn: LoggingTransaction, room_ids: StrCollection) -> StrCollection:
|
||||
clause, args = make_in_list_sql_clause(
|
||||
self.database_engine, "room_id", room_ids
|
||||
)
|
||||
|
||||
sql = f"""
|
||||
SELECT DISTINCT room_id FROM receipts_linearized
|
||||
WHERE {clause} AND ? < stream_id AND stream_id <= ?
|
||||
"""
|
||||
args.append(from_key.stream)
|
||||
args.append(to_key.get_max_stream_pos())
|
||||
|
||||
txn.execute(sql, args)
|
||||
|
||||
return [room_id for room_id, in txn]
|
||||
|
||||
results: List[str] = []
|
||||
for batch in batch_iter(room_ids, 1000):
|
||||
batch_result = await self.db_pool.runInteraction(
|
||||
"get_rooms_with_receipts_between", f, batch
|
||||
)
|
||||
results.extend(batch_result)
|
||||
|
||||
return results
|
||||
|
||||
async def get_users_sent_receipts_between(
|
||||
self, last_id: int, current_id: int
|
||||
) -> List[str]:
|
||||
@@ -954,6 +1074,12 @@ class ReceiptsBackgroundUpdateStore(SQLBaseStore):
|
||||
self.RECEIPTS_GRAPH_UNIQUE_INDEX_UPDATE_NAME,
|
||||
self._background_receipts_graph_unique_index,
|
||||
)
|
||||
self.db_pool.updates.register_background_index_update(
|
||||
update_name="receipts_room_id_event_id_index",
|
||||
index_name="receipts_linearized_event_id",
|
||||
table="receipts_linearized",
|
||||
columns=("room_id", "event_id"),
|
||||
)
|
||||
|
||||
async def _populate_receipt_event_stream_ordering(
|
||||
self, progress: JsonDict, batch_size: int
|
||||
|
||||
@@ -39,6 +39,7 @@ from typing import (
|
||||
import attr
|
||||
|
||||
from synapse.api.constants import EventTypes, Membership
|
||||
from synapse.logging.opentracing import trace
|
||||
from synapse.metrics import LaterGauge
|
||||
from synapse.metrics.background_process_metrics import wrap_as_background_process
|
||||
from synapse.storage._base import SQLBaseStore, db_to_json, make_in_list_sql_clause
|
||||
@@ -279,8 +280,19 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
|
||||
|
||||
@cached(max_entries=100000) # type: ignore[synapse-@cached-mutable]
|
||||
async def get_room_summary(self, room_id: str) -> Mapping[str, MemberSummary]:
|
||||
"""Get the details of a room roughly suitable for use by the room
|
||||
"""
|
||||
Get the details of a room roughly suitable for use by the room
|
||||
summary extension to /sync. Useful when lazy loading room members.
|
||||
|
||||
Returns the total count of members in the room by membership type, and a
|
||||
truncated list of members (the heroes). This will be the first 6 members of the
|
||||
room:
|
||||
- We want 5 heroes plus 1, in case one of them is the
|
||||
calling user.
|
||||
- They are ordered by `stream_ordering`, which are joined or
|
||||
invited. When no joined or invited members are available, this also includes
|
||||
banned and left users.
|
||||
|
||||
Args:
|
||||
room_id: The room ID to query
|
||||
Returns:
|
||||
@@ -308,23 +320,36 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
|
||||
for count, membership in txn:
|
||||
res.setdefault(membership, MemberSummary([], count))
|
||||
|
||||
# we order by membership and then fairly arbitrarily by event_id so
|
||||
# heroes are consistent
|
||||
# Note, rejected events will have a null membership field, so
|
||||
# we manually filter them out.
|
||||
# Order by membership (joins -> invites -> leave (former insiders) ->
|
||||
# everything else (outsiders like bans/knocks), then by `stream_ordering` so
|
||||
# the first members in the room show up first and to make the sort stable
|
||||
# (consistent heroes).
|
||||
#
|
||||
# Note: rejected events will have a null membership field, so we manually
|
||||
# filter them out.
|
||||
sql = """
|
||||
SELECT state_key, membership, event_id
|
||||
FROM current_state_events
|
||||
WHERE type = 'm.room.member' AND room_id = ?
|
||||
AND membership IS NOT NULL
|
||||
ORDER BY
|
||||
CASE membership WHEN ? THEN 1 WHEN ? THEN 2 ELSE 3 END ASC,
|
||||
event_id ASC
|
||||
CASE membership WHEN ? THEN 1 WHEN ? THEN 2 WHEN ? THEN 3 ELSE 4 END ASC,
|
||||
event_stream_ordering ASC
|
||||
LIMIT ?
|
||||
"""
|
||||
|
||||
# 6 is 5 (number of heroes) plus 1, in case one of them is the calling user.
|
||||
txn.execute(sql, (room_id, Membership.JOIN, Membership.INVITE, 6))
|
||||
txn.execute(
|
||||
sql,
|
||||
(
|
||||
room_id,
|
||||
# Sort order
|
||||
Membership.JOIN,
|
||||
Membership.INVITE,
|
||||
Membership.LEAVE,
|
||||
# 6 is 5 (number of heroes) plus 1, in case one of them is the calling user.
|
||||
6,
|
||||
),
|
||||
)
|
||||
for user_id, membership, event_id in txn:
|
||||
summary = res[membership]
|
||||
# we will always have a summary for this membership type at this
|
||||
@@ -398,6 +423,7 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
|
||||
return invite
|
||||
return None
|
||||
|
||||
@trace
|
||||
async def get_rooms_for_local_user_where_membership_is(
|
||||
self,
|
||||
user_id: str,
|
||||
@@ -421,9 +447,11 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
|
||||
if not membership_list:
|
||||
return []
|
||||
|
||||
rooms = await self.db_pool.runInteraction(
|
||||
"get_rooms_for_local_user_where_membership_is",
|
||||
self._get_rooms_for_local_user_where_membership_is_txn,
|
||||
# Convert membership list to frozen set as a) it needs to be hashable,
|
||||
# and b) we don't care about the order.
|
||||
membership_list = frozenset(membership_list)
|
||||
|
||||
rooms = await self._get_rooms_for_local_user_where_membership_is_inner(
|
||||
user_id,
|
||||
membership_list,
|
||||
)
|
||||
@@ -442,6 +470,24 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
|
||||
|
||||
return [room for room in rooms if room.room_id not in rooms_to_exclude]
|
||||
|
||||
@cached(max_entries=1000, tree=True)
|
||||
async def _get_rooms_for_local_user_where_membership_is_inner(
|
||||
self,
|
||||
user_id: str,
|
||||
membership_list: Collection[str],
|
||||
) -> Sequence[RoomsForUser]:
|
||||
if not membership_list:
|
||||
return []
|
||||
|
||||
rooms = await self.db_pool.runInteraction(
|
||||
"get_rooms_for_local_user_where_membership_is",
|
||||
self._get_rooms_for_local_user_where_membership_is_txn,
|
||||
user_id,
|
||||
membership_list,
|
||||
)
|
||||
|
||||
return rooms
|
||||
|
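The wrapper above converts membership_list to a frozenset before calling the new @cached inner method: cache keys must be hashable, and freezing also collapses differently ordered lists onto the same cache entry. A small illustration of why, using functools.lru_cache in place of Synapse's @cached decorator:

from functools import lru_cache
from typing import Collection, FrozenSet, Tuple


@lru_cache(maxsize=1000)
def _rooms_for_membership_inner(
    user_id: str, memberships: FrozenSet[str]
) -> Tuple[str, ...]:
    # Stand-in for the real database query; the point is the cache key.
    return (f"!room-for-{user_id}-{'+'.join(sorted(memberships))}",)


def rooms_for_membership(user_id: str, memberships: Collection[str]) -> Tuple[str, ...]:
    # A plain list would raise TypeError (unhashable) when used in the cache
    # key, and ["join", "invite"] vs ["invite", "join"] would otherwise be
    # cached as two separate entries.
    return _rooms_for_membership_inner(user_id, frozenset(memberships))


assert rooms_for_membership("@bob:hs", ["join", "invite"]) == rooms_for_membership(
    "@bob:hs", ["invite", "join"]
)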
||||
def _get_rooms_for_local_user_where_membership_is_txn(
|
||||
self,
|
||||
txn: LoggingTransaction,
|
||||
@@ -1509,10 +1555,19 @@ def extract_heroes_from_room_summary(
|
||||
) -> List[str]:
|
||||
"""Determine the users that represent a room, from the perspective of the `me` user.
|
||||
|
||||
This function expects `MemberSummary.members` to already be sorted by
|
||||
`stream_ordering` like the results from `get_room_summary(...)`.
|
||||
|
||||
The rules which say which users we select are specified in the "Room Summary"
|
||||
section of
|
||||
https://spec.matrix.org/v1.4/client-server-api/#get_matrixclientv3sync
|
||||
|
||||
|
||||
Args:
|
||||
details: Mapping from membership type to member summary. We expect
|
||||
`MemberSummary.members` to already be sorted by `stream_ordering`.
|
||||
me: The user for whom we are determining the heroes for.
|
||||
|
||||
Returns a list (possibly empty) of heroes' mxids.
|
||||
"""
|
||||
empty_ms = MemberSummary([], 0)
|
||||
@@ -1527,11 +1582,11 @@ def extract_heroes_from_room_summary(
|
||||
r[0] for r in details.get(Membership.LEAVE, empty_ms).members if r[0] != me
|
||||
] + [r[0] for r in details.get(Membership.BAN, empty_ms).members if r[0] != me]
|
||||
|
||||
# FIXME: order by stream ordering rather than as returned by SQL
|
||||
# We expect `MemberSummary.members` to already be sorted by `stream_ordering`
|
||||
if joined_user_ids or invited_user_ids:
|
||||
return sorted(joined_user_ids + invited_user_ids)[0:5]
|
||||
return (joined_user_ids + invited_user_ids)[0:5]
|
||||
else:
|
||||
return sorted(gone_user_ids)[0:5]
|
||||
return gone_user_ids[0:5]
|
||||
|
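With the query now ordering members by stream_ordering, extract_heroes_from_room_summary simply takes the first five candidates rather than re-sorting them by mxid. A toy version of the selection, assuming the member lists arrive pre-sorted by stream_ordering:

from typing import List, Tuple

# (user_id, membership) pairs, already sorted by stream_ordering.
MemberList = List[Tuple[str, str]]


def pick_heroes(
    joined: MemberList, invited: MemberList, gone: MemberList, me: str
) -> List[str]:
    joined_ids = [uid for uid, _ in joined if uid != me]
    invited_ids = [uid for uid, _ in invited if uid != me]
    gone_ids = [uid for uid, _ in gone if uid != me]

    # Preserve the incoming stream_ordering instead of sorting by mxid, so the
    # earliest members of the room become its heroes.
    if joined_ids or invited_ids:
        return (joined_ids + invited_ids)[0:5]
    return gone_ids[0:5]


assert pick_heroes(
    [("@zara:hs", "join"), ("@adam:hs", "join")], [], [], me="@me:hs"
) == ["@zara:hs", "@adam:hs"]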
||||
|
||||
@attr.s(slots=True, auto_attribs=True)
|
||||
|
||||
@@ -30,6 +30,7 @@ from typing import (
|
||||
Iterable,
|
||||
List,
|
||||
Mapping,
|
||||
MutableMapping,
|
||||
Optional,
|
||||
Set,
|
||||
Tuple,
|
||||
@@ -72,10 +73,18 @@ logger = logging.getLogger(__name__)
|
||||
|
||||
_T = TypeVar("_T")
|
||||
|
||||
|
||||
MAX_STATE_DELTA_HOPS = 100
|
||||
|
||||
|
||||
# Freeze so it's immutable and we can use it as a cache value
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
class Sentinel:
|
||||
pass
|
||||
|
||||
|
||||
ROOM_UNKNOWN_SENTINEL = Sentinel()
|
||||
|
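ROOM_UNKNOWN_SENTINEL exists because None is already a meaningful room type (an ordinary, non-space room), so a cached None cannot double as "room not known to this server". A short sketch of how a frozen sentinel keeps the two cases apart in a cached mapping:

from typing import Dict, Optional, Union

import attr


@attr.s(slots=True, frozen=True, auto_attribs=True)
class Sentinel:
    pass


ROOM_UNKNOWN_SENTINEL = Sentinel()

# Simulated cache of room types: None is a real value ("just a room"),
# while the sentinel means "we have no state for this room at all".
room_type_cache: Dict[str, Union[Optional[str], Sentinel]] = {
    "!space:hs": "m.space",
    "!plain:hs": None,
    "!elsewhere:hs": ROOM_UNKNOWN_SENTINEL,
}

for room_id, room_type in room_type_cache.items():
    if room_type is ROOM_UNKNOWN_SENTINEL:
        # e.g. omit the room from the response entirely.
        continue
    print(room_id, "has type", room_type)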
||||
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
class EventMetadata:
|
||||
"""Returned by `get_metadata_for_events`"""
|
||||
@@ -300,51 +309,189 @@ class StateGroupWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
|
||||
@cached(max_entries=10000)
|
||||
async def get_room_type(self, room_id: str) -> Optional[str]:
|
||||
"""Get the room type for a given room. The server must be joined to the
|
||||
given room.
|
||||
"""
|
||||
|
||||
row = await self.db_pool.simple_select_one(
|
||||
table="room_stats_state",
|
||||
keyvalues={"room_id": room_id},
|
||||
retcols=("room_type",),
|
||||
allow_none=True,
|
||||
desc="get_room_type",
|
||||
)
|
||||
|
||||
if row is not None:
|
||||
return row[0]
|
||||
|
||||
# If we haven't updated `room_stats_state` with the room yet, query the
|
||||
# create event directly.
|
||||
create_event = await self.get_create_event_for_room(room_id)
|
||||
room_type = create_event.content.get(EventContentFields.ROOM_TYPE)
|
||||
return room_type
|
||||
raise NotImplementedError()
|
||||
|
||||
@cachedList(cached_method_name="get_room_type", list_name="room_ids")
|
||||
async def bulk_get_room_type(
|
||||
self, room_ids: Set[str]
|
||||
) -> Mapping[str, Optional[str]]:
|
||||
"""Bulk fetch room types for the given rooms, the server must be in all
|
||||
the rooms given.
|
||||
) -> Mapping[str, Union[Optional[str], Sentinel]]:
|
||||
"""
|
||||
Bulk fetch room types for the given rooms (via current state).
|
||||
|
||||
Since this function is cached, any missing values would be cached as `None`. In
|
||||
order to distinguish between a room whose room type is genuinely `None` and
|
||||
a room that is unknown to the server where we might want to omit the value
|
||||
(which would make it cached as `None`), instead we use the sentinel value
|
||||
`ROOM_UNKNOWN_SENTINEL`.
|
||||
|
||||
Returns:
|
||||
A mapping from room ID to the room's type (`None` is a valid room type).
|
||||
Rooms unknown to this server will return `ROOM_UNKNOWN_SENTINEL`.
|
||||
"""
|
||||
|
||||
rows = await self.db_pool.simple_select_many_batch(
|
||||
table="room_stats_state",
|
||||
column="room_id",
|
||||
iterable=room_ids,
|
||||
retcols=("room_id", "room_type"),
|
||||
desc="bulk_get_room_type",
|
||||
def txn(
|
||||
txn: LoggingTransaction,
|
||||
) -> MutableMapping[str, Union[Optional[str], Sentinel]]:
|
||||
clause, args = make_in_list_sql_clause(
|
||||
txn.database_engine, "room_id", room_ids
|
||||
)
|
||||
|
||||
# We can't rely on `room_stats_state.room_type` if the server has left the
|
||||
# room because the `room_id` will still be in the table but everything will
|
||||
# be set to `None` but `None` is a valid room type value. We join against
|
||||
# the `room_stats_current` table which keeps track of the
|
||||
# `current_state_events` count (and a proxy value `local_users_in_room`
|
||||
# which can be used to assume the server is participating in the room and has
|
||||
# current state) to ensure that the data in `room_stats_state` is up-to-date
|
||||
# with the current state.
|
||||
#
|
||||
# FIXME: Use `room_stats_current.current_state_events` instead of
|
||||
# `room_stats_current.local_users_in_room` once
|
||||
# https://github.com/element-hq/synapse/issues/17457 is fixed.
|
||||
sql = f"""
|
||||
SELECT room_id, room_type
|
||||
FROM room_stats_state
|
||||
INNER JOIN room_stats_current USING (room_id)
|
||||
WHERE
|
||||
{clause}
|
||||
AND local_users_in_room > 0
|
||||
"""
|
||||
|
||||
txn.execute(sql, args)
|
||||
|
||||
room_id_to_type_map = {}
|
||||
for row in txn:
|
||||
room_id_to_type_map[row[0]] = row[1]
|
||||
|
||||
return room_id_to_type_map
|
||||
|
||||
results = await self.db_pool.runInteraction(
|
||||
"bulk_get_room_type",
|
||||
txn,
|
||||
)
|
||||
|
||||
# If we haven't updated `room_stats_state` with the room yet, query the
|
||||
# create events directly. This should happen only rarely so we don't
|
||||
# mind if we do this in a loop.
|
||||
results = dict(rows)
|
||||
for room_id in room_ids - results.keys():
|
||||
create_event = await self.get_create_event_for_room(room_id)
|
||||
room_type = create_event.content.get(EventContentFields.ROOM_TYPE)
|
||||
results[room_id] = room_type
|
||||
try:
|
||||
create_event = await self.get_create_event_for_room(room_id)
|
||||
room_type = create_event.content.get(EventContentFields.ROOM_TYPE)
|
||||
results[room_id] = room_type
|
||||
except NotFoundError:
|
||||
# We use the sentinel value to distinguish between `None` which is a
|
||||
# valid room type and a room that is unknown to the server so the value
|
||||
# is just unset.
|
||||
results[room_id] = ROOM_UNKNOWN_SENTINEL
|
||||
|
||||
return results
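A sketch of how a caller might consume the mapping returned here, distinguishing the sentinel from a legitimate `None` room type. `store`, `categorise_rooms` and the room IDs are placeholders invented for illustration.

async def categorise_rooms(store, room_ids):
    # `store` is assumed to expose the bulk method above; ROOM_UNKNOWN_SENTINEL
    # is the module-level sentinel defined in this file.
    room_types = await store.bulk_get_room_type(set(room_ids))
    spaces, normal_rooms = [], []
    for room_id, room_type in room_types.items():
        if room_type is ROOM_UNKNOWN_SENTINEL:
            # No current state for this room on this server: skip it rather
            # than treating it as an ordinary (type-less) room.
            continue
        if room_type == "m.space":
            spaces.append(room_id)
        else:
            normal_rooms.append(room_id)  # covers room_type None as well
    return spaces, normal_rooms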
|
||||
|
||||
@cached(max_entries=10000)
|
||||
async def get_room_encryption(self, room_id: str) -> Optional[str]:
|
||||
raise NotImplementedError()
|
||||
|
||||
@cachedList(cached_method_name="get_room_encryption", list_name="room_ids")
|
||||
async def bulk_get_room_encryption(
|
||||
self, room_ids: Set[str]
|
||||
) -> Mapping[str, Union[Optional[str], Sentinel]]:
|
||||
"""
|
||||
Bulk fetch room encryption for the given rooms (via current state).
|
||||
|
||||
Since this function is cached, any missing values would be cached as `None`. In
|
||||
order to distinguish between an unencrypted room that has `None` encryption and
|
||||
a room that is unknown to the server where we might want to omit the value
|
||||
(which would make it cached as `None`), instead we use the sentinel value
|
||||
`ROOM_UNKNOWN_SENTINEL`.
|
||||
|
||||
Returns:
|
||||
A mapping from room ID to the room's encryption algorithm if the room is
|
||||
encrypted, otherwise `None`. Rooms unknown to this server will return
|
||||
`ROOM_UNKNOWN_SENTINEL`.
|
||||
"""
|
||||
|
||||
def txn(
|
||||
txn: LoggingTransaction,
|
||||
) -> MutableMapping[str, Union[Optional[str], Sentinel]]:
|
||||
clause, args = make_in_list_sql_clause(
|
||||
txn.database_engine, "room_id", room_ids
|
||||
)
|
||||
|
||||
# We can't rely on `room_stats_state.encryption` if the server has left the
|
||||
# room because the `room_id` will still be in the table but everything will
|
||||
# be set to `None` but `None` is a valid encryption value. We join against
|
||||
# the `room_stats_current` table which keeps track of the
|
||||
# `current_state_events` count (and a proxy value `local_users_in_room`
|
||||
# which can be used to assume the server is participating in the room and has
|
||||
# current state) to ensure that the data in `room_stats_state` is up-to-date
|
||||
# with the current state.
|
||||
#
|
||||
# FIXME: Use `room_stats_current.current_state_events` instead of
|
||||
# `room_stats_current.local_users_in_room` once
|
||||
# https://github.com/element-hq/synapse/issues/17457 is fixed.
|
||||
sql = f"""
|
||||
SELECT room_id, encryption
|
||||
FROM room_stats_state
|
||||
INNER JOIN room_stats_current USING (room_id)
|
||||
WHERE
|
||||
{clause}
|
||||
AND local_users_in_room > 0
|
||||
"""
|
||||
|
||||
txn.execute(sql, args)
|
||||
|
||||
room_id_to_encryption_map = {}
|
||||
for row in txn:
|
||||
room_id_to_encryption_map[row[0]] = row[1]
|
||||
|
||||
return room_id_to_encryption_map
|
||||
|
||||
results = await self.db_pool.runInteraction(
|
||||
"bulk_get_room_encryption",
|
||||
txn,
|
||||
)
|
||||
|
||||
# If we haven't updated `room_stats_state` with the room yet, query the state
|
||||
# directly. This should happen only rarely so we don't mind if we do this in a
|
||||
# loop.
|
||||
encryption_event_ids: List[str] = []
|
||||
for room_id in room_ids - results.keys():
|
||||
state_map = await self.get_partial_filtered_current_state_ids(
|
||||
room_id,
|
||||
state_filter=StateFilter.from_types(
|
||||
[
|
||||
(EventTypes.Create, ""),
|
||||
(EventTypes.RoomEncryption, ""),
|
||||
]
|
||||
),
|
||||
)
|
||||
# We can use the create event as a canary to tell whether the server has
|
||||
# seen the room before
|
||||
create_event_id = state_map.get((EventTypes.Create, ""))
|
||||
encryption_event_id = state_map.get((EventTypes.RoomEncryption, ""))
|
||||
|
||||
if create_event_id is None:
|
||||
# We use the sentinel value to distinguish between `None` which is a
|
||||
# valid room type and a room that is unknown to the server so the value
|
||||
# is just unset.
|
||||
results[room_id] = ROOM_UNKNOWN_SENTINEL
|
||||
continue
|
||||
|
||||
if encryption_event_id is None:
|
||||
results[room_id] = None
|
||||
else:
|
||||
encryption_event_ids.append(encryption_event_id)
|
||||
|
||||
encryption_event_map = await self.get_events(encryption_event_ids)
|
||||
|
||||
for encryption_event_id in encryption_event_ids:
|
||||
encryption_event = encryption_event_map.get(encryption_event_id)
|
||||
# If the current state says there is an encryption event, we should have it
|
||||
# in the database.
|
||||
assert encryption_event is not None
|
||||
|
||||
results[encryption_event.room_id] = encryption_event.content.get(
|
||||
EventContentFields.ENCRYPTION_ALGORITHM
|
||||
)
|
||||
|
||||
return results
|
||||
|
||||
|
||||
@@ -24,9 +24,13 @@ from typing import List, Optional, Tuple
|
||||
|
||||
import attr
|
||||
|
||||
from synapse.logging.opentracing import trace
|
||||
from synapse.storage._base import SQLBaseStore
|
||||
from synapse.storage.database import LoggingTransaction
|
||||
from synapse.storage.database import LoggingTransaction, make_in_list_sql_clause
|
||||
from synapse.storage.databases.main.stream import _filter_results_by_stream
|
||||
from synapse.types import RoomStreamToken, StrCollection
|
||||
from synapse.util.caches.stream_change_cache import StreamChangeCache
|
||||
from synapse.util.iterutils import batch_iter
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -156,3 +160,103 @@ class StateDeltasStore(SQLBaseStore):
|
||||
"get_max_stream_id_in_current_state_deltas",
|
||||
self._get_max_stream_id_in_current_state_deltas_txn,
|
||||
)
|
||||
|
||||
@trace
|
||||
async def get_current_state_deltas_for_room(
|
||||
self, room_id: str, from_token: RoomStreamToken, to_token: RoomStreamToken
|
||||
) -> List[StateDelta]:
|
||||
"""Get the state deltas between two tokens."""
|
||||
|
||||
if not self._curr_state_delta_stream_cache.has_entity_changed(
|
||||
room_id, from_token.stream
|
||||
):
|
||||
return []
|
||||
|
||||
def get_current_state_deltas_for_room_txn(
|
||||
txn: LoggingTransaction,
|
||||
) -> List[StateDelta]:
|
||||
sql = """
|
||||
SELECT instance_name, stream_id, type, state_key, event_id, prev_event_id
|
||||
FROM current_state_delta_stream
|
||||
WHERE room_id = ? AND ? < stream_id AND stream_id <= ?
|
||||
ORDER BY stream_id ASC
|
||||
"""
|
||||
txn.execute(
|
||||
sql, (room_id, from_token.stream, to_token.get_max_stream_pos())
|
||||
)
|
||||
|
||||
return [
|
||||
StateDelta(
|
||||
stream_id=row[1],
|
||||
room_id=room_id,
|
||||
event_type=row[2],
|
||||
state_key=row[3],
|
||||
event_id=row[4],
|
||||
prev_event_id=row[5],
|
||||
)
|
||||
for row in txn
|
||||
if _filter_results_by_stream(from_token, to_token, row[0], row[1])
|
||||
]
|
||||
|
||||
return await self.db_pool.runInteraction(
|
||||
"get_current_state_deltas_for_room", get_current_state_deltas_for_room_txn
|
||||
)
|
||||
|
||||
@trace
|
||||
async def get_current_state_deltas_for_rooms(
|
||||
self,
|
||||
room_ids: StrCollection,
|
||||
from_token: RoomStreamToken,
|
||||
to_token: RoomStreamToken,
|
||||
) -> List[StateDelta]:
|
||||
"""Get the state deltas between two tokens for the set of rooms."""
|
||||
|
||||
room_ids = self._curr_state_delta_stream_cache.get_entities_changed(
|
||||
room_ids, from_token.stream
|
||||
)
|
||||
if not room_ids:
|
||||
return []
|
||||
|
||||
def get_current_state_deltas_for_rooms_txn(
|
||||
txn: LoggingTransaction,
|
||||
room_ids: StrCollection,
|
||||
) -> List[StateDelta]:
|
||||
clause, args = make_in_list_sql_clause(
|
||||
self.database_engine, "room_id", room_ids
|
||||
)
|
||||
|
||||
sql = f"""
|
||||
SELECT instance_name, stream_id, room_id, type, state_key, event_id, prev_event_id
|
||||
FROM current_state_delta_stream
|
||||
WHERE {clause} AND ? < stream_id AND stream_id <= ?
|
||||
ORDER BY stream_id ASC
|
||||
"""
|
||||
args.append(from_token.stream)
|
||||
args.append(to_token.get_max_stream_pos())
|
||||
|
||||
txn.execute(sql, args)
|
||||
|
||||
return [
|
||||
StateDelta(
|
||||
stream_id=row[1],
|
||||
room_id=row[2],
|
||||
event_type=row[3],
|
||||
state_key=row[4],
|
||||
event_id=row[5],
|
||||
prev_event_id=row[6],
|
||||
)
|
||||
for row in txn
|
||||
if _filter_results_by_stream(from_token, to_token, row[0], row[1])
|
||||
]
|
||||
|
||||
results = []
|
||||
for batch in batch_iter(room_ids, 1000):
|
||||
deltas = await self.db_pool.runInteraction(
|
||||
"get_current_state_deltas_for_rooms",
|
||||
get_current_state_deltas_for_rooms_txn,
|
||||
batch,
|
||||
)
|
||||
|
||||
results.extend(deltas)
|
||||
|
||||
return results
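A small sketch of how a caller might use the new bulk state-delta helper; `store` and the filtering on membership events are assumptions for illustration, not part of the diff.

async def membership_deltas_since(store, room_ids, from_token, to_token):
    # `store` stands in for whatever StateDeltasStore instance is in scope.
    deltas = await store.get_current_state_deltas_for_rooms(
        room_ids, from_token, to_token
    )
    # Each StateDelta carries room_id, event_type, state_key, event_id and
    # prev_event_id, so filtering down to membership changes is just:
    return [d for d in deltas if d.event_type == "m.room.member"]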
|
||||
|
||||
@@ -51,6 +51,7 @@ from typing import (
|
||||
Iterable,
|
||||
List,
|
||||
Optional,
|
||||
Protocol,
|
||||
Set,
|
||||
Tuple,
|
||||
cast,
|
||||
@@ -59,7 +60,7 @@ from typing import (
|
||||
|
||||
import attr
|
||||
from immutabledict import immutabledict
|
||||
from typing_extensions import Literal
|
||||
from typing_extensions import Literal, assert_never
|
||||
|
||||
from twisted.internet import defer
|
||||
|
||||
@@ -67,7 +68,7 @@ from synapse.api.constants import Direction, EventTypes, Membership
|
||||
from synapse.api.filtering import Filter
|
||||
from synapse.events import EventBase
|
||||
from synapse.logging.context import make_deferred_yieldable, run_in_background
|
||||
from synapse.logging.opentracing import trace
|
||||
from synapse.logging.opentracing import tag_args, trace
|
||||
from synapse.storage._base import SQLBaseStore
|
||||
from synapse.storage.database import (
|
||||
DatabasePool,
|
||||
@@ -78,10 +79,11 @@ from synapse.storage.database import (
|
||||
from synapse.storage.databases.main.events_worker import EventsWorkerStore
|
||||
from synapse.storage.engines import BaseDatabaseEngine, PostgresEngine, Sqlite3Engine
|
||||
from synapse.storage.util.id_generators import MultiWriterIdGenerator
|
||||
from synapse.types import PersistedEventPosition, RoomStreamToken
|
||||
from synapse.types import PersistedEventPosition, RoomStreamToken, StrCollection
|
||||
from synapse.util.caches.descriptors import cached
|
||||
from synapse.util.caches.stream_change_cache import StreamChangeCache
|
||||
from synapse.util.cancellation import cancellable
|
||||
from synapse.util.iterutils import batch_iter
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
@@ -96,6 +98,18 @@ _STREAM_TOKEN = "stream"
|
||||
_TOPOLOGICAL_TOKEN = "topological"
|
||||
|
||||
|
||||
class PaginateFunction(Protocol):
|
||||
async def __call__(
|
||||
self,
|
||||
*,
|
||||
room_id: str,
|
||||
from_key: RoomStreamToken,
|
||||
to_key: Optional[RoomStreamToken] = None,
|
||||
direction: Direction = Direction.BACKWARDS,
|
||||
limit: int = 0,
|
||||
) -> Tuple[List[EventBase], RoomStreamToken]: ...
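Presumably the point of the `PaginateFunction` protocol is to let callers hold either pagination method without caring which one it is; a sketch of that under the signatures shown in this diff, with the helper name invented here.

async def collect_recent_events(
    paginate: PaginateFunction,
    room_id: str,
    from_key: RoomStreamToken,
    to_key: RoomStreamToken,
) -> List[EventBase]:
    # `paginate` can be either store.paginate_room_events_by_stream_ordering or
    # store.paginate_room_events_by_topological_ordering: both satisfy the
    # protocol because they take keyword-only arguments with these names.
    events, _next_token = await paginate(
        room_id=room_id,
        from_key=from_key,
        to_key=to_key,
        direction=Direction.FORWARDS,
        limit=50,
    )
    return events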
|
||||
|
||||
|
||||
# Used as return values for pagination APIs
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
class _EventDictReturn:
|
||||
@@ -279,7 +293,7 @@ def generate_pagination_bounds(
|
||||
|
||||
|
||||
def generate_next_token(
|
||||
direction: Direction, last_topo_ordering: int, last_stream_ordering: int
|
||||
direction: Direction, last_topo_ordering: Optional[int], last_stream_ordering: int
|
||||
) -> RoomStreamToken:
|
||||
"""
|
||||
Generate the next room stream token based on the currently returned data.
|
||||
@@ -446,7 +460,6 @@ def _filter_results_by_stream(
|
||||
The `instance_name` arg is optional to handle historic rows, and is
|
||||
interpreted as if it was "master".
|
||||
"""
|
||||
|
||||
if instance_name is None:
|
||||
instance_name = "master"
|
||||
|
||||
@@ -659,33 +672,43 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
|
||||
async def get_room_events_stream_for_rooms(
|
||||
self,
|
||||
*,
|
||||
room_ids: Collection[str],
|
||||
from_key: RoomStreamToken,
|
||||
to_key: RoomStreamToken,
|
||||
to_key: Optional[RoomStreamToken] = None,
|
||||
direction: Direction = Direction.BACKWARDS,
|
||||
limit: int = 0,
|
||||
order: str = "DESC",
|
||||
) -> Dict[str, Tuple[List[EventBase], RoomStreamToken]]:
|
||||
"""Get new room events in stream ordering since `from_key`.
|
||||
|
||||
Args:
|
||||
room_ids
|
||||
from_key: Token from which no events are returned before
|
||||
to_key: Token from which no events are returned after. (This
|
||||
is typically the current stream token)
|
||||
from_key: The token to stream from (starting point and heading in the given
|
||||
direction)
|
||||
to_key: The token representing the end stream position (end point)
|
||||
limit: Maximum number of events to return
|
||||
order: Either "DESC" or "ASC". Determines which events are
|
||||
returned when the result is limited. If "DESC" then the most
|
||||
recent `limit` events are returned, otherwise returns the
|
||||
oldest `limit` events.
|
||||
direction: Indicates whether we are paginating forwards or backwards
|
||||
from `from_key`.
|
||||
|
||||
Returns:
|
||||
A map from room id to a tuple containing:
|
||||
- list of recent events in the room
|
||||
- stream ordering key for the start of the chunk of events returned.
|
||||
|
||||
When Direction.FORWARDS: from_key < x <= to_key, (ascending order)
|
||||
When Direction.BACKWARDS: from_key >= x > to_key, (descending order)
|
||||
"""
|
||||
room_ids = self._events_stream_cache.get_entities_changed(
|
||||
room_ids, from_key.stream
|
||||
)
|
||||
if direction == Direction.FORWARDS:
|
||||
room_ids = self._events_stream_cache.get_entities_changed(
|
||||
room_ids, from_key.stream
|
||||
)
|
||||
elif direction == Direction.BACKWARDS:
|
||||
if to_key is not None:
|
||||
room_ids = self._events_stream_cache.get_entities_changed(
|
||||
room_ids, to_key.stream
|
||||
)
|
||||
else:
|
||||
assert_never(direction)
|
||||
|
||||
if not room_ids:
|
||||
return {}
|
||||
@@ -697,12 +720,12 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
defer.gatherResults(
|
||||
[
|
||||
run_in_background(
|
||||
self.get_room_events_stream_for_room,
|
||||
room_id,
|
||||
from_key,
|
||||
to_key,
|
||||
limit,
|
||||
order=order,
|
||||
self.paginate_room_events_by_stream_ordering,
|
||||
room_id=room_id,
|
||||
from_key=from_key,
|
||||
to_key=to_key,
|
||||
direction=direction,
|
||||
limit=limit,
|
||||
)
|
||||
for room_id in rm_ids
|
||||
],
|
||||
@@ -726,69 +749,122 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
if self._events_stream_cache.has_entity_changed(room_id, from_id)
|
||||
}
|
||||
|
||||
async def get_room_events_stream_for_room(
|
||||
async def paginate_room_events_by_stream_ordering(
|
||||
self,
|
||||
*,
|
||||
room_id: str,
|
||||
from_key: RoomStreamToken,
|
||||
to_key: RoomStreamToken,
|
||||
to_key: Optional[RoomStreamToken] = None,
|
||||
direction: Direction = Direction.BACKWARDS,
|
||||
limit: int = 0,
|
||||
order: str = "DESC",
|
||||
) -> Tuple[List[EventBase], RoomStreamToken]:
|
||||
"""Get new room events in stream ordering since `from_key`.
|
||||
"""
|
||||
Paginate events by `stream_ordering` in the room from the `from_key` in the
|
||||
given `direction` to the `to_key` or `limit`.
|
||||
|
||||
Args:
|
||||
room_id
|
||||
from_key: Token from which no events are returned before
|
||||
to_key: Token from which no events are returned after. (This
|
||||
is typically the current stream token)
|
||||
from_key: The token to stream from (starting point and heading in the given
|
||||
direction)
|
||||
to_key: The token representing the end stream position (end point)
|
||||
direction: Indicates whether we are paginating forwards or backwards
|
||||
from `from_key`.
|
||||
limit: Maximum number of events to return
|
||||
order: Either "DESC" or "ASC". Determines which events are
|
||||
returned when the result is limited. If "DESC" then the most
|
||||
recent `limit` events are returned, otherwise returns the
|
||||
oldest `limit` events.
|
||||
|
||||
Returns:
|
||||
The list of events (in ascending stream order) and the token from the start
|
||||
of the chunk of events returned.
|
||||
"""
|
||||
if from_key == to_key:
|
||||
return [], from_key
|
||||
The results as a list of events and a token that points to the end
|
||||
of the result set. If no events are returned then the end of the
|
||||
stream has been reached (i.e. there are no events between `from_key`
|
||||
and `to_key`).
|
||||
|
||||
has_changed = self._events_stream_cache.has_entity_changed(
|
||||
room_id, from_key.stream
|
||||
)
|
||||
When Direction.FORWARDS: from_key < x <= to_key, (ascending order)
|
||||
When Direction.BACKWARDS: from_key >= x > to_key, (descending order)
|
||||
"""
|
||||
|
||||
# FIXME: When going forwards, we should enforce that the `to_key` is not `None`
|
||||
# because we always need an upper bound when querying the events stream (as
|
||||
# otherwise we'll potentially pick up events that are not fully persisted).
|
||||
|
||||
# We should only be working with `stream_ordering` tokens here
|
||||
assert from_key is None or from_key.topological is None
|
||||
assert to_key is None or to_key.topological is None
|
||||
|
||||
# We can bail early if we're looking forwards, and our `to_key` is already
|
||||
# before our `from_key`.
|
||||
if (
|
||||
direction == Direction.FORWARDS
|
||||
and to_key is not None
|
||||
and to_key.is_before_or_eq(from_key)
|
||||
):
|
||||
# Token selection matches what we do below if there are no rows
|
||||
return [], to_key if to_key else from_key
|
||||
# Or vice-versa, if we're looking backwards and our `from_key` is already before
|
||||
# our `to_key`.
|
||||
elif (
|
||||
direction == Direction.BACKWARDS
|
||||
and to_key is not None
|
||||
and from_key.is_before_or_eq(to_key)
|
||||
):
|
||||
# Token selection matches what we do below if there are no rows
|
||||
return [], to_key if to_key else from_key
|
||||
|
||||
# We can do a quick sanity check to see if any events have been sent in the room
|
||||
# since the earlier token.
|
||||
has_changed = True
|
||||
if direction == Direction.FORWARDS:
|
||||
has_changed = self._events_stream_cache.has_entity_changed(
|
||||
room_id, from_key.stream
|
||||
)
|
||||
elif direction == Direction.BACKWARDS:
|
||||
if to_key is not None:
|
||||
has_changed = self._events_stream_cache.has_entity_changed(
|
||||
room_id, to_key.stream
|
||||
)
|
||||
else:
|
||||
assert_never(direction)
|
||||
|
||||
if not has_changed:
|
||||
return [], from_key
|
||||
# Token selection matches what we do below if there are no rows
|
||||
return [], to_key if to_key else from_key
|
||||
|
||||
order, from_bound, to_bound = generate_pagination_bounds(
|
||||
direction, from_key, to_key
|
||||
)
|
||||
|
||||
bounds = generate_pagination_where_clause(
|
||||
direction=direction,
|
||||
# The empty string will shortcut downstream code to only use the
|
||||
# `stream_ordering` column
|
||||
column_names=("", "stream_ordering"),
|
||||
from_token=from_bound,
|
||||
to_token=to_bound,
|
||||
engine=self.database_engine,
|
||||
)
|
||||
|
||||
def f(txn: LoggingTransaction) -> List[_EventDictReturn]:
|
||||
# To handle tokens with a non-empty instance_map we fetch more
|
||||
# results than necessary and then filter down
|
||||
min_from_id = from_key.stream
|
||||
max_to_id = to_key.get_max_stream_pos()
|
||||
|
||||
sql = """
|
||||
SELECT event_id, instance_name, topological_ordering, stream_ordering
|
||||
sql = f"""
|
||||
SELECT event_id, instance_name, stream_ordering
|
||||
FROM events
|
||||
WHERE
|
||||
room_id = ?
|
||||
AND not outlier
|
||||
AND stream_ordering > ? AND stream_ordering <= ?
|
||||
ORDER BY stream_ordering %s LIMIT ?
|
||||
""" % (
|
||||
order,
|
||||
)
|
||||
txn.execute(sql, (room_id, min_from_id, max_to_id, 2 * limit))
|
||||
AND {bounds}
|
||||
ORDER BY stream_ordering {order} LIMIT ?
|
||||
"""
|
||||
txn.execute(sql, (room_id, 2 * limit))
|
||||
|
||||
rows = [
|
||||
_EventDictReturn(event_id, None, stream_ordering)
|
||||
for event_id, instance_name, topological_ordering, stream_ordering in txn
|
||||
if _filter_results(
|
||||
from_key,
|
||||
to_key,
|
||||
instance_name,
|
||||
topological_ordering,
|
||||
stream_ordering,
|
||||
for event_id, instance_name, stream_ordering in txn
|
||||
if _filter_results_by_stream(
|
||||
lower_token=(
|
||||
to_key if direction == Direction.BACKWARDS else from_key
|
||||
),
|
||||
upper_token=(
|
||||
from_key if direction == Direction.BACKWARDS else to_key
|
||||
),
|
||||
instance_name=instance_name,
|
||||
stream_ordering=stream_ordering,
|
||||
)
|
||||
][:limit]
|
||||
return rows
|
||||
@@ -799,18 +875,20 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
[r.event_id for r in rows], get_prev_content=True
|
||||
)
|
||||
|
||||
if order.lower() == "desc":
|
||||
ret.reverse()
|
||||
|
||||
if rows:
|
||||
key = RoomStreamToken(stream=min(r.stream_ordering for r in rows))
|
||||
next_key = generate_next_token(
|
||||
direction=direction,
|
||||
last_topo_ordering=None,
|
||||
last_stream_ordering=rows[-1].stream_ordering,
|
||||
)
|
||||
else:
|
||||
# Assume we didn't get anything because there was nothing to
|
||||
# get.
|
||||
key = from_key
|
||||
# TODO (erikj): We should work out what to do here instead. (same as
|
||||
# `_paginate_room_events_by_topological_ordering_txn(...)`)
|
||||
next_key = to_key if to_key else from_key
|
||||
|
||||
return ret, key
|
||||
return ret, next_key
|
||||
|
||||
@trace
|
||||
async def get_current_state_delta_membership_changes_for_user(
|
||||
self,
|
||||
user_id: str,
|
||||
@@ -1116,7 +1194,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
|
||||
rows, token = await self.db_pool.runInteraction(
|
||||
"get_recent_event_ids_for_room",
|
||||
self._paginate_room_events_txn,
|
||||
self._paginate_room_events_by_topological_ordering_txn,
|
||||
room_id,
|
||||
from_token=end_token,
|
||||
limit=limit,
|
||||
@@ -1185,6 +1263,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
|
||||
return None
|
||||
|
||||
@trace
|
||||
async def get_last_event_pos_in_room_before_stream_ordering(
|
||||
self,
|
||||
room_id: str,
|
||||
@@ -1293,6 +1372,126 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
get_last_event_pos_in_room_before_stream_ordering_txn,
|
||||
)
|
||||
|
||||
async def bulk_get_last_event_pos_in_room_before_stream_ordering(
|
||||
self,
|
||||
room_ids: StrCollection,
|
||||
end_token: RoomStreamToken,
|
||||
) -> Dict[str, int]:
|
||||
"""Bulk fetch the stream position of the latest events in the given
|
||||
rooms
|
||||
"""
|
||||
|
||||
min_token = end_token.stream
|
||||
max_token = end_token.get_max_stream_pos()
|
||||
results: Dict[str, int] = {}
|
||||
|
||||
# First, we check for the rooms in the stream change cache to see if we
|
||||
# can just use the latest position from it.
|
||||
missing_room_ids: Set[str] = set()
|
||||
for room_id in room_ids:
|
||||
stream_pos = self._events_stream_cache.get_max_pos_of_last_change(room_id)
|
||||
if stream_pos and stream_pos <= min_token:
|
||||
results[room_id] = stream_pos
|
||||
else:
|
||||
missing_room_ids.add(room_id)
|
||||
|
||||
# Next, we query the stream position from the DB. At first we fetch all
|
||||
# positions less than the *max* stream pos in the token, then filter
|
||||
# them down. We do this as a) this is a cheaper query, and b) the vast
|
||||
# majority of rooms will have a latest token from before the min stream
|
||||
# pos.
|
||||
|
||||
def bulk_get_last_event_pos_txn(
|
||||
txn: LoggingTransaction, batch_room_ids: StrCollection
|
||||
) -> Dict[str, int]:
|
||||
# This query fetches the latest stream position in the rooms before
|
||||
# the given max position.
|
||||
clause, args = make_in_list_sql_clause(
|
||||
self.database_engine, "room_id", batch_room_ids
|
||||
)
|
||||
sql = f"""
|
||||
SELECT room_id, (
|
||||
SELECT stream_ordering FROM events AS e
|
||||
LEFT JOIN rejections USING (event_id)
|
||||
WHERE e.room_id = r.room_id
|
||||
AND stream_ordering <= ?
|
||||
AND NOT outlier
|
||||
AND rejection_reason IS NULL
|
||||
ORDER BY stream_ordering DESC
|
||||
LIMIT 1
|
||||
)
|
||||
FROM rooms AS r
|
||||
WHERE {clause}
|
||||
"""
|
||||
txn.execute(sql, [max_token] + args)
|
||||
return {row[0]: row[1] for row in txn}
|
||||
|
||||
recheck_rooms: Set[str] = set()
|
||||
for batched in batch_iter(missing_room_ids, 1000):
|
||||
result = await self.db_pool.runInteraction(
|
||||
"bulk_get_last_event_pos_in_room_before_stream_ordering",
|
||||
bulk_get_last_event_pos_txn,
|
||||
batched,
|
||||
)
|
||||
|
||||
# Check that the stream position for the rooms are from before the
|
||||
# minimum position of the token. If not then we need to fetch more
|
||||
# rows.
|
||||
for room_id, stream in result.items():
|
||||
if stream <= min_token:
|
||||
results[room_id] = stream
|
||||
else:
|
||||
recheck_rooms.add(room_id)
|
||||
|
||||
if not recheck_rooms:
|
||||
return results
|
||||
|
||||
# For the remaining rooms we need to fetch all rows between the min and
|
||||
# max stream positions in the end token, and filter out the rows that
|
||||
# are after the end token.
|
||||
#
|
||||
# This query should be fast as the range between the min and max should
|
||||
# be small.
|
||||
|
||||
def bulk_get_last_event_pos_recheck_txn(
|
||||
txn: LoggingTransaction, batch_room_ids: StrCollection
|
||||
) -> Dict[str, int]:
|
||||
clause, args = make_in_list_sql_clause(
|
||||
self.database_engine, "room_id", batch_room_ids
|
||||
)
|
||||
sql = f"""
|
||||
SELECT room_id, instance_name, stream_ordering
|
||||
FROM events
|
||||
WHERE ? < stream_ordering AND stream_ordering <= ?
|
||||
AND NOT outlier
|
||||
AND rejection_reason IS NULL
|
||||
AND {clause}
|
||||
ORDER BY stream_ordering ASC
|
||||
"""
|
||||
txn.execute(sql, [min_token, max_token] + args)
|
||||
|
||||
# We take the max stream ordering that is less than the token. Since
|
||||
# we ordered by stream ordering we just need to iterate through and
|
||||
# take the last matching stream ordering.
|
||||
txn_results: Dict[str, int] = {}
|
||||
for row in txn:
|
||||
room_id = row[0]
|
||||
event_pos = PersistedEventPosition(row[1], row[2])
|
||||
if not event_pos.persisted_after(end_token):
|
||||
txn_results[room_id] = event_pos.stream
|
||||
|
||||
return txn_results
|
||||
|
||||
for batched in batch_iter(recheck_rooms, 1000):
|
||||
recheck_result = await self.db_pool.runInteraction(
|
||||
"bulk_get_last_event_pos_in_room_before_stream_ordering_recheck",
|
||||
bulk_get_last_event_pos_recheck_txn,
|
||||
batched,
|
||||
)
|
||||
results.update(recheck_result)
|
||||
|
||||
return results
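An illustrative helper (not part of the diff) showing the decision at the heart of the two-phase check above: positions at or below the token's minimum stream position are definitely before the token, while positions in the min-to-max window have to be compared per writer instance.

def position_before_token(instance_name, stream, end_token):
    if stream <= end_token.stream:
        # At or below the token's minimum stream position: definitely before.
        return True
    # In the ambiguous window between the min and max positions, compare the
    # (instance_name, stream) pair against the token itself.
    pos = PersistedEventPosition(instance_name, stream)
    return not pos.persisted_after(end_token)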
|
||||
|
||||
async def get_current_room_stream_token_for_room_id(
|
||||
self, room_id: str
|
||||
) -> RoomStreamToken:
|
||||
@@ -1501,7 +1700,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
topological=topological_ordering, stream=stream_ordering
|
||||
)
|
||||
|
||||
rows, start_token = self._paginate_room_events_txn(
|
||||
rows, start_token = self._paginate_room_events_by_topological_ordering_txn(
|
||||
txn,
|
||||
room_id,
|
||||
before_token,
|
||||
@@ -1511,7 +1710,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
)
|
||||
events_before = [r.event_id for r in rows]
|
||||
|
||||
rows, end_token = self._paginate_room_events_txn(
|
||||
rows, end_token = self._paginate_room_events_by_topological_ordering_txn(
|
||||
txn,
|
||||
room_id,
|
||||
after_token,
|
||||
@@ -1674,14 +1873,14 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
def has_room_changed_since(self, room_id: str, stream_id: int) -> bool:
|
||||
return self._events_stream_cache.has_entity_changed(room_id, stream_id)
|
||||
|
||||
def _paginate_room_events_txn(
|
||||
def _paginate_room_events_by_topological_ordering_txn(
|
||||
self,
|
||||
txn: LoggingTransaction,
|
||||
room_id: str,
|
||||
from_token: RoomStreamToken,
|
||||
to_token: Optional[RoomStreamToken] = None,
|
||||
direction: Direction = Direction.BACKWARDS,
|
||||
limit: int = -1,
|
||||
limit: int = 0,
|
||||
event_filter: Optional[Filter] = None,
|
||||
) -> Tuple[List[_EventDictReturn], RoomStreamToken]:
|
||||
"""Returns list of events before or after a given token.
|
||||
@@ -1703,6 +1902,24 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
been reached (i.e. there are no events between `from_token` and
|
||||
`to_token`), or `limit` is zero.
|
||||
"""
|
||||
# We can bail early if we're looking forwards, and our `to_key` is already
|
||||
# before our `from_token`.
|
||||
if (
|
||||
direction == Direction.FORWARDS
|
||||
and to_token is not None
|
||||
and to_token.is_before_or_eq(from_token)
|
||||
):
|
||||
# Token selection matches what we do below if there are no rows
|
||||
return [], to_token if to_token else from_token
|
||||
# Or vice-versa, if we're looking backwards and our `from_token` is already before
|
||||
# our `to_token`.
|
||||
elif (
|
||||
direction == Direction.BACKWARDS
|
||||
and to_token is not None
|
||||
and from_token.is_before_or_eq(to_token)
|
||||
):
|
||||
# Token selection matches what we do below if there are no rows
|
||||
return [], to_token if to_token else from_token
|
||||
|
||||
args: List[Any] = [room_id]
|
||||
|
||||
@@ -1787,7 +2004,6 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
"bounds": bounds,
|
||||
"order": order,
|
||||
}
|
||||
|
||||
txn.execute(sql, args)
|
||||
|
||||
# Filter the result set.
|
||||
@@ -1819,27 +2035,30 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
return rows, next_token
|
||||
|
||||
@trace
|
||||
async def paginate_room_events(
|
||||
@tag_args
|
||||
async def paginate_room_events_by_topological_ordering(
|
||||
self,
|
||||
*,
|
||||
room_id: str,
|
||||
from_key: RoomStreamToken,
|
||||
to_key: Optional[RoomStreamToken] = None,
|
||||
direction: Direction = Direction.BACKWARDS,
|
||||
limit: int = -1,
|
||||
limit: int = 0,
|
||||
event_filter: Optional[Filter] = None,
|
||||
) -> Tuple[List[EventBase], RoomStreamToken]:
|
||||
"""Returns list of events before or after a given token.
|
||||
|
||||
When Direction.FORWARDS: from_key < x <= to_key
|
||||
When Direction.BACKWARDS: from_key >= x > to_key
|
||||
"""
|
||||
Paginate events by `topological_ordering` (tie-break with `stream_ordering`) in
|
||||
the room from the `from_key` in the given `direction` to the `to_key` or
|
||||
`limit`.
|
||||
|
||||
Args:
|
||||
room_id
|
||||
from_key: The token used to stream from
|
||||
to_key: A token which if given limits the results to only those before
|
||||
from_key: The token to stream from (starting point and heading in the given
|
||||
direction)
|
||||
to_key: The token representing the end stream position (end point)
|
||||
direction: Indicates whether we are paginating forwards or backwards
|
||||
from `from_key`.
|
||||
limit: The maximum number of events to return.
|
||||
limit: Maximum number of events to return
|
||||
event_filter: If provided filters the events to those that match the filter.
|
||||
|
||||
Returns:
|
||||
@@ -1847,8 +2066,18 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
of the result set. If no events are returned then the end of the
|
||||
stream has been reached (i.e. there are no events between `from_key`
|
||||
and `to_key`).
|
||||
|
||||
When Direction.FORWARDS: from_key < x <= to_key, (ascending order)
|
||||
When Direction.BACKWARDS: from_key >= x > to_key, (descending order)
|
||||
"""
|
||||
|
||||
# FIXME: When going forwards, we should enforce that the `to_key` is not `None`
|
||||
# because we always need an upper bound when querying the events stream (as
|
||||
# otherwise we'll potentially pick up events that are not fully persisted).
|
||||
|
||||
# We have these checks outside of the transaction function (txn) to save getting
|
||||
# a DB connection and switching threads if we don't need to.
|
||||
#
|
||||
# We can bail early if we're looking forwards, and our `to_key` is already
|
||||
# before our `from_key`.
|
||||
if (
|
||||
@@ -1871,8 +2100,8 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
return [], to_key if to_key else from_key
|
||||
|
||||
rows, token = await self.db_pool.runInteraction(
|
||||
"paginate_room_events",
|
||||
self._paginate_room_events_txn,
|
||||
"paginate_room_events_by_topological_ordering",
|
||||
self._paginate_room_events_by_topological_ordering_txn,
|
||||
room_id,
|
||||
from_key,
|
||||
to_key,
|
||||
@@ -1983,3 +2212,14 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
|
||||
return RoomStreamToken(stream=last_position.stream - 1)
|
||||
|
||||
return None
|
||||
|
||||
@trace
|
||||
def get_rooms_that_might_have_updates(
|
||||
self, room_ids: StrCollection, from_token: RoomStreamToken
|
||||
) -> StrCollection:
|
||||
"""Filters given room IDs down to those that might have updates, i.e.
|
||||
removes rooms that definitely do not have updates.
|
||||
"""
|
||||
return self._events_stream_cache.get_entities_changed(
|
||||
room_ids, from_token.stream
|
||||
)
|
||||
|
||||
@@ -19,7 +19,7 @@
|
||||
#
|
||||
#
|
||||
|
||||
SCHEMA_VERSION = 85 # remember to update the list below when updating
|
||||
SCHEMA_VERSION = 86 # remember to update the list below when updating
|
||||
"""Represents the expectations made by the codebase about the database schema
|
||||
|
||||
This should be incremented whenever the codebase changes its requirements on the
|
||||
@@ -139,6 +139,9 @@ Changes in SCHEMA_VERSION = 84
|
||||
|
||||
Changes in SCHEMA_VERSION = 85
|
||||
- Add a column `suspended` to the `users` table
|
||||
|
||||
Changes in SCHEMA_VERSION = 86
|
||||
- Add a column `authenticated` to the tables `local_media_repository` and `remote_media_cache`
|
||||
"""
|
||||
|
||||
|
||||
|
||||
@@ -0,0 +1,15 @@
|
||||
--
|
||||
-- This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
--
|
||||
-- Copyright (C) 2024 New Vector, Ltd
|
||||
--
|
||||
-- This program is free software: you can redistribute it and/or modify
|
||||
-- it under the terms of the GNU Affero General Public License as
|
||||
-- published by the Free Software Foundation, either version 3 of the
|
||||
-- License, or (at your option) any later version.
|
||||
--
|
||||
-- See the GNU Affero General Public License for more details:
|
||||
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
|
||||
ALTER TABLE remote_media_cache ADD COLUMN authenticated BOOLEAN DEFAULT FALSE NOT NULL;
|
||||
ALTER TABLE local_media_repository ADD COLUMN authenticated BOOLEAN DEFAULT FALSE NOT NULL;
|
||||
@@ -0,0 +1,15 @@
|
||||
--
|
||||
-- This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
--
|
||||
-- Copyright (C) 2024 New Vector, Ltd
|
||||
--
|
||||
-- This program is free software: you can redistribute it and/or modify
|
||||
-- it under the terms of the GNU Affero General Public License as
|
||||
-- published by the Free Software Foundation, either version 3 of the
|
||||
-- License, or (at your option) any later version.
|
||||
--
|
||||
-- See the GNU Affero General Public License for more details:
|
||||
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
|
||||
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
|
||||
(8602, 'receipts_room_id_event_id_index', '{}');
|
||||
@@ -777,6 +777,13 @@ class RoomStreamToken(AbstractMultiWriterStreamToken):
|
||||
|
||||
return super().bound_stream_token(max_stream)
|
||||
|
||||
def __str__(self) -> str:
|
||||
instances = ", ".join(f"{k}: {v}" for k, v in sorted(self.instance_map.items()))
|
||||
return (
|
||||
f"RoomStreamToken(stream: {self.stream}, topological: {self.topological}, "
|
||||
f"instances: {{{instances}}})"
|
||||
)
|
||||
|
||||
|
||||
@attr.s(frozen=True, slots=True, order=False)
|
||||
class MultiWriterStreamToken(AbstractMultiWriterStreamToken):
|
||||
@@ -873,6 +880,13 @@ class MultiWriterStreamToken(AbstractMultiWriterStreamToken):
|
||||
|
||||
return True
|
||||
|
||||
def __str__(self) -> str:
|
||||
instances = ", ".join(f"{k}: {v}" for k, v in sorted(self.instance_map.items()))
|
||||
return (
|
||||
f"MultiWriterStreamToken(stream: {self.stream}, "
|
||||
f"instances: {{{instances}}})"
|
||||
)
|
||||
|
||||
|
||||
class StreamKeyType(Enum):
|
||||
"""Known stream types.
|
||||
@@ -1131,12 +1145,64 @@ class StreamToken:
|
||||
|
||||
return True
|
||||
|
||||
def __str__(self) -> str:
|
||||
return (
|
||||
f"StreamToken(room: {self.room_key}, presence: {self.presence_key}, "
|
||||
f"typing: {self.typing_key}, receipt: {self.receipt_key}, "
|
||||
f"account_data: {self.account_data_key}, push_rules: {self.push_rules_key}, "
|
||||
f"to_device: {self.to_device_key}, device_list: {self.device_list_key}, "
|
||||
f"groups: {self.groups_key}, un_partial_stated_rooms: {self.un_partial_stated_rooms_key})"
|
||||
)
|
||||
|
||||
|
||||
StreamToken.START = StreamToken(
|
||||
RoomStreamToken(stream=0), 0, 0, MultiWriterStreamToken(stream=0), 0, 0, 0, 0, 0, 0
|
||||
)
|
||||
|
||||
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
class SlidingSyncStreamToken:
|
||||
"""The same as a `StreamToken`, but includes an extra field at the start for
|
||||
the sliding sync connection token (separated by a '/'). This is used to
|
||||
store per-connection state.
|
||||
|
||||
This then looks something like:
|
||||
5/s2633508_17_338_6732159_1082514_541479_274711_265584_1_379
|
||||
|
||||
Attributes:
|
||||
stream_token: Token representing the position of all the standard
|
||||
streams.
|
||||
connection_position: Token used by sliding sync to track updates to any
|
||||
per-connection state stored by Synapse.
|
||||
"""
|
||||
|
||||
stream_token: StreamToken
|
||||
connection_position: int
|
||||
|
||||
@staticmethod
|
||||
@cancellable
|
||||
async def from_string(store: "DataStore", string: str) -> "SlidingSyncStreamToken":
|
||||
"""Creates a SlidingSyncStreamToken from its textual representation."""
|
||||
try:
|
||||
connection_position_str, stream_token_str = string.split("/", 1)
|
||||
connection_position = int(connection_position_str)
|
||||
stream_token = await StreamToken.from_string(store, stream_token_str)
|
||||
|
||||
return SlidingSyncStreamToken(
|
||||
stream_token=stream_token,
|
||||
connection_position=connection_position,
|
||||
)
|
||||
except CancelledError:
|
||||
raise
|
||||
except Exception:
|
||||
raise SynapseError(400, "Invalid stream token")
|
||||
|
||||
async def to_string(self, store: "DataStore") -> str:
|
||||
"""Serializes the token to a string"""
|
||||
stream_token_str = await self.stream_token.to_string(store)
|
||||
return f"{self.connection_position}/{stream_token_str}"
|
||||
|
||||
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
class PersistedPosition:
|
||||
"""Position of a newly persisted row with instance that persisted it."""
|
||||
@@ -1219,11 +1285,12 @@ class ReadReceipt:
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
class DeviceListUpdates:
|
||||
"""
|
||||
An object containing a diff of information regarding other users' device lists, intended for
|
||||
a recipient to carry out device list tracking.
|
||||
An object containing a diff of information regarding other users' device lists,
|
||||
intended for a recipient to carry out device list tracking.
|
||||
|
||||
Attributes:
|
||||
changed: A set of users whose device lists have changed recently.
|
||||
changed: A set of users who have updated their device identity or
|
||||
cross-signing keys, or who now share an encrypted room with the recipient.
|
||||
left: A set of users who the recipient no longer needs to track the device lists of.
|
||||
Typically when those users no longer share any end-to-end encryption enabled rooms.
|
||||
"""
|
||||
|
||||
@@ -18,7 +18,7 @@
|
||||
#
|
||||
#
|
||||
from enum import Enum
|
||||
from typing import TYPE_CHECKING, Dict, Final, List, Optional, Sequence, Tuple
|
||||
from typing import TYPE_CHECKING, Dict, Final, List, Mapping, Optional, Sequence, Tuple
|
||||
|
||||
import attr
|
||||
from typing_extensions import TypedDict
|
||||
@@ -31,7 +31,15 @@ else:
|
||||
from pydantic import Extra
|
||||
|
||||
from synapse.events import EventBase
|
||||
from synapse.types import JsonDict, JsonMapping, StreamToken, UserID
|
||||
from synapse.types import (
|
||||
DeviceListUpdates,
|
||||
JsonDict,
|
||||
JsonMapping,
|
||||
Requester,
|
||||
SlidingSyncStreamToken,
|
||||
StreamToken,
|
||||
UserID,
|
||||
)
|
||||
from synapse.types.rest.client import SlidingSyncBody
|
||||
|
||||
if TYPE_CHECKING:
|
||||
@@ -102,7 +110,7 @@ class SlidingSyncConfig(SlidingSyncBody):
|
||||
"""
|
||||
|
||||
user: UserID
|
||||
device_id: Optional[str]
|
||||
requester: Requester
|
||||
|
||||
# Pydantic config
|
||||
class Config:
|
||||
@@ -144,7 +152,7 @@ class SlidingSyncResult:
|
||||
Attributes:
|
||||
next_pos: The next position token in the sliding window to request (next_batch).
|
||||
lists: Sliding window API. A map of list key to list results.
|
||||
rooms: Room subscription API. A map of room ID to room subscription to room results.
|
||||
rooms: Room subscription API. A map of room ID to room results.
|
||||
extensions: Extensions API. A map of extension key to extension results.
|
||||
"""
|
||||
|
||||
@@ -163,6 +171,9 @@ class SlidingSyncResult:
|
||||
their local state. When there is an update, servers MUST omit this flag
|
||||
entirely and NOT send "initial":false as this is wasteful on bandwidth. The
|
||||
absence of this flag means 'false'.
|
||||
unstable_expanded_timeline: Flag which is set if we're returning more historic
|
||||
events due to the timeline limit having increased. See "XXX: Odd behavior"
|
||||
comment in `synapse.handlers.sliding_sync`.
|
||||
required_state: The current state of the room
|
||||
timeline: Latest events in the room. The last event is the most recent.
|
||||
bundled_aggregations: A mapping of event ID to the bundled aggregations for
|
||||
@@ -174,8 +185,8 @@ class SlidingSyncResult:
|
||||
absent on joined/left rooms
|
||||
prev_batch: A token that can be passed as a start parameter to the
|
||||
`/rooms/<room_id>/messages` API to retrieve earlier messages.
|
||||
limited: True if their are more events than fit between the given position and now.
|
||||
Sync again to get more.
|
||||
limited: True if there are more events than `timeline_limit` looking
|
||||
backwards from the `response.pos` to the `request.pos`.
|
||||
num_live: The number of timeline events which have just occurred and are not historical.
|
||||
The last N events are 'live' and should be treated as such. This is mostly
|
||||
useful to determine whether a given @mention event should make a noise or not.
|
||||
@@ -211,6 +222,7 @@ class SlidingSyncResult:
|
||||
heroes: Optional[List[StrippedHero]]
|
||||
is_dm: bool
|
||||
initial: bool
|
||||
unstable_expanded_timeline: bool
|
||||
# Should be empty for invite/knock rooms with `stripped_state`
|
||||
required_state: List[EventBase]
|
||||
# Should be empty for invite/knock rooms with `stripped_state`
|
||||
@@ -230,6 +242,17 @@ class SlidingSyncResult:
|
||||
notification_count: int
|
||||
highlight_count: int
|
||||
|
||||
def __bool__(self) -> bool:
|
||||
return (
|
||||
# If this is the first time the client is seeing the room, we should not filter it out
|
||||
# under any circumstance.
|
||||
self.initial
|
||||
# We need to let the client know if there are any new events
|
||||
or bool(self.required_state)
|
||||
or bool(self.timeline_events)
|
||||
or bool(self.stripped_state)
|
||||
)
|
||||
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
class SlidingWindowList:
|
||||
"""
|
||||
@@ -264,6 +287,7 @@ class SlidingSyncResult:
|
||||
|
||||
Attributes:
|
||||
to_device: The to-device extension (MSC3885)
|
||||
e2ee: The E2EE device extension (MSC3884)
|
||||
"""
|
||||
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
@@ -282,12 +306,109 @@ class SlidingSyncResult:
|
||||
def __bool__(self) -> bool:
|
||||
return bool(self.events)
|
||||
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
class E2eeExtension:
|
||||
"""The E2EE device extension (MSC3884)
|
||||
|
||||
Attributes:
|
||||
device_list_updates: List of user_ids whose devices have changed or left (only
|
||||
present on incremental syncs).
|
||||
device_one_time_keys_count: Map from key algorithm to the number of
|
||||
unclaimed one-time keys currently held on the server for this device. If
|
||||
an algorithm is unlisted, the count for that algorithm is assumed to be
|
||||
zero. If this entire parameter is missing, the count for all algorithms
|
||||
is assumed to be zero.
|
||||
device_unused_fallback_key_types: List of unused fallback key algorithms
|
||||
for this device.
|
||||
"""
|
||||
|
||||
# Only present on incremental syncs
|
||||
device_list_updates: Optional[DeviceListUpdates]
|
||||
device_one_time_keys_count: Mapping[str, int]
|
||||
device_unused_fallback_key_types: Sequence[str]
|
||||
|
||||
def __bool__(self) -> bool:
|
||||
# Note that "signed_curve25519" is always returned in key count responses
|
||||
# regardless of whether we uploaded any keys for it. This is necessary until
|
||||
# https://github.com/matrix-org/matrix-doc/issues/3298 is fixed.
|
||||
#
|
||||
# Also related:
|
||||
# https://github.com/element-hq/element-android/issues/3725 and
|
||||
# https://github.com/matrix-org/synapse/issues/10456
|
||||
default_otk = self.device_one_time_keys_count.get("signed_curve25519")
|
||||
more_than_default_otk = len(self.device_one_time_keys_count) > 1 or (
|
||||
default_otk is not None and default_otk > 0
|
||||
)
|
||||
|
||||
return bool(
|
||||
more_than_default_otk
|
||||
or self.device_list_updates
|
||||
or self.device_unused_fallback_key_types
|
||||
)
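A worked example of the one-time-key special case above, assuming the nested class layout shown in this diff and using made-up counts: an extension carrying only the always-present "signed_curve25519": 0 entry is falsy and would be omitted, while any other count makes it truthy.

empty_ish = SlidingSyncResult.Extensions.E2eeExtension(
    device_list_updates=None,
    device_one_time_keys_count={"signed_curve25519": 0},
    device_unused_fallback_key_types=[],
)
assert not empty_ish

has_keys = SlidingSyncResult.Extensions.E2eeExtension(
    device_list_updates=None,
    device_one_time_keys_count={"signed_curve25519": 50},
    device_unused_fallback_key_types=[],
)
assert has_keys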
|
||||
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
class AccountDataExtension:
|
||||
"""The Account Data extension (MSC3959)
|
||||
|
||||
Attributes:
|
||||
global_account_data_map: Mapping from `type` to `content` of global account
|
||||
data events.
|
||||
account_data_by_room_map: Mapping from room_id to mapping of `type` to
|
||||
`content` of room account data events.
|
||||
"""
|
||||
|
||||
global_account_data_map: Mapping[str, JsonMapping]
|
||||
account_data_by_room_map: Mapping[str, Mapping[str, JsonMapping]]
|
||||
|
||||
def __bool__(self) -> bool:
|
||||
return bool(
|
||||
self.global_account_data_map or self.account_data_by_room_map
|
||||
)
|
||||
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
class ReceiptsExtension:
|
||||
"""The Receipts extension (MSC3960)
|
||||
|
||||
Attributes:
|
||||
room_id_to_receipt_map: Mapping from room_id to `m.receipt` ephemeral
|
||||
event (type, content)
|
||||
"""
|
||||
|
||||
room_id_to_receipt_map: Mapping[str, JsonMapping]
|
||||
|
||||
def __bool__(self) -> bool:
|
||||
return bool(self.room_id_to_receipt_map)
|
||||
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
class TypingExtension:
|
||||
"""The Typing Notification extension (MSC3961)
|
||||
|
||||
Attributes:
|
||||
room_id_to_typing_map: Mapping from room_id to `m.typing` ephemeral
|
||||
event (type, content)
|
||||
"""
|
||||
|
||||
room_id_to_typing_map: Mapping[str, JsonMapping]
|
||||
|
||||
def __bool__(self) -> bool:
|
||||
return bool(self.room_id_to_typing_map)
|
||||
|
||||
to_device: Optional[ToDeviceExtension] = None
|
||||
e2ee: Optional[E2eeExtension] = None
|
||||
account_data: Optional[AccountDataExtension] = None
|
||||
receipts: Optional[ReceiptsExtension] = None
|
||||
typing: Optional[TypingExtension] = None
|
||||
|
||||
def __bool__(self) -> bool:
|
||||
return bool(self.to_device)
|
||||
return bool(
|
||||
self.to_device
|
||||
or self.e2ee
|
||||
or self.account_data
|
||||
or self.receipts
|
||||
or self.typing
|
||||
)
|
||||
|
||||
next_pos: StreamToken
|
||||
next_pos: SlidingSyncStreamToken
|
||||
lists: Dict[str, SlidingWindowList]
|
||||
rooms: Dict[str, RoomResult]
|
||||
extensions: Extensions
|
||||
@@ -297,10 +418,14 @@ class SlidingSyncResult:
|
||||
to tell if the notifier needs to wait for more events when polling for
|
||||
events.
|
||||
"""
|
||||
return bool(self.lists or self.rooms or self.extensions)
|
||||
# We don't include `self.lists` here, as a) `lists` is always non-empty even if
|
||||
# there are no changes, and b) since we're sorting rooms by `stream_ordering` of
|
||||
# the latest activity, anything that would cause the order to change would end
|
||||
# up in `self.rooms` and cause us to send down the change.
|
||||
return bool(self.rooms or self.extensions)
|
||||
|
||||
@staticmethod
|
||||
def empty(next_pos: StreamToken) -> "SlidingSyncResult":
|
||||
def empty(next_pos: SlidingSyncStreamToken) -> "SlidingSyncResult":
|
||||
"Return a new empty result"
|
||||
return SlidingSyncResult(
|
||||
next_pos=next_pos,
|
||||
|
||||
@@ -120,6 +120,9 @@ class SlidingSyncBody(RequestBodyModel):
|
||||
Sliding Sync API request body.
|
||||
|
||||
Attributes:
|
||||
conn_id: An optional string to identify this connection to the server.
|
||||
Only one sliding sync connection is allowed per given conn_id (empty
|
||||
or not).
|
||||
lists: Sliding window API. A map of list key to list information
|
||||
(:class:`SlidingSyncList`). Max lists: 100. The list keys should be
|
||||
arbitrary strings which the client is using to refer to the list. Keep this
|
||||
@@ -313,7 +316,73 @@ class SlidingSyncBody(RequestBodyModel):
|
||||
|
||||
return value
|
||||
|
||||
class E2eeExtension(RequestBodyModel):
|
||||
"""The E2EE device extension (MSC3884)
|
||||
|
||||
Attributes:
|
||||
enabled
|
||||
"""
|
||||
|
||||
enabled: Optional[StrictBool] = False
|
||||
|
||||
class AccountDataExtension(RequestBodyModel):
|
||||
"""The Account Data extension (MSC3959)
|
||||
|
||||
Attributes:
|
||||
enabled
|
||||
lists: List of list keys (from the Sliding Window API) to apply this
|
||||
extension to.
|
||||
rooms: List of room IDs (from the Room Subscription API) to apply this
|
||||
extension to.
|
||||
"""
|
||||
|
||||
enabled: Optional[StrictBool] = False
|
||||
# Process all lists defined in the Sliding Window API. (This is the default.)
|
||||
lists: Optional[List[StrictStr]] = ["*"]
|
||||
# Process all room subscriptions defined in the Room Subscription API. (This is the default.)
|
||||
rooms: Optional[List[StrictStr]] = ["*"]
|
||||
|
||||
class ReceiptsExtension(RequestBodyModel):
|
||||
"""The Receipts extension (MSC3960)
|
||||
|
||||
Attributes:
|
||||
enabled
|
||||
lists: List of list keys (from the Sliding Window API) to apply this
|
||||
extension to.
|
||||
rooms: List of room IDs (from the Room Subscription API) to apply this
|
||||
extension to.
|
||||
"""
|
||||
|
||||
enabled: Optional[StrictBool] = False
|
||||
# Process all lists defined in the Sliding Window API. (This is the default.)
|
||||
lists: Optional[List[StrictStr]] = ["*"]
|
||||
# Process all room subscriptions defined in the Room Subscription API. (This is the default.)
|
||||
rooms: Optional[List[StrictStr]] = ["*"]
|
||||
|
||||
class TypingExtension(RequestBodyModel):
|
||||
"""The Typing Notification extension (MSC3961)
|
||||
|
||||
Attributes:
|
||||
enabled
|
||||
lists: List of list keys (from the Sliding Window API) to apply this
|
||||
extension to.
|
||||
rooms: List of room IDs (from the Room Subscription API) to apply this
|
||||
extension to.
|
||||
"""
|
||||
|
||||
enabled: Optional[StrictBool] = False
|
||||
# Process all lists defined in the Sliding Window API. (This is the default.)
|
||||
lists: Optional[List[StrictStr]] = ["*"]
|
||||
# Process all room subscriptions defined in the Room Subscription API. (This is the default.)
|
||||
rooms: Optional[List[StrictStr]] = ["*"]
|
||||
|
||||
to_device: Optional[ToDeviceExtension] = None
|
||||
e2ee: Optional[E2eeExtension] = None
|
||||
account_data: Optional[AccountDataExtension] = None
|
||||
receipts: Optional[ReceiptsExtension] = None
|
||||
typing: Optional[TypingExtension] = None
|
||||
|
||||
conn_id: Optional[str]
|
||||
|
||||
# mypy workaround via https://github.com/pydantic/pydantic/issues/156#issuecomment-1130883884
|
||||
if TYPE_CHECKING:
|
||||
|
||||
@@ -885,3 +885,46 @@ class AwakenableSleeper:
            # Cancel the sleep if we were woken up
            if call.active():
                call.cancel()


class DeferredEvent:
    """Like threading.Event but for async code"""

    def __init__(self, reactor: IReactorTime) -> None:
        self._reactor = reactor
        self._deferred: "defer.Deferred[None]" = defer.Deferred()

    def set(self) -> None:
        if not self._deferred.called:
            self._deferred.callback(None)

    def clear(self) -> None:
        if self._deferred.called:
            self._deferred = defer.Deferred()

    def is_set(self) -> bool:
        return self._deferred.called

    async def wait(self, timeout_seconds: float) -> bool:
        if self.is_set():
            return True

        # Create a deferred that gets called in N seconds
        sleep_deferred: "defer.Deferred[None]" = defer.Deferred()
        call = self._reactor.callLater(timeout_seconds, sleep_deferred.callback, None)

        try:
            await make_deferred_yieldable(
                defer.DeferredList(
                    [sleep_deferred, self._deferred],
                    fireOnOneCallback=True,
                    fireOnOneErrback=True,
                    consumeErrors=True,
                )
            )
        finally:
            # Cancel the sleep if we were woken up
            if call.active():
                call.cancel()

        return self.is_set()
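Editor's note: `DeferredEvent` mirrors `threading.Event` for Twisted code. `wait()` returns True as soon as `set()` is called (or immediately if it already was), and False once the timeout elapses; `clear()` rearms it. A minimal usage sketch, assuming a `reactor` and caller code that are not part of this diff:

    # Sketch only: "reactor" and the surrounding tasks are assumptions for illustration.
    event = DeferredEvent(reactor)

    async def waiter() -> None:
        # Blocks until someone calls event.set(), or gives up after 10 seconds.
        woken = await event.wait(10.0)
        print("woken up" if woken else "timed out")
        event.clear()  # rearm so the next wait() blocks again

    def waker() -> None:
        event.set()  # wakes every pending wait() immediately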
@@ -327,7 +327,7 @@ class StreamChangeCache:
        for entity in r:
            self._entity_to_key.pop(entity, None)

-    def get_max_pos_of_last_change(self, entity: EntityType) -> int:
+    def get_max_pos_of_last_change(self, entity: EntityType) -> Optional[int]:
        """Returns an upper bound of the stream id of the last change to an
        entity.

@@ -335,7 +335,11 @@ class StreamChangeCache:
            entity: The entity to check.

        Return:
-            The stream position of the latest change for the given entity or
-            the earliest known stream position if the entitiy is unknown.
+            The stream position of the latest change for the given entity, if
+            known
        """
-        return self._entity_to_key.get(entity, self._earliest_known_stream_pos)
+        return self._entity_to_key.get(entity)

    def get_earliest_known_position(self) -> int:
        """Returns the earliest position in the cache."""
        return self._earliest_known_stream_pos
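Editor's note: after this change an unknown entity yields None instead of silently falling back to the earliest known stream position, so callers must handle the missing case themselves. A caller-side sketch of the new contract (the names `cache`, `user_id`, `since_token` and `recompute_for` are hypothetical; the conservative fallback shown is illustrative, not mandated by this diff):

    # Hypothetical caller; only the two cache methods come from the hunk above.
    last_change = cache.get_max_pos_of_last_change(user_id)
    if last_change is None:
        # The cache has never seen this entity; assume it may have changed as far
        # back as the cache can see.
        last_change = cache.get_earliest_known_position()
    if last_change > since_token:
        recompute_for(user_id)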
synapse/util/events.py (new file, 29 lines)
@@ -0,0 +1,29 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
#

from synapse.util.stringutils import random_string


def generate_fake_event_id() -> str:
    """
    Generate an event ID from random ASCII characters.

    This is primarily useful for generating fake event IDs in response to
    requests from shadow-banned users.

    Returns:
        A string intended to look like an event ID, but with no actual meaning.
    """
    return "$" + random_string(43)
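Editor's note: the helper simply produces a plausible-looking `$`-prefixed identifier so a shadow-banned sender gets a normally shaped response without any event being persisted. A quick illustration of the value shape (the exact characters are random on each call):

    from synapse.util.events import generate_fake_event_id

    fake_id = generate_fake_event_id()
    # "$" followed by 43 random ASCII characters, 44 characters in total
    assert fake_id.startswith("$") and len(fake_id) == 44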
@@ -43,9 +43,7 @@ from tests.unittest import override_config
class E2eKeysHandlerTestCase(unittest.HomeserverTestCase):
    def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
        self.appservice_api = mock.AsyncMock()
-        return self.setup_test_homeserver(
-            federation_client=mock.Mock(), application_service_api=self.appservice_api
-        )
+        return self.setup_test_homeserver(application_service_api=self.appservice_api)

    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
        self.handler = hs.get_e2e_keys_handler()
@@ -1224,6 +1222,61 @@ class E2eKeysHandlerTestCase(unittest.HomeserverTestCase):
            },
        )

    def test_query_devices_remote_down(self) -> None:
        """Tests that querying keys for a remote user on an unreachable server returns
        results in the "failures" property
        """

        remote_user_id = "@test:other"
        local_user_id = "@test:test"

        # The backoff code treats time zero as special
        self.reactor.advance(5)

        self.hs.get_federation_http_client().agent.request = mock.AsyncMock(  # type: ignore[method-assign]
            side_effect=Exception("boop")
        )

        e2e_handler = self.hs.get_e2e_keys_handler()

        query_result = self.get_success(
            e2e_handler.query_devices(
                {
                    "device_keys": {remote_user_id: []},
                },
                timeout=10,
                from_user_id=local_user_id,
                from_device_id="some_device_id",
            )
        )

        self.assertEqual(
            query_result["failures"],
            {
                "other": {
                    "message": "Failed to send request: Exception: boop",
                    "status": 503,
                }
            },
        )

        # Do it again: we should hit the backoff
        query_result = self.get_success(
            e2e_handler.query_devices(
                {
                    "device_keys": {remote_user_id: []},
                },
                timeout=10,
                from_user_id=local_user_id,
                from_device_id="some_device_id",
            )
        )

        self.assertEqual(
            query_result["failures"],
            {"other": {"message": "Not ready for retry", "status": 503}},
        )

    @parameterized.expand(
        [
            # The remote homeserver's response indicates that this user has 0/1/2 devices.
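Editor's note: the two assertions above pin down the client-visible contract: a key query that cannot reach a remote server does not fail the whole request, it reports the dead server under `failures`, and later attempts made inside the retry backoff window are short-circuited with "Not ready for retry". Roughly, the portion of the result the test exercises looks like this (other keys such as `device_keys` are omitted; this is a sketch of shape, not a full schema):

    {
        "failures": {
            "other": {  # keyed by the remote server name
                "message": "Failed to send request: Exception: boop",
                "status": 503,
            }
        },
    }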
@@ -44,7 +44,7 @@ from synapse.rest.client import login, room
from synapse.server import HomeServer
from synapse.storage.databases.main.events_worker import EventCacheEntry
from synapse.util import Clock
-from synapse.util.stringutils import random_string
+from synapse.util.events import generate_fake_event_id

from tests import unittest
from tests.test_utils import event_injection
@@ -52,10 +52,6 @@ from tests.test_utils import event_injection
logger = logging.getLogger(__name__)


-def generate_fake_event_id() -> str:
-    return "$fake_" + random_string(43)


class FederationTestCase(unittest.FederatingHomeserverTestCase):
    servlets = [
        admin.register_servlets,
(File diff suppressed because it is too large.)
@@ -211,6 +211,7 @@ class SyncTestCase(tests.unittest.HomeserverTestCase):

        # Blow away caches (supported room versions can only change due to a restart).
        self.store.get_rooms_for_user.invalidate_all()
        self.store._get_rooms_for_local_user_where_membership_is_inner.invalidate_all()
        self.store._get_event_cache.clear()
        self.store._event_ref.clear()
@@ -49,8 +49,11 @@ from tests.unittest import TestCase


class ReadMultipartResponseTests(TestCase):
-    data1 = b"\r\n\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nContent-Type: application/json\r\n\r\n{}\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nContent-Type: text/plain\r\nContent-Disposition: inline; filename=test_upload\r\n\r\nfile_"
-    data2 = b"to_stream\r\n--6067d4698f8d40a0a794ea7d7379d53a--\r\n\r\n"
+    multipart_response_data1 = b"\r\n\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nContent-Type: application/json\r\n\r\n{}\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nContent-Type: text/plain\r\nContent-Disposition: inline; filename=test_upload\r\n\r\nfile_"
+    multipart_response_data2 = (
+        b"to_stream\r\n--6067d4698f8d40a0a794ea7d7379d53a--\r\n\r\n"
+    )
+    multipart_response_data_cased = b"\r\n\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\ncOntEnt-type: application/json\r\n\r\n{}\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nContent-tyPe: text/plain\r\nconTent-dispOsition: inline; filename=test_upload\r\n\r\nfile_"

    redirect_data = b"\r\n\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nContent-Type: application/json\r\n\r\n{}\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nLocation: https://cdn.example.org/ab/c1/2345.txt\r\n\r\n--6067d4698f8d40a0a794ea7d7379d53a--\r\n\r\n"

@@ -103,8 +106,31 @@ class ReadMultipartResponseTests(TestCase):
        result, deferred, protocol = self._build_multipart_response(249, 250)

        # Start sending data.
-        protocol.dataReceived(self.data1)
-        protocol.dataReceived(self.data2)
+        protocol.dataReceived(self.multipart_response_data1)
+        protocol.dataReceived(self.multipart_response_data2)
        # Close the connection.
        protocol.connectionLost(Failure(ResponseDone()))

        multipart_response: MultipartResponse = deferred.result  # type: ignore[assignment]

        self.assertEqual(multipart_response.json, b"{}")
        self.assertEqual(result.getvalue(), b"file_to_stream")
        self.assertEqual(multipart_response.length, len(b"file_to_stream"))
        self.assertEqual(multipart_response.content_type, b"text/plain")
        self.assertEqual(
            multipart_response.disposition, b"inline; filename=test_upload"
        )

    def test_parse_file_lowercase_headers(self) -> None:
        """
        Check that a multipart response containing a file is properly parsed
        into the json/file parts, and the json and file are properly captured if the http headers are lowercased
        """
        result, deferred, protocol = self._build_multipart_response(249, 250)

        # Start sending data.
        protocol.dataReceived(self.multipart_response_data_cased)
        protocol.dataReceived(self.multipart_response_data2)
        # Close the connection.
        protocol.connectionLost(Failure(ResponseDone()))

@@ -143,7 +169,7 @@ class ReadMultipartResponseTests(TestCase):
        result, deferred, protocol = self._build_multipart_response(UNKNOWN_LENGTH, 180)

        # Start sending data.
-        protocol.dataReceived(self.data1)
+        protocol.dataReceived(self.multipart_response_data1)

        self.assertEqual(result.getvalue(), b"file_")
        self._assert_error(deferred, protocol)
@@ -154,11 +180,11 @@ class ReadMultipartResponseTests(TestCase):
        result, deferred, protocol = self._build_multipart_response(UNKNOWN_LENGTH, 180)

        # Start sending data.
-        protocol.dataReceived(self.data1)
+        protocol.dataReceived(self.multipart_response_data1)
        self._assert_error(deferred, protocol)

        # More data might have come in.
-        protocol.dataReceived(self.data2)
+        protocol.dataReceived(self.multipart_response_data2)

        self.assertEqual(result.getvalue(), b"file_")
        self._assert_error(deferred, protocol)
@@ -172,7 +198,7 @@ class ReadMultipartResponseTests(TestCase):
        self.assertFalse(deferred.called)

        # Start sending data.
-        protocol.dataReceived(self.data1)
+        protocol.dataReceived(self.multipart_response_data1)
        self._assert_error(deferred, protocol)
        self._cleanup_error(deferred)
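Editor's note: the byte blobs above are easier to read once split on the boundary. In these tests the authenticated federation media response is `multipart/mixed` with a first JSON part (just `{}` here) followed by either the file bytes or, in the redirect variant, a part carrying only a `Location` header. Decoded, `multipart_response_data1` plus `multipart_response_data2` amount to:

    --6067d4698f8d40a0a794ea7d7379d53a
    Content-Type: application/json

    {}
    --6067d4698f8d40a0a794ea7d7379d53a
    Content-Type: text/plain
    Content-Disposition: inline; filename=test_upload

    file_to_stream
    --6067d4698f8d40a0a794ea7d7379d53a--

`redirect_data` has the same first part but replaces the second part's content headers with `Location: https://cdn.example.org/ab/c1/2345.txt` and carries no file body.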
@@ -17,6 +17,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
import io
from typing import Any, Dict, Generator
from unittest.mock import ANY, Mock, create_autospec

@@ -32,7 +33,9 @@ from twisted.web.http import HTTPChannel
from twisted.web.http_headers import Headers

from synapse.api.errors import HttpResponseException, RequestSendFailed
from synapse.api.ratelimiting import Ratelimiter
from synapse.config._base import ConfigError
from synapse.config.ratelimiting import RatelimitSettings
from synapse.http.matrixfederationclient import (
    ByteParser,
    MatrixFederationHttpClient,
@@ -337,6 +340,71 @@ class FederationClientTests(HomeserverTestCase):
        r = self.successResultOf(d)
        self.assertEqual(r.code, 200)

    def test_authed_media_redirect_response(self) -> None:
        """
        Validate that, when following a `Location` redirect, the
        maximum size is _not_ set to the initial response `Content-Length` and
        the media file can be downloaded.
        """
        limiter = Ratelimiter(
            store=self.hs.get_datastores().main,
            clock=self.clock,
            cfg=RatelimitSettings(key="", per_second=0.17, burst_count=1048576),
        )

        output_stream = io.BytesIO()

        d = defer.ensureDeferred(
            self.cl.federation_get_file(
                "testserv:8008", "path", output_stream, limiter, "127.0.0.1", 10000
            )
        )

        self.pump()

        conn = Mock()
        clients = self.reactor.tcpClients
        client = clients[0][2].buildProtocol(None)
        client.makeConnection(conn)

        # Deferred does not have a result
        self.assertNoResult(d)

        redirect_data = b"\r\n\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nContent-Type: application/json\r\n\r\n{}\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nLocation: http://testserv:8008/ab/c1/2345.txt\r\n\r\n--6067d4698f8d40a0a794ea7d7379d53a--\r\n\r\n"
        client.dataReceived(
            b"HTTP/1.1 200 OK\r\n"
            b"Server: Fake\r\n"
            b"Content-Length: %i\r\n"
            b"Content-Type: multipart/mixed; boundary=6067d4698f8d40a0a794ea7d7379d53a\r\n\r\n"
            % (len(redirect_data))
        )
        client.dataReceived(redirect_data)

        # Still no result, not followed the redirect yet
        self.assertNoResult(d)

        # Now send the response returned by the server at `Location`
        conn = Mock()
        clients = self.reactor.tcpClients
        client = clients[1][2].buildProtocol(None)
        client.makeConnection(conn)

        # make sure the length is longer than the initial response
        data = b"Hello world!" * 30
        client.dataReceived(
            b"HTTP/1.1 200 OK\r\n"
            b"Server: Fake\r\n"
            b"Content-Length: %i\r\n"
            b"Content-Type: text/plain\r\n"
            b"\r\n"
            b"%s\r\n"
            b"\r\n" % (len(data), data)
        )

        # We should get a successful response
        length, _, _ = self.successResultOf(d)
        self.assertEqual(length, len(data))

    @parameterized.expand(["get_json", "post_json", "delete_json", "put_json"])
    def test_timeout_reading_body(self, method_name: str) -> None:
        """
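Editor's note on the redirect test: the first response is only a small multipart body whose Content-Length covers the JSON part and the `Location` part, while the file fetched from the redirect target is deliberately larger (`b"Hello world!" * 30`, 360 bytes). The final positional argument to `federation_get_file` (10000) is presumably the caller's allowed maximum size; the point of the test is that this cap is not silently lowered to the first response's Content-Length when the redirect is followed. Back-of-the-envelope:

    # Illustrative numbers only, derived from the test data above.
    first_body = len(redirect_data)        # roughly 200 bytes: JSON part plus Location part
    real_file = len(b"Hello world!" * 30)  # 12 * 30 = 360 bytes, intentionally bigger
    # If the redirect follow reused first_body as its max size, the 360-byte file would
    # be rejected; instead the test expects length == real_file.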
@@ -1057,13 +1057,15 @@ class RemoteDownloadLimiterTestCase(unittest.HomeserverTestCase):
        )
        assert channel.code == 200

    @override_config({"remote_media_download_burst_count": "87M"})
    @patch(
        "synapse.http.matrixfederationclient.read_body_with_max_size",
        read_body_with_max_size_30MiB,
    )
-    def test_download_ratelimit_max_size_sub(self) -> None:
+    def test_download_ratelimit_unknown_length(self) -> None:
        """
-        Test that if no content-length is provided, the default max size is applied instead
+        Test that if no content-length is provided, ratelimit will still be applied after
+        download once length is known
        """

        # mock out actually sending the request
@@ -1077,19 +1079,48 @@ class RemoteDownloadLimiterTestCase(unittest.HomeserverTestCase):

        self.client._send_request = _send_request  # type: ignore

-        # ten requests should go through using the max size (500MB/50MB)
-        for i in range(10):
-            channel2 = self.make_request(
+        # 3 requests should go through (note 3rd one would technically violate ratelimit but
+        # is applied *after* download - the next one will be ratelimited)
+        for i in range(3):
+            channel = self.make_request(
                "GET",
                f"/_matrix/media/v3/download/remote.org/abcdefghijklmnopqrstuvwxy{i}",
                shorthand=False,
            )
-            assert channel2.code == 200
+            assert channel.code == 200

-        # eleventh will hit ratelimit
-        channel3 = self.make_request(
+        # 4th will hit ratelimit
+        channel2 = self.make_request(
            "GET",
            "/_matrix/media/v3/download/remote.org/abcdefghijklmnopqrstuvwxyx",
            shorthand=False,
        )
-        assert channel3.code == 429
+        assert channel2.code == 429

    @override_config({"max_upload_size": "29M"})
    @patch(
        "synapse.http.matrixfederationclient.read_body_with_max_size",
        read_body_with_max_size_30MiB,
    )
    def test_max_download_respected(self) -> None:
        """
        Test that the max download size is enforced - note that max download size is determined
        by the max_upload_size
        """

        # mock out actually sending the request
        async def _send_request(*args: Any, **kwargs: Any) -> IResponse:
            resp = MagicMock(spec=IResponse)
            resp.code = 200
            resp.length = 31457280
            resp.headers = Headers({"Content-Type": ["application/octet-stream"]})
            resp.phrase = b"OK"
            return resp

        self.client._send_request = _send_request  # type: ignore

        channel = self.make_request(
            "GET", "/_matrix/media/v3/download/remote.org/abcd", shorthand=False
        )
        assert channel.code == 502
        assert channel.json_body["errcode"] == "M_TOO_LARGE"
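Editor's note: the numbers in these two tests line up as follows (this is illustrative arithmetic; it assumes Synapse parses the "M" size suffix as mebibytes). Each mocked download body is 30 MiB, the configured burst budget is 87M, and because the ratelimiter is only charged after a body has been read, the third download still succeeds and the fourth is the first to be rejected. The max-size test caps downloads via `max_upload_size: 29M`, which is below the 30 MiB body, hence the 502 with M_TOO_LARGE.

    # Illustrative arithmetic only; the config names are the ones used in the tests above.
    body = 30 * 1024 * 1024           # 31_457_280 bytes, the patched response size
    burst = 87 * 1024 * 1024          # remote_media_download_burst_count: "87M" (assumed mebibytes)
    assert burst // body == 2         # the budget only covers two full bodies...
    # ...but the charge lands *after* each download completes, so requests 1-3 return 200
    # and request 4 is the first one turned away with 429.

    max_download = 29 * 1024 * 1024   # max_upload_size: "29M" also bounds downloads
    assert body > max_download        # hence 502 / M_TOO_LARGE in test_max_download_respected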
tests/rest/client/sliding_sync/__init__.py (new file, 13 lines)
@@ -0,0 +1,13 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
tests/rest/client/sliding_sync/test_connection_tracking.py (new file, 381 lines)
@@ -0,0 +1,381 @@
|
||||
#
|
||||
# This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
#
|
||||
# Copyright (C) 2024 New Vector, Ltd
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as
|
||||
# published by the Free Software Foundation, either version 3 of the
|
||||
# License, or (at your option) any later version.
|
||||
#
|
||||
# See the GNU Affero General Public License for more details:
|
||||
# <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
#
|
||||
import logging
|
||||
|
||||
from parameterized import parameterized
|
||||
|
||||
from twisted.test.proto_helpers import MemoryReactor
|
||||
|
||||
import synapse.rest.admin
|
||||
from synapse.api.constants import EventTypes
|
||||
from synapse.rest.client import login, room, sync
|
||||
from synapse.server import HomeServer
|
||||
from synapse.util import Clock
|
||||
|
||||
from tests.rest.client.sliding_sync.test_sliding_sync import SlidingSyncBase
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class SlidingSyncConnectionTrackingTestCase(SlidingSyncBase):
|
||||
"""
|
||||
Test connection tracking in the Sliding Sync API.
|
||||
"""
|
||||
|
||||
servlets = [
|
||||
synapse.rest.admin.register_servlets,
|
||||
login.register_servlets,
|
||||
room.register_servlets,
|
||||
sync.register_servlets,
|
||||
]
|
||||
|
||||
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
|
||||
self.store = hs.get_datastores().main
|
||||
self.storage_controllers = hs.get_storage_controllers()
|
||||
|
||||
def test_rooms_required_state_incremental_sync_LIVE(self) -> None:
|
||||
"""Test that we only get state updates in incremental sync for rooms
|
||||
we've already seen (LIVE).
|
||||
"""
|
||||
|
||||
user1_id = self.register_user("user1", "pass")
|
||||
user1_tok = self.login(user1_id, "pass")
|
||||
user2_id = self.register_user("user2", "pass")
|
||||
user2_tok = self.login(user2_id, "pass")
|
||||
|
||||
room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
|
||||
self.helper.join(room_id1, user1_id, tok=user1_tok)
|
||||
|
||||
# Make the Sliding Sync request
|
||||
sync_body = {
|
||||
"lists": {
|
||||
"foo-list": {
|
||||
"ranges": [[0, 1]],
|
||||
"required_state": [
|
||||
[EventTypes.Create, ""],
|
||||
[EventTypes.RoomHistoryVisibility, ""],
|
||||
# This one doesn't exist in the room
|
||||
[EventTypes.Name, ""],
|
||||
],
|
||||
"timeline_limit": 0,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
|
||||
|
||||
state_map = self.get_success(
|
||||
self.storage_controllers.state.get_current_state(room_id1)
|
||||
)
|
||||
|
||||
self._assertRequiredStateIncludes(
|
||||
response_body["rooms"][room_id1]["required_state"],
|
||||
{
|
||||
state_map[(EventTypes.Create, "")],
|
||||
state_map[(EventTypes.RoomHistoryVisibility, "")],
|
||||
},
|
||||
exact=True,
|
||||
)
|
||||
|
||||
# Send a state event
|
||||
self.helper.send_state(
|
||||
room_id1, EventTypes.Name, body={"name": "foo"}, tok=user2_tok
|
||||
)
|
||||
|
||||
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
|
||||
|
||||
state_map = self.get_success(
|
||||
self.storage_controllers.state.get_current_state(room_id1)
|
||||
)
|
||||
|
||||
self.assertNotIn("initial", response_body["rooms"][room_id1])
|
||||
self._assertRequiredStateIncludes(
|
||||
response_body["rooms"][room_id1]["required_state"],
|
||||
{
|
||||
state_map[(EventTypes.Name, "")],
|
||||
},
|
||||
exact=True,
|
||||
)
|
||||
|
||||
@parameterized.expand([(False,), (True,)])
|
||||
def test_rooms_timeline_incremental_sync_PREVIOUSLY(self, limited: bool) -> None:
|
||||
"""
|
||||
Test getting room data where we have previously sent down the room, but
|
||||
we missed sending down some timeline events previously and so its status
|
||||
is considered PREVIOUSLY.
|
||||
|
||||
There are two versions of this test, one where there are more messages
|
||||
than the timeline limit, and one where there isn't.
|
||||
"""
|
||||
|
||||
user1_id = self.register_user("user1", "pass")
|
||||
user1_tok = self.login(user1_id, "pass")
|
||||
|
||||
room_id1 = self.helper.create_room_as(user1_id, tok=user1_tok)
|
||||
room_id2 = self.helper.create_room_as(user1_id, tok=user1_tok)
|
||||
|
||||
self.helper.send(room_id1, "msg", tok=user1_tok)
|
||||
|
||||
timeline_limit = 5
|
||||
sync_body = {
|
||||
"lists": {
|
||||
"foo-list": {
|
||||
"ranges": [[0, 0]],
|
||||
"required_state": [],
|
||||
"timeline_limit": timeline_limit,
|
||||
}
|
||||
},
|
||||
"conn_id": "conn_id",
|
||||
}
|
||||
|
||||
# The first room gets sent down the initial sync
|
||||
response_body, initial_from_token = self.do_sync(sync_body, tok=user1_tok)
|
||||
self.assertCountEqual(
|
||||
response_body["rooms"].keys(), {room_id1}, response_body["rooms"]
|
||||
)
|
||||
|
||||
# We now send down some events in room1 (depending on the test param).
|
||||
expected_events = [] # The set of events in the timeline
|
||||
if limited:
|
||||
for _ in range(10):
|
||||
resp = self.helper.send(room_id1, "msg1", tok=user1_tok)
|
||||
expected_events.append(resp["event_id"])
|
||||
else:
|
||||
resp = self.helper.send(room_id1, "msg1", tok=user1_tok)
|
||||
expected_events.append(resp["event_id"])
|
||||
|
||||
# A second message happens in the other room, so room1 won't get sent down.
|
||||
self.helper.send(room_id2, "msg", tok=user1_tok)
|
||||
|
||||
# Only the second room gets sent down sync.
|
||||
response_body, from_token = self.do_sync(
|
||||
sync_body, since=initial_from_token, tok=user1_tok
|
||||
)
|
||||
|
||||
self.assertCountEqual(
|
||||
response_body["rooms"].keys(), {room_id2}, response_body["rooms"]
|
||||
)
|
||||
|
||||
# We now send another event to room1, so we should sync all the missing events.
|
||||
resp = self.helper.send(room_id1, "msg2", tok=user1_tok)
|
||||
expected_events.append(resp["event_id"])
|
||||
|
||||
# This sync should contain the messages from room1 not yet sent down.
|
||||
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
|
||||
|
||||
self.assertCountEqual(
|
||||
response_body["rooms"].keys(), {room_id1}, response_body["rooms"]
|
||||
)
|
||||
self.assertNotIn("initial", response_body["rooms"][room_id1])
|
||||
|
||||
self.assertEqual(
|
||||
[ev["event_id"] for ev in response_body["rooms"][room_id1]["timeline"]],
|
||||
expected_events[-timeline_limit:],
|
||||
)
|
||||
self.assertEqual(response_body["rooms"][room_id1]["limited"], limited)
|
||||
self.assertEqual(response_body["rooms"][room_id1].get("required_state"), None)
|
||||
|
||||
def test_rooms_required_state_incremental_sync_PREVIOUSLY(self) -> None:
|
||||
"""
|
||||
Test getting room data where we have previously sent down the room, but
|
||||
we missed sending down some state previously and so its status is
|
||||
considered PREVIOUSLY.
|
||||
"""
|
||||
|
||||
user1_id = self.register_user("user1", "pass")
|
||||
user1_tok = self.login(user1_id, "pass")
|
||||
|
||||
room_id1 = self.helper.create_room_as(user1_id, tok=user1_tok)
|
||||
room_id2 = self.helper.create_room_as(user1_id, tok=user1_tok)
|
||||
|
||||
self.helper.send(room_id1, "msg", tok=user1_tok)
|
||||
|
||||
sync_body = {
|
||||
"lists": {
|
||||
"foo-list": {
|
||||
"ranges": [[0, 0]],
|
||||
"required_state": [
|
||||
[EventTypes.Create, ""],
|
||||
[EventTypes.RoomHistoryVisibility, ""],
|
||||
# This one doesn't exist in the room
|
||||
[EventTypes.Name, ""],
|
||||
],
|
||||
"timeline_limit": 0,
|
||||
}
|
||||
},
|
||||
"conn_id": "conn_id",
|
||||
}
|
||||
|
||||
# The first room gets sent down the initial sync
|
||||
response_body, initial_from_token = self.do_sync(sync_body, tok=user1_tok)
|
||||
self.assertCountEqual(
|
||||
response_body["rooms"].keys(), {room_id1}, response_body["rooms"]
|
||||
)
|
||||
|
||||
# We now send down some state in room1
|
||||
resp = self.helper.send_state(
|
||||
room_id1, EventTypes.Name, {"name": "foo"}, tok=user1_tok
|
||||
)
|
||||
name_change_id = resp["event_id"]
|
||||
|
||||
# A second message happens in the other room, so room1 won't get sent down.
|
||||
self.helper.send(room_id2, "msg", tok=user1_tok)
|
||||
|
||||
# Only the second room gets sent down sync.
|
||||
response_body, from_token = self.do_sync(
|
||||
sync_body, since=initial_from_token, tok=user1_tok
|
||||
)
|
||||
|
||||
self.assertCountEqual(
|
||||
response_body["rooms"].keys(), {room_id2}, response_body["rooms"]
|
||||
)
|
||||
|
||||
# We now send another event to room1, so we should sync all the missing state.
|
||||
self.helper.send(room_id1, "msg", tok=user1_tok)
|
||||
|
||||
# This sync should contain the state changes from room1.
|
||||
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
|
||||
|
||||
self.assertCountEqual(
|
||||
response_body["rooms"].keys(), {room_id1}, response_body["rooms"]
|
||||
)
|
||||
self.assertNotIn("initial", response_body["rooms"][room_id1])
|
||||
|
||||
# We should only see the name change.
|
||||
self.assertEqual(
|
||||
[
|
||||
ev["event_id"]
|
||||
for ev in response_body["rooms"][room_id1]["required_state"]
|
||||
],
|
||||
[name_change_id],
|
||||
)
|
||||
|
||||
def test_rooms_required_state_incremental_sync_NEVER(self) -> None:
|
||||
"""
|
||||
Test getting `required_state` where we have NEVER sent down the room before
|
||||
"""
|
||||
|
||||
user1_id = self.register_user("user1", "pass")
|
||||
user1_tok = self.login(user1_id, "pass")
|
||||
|
||||
room_id1 = self.helper.create_room_as(user1_id, tok=user1_tok)
|
||||
room_id2 = self.helper.create_room_as(user1_id, tok=user1_tok)
|
||||
|
||||
self.helper.send(room_id1, "msg", tok=user1_tok)
|
||||
|
||||
sync_body = {
|
||||
"lists": {
|
||||
"foo-list": {
|
||||
"ranges": [[0, 0]],
|
||||
"required_state": [
|
||||
[EventTypes.Create, ""],
|
||||
[EventTypes.RoomHistoryVisibility, ""],
|
||||
# This one doesn't exist in the room
|
||||
[EventTypes.Name, ""],
|
||||
],
|
||||
"timeline_limit": 1,
|
||||
}
|
||||
},
|
||||
}
|
||||
|
||||
# A message happens in the other room, so room1 won't get sent down.
|
||||
self.helper.send(room_id2, "msg", tok=user1_tok)
|
||||
|
||||
# Only the second room gets sent down sync.
|
||||
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
|
||||
|
||||
self.assertCountEqual(
|
||||
response_body["rooms"].keys(), {room_id2}, response_body["rooms"]
|
||||
)
|
||||
|
||||
# We now send another event to room1, so we should send down the full
|
||||
# room.
|
||||
self.helper.send(room_id1, "msg2", tok=user1_tok)
|
||||
|
||||
# This sync should contain the messages from room1 not yet sent down.
|
||||
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
|
||||
|
||||
self.assertCountEqual(
|
||||
response_body["rooms"].keys(), {room_id1}, response_body["rooms"]
|
||||
)
|
||||
|
||||
self.assertEqual(response_body["rooms"][room_id1]["initial"], True)
|
||||
|
||||
state_map = self.get_success(
|
||||
self.storage_controllers.state.get_current_state(room_id1)
|
||||
)
|
||||
|
||||
self._assertRequiredStateIncludes(
|
||||
response_body["rooms"][room_id1]["required_state"],
|
||||
{
|
||||
state_map[(EventTypes.Create, "")],
|
||||
state_map[(EventTypes.RoomHistoryVisibility, "")],
|
||||
},
|
||||
exact=True,
|
||||
)
|
||||
|
||||
def test_rooms_timeline_incremental_sync_NEVER(self) -> None:
|
||||
"""
|
||||
Test getting timeline room data where we have NEVER sent down the room
|
||||
before
|
||||
"""
|
||||
|
||||
user1_id = self.register_user("user1", "pass")
|
||||
user1_tok = self.login(user1_id, "pass")
|
||||
|
||||
room_id1 = self.helper.create_room_as(user1_id, tok=user1_tok)
|
||||
room_id2 = self.helper.create_room_as(user1_id, tok=user1_tok)
|
||||
|
||||
sync_body = {
|
||||
"lists": {
|
||||
"foo-list": {
|
||||
"ranges": [[0, 0]],
|
||||
"required_state": [],
|
||||
"timeline_limit": 5,
|
||||
}
|
||||
},
|
||||
}
|
||||
|
||||
expected_events = []
|
||||
for _ in range(4):
|
||||
resp = self.helper.send(room_id1, "msg", tok=user1_tok)
|
||||
expected_events.append(resp["event_id"])
|
||||
|
||||
# A message happens in the other room, so room1 won't get sent down.
|
||||
self.helper.send(room_id2, "msg", tok=user1_tok)
|
||||
|
||||
# Only the second room gets sent down sync.
|
||||
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
|
||||
|
||||
self.assertCountEqual(
|
||||
response_body["rooms"].keys(), {room_id2}, response_body["rooms"]
|
||||
)
|
||||
|
||||
# We now send another event to room1 so it comes down sync
|
||||
resp = self.helper.send(room_id1, "msg2", tok=user1_tok)
|
||||
expected_events.append(resp["event_id"])
|
||||
|
||||
# This sync should contain the messages from room1 not yet sent down.
|
||||
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
|
||||
|
||||
self.assertCountEqual(
|
||||
response_body["rooms"].keys(), {room_id1}, response_body["rooms"]
|
||||
)
|
||||
|
||||
self.assertEqual(
|
||||
[ev["event_id"] for ev in response_body["rooms"][room_id1]["timeline"]],
|
||||
expected_events,
|
||||
)
|
||||
self.assertEqual(response_body["rooms"][room_id1]["limited"], True)
|
||||
self.assertEqual(response_body["rooms"][room_id1]["initial"], True)
|
||||
tests/rest/client/sliding_sync/test_extension_account_data.py (new file, 495 lines)
@@ -0,0 +1,495 @@
|
||||
#
|
||||
# This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
#
|
||||
# Copyright (C) 2024 New Vector, Ltd
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as
|
||||
# published by the Free Software Foundation, either version 3 of the
|
||||
# License, or (at your option) any later version.
|
||||
#
|
||||
# See the GNU Affero General Public License for more details:
|
||||
# <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
#
|
||||
import logging
|
||||
|
||||
from twisted.test.proto_helpers import MemoryReactor
|
||||
|
||||
import synapse.rest.admin
|
||||
from synapse.api.constants import AccountDataTypes
|
||||
from synapse.rest.client import login, room, sendtodevice, sync
|
||||
from synapse.server import HomeServer
|
||||
from synapse.types import StreamKeyType
|
||||
from synapse.util import Clock
|
||||
|
||||
from tests.rest.client.sliding_sync.test_sliding_sync import SlidingSyncBase
|
||||
from tests.server import TimedOutException
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
|
||||
"""Tests for the account_data sliding sync extension"""
|
||||
|
||||
servlets = [
|
||||
synapse.rest.admin.register_servlets,
|
||||
login.register_servlets,
|
||||
room.register_servlets,
|
||||
sync.register_servlets,
|
||||
sendtodevice.register_servlets,
|
||||
]
|
||||
|
||||
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
|
||||
self.store = hs.get_datastores().main
|
||||
self.account_data_handler = hs.get_account_data_handler()
|
||||
|
||||
def test_no_data_initial_sync(self) -> None:
|
||||
"""
|
||||
Test that enabling the account_data extension works during an initial sync,
|
||||
even if there is no-data.
|
||||
"""
|
||||
user1_id = self.register_user("user1", "pass")
|
||||
user1_tok = self.login(user1_id, "pass")
|
||||
|
||||
# Make an initial Sliding Sync request with the account_data extension enabled
|
||||
sync_body = {
|
||||
"lists": {},
|
||||
"extensions": {
|
||||
"account_data": {
|
||||
"enabled": True,
|
||||
}
|
||||
},
|
||||
}
|
||||
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
|
||||
|
||||
self.assertIncludes(
|
||||
{
|
||||
global_event["type"]
|
||||
for global_event in response_body["extensions"]["account_data"].get(
|
||||
"global"
|
||||
)
|
||||
},
|
||||
# Even though we don't have any global account data set, Synapse saves some
|
||||
# default push rules for us.
|
||||
{AccountDataTypes.PUSH_RULES},
|
||||
exact=True,
|
||||
)
|
||||
self.assertIncludes(
|
||||
response_body["extensions"]["account_data"].get("rooms").keys(),
|
||||
set(),
|
||||
exact=True,
|
||||
)
|
||||
|
||||
def test_no_data_incremental_sync(self) -> None:
|
||||
"""
|
||||
Test that enabling account_data extension works during an incremental sync, even
|
||||
if there is no-data.
|
||||
"""
|
||||
user1_id = self.register_user("user1", "pass")
|
||||
user1_tok = self.login(user1_id, "pass")
|
||||
|
||||
sync_body = {
|
||||
"lists": {},
|
||||
"extensions": {
|
||||
"account_data": {
|
||||
"enabled": True,
|
||||
}
|
||||
},
|
||||
}
|
||||
_, from_token = self.do_sync(sync_body, tok=user1_tok)
|
||||
|
||||
# Make an incremental Sliding Sync request with the account_data extension enabled
|
||||
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
|
||||
|
||||
# There has been no account data changes since the `from_token` so we shouldn't
|
||||
# see any account data here.
|
||||
self.assertIncludes(
|
||||
{
|
||||
global_event["type"]
|
||||
for global_event in response_body["extensions"]["account_data"].get(
|
||||
"global"
|
||||
)
|
||||
},
|
||||
set(),
|
||||
exact=True,
|
||||
)
|
||||
self.assertIncludes(
|
||||
response_body["extensions"]["account_data"].get("rooms").keys(),
|
||||
set(),
|
||||
exact=True,
|
||||
)
|
||||
|
||||
def test_global_account_data_initial_sync(self) -> None:
|
||||
"""
|
||||
On initial sync, we should return all global account data on initial sync.
|
||||
"""
|
||||
user1_id = self.register_user("user1", "pass")
|
||||
user1_tok = self.login(user1_id, "pass")
|
||||
|
||||
# Update the global account data
|
||||
self.get_success(
|
||||
self.account_data_handler.add_account_data_for_user(
|
||||
user_id=user1_id,
|
||||
account_data_type="org.matrix.foobarbaz",
|
||||
content={"foo": "bar"},
|
||||
)
|
||||
)
|
||||
|
||||
# Make an initial Sliding Sync request with the account_data extension enabled
|
||||
sync_body = {
|
||||
"lists": {},
|
||||
"extensions": {
|
||||
"account_data": {
|
||||
"enabled": True,
|
||||
}
|
||||
},
|
||||
}
|
||||
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
|
||||
|
||||
# It should show us all of the global account data
|
||||
self.assertIncludes(
|
||||
{
|
||||
global_event["type"]
|
||||
for global_event in response_body["extensions"]["account_data"].get(
|
||||
"global"
|
||||
)
|
||||
},
|
||||
{AccountDataTypes.PUSH_RULES, "org.matrix.foobarbaz"},
|
||||
exact=True,
|
||||
)
|
||||
self.assertIncludes(
|
||||
response_body["extensions"]["account_data"].get("rooms").keys(),
|
||||
set(),
|
||||
exact=True,
|
||||
)
|
||||
|
||||
def test_global_account_data_incremental_sync(self) -> None:
|
||||
"""
|
||||
On incremental sync, we should only return account data that has changed since the
|
||||
`from_token`.
|
||||
"""
|
||||
user1_id = self.register_user("user1", "pass")
|
||||
user1_tok = self.login(user1_id, "pass")
|
||||
|
||||
# Add some global account data
|
||||
self.get_success(
|
||||
self.account_data_handler.add_account_data_for_user(
|
||||
user_id=user1_id,
|
||||
account_data_type="org.matrix.foobarbaz",
|
||||
content={"foo": "bar"},
|
||||
)
|
||||
)
|
||||
|
||||
sync_body = {
|
||||
"lists": {},
|
||||
"extensions": {
|
||||
"account_data": {
|
||||
"enabled": True,
|
||||
}
|
||||
},
|
||||
}
|
||||
_, from_token = self.do_sync(sync_body, tok=user1_tok)
|
||||
|
||||
# Add some other global account data
|
||||
self.get_success(
|
||||
self.account_data_handler.add_account_data_for_user(
|
||||
user_id=user1_id,
|
||||
account_data_type="org.matrix.doodardaz",
|
||||
content={"doo": "dar"},
|
||||
)
|
||||
)
|
||||
|
||||
# Make an incremental Sliding Sync request with the account_data extension enabled
|
||||
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
|
||||
|
||||
self.assertIncludes(
|
||||
{
|
||||
global_event["type"]
|
||||
for global_event in response_body["extensions"]["account_data"].get(
|
||||
"global"
|
||||
)
|
||||
},
|
||||
# We should only see the new global account data that happened after the `from_token`
|
||||
{"org.matrix.doodardaz"},
|
||||
exact=True,
|
||||
)
|
||||
self.assertIncludes(
|
||||
response_body["extensions"]["account_data"].get("rooms").keys(),
|
||||
set(),
|
||||
exact=True,
|
||||
)
|
||||
|
||||
def test_room_account_data_initial_sync(self) -> None:
|
||||
"""
|
||||
On initial sync, we return all account data for a given room but only for
|
||||
rooms that we request and are being returned in the Sliding Sync response.
|
||||
"""
|
||||
user1_id = self.register_user("user1", "pass")
|
||||
user1_tok = self.login(user1_id, "pass")
|
||||
|
||||
# Create a room and add some room account data
|
||||
room_id1 = self.helper.create_room_as(user1_id, tok=user1_tok)
|
||||
self.get_success(
|
||||
self.account_data_handler.add_account_data_to_room(
|
||||
user_id=user1_id,
|
||||
room_id=room_id1,
|
||||
account_data_type="org.matrix.roorarraz",
|
||||
content={"roo": "rar"},
|
||||
)
|
||||
)
|
||||
|
||||
# Create another room with some room account data
|
||||
room_id2 = self.helper.create_room_as(user1_id, tok=user1_tok)
|
||||
self.get_success(
|
||||
self.account_data_handler.add_account_data_to_room(
|
||||
user_id=user1_id,
|
||||
room_id=room_id2,
|
||||
account_data_type="org.matrix.roorarraz",
|
||||
content={"roo": "rar"},
|
||||
)
|
||||
)
|
||||
|
||||
# Make an initial Sliding Sync request with the account_data extension enabled
|
||||
sync_body = {
|
||||
"lists": {},
|
||||
"room_subscriptions": {
|
||||
room_id1: {
|
||||
"required_state": [],
|
||||
"timeline_limit": 0,
|
||||
}
|
||||
},
|
||||
"extensions": {
|
||||
"account_data": {
|
||||
"enabled": True,
|
||||
"rooms": [room_id1, room_id2],
|
||||
}
|
||||
},
|
||||
}
|
||||
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
|
||||
|
||||
self.assertIsNotNone(response_body["extensions"]["account_data"].get("global"))
|
||||
# Even though we requested room2, we only expect room1 to show up because that's
|
||||
# the only room in the Sliding Sync response (room2 is not one of our room
|
||||
# subscriptions or in a sliding window list).
|
||||
self.assertIncludes(
|
||||
response_body["extensions"]["account_data"].get("rooms").keys(),
|
||||
{room_id1},
|
||||
exact=True,
|
||||
)
|
||||
self.assertIncludes(
|
||||
{
|
||||
event["type"]
|
||||
for event in response_body["extensions"]["account_data"]
|
||||
.get("rooms")
|
||||
.get(room_id1)
|
||||
},
|
||||
{"org.matrix.roorarraz"},
|
||||
exact=True,
|
||||
)
|
||||
|
||||
def test_room_account_data_incremental_sync(self) -> None:
|
||||
"""
|
||||
On incremental sync, we return all account data for a given room but only for
|
||||
rooms that we request and are being returned in the Sliding Sync response.
|
||||
"""
|
||||
user1_id = self.register_user("user1", "pass")
|
||||
user1_tok = self.login(user1_id, "pass")
|
||||
|
||||
# Create a room and add some room account data
|
||||
room_id1 = self.helper.create_room_as(user1_id, tok=user1_tok)
|
||||
self.get_success(
|
||||
self.account_data_handler.add_account_data_to_room(
|
||||
user_id=user1_id,
|
||||
room_id=room_id1,
|
||||
account_data_type="org.matrix.roorarraz",
|
||||
content={"roo": "rar"},
|
||||
)
|
||||
)
|
||||
|
||||
# Create another room with some room account data
|
||||
room_id2 = self.helper.create_room_as(user1_id, tok=user1_tok)
|
||||
self.get_success(
|
||||
self.account_data_handler.add_account_data_to_room(
|
||||
user_id=user1_id,
|
||||
room_id=room_id2,
|
||||
account_data_type="org.matrix.roorarraz",
|
||||
content={"roo": "rar"},
|
||||
)
|
||||
)
|
||||
|
||||
sync_body = {
|
||||
"lists": {},
|
||||
"room_subscriptions": {
|
||||
room_id1: {
|
||||
"required_state": [],
|
||||
"timeline_limit": 0,
|
||||
}
|
||||
},
|
||||
"extensions": {
|
||||
"account_data": {
|
||||
"enabled": True,
|
||||
"rooms": [room_id1, room_id2],
|
||||
}
|
||||
},
|
||||
}
|
||||
_, from_token = self.do_sync(sync_body, tok=user1_tok)
|
||||
|
||||
# Add some other room account data
|
||||
self.get_success(
|
||||
self.account_data_handler.add_account_data_to_room(
|
||||
user_id=user1_id,
|
||||
room_id=room_id1,
|
||||
account_data_type="org.matrix.roorarraz2",
|
||||
content={"roo": "rar"},
|
||||
)
|
||||
)
|
||||
self.get_success(
|
||||
self.account_data_handler.add_account_data_to_room(
|
||||
user_id=user1_id,
|
||||
room_id=room_id2,
|
||||
account_data_type="org.matrix.roorarraz2",
|
||||
content={"roo": "rar"},
|
||||
)
|
||||
)
|
||||
|
||||
# Make an incremental Sliding Sync request with the account_data extension enabled
|
||||
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
|
||||
|
||||
self.assertIsNotNone(response_body["extensions"]["account_data"].get("global"))
|
||||
# Even though we requested room2, we only expect room1 to show up because that's
|
||||
# the only room in the Sliding Sync response (room2 is not one of our room
|
||||
# subscriptions or in a sliding window list).
|
||||
self.assertIncludes(
|
||||
response_body["extensions"]["account_data"].get("rooms").keys(),
|
||||
{room_id1},
|
||||
exact=True,
|
||||
)
|
||||
# We should only see the new room account data that happened after the `from_token`
|
||||
self.assertIncludes(
|
||||
{
|
||||
event["type"]
|
||||
for event in response_body["extensions"]["account_data"]
|
||||
.get("rooms")
|
||||
.get(room_id1)
|
||||
},
|
||||
{"org.matrix.roorarraz2"},
|
||||
exact=True,
|
||||
)
|
||||
|
||||
def test_wait_for_new_data(self) -> None:
|
||||
"""
|
||||
Test to make sure that the Sliding Sync request waits for new data to arrive.
|
||||
|
||||
(Only applies to incremental syncs with a `timeout` specified)
|
||||
"""
|
||||
user1_id = self.register_user("user1", "pass")
|
||||
user1_tok = self.login(user1_id, "pass")
|
||||
user2_id = self.register_user("user2", "pass")
|
||||
user2_tok = self.login(user2_id, "pass")
|
||||
|
||||
room_id = self.helper.create_room_as(user2_id, tok=user2_tok)
|
||||
self.helper.join(room_id, user1_id, tok=user1_tok)
|
||||
|
||||
sync_body = {
|
||||
"lists": {},
|
||||
"extensions": {
|
||||
"account_data": {
|
||||
"enabled": True,
|
||||
}
|
||||
},
|
||||
}
|
||||
_, from_token = self.do_sync(sync_body, tok=user1_tok)
|
||||
|
||||
# Make an incremental Sliding Sync request with the account_data extension enabled
|
||||
channel = self.make_request(
|
||||
"POST",
|
||||
self.sync_endpoint + f"?timeout=10000&pos={from_token}",
|
||||
content=sync_body,
|
||||
access_token=user1_tok,
|
||||
await_result=False,
|
||||
)
|
||||
# Block for 5 seconds to make sure we are `notifier.wait_for_events(...)`
|
||||
with self.assertRaises(TimedOutException):
|
||||
channel.await_result(timeout_ms=5000)
|
||||
# Bump the global account data to trigger new results
|
||||
self.get_success(
|
||||
self.account_data_handler.add_account_data_for_user(
|
||||
user1_id,
|
||||
"org.matrix.foobarbaz",
|
||||
{"foo": "bar"},
|
||||
)
|
||||
)
|
||||
# Should respond before the 10 second timeout
|
||||
channel.await_result(timeout_ms=3000)
|
||||
self.assertEqual(channel.code, 200, channel.json_body)
|
||||
|
||||
# We should see the global account data update
|
||||
self.assertIncludes(
|
||||
{
|
||||
global_event["type"]
|
||||
for global_event in channel.json_body["extensions"]["account_data"].get(
|
||||
"global"
|
||||
)
|
||||
},
|
||||
{"org.matrix.foobarbaz"},
|
||||
exact=True,
|
||||
)
|
||||
self.assertIncludes(
|
||||
channel.json_body["extensions"]["account_data"].get("rooms").keys(),
|
||||
set(),
|
||||
exact=True,
|
||||
)
|
||||
|
||||
def test_wait_for_new_data_timeout(self) -> None:
|
||||
"""
|
||||
Test to make sure that the Sliding Sync request waits for new data to arrive but
|
||||
no data ever arrives so we timeout. We're also making sure that the default data
|
||||
from the account_data extension doesn't trigger a false-positive for new data.
|
||||
"""
|
||||
user1_id = self.register_user("user1", "pass")
|
||||
user1_tok = self.login(user1_id, "pass")
|
||||
|
||||
sync_body = {
|
||||
"lists": {},
|
||||
"extensions": {
|
||||
"account_data": {
|
||||
"enabled": True,
|
||||
}
|
||||
},
|
||||
}
|
||||
_, from_token = self.do_sync(sync_body, tok=user1_tok)
|
||||
|
||||
# Make the Sliding Sync request
|
||||
channel = self.make_request(
|
||||
"POST",
|
||||
self.sync_endpoint + f"?timeout=10000&pos={from_token}",
|
||||
content=sync_body,
|
||||
access_token=user1_tok,
|
||||
await_result=False,
|
||||
)
|
||||
# Block for 5 seconds to make sure we are `notifier.wait_for_events(...)`
|
||||
with self.assertRaises(TimedOutException):
|
||||
channel.await_result(timeout_ms=5000)
|
||||
# Wake-up `notifier.wait_for_events(...)` that will cause us test
|
||||
# `SlidingSyncResult.__bool__` for new results.
|
||||
self._bump_notifier_wait_for_events(
|
||||
user1_id,
|
||||
# We choose `StreamKeyType.PRESENCE` because we're testing for account data
|
||||
# and don't want to contaminate the account data results using
|
||||
# `StreamKeyType.ACCOUNT_DATA`.
|
||||
wake_stream_key=StreamKeyType.PRESENCE,
|
||||
)
|
||||
# Block for a little bit more to ensure we don't see any new results.
|
||||
with self.assertRaises(TimedOutException):
|
||||
channel.await_result(timeout_ms=4000)
|
||||
# Wait for the sync to complete (wait for the rest of the 10 second timeout,
|
||||
# 5000 + 4000 + 1200 > 10000)
|
||||
channel.await_result(timeout_ms=1200)
|
||||
self.assertEqual(channel.code, 200, channel.json_body)
|
||||
|
||||
self.assertIsNotNone(
|
||||
channel.json_body["extensions"]["account_data"].get("global")
|
||||
)
|
||||
self.assertIsNotNone(
|
||||
channel.json_body["extensions"]["account_data"].get("rooms")
|
||||
)
|
||||
tests/rest/client/sliding_sync/test_extension_e2ee.py (new file, 441 lines)
@@ -0,0 +1,441 @@
|
||||
#
|
||||
# This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
#
|
||||
# Copyright (C) 2024 New Vector, Ltd
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as
|
||||
# published by the Free Software Foundation, either version 3 of the
|
||||
# License, or (at your option) any later version.
|
||||
#
|
||||
# See the GNU Affero General Public License for more details:
|
||||
# <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
#
|
||||
import logging
|
||||
|
||||
from twisted.test.proto_helpers import MemoryReactor
|
||||
|
||||
import synapse.rest.admin
|
||||
from synapse.rest.client import devices, login, room, sync
|
||||
from synapse.server import HomeServer
|
||||
from synapse.types import JsonDict, StreamKeyType
|
||||
from synapse.util import Clock
|
||||
|
||||
from tests.rest.client.sliding_sync.test_sliding_sync import SlidingSyncBase
|
||||
from tests.server import TimedOutException
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class SlidingSyncE2eeExtensionTestCase(SlidingSyncBase):
|
||||
"""Tests for the e2ee sliding sync extension"""
|
||||
|
||||
servlets = [
|
||||
synapse.rest.admin.register_servlets,
|
||||
login.register_servlets,
|
||||
room.register_servlets,
|
||||
sync.register_servlets,
|
||||
devices.register_servlets,
|
||||
]
|
||||
|
||||
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
|
||||
self.store = hs.get_datastores().main
|
||||
self.e2e_keys_handler = hs.get_e2e_keys_handler()
|
||||
|
||||
def test_no_data_initial_sync(self) -> None:
|
||||
"""
|
||||
Test that enabling the e2ee extension works during an initial sync, even if there
|
||||
is no-data
|
||||
"""
|
||||
user1_id = self.register_user("user1", "pass")
|
||||
user1_tok = self.login(user1_id, "pass")
|
||||
|
||||
# Make an initial Sliding Sync request with the e2ee extension enabled
|
||||
sync_body = {
|
||||
"lists": {},
|
||||
"extensions": {
|
||||
"e2ee": {
|
||||
"enabled": True,
|
||||
}
|
||||
},
|
||||
}
|
||||
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
|
||||
|
||||
# Device list updates are only present for incremental syncs
|
||||
self.assertIsNone(response_body["extensions"]["e2ee"].get("device_lists"))
|
||||
|
||||
# Both of these should be present even when empty
|
||||
self.assertEqual(
|
||||
response_body["extensions"]["e2ee"]["device_one_time_keys_count"],
|
||||
{
|
||||
# This is always present because of
|
||||
# https://github.com/element-hq/element-android/issues/3725 and
|
||||
# https://github.com/matrix-org/synapse/issues/10456
|
||||
"signed_curve25519": 0
|
||||
},
|
||||
)
|
||||
self.assertEqual(
|
||||
response_body["extensions"]["e2ee"]["device_unused_fallback_key_types"],
|
||||
[],
|
||||
)
|
||||
|
||||
def test_no_data_incremental_sync(self) -> None:
|
||||
"""
|
||||
Test that enabling e2ee extension works during an incremental sync, even if
|
||||
there is no-data
|
||||
"""
|
||||
user1_id = self.register_user("user1", "pass")
|
||||
user1_tok = self.login(user1_id, "pass")
|
||||
|
||||
sync_body = {
|
||||
"lists": {},
|
||||
"extensions": {
|
||||
"e2ee": {
|
||||
"enabled": True,
|
||||
}
|
||||
},
|
||||
}
|
||||
_, from_token = self.do_sync(sync_body, tok=user1_tok)
|
||||
|
||||
# Make an incremental Sliding Sync request with the e2ee extension enabled
|
||||
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
|
||||
|
||||
# Device list shows up for incremental syncs
|
||||
self.assertEqual(
|
||||
response_body["extensions"]["e2ee"].get("device_lists", {}).get("changed"),
|
||||
[],
|
||||
)
|
||||
self.assertEqual(
|
||||
response_body["extensions"]["e2ee"].get("device_lists", {}).get("left"),
|
||||
[],
|
||||
)
|
||||
|
||||
# Both of these should be present even when empty
|
||||
self.assertEqual(
|
||||
response_body["extensions"]["e2ee"]["device_one_time_keys_count"],
|
||||
{
|
||||
# Note that "signed_curve25519" is always returned in key count responses
|
||||
# regardless of whether we uploaded any keys for it. This is necessary until
|
||||
# https://github.com/matrix-org/matrix-doc/issues/3298 is fixed.
|
||||
#
|
||||
# Also related:
|
||||
# https://github.com/element-hq/element-android/issues/3725 and
|
||||
# https://github.com/matrix-org/synapse/issues/10456
|
||||
"signed_curve25519": 0
|
||||
},
|
||||
)
|
||||
self.assertEqual(
|
||||
response_body["extensions"]["e2ee"]["device_unused_fallback_key_types"],
|
||||
[],
|
||||
)
|
||||
|
||||
def test_wait_for_new_data(self) -> None:
|
||||
"""
|
||||
Test to make sure that the Sliding Sync request waits for new data to arrive.
|
||||
|
||||
(Only applies to incremental syncs with a `timeout` specified)
|
||||
"""
|
||||
user1_id = self.register_user("user1", "pass")
|
||||
user1_tok = self.login(user1_id, "pass")
|
||||
user2_id = self.register_user("user2", "pass")
|
||||
user2_tok = self.login(user2_id, "pass")
|
||||
test_device_id = "TESTDEVICE"
|
||||
user3_id = self.register_user("user3", "pass")
|
||||
user3_tok = self.login(user3_id, "pass", device_id=test_device_id)
|
||||
|
||||
room_id = self.helper.create_room_as(user2_id, tok=user2_tok)
|
||||
self.helper.join(room_id, user1_id, tok=user1_tok)
|
||||
self.helper.join(room_id, user3_id, tok=user3_tok)
|
||||
|
||||
sync_body = {
|
||||
"lists": {},
|
||||
"extensions": {
|
||||
"e2ee": {
|
||||
"enabled": True,
|
||||
}
|
||||
},
|
||||
}
|
||||
_, from_token = self.do_sync(sync_body, tok=user1_tok)
|
||||
|
||||
# Make the Sliding Sync request
|
||||
channel = self.make_request(
|
||||
"POST",
|
||||
self.sync_endpoint + "?timeout=10000" + f"&pos={from_token}",
|
||||
content=sync_body,
|
||||
access_token=user1_tok,
|
||||
await_result=False,
|
||||
)
|
||||
# Block for 5 seconds to make sure we are `notifier.wait_for_events(...)`
|
||||
with self.assertRaises(TimedOutException):
|
||||
channel.await_result(timeout_ms=5000)
|
||||
# Bump the device lists to trigger new results
|
||||
# Have user3 update their device list
|
||||
device_update_channel = self.make_request(
|
||||
"PUT",
|
||||
f"/devices/{test_device_id}",
|
||||
{
|
||||
"display_name": "New Device Name",
|
||||
},
|
||||
access_token=user3_tok,
|
||||
)
|
||||
self.assertEqual(
|
||||
device_update_channel.code, 200, device_update_channel.json_body
|
||||
)
|
||||
# Should respond before the 10 second timeout
|
||||
channel.await_result(timeout_ms=3000)
|
||||
self.assertEqual(channel.code, 200, channel.json_body)
|
||||
|
||||
# We should see the device list update
|
||||
self.assertEqual(
|
||||
channel.json_body["extensions"]["e2ee"]
|
||||
.get("device_lists", {})
|
||||
.get("changed"),
|
||||
[user3_id],
|
||||
)
|
||||
self.assertEqual(
|
||||
channel.json_body["extensions"]["e2ee"].get("device_lists", {}).get("left"),
|
||||
[],
|
||||
)

    def test_wait_for_new_data_timeout(self) -> None:
        """
        Test to make sure that the Sliding Sync request waits for new data to arrive but
        no data ever arrives, so we time out. We're also making sure that the default data
        from the E2EE extension doesn't trigger a false-positive for new data (see
        `device_one_time_keys_count.signed_curve25519`).
        """
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")

        sync_body = {
            "lists": {},
            "extensions": {
                "e2ee": {
                    "enabled": True,
                }
            },
        }
        _, from_token = self.do_sync(sync_body, tok=user1_tok)

        # Make the Sliding Sync request
        channel = self.make_request(
            "POST",
            self.sync_endpoint + f"?timeout=10000&pos={from_token}",
            content=sync_body,
            access_token=user1_tok,
            await_result=False,
        )
        # Block for 5 seconds to make sure we are in `notifier.wait_for_events(...)`
        with self.assertRaises(TimedOutException):
            channel.await_result(timeout_ms=5000)
        # Wake up `notifier.wait_for_events(...)`, which will cause us to test
        # `SlidingSyncResult.__bool__` for new results.
        self._bump_notifier_wait_for_events(
            user1_id, wake_stream_key=StreamKeyType.ACCOUNT_DATA
        )
        # Block for a little bit more to ensure we don't see any new results.
        with self.assertRaises(TimedOutException):
            channel.await_result(timeout_ms=4000)
        # Wait for the sync to complete (wait for the rest of the 10 second timeout,
        # 5000 + 4000 + 1200 > 10000)
        channel.await_result(timeout_ms=1200)
        self.assertEqual(channel.code, 200, channel.json_body)

        # Device lists are present for incremental syncs but empty because no device changes
        self.assertEqual(
            channel.json_body["extensions"]["e2ee"]
            .get("device_lists", {})
            .get("changed"),
            [],
        )
        self.assertEqual(
            channel.json_body["extensions"]["e2ee"].get("device_lists", {}).get("left"),
            [],
        )

        # Both of these should be present even when empty
        self.assertEqual(
            channel.json_body["extensions"]["e2ee"]["device_one_time_keys_count"],
            {
                # Note that "signed_curve25519" is always returned in key count responses
                # regardless of whether we uploaded any keys for it. This is necessary until
                # https://github.com/matrix-org/matrix-doc/issues/3298 is fixed.
                #
                # Also related:
                # https://github.com/element-hq/element-android/issues/3725 and
                # https://github.com/matrix-org/synapse/issues/10456
                "signed_curve25519": 0
            },
        )
        self.assertEqual(
            channel.json_body["extensions"]["e2ee"]["device_unused_fallback_key_types"],
            [],
        )

    def test_device_lists(self) -> None:
        """
        Test that device list updates are included in the response
        """
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")
        user2_id = self.register_user("user2", "pass")
        user2_tok = self.login(user2_id, "pass")

        test_device_id = "TESTDEVICE"
        user3_id = self.register_user("user3", "pass")
        user3_tok = self.login(user3_id, "pass", device_id=test_device_id)

        user4_id = self.register_user("user4", "pass")
        user4_tok = self.login(user4_id, "pass")

        room_id = self.helper.create_room_as(user2_id, tok=user2_tok)
        self.helper.join(room_id, user1_id, tok=user1_tok)
        self.helper.join(room_id, user3_id, tok=user3_tok)
        self.helper.join(room_id, user4_id, tok=user4_tok)

        sync_body = {
            "lists": {},
            "extensions": {
                "e2ee": {
                    "enabled": True,
                }
            },
        }
        _, from_token = self.do_sync(sync_body, tok=user1_tok)

        # Have user3 update their device list
        channel = self.make_request(
            "PUT",
            f"/devices/{test_device_id}",
            {
                "display_name": "New Device Name",
            },
            access_token=user3_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)

        # User4 leaves the room
        self.helper.leave(room_id, user4_id, tok=user4_tok)

        # Make an incremental Sliding Sync request with the e2ee extension enabled
        response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)

        # Device list updates show up
        self.assertEqual(
            response_body["extensions"]["e2ee"].get("device_lists", {}).get("changed"),
            [user3_id],
        )
        self.assertEqual(
            response_body["extensions"]["e2ee"].get("device_lists", {}).get("left"),
            [user4_id],
        )

    def test_device_one_time_keys_count(self) -> None:
        """
        Test that `device_one_time_keys_count` is included in the response
        """
        test_device_id = "TESTDEVICE"
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass", device_id=test_device_id)

        # Upload one time keys for the user/device
        keys: JsonDict = {
            "alg1:k1": "key1",
            "alg2:k2": {"key": "key2", "signatures": {"k1": "sig1"}},
            "alg2:k3": {"key": "key3"},
        }
        upload_keys_response = self.get_success(
            self.e2e_keys_handler.upload_keys_for_user(
                user1_id, test_device_id, {"one_time_keys": keys}
            )
        )
        self.assertDictEqual(
            upload_keys_response,
            {
                "one_time_key_counts": {
                    "alg1": 1,
                    "alg2": 2,
                    # Note that "signed_curve25519" is always returned in key count responses
                    # regardless of whether we uploaded any keys for it. This is necessary until
                    # https://github.com/matrix-org/matrix-doc/issues/3298 is fixed.
                    #
                    # Also related:
                    # https://github.com/element-hq/element-android/issues/3725 and
                    # https://github.com/matrix-org/synapse/issues/10456
                    "signed_curve25519": 0,
                }
            },
        )

        # Make a Sliding Sync request with the e2ee extension enabled
        sync_body = {
            "lists": {},
            "extensions": {
                "e2ee": {
                    "enabled": True,
                }
            },
        }
        response_body, _ = self.do_sync(sync_body, tok=user1_tok)

        # Check for those one time key counts
        self.assertEqual(
            response_body["extensions"]["e2ee"].get("device_one_time_keys_count"),
            {
                "alg1": 1,
                "alg2": 2,
                # Note that "signed_curve25519" is always returned in key count responses
                # regardless of whether we uploaded any keys for it. This is necessary until
                # https://github.com/matrix-org/matrix-doc/issues/3298 is fixed.
                #
                # Also related:
                # https://github.com/element-hq/element-android/issues/3725 and
                # https://github.com/matrix-org/synapse/issues/10456
                "signed_curve25519": 0,
            },
        )
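
    # The test above exercises `upload_keys_for_user` on the handler directly. A
    # client-side equivalent would be the spec's `POST /_matrix/client/v3/keys/upload`
    # endpoint; a sketch of what that request could look like is below (assuming the
    # relevant keys servlet were registered in this test harness, which is an
    # assumption rather than something these tests set up):
    #
    #     channel = self.make_request(
    #         "POST",
    #         "/_matrix/client/v3/keys/upload",
    #         {"one_time_keys": keys},
    #         access_token=user1_tok,
    #     )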

    def test_device_unused_fallback_key_types(self) -> None:
        """
        Test that `device_unused_fallback_key_types` are included in the response
        """
        test_device_id = "TESTDEVICE"
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass", device_id=test_device_id)

        # We shouldn't have any unused fallback keys yet
        res = self.get_success(
            self.store.get_e2e_unused_fallback_key_types(user1_id, test_device_id)
        )
        self.assertEqual(res, [])

        # Upload a fallback key for the user/device
        self.get_success(
            self.e2e_keys_handler.upload_keys_for_user(
                user1_id,
                test_device_id,
                {"fallback_keys": {"alg1:k1": "fallback_key1"}},
            )
        )
        # We should now have an unused alg1 key
        fallback_res = self.get_success(
            self.store.get_e2e_unused_fallback_key_types(user1_id, test_device_id)
        )
        self.assertEqual(fallback_res, ["alg1"], fallback_res)

        # Make a Sliding Sync request with the e2ee extension enabled
        sync_body = {
            "lists": {},
            "extensions": {
                "e2ee": {
                    "enabled": True,
                }
            },
        }
        response_body, _ = self.do_sync(sync_body, tok=user1_tok)

        # Check for the unused fallback key types
        self.assertListEqual(
            response_body["extensions"]["e2ee"].get("device_unused_fallback_key_types"),
            ["alg1"],
        )

784 tests/rest/client/sliding_sync/test_extension_receipts.py Normal file
@@ -0,0 +1,784 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import logging

from twisted.test.proto_helpers import MemoryReactor

import synapse.rest.admin
from synapse.api.constants import EduTypes, ReceiptTypes
from synapse.rest.client import login, receipts, room, sync
from synapse.server import HomeServer
from synapse.types import StreamKeyType
from synapse.util import Clock

from tests.rest.client.sliding_sync.test_sliding_sync import SlidingSyncBase
from tests.server import TimedOutException

logger = logging.getLogger(__name__)


class SlidingSyncReceiptsExtensionTestCase(SlidingSyncBase):
    """Tests for the receipts sliding sync extension"""

    servlets = [
        synapse.rest.admin.register_servlets,
        login.register_servlets,
        room.register_servlets,
        sync.register_servlets,
        receipts.register_servlets,
    ]

    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
        self.store = hs.get_datastores().main
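
    # Note: helpers used throughout these tests, such as `do_sync`,
    # `assertIncludes` and `_bump_notifier_wait_for_events`, are assumed to be
    # provided by `SlidingSyncBase` (imported above) and the shared test-case
    # machinery rather than being defined in this file.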

    def test_no_data_initial_sync(self) -> None:
        """
        Test that enabling the receipts extension works during an initial sync,
        even if there is no data.
        """
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")

        # Make an initial Sliding Sync request with the receipts extension enabled
        sync_body = {
            "lists": {},
            "extensions": {
                "receipts": {
                    "enabled": True,
                }
            },
        }
        response_body, _ = self.do_sync(sync_body, tok=user1_tok)

        self.assertIncludes(
            response_body["extensions"]["receipts"].get("rooms").keys(),
            set(),
            exact=True,
        )

    def test_no_data_incremental_sync(self) -> None:
        """
        Test that enabling the receipts extension works during an incremental sync,
        even if there is no data.
        """
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")

        sync_body = {
            "lists": {},
            "extensions": {
                "receipts": {
                    "enabled": True,
                }
            },
        }
        _, from_token = self.do_sync(sync_body, tok=user1_tok)

        # Make an incremental Sliding Sync request with the receipts extension enabled
        response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)

        self.assertIncludes(
            response_body["extensions"]["receipts"].get("rooms").keys(),
            set(),
            exact=True,
        )

    def test_receipts_initial_sync_with_timeline(self) -> None:
        """
        On initial sync, we only return receipts for events in a given room's timeline.

        We also make sure that we only return receipts for rooms that we request and are
        already being returned in the Sliding Sync response.
        """
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")
        user2_id = self.register_user("user2", "pass")
        user2_tok = self.login(user2_id, "pass")
        user3_id = self.register_user("user3", "pass")
        user3_tok = self.login(user3_id, "pass")
        user4_id = self.register_user("user4", "pass")
        user4_tok = self.login(user4_id, "pass")

        # Create a room
        room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
        self.helper.join(room_id1, user1_id, tok=user1_tok)
        self.helper.join(room_id1, user3_id, tok=user3_tok)
        self.helper.join(room_id1, user4_id, tok=user4_tok)
        room1_event_response1 = self.helper.send(
            room_id1, body="new event1", tok=user2_tok
        )
        room1_event_response2 = self.helper.send(
            room_id1, body="new event2", tok=user2_tok
        )
        # User1 reads the last event
        channel = self.make_request(
            "POST",
            f"/rooms/{room_id1}/receipt/{ReceiptTypes.READ}/{room1_event_response2['event_id']}",
            {},
            access_token=user1_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)
        # User2 reads the last event
        channel = self.make_request(
            "POST",
            f"/rooms/{room_id1}/receipt/{ReceiptTypes.READ}/{room1_event_response2['event_id']}",
            {},
            access_token=user2_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)
        # User3 reads the first event
        channel = self.make_request(
            "POST",
            f"/rooms/{room_id1}/receipt/{ReceiptTypes.READ}/{room1_event_response1['event_id']}",
            {},
            access_token=user3_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)
        # User4 privately reads the last event (make sure this doesn't leak to the other users)
        channel = self.make_request(
            "POST",
            f"/rooms/{room_id1}/receipt/{ReceiptTypes.READ_PRIVATE}/{room1_event_response2['event_id']}",
            {},
            access_token=user4_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)

        # Create another room
        room_id2 = self.helper.create_room_as(user2_id, tok=user2_tok)
        self.helper.join(room_id2, user1_id, tok=user1_tok)
        self.helper.join(room_id2, user3_id, tok=user3_tok)
        self.helper.join(room_id2, user4_id, tok=user4_tok)
        room2_event_response1 = self.helper.send(
            room_id2, body="new event2", tok=user2_tok
        )
        # User1 reads the last event
        channel = self.make_request(
            "POST",
            f"/rooms/{room_id2}/receipt/{ReceiptTypes.READ}/{room2_event_response1['event_id']}",
            {},
            access_token=user1_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)
        # User2 reads the last event
        channel = self.make_request(
            "POST",
            f"/rooms/{room_id2}/receipt/{ReceiptTypes.READ}/{room2_event_response1['event_id']}",
            {},
            access_token=user2_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)
        # User4 privately reads the last event (make sure this doesn't leak to the other users)
        channel = self.make_request(
            "POST",
            f"/rooms/{room_id2}/receipt/{ReceiptTypes.READ_PRIVATE}/{room2_event_response1['event_id']}",
            {},
            access_token=user4_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)

        # Make an initial Sliding Sync request with the receipts extension enabled
        sync_body = {
            "lists": {},
            "room_subscriptions": {
                room_id1: {
                    "required_state": [],
                    # On initial sync, we only have receipts for events in the timeline
                    "timeline_limit": 1,
                }
            },
            "extensions": {
                "receipts": {
                    "enabled": True,
                    "rooms": [room_id1, room_id2],
                }
            },
        }
        response_body, _ = self.do_sync(sync_body, tok=user1_tok)

        # Only the latest event in the room is in the timeline because the `timeline_limit` is 1
        self.assertIncludes(
            {
                event["event_id"]
                for event in response_body["rooms"][room_id1].get("timeline", [])
            },
            {room1_event_response2["event_id"]},
            exact=True,
            message=str(response_body["rooms"][room_id1]),
        )

        # Even though we requested room2, we only expect room1 to show up because that's
        # the only room in the Sliding Sync response (room2 is not one of our room
        # subscriptions or in a sliding window list).
        self.assertIncludes(
            response_body["extensions"]["receipts"].get("rooms").keys(),
            {room_id1},
            exact=True,
        )
        # Sanity check that it's the correct ephemeral event type
        self.assertEqual(
            response_body["extensions"]["receipts"]["rooms"][room_id1]["type"],
            EduTypes.RECEIPT,
        )
        # We can see user1 and user2 read receipts
        self.assertIncludes(
            response_body["extensions"]["receipts"]["rooms"][room_id1]["content"][
                room1_event_response2["event_id"]
            ][ReceiptTypes.READ].keys(),
            {user1_id, user2_id},
            exact=True,
        )
        # User1 did not have a private read receipt and we shouldn't leak others'
        # private read receipts
        self.assertIncludes(
            response_body["extensions"]["receipts"]["rooms"][room_id1]["content"][
                room1_event_response2["event_id"]
            ]
            .get(ReceiptTypes.READ_PRIVATE, {})
            .keys(),
            set(),
            exact=True,
        )

        # We shouldn't see receipts for the first event (`room1_event_response1`) since
        # it wasn't in the timeline and this is an initial sync
        self.assertIsNone(
            response_body["extensions"]["receipts"]["rooms"][room_id1]["content"].get(
                room1_event_response1["event_id"]
            )
        )
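
    # For reference, the receipt entries asserted on above follow the `m.receipt`
    # EDU shape. The sketch below only covers the fields these tests check (the
    # per-user value is assumed to carry the usual `ts` timestamp; values are
    # illustrative):
    #
    #     "receipts": {
    #         "rooms": {
    #             "!room:test": {
    #                 "type": "m.receipt",
    #                 "content": {
    #                     "$event_id": {"m.read": {"@user1:test": {"ts": 1234}}},
    #                 },
    #             }
    #         }
    #     }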

    def test_receipts_incremental_sync(self) -> None:
        """
        On incremental sync, we return all receipts in the token range for a given room
        but only for rooms that we request and are being returned in the Sliding Sync
        response.
        """

        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")
        user2_id = self.register_user("user2", "pass")
        user2_tok = self.login(user2_id, "pass")
        user3_id = self.register_user("user3", "pass")
        user3_tok = self.login(user3_id, "pass")

        # Create room1
        room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
        self.helper.join(room_id1, user1_id, tok=user1_tok)
        self.helper.join(room_id1, user3_id, tok=user3_tok)
        room1_event_response1 = self.helper.send(
            room_id1, body="new event2", tok=user2_tok
        )
        # User2 reads the last event (before the `from_token`)
        channel = self.make_request(
            "POST",
            f"/rooms/{room_id1}/receipt/{ReceiptTypes.READ}/{room1_event_response1['event_id']}",
            {},
            access_token=user2_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)

        # Create room2
        room_id2 = self.helper.create_room_as(user2_id, tok=user2_tok)
        self.helper.join(room_id2, user1_id, tok=user1_tok)
        room2_event_response1 = self.helper.send(
            room_id2, body="new event2", tok=user2_tok
        )
        # User1 reads the last event (before the `from_token`)
        channel = self.make_request(
            "POST",
            f"/rooms/{room_id2}/receipt/{ReceiptTypes.READ}/{room2_event_response1['event_id']}",
            {},
            access_token=user1_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)

        # Create room3
        room_id3 = self.helper.create_room_as(user2_id, tok=user2_tok)
        self.helper.join(room_id3, user1_id, tok=user1_tok)
        self.helper.join(room_id3, user3_id, tok=user3_tok)
        room3_event_response1 = self.helper.send(
            room_id3, body="new event", tok=user2_tok
        )

        # Create room4
        room_id4 = self.helper.create_room_as(user2_id, tok=user2_tok)
        self.helper.join(room_id4, user1_id, tok=user1_tok)
        self.helper.join(room_id4, user3_id, tok=user3_tok)
        event_response4 = self.helper.send(room_id4, body="new event", tok=user2_tok)
        # User1 reads the last event (before the `from_token`)
        channel = self.make_request(
            "POST",
            f"/rooms/{room_id4}/receipt/{ReceiptTypes.READ}/{event_response4['event_id']}",
            {},
            access_token=user1_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)

        sync_body = {
            "lists": {},
            "room_subscriptions": {
                room_id1: {
                    "required_state": [],
                    "timeline_limit": 0,
                },
                room_id3: {
                    "required_state": [],
                    "timeline_limit": 0,
                },
                room_id4: {
                    "required_state": [],
                    "timeline_limit": 0,
                },
            },
            "extensions": {
                "receipts": {
                    "enabled": True,
                    "rooms": [room_id1, room_id2, room_id3, room_id4],
                }
            },
        }
        _, from_token = self.do_sync(sync_body, tok=user1_tok)

        # Add some more read receipts after the `from_token`
        #
        # User1 reads room1
        channel = self.make_request(
            "POST",
            f"/rooms/{room_id1}/receipt/{ReceiptTypes.READ}/{room1_event_response1['event_id']}",
            {},
            access_token=user1_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)
        # User1 privately reads room2
        channel = self.make_request(
            "POST",
            f"/rooms/{room_id2}/receipt/{ReceiptTypes.READ_PRIVATE}/{room2_event_response1['event_id']}",
            {},
            access_token=user1_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)
        # User3 reads room3
        channel = self.make_request(
            "POST",
            f"/rooms/{room_id3}/receipt/{ReceiptTypes.READ}/{room3_event_response1['event_id']}",
            {},
            access_token=user3_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)
        # No activity for room4 after the `from_token`

        # Make an incremental Sliding Sync request with the receipts extension enabled
        response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)

        # Even though we requested room2, we only expect rooms to show up if they are
        # already in the Sliding Sync response. room4 doesn't show up because there is
        # no activity after the `from_token`.
        self.assertIncludes(
            response_body["extensions"]["receipts"].get("rooms").keys(),
            {room_id1, room_id3},
            exact=True,
        )

        # Check room1:
        #
        # Sanity check that it's the correct ephemeral event type
        self.assertEqual(
            response_body["extensions"]["receipts"]["rooms"][room_id1]["type"],
            EduTypes.RECEIPT,
        )
        # We only see that user1 has read something in room1 since the `from_token`
        self.assertIncludes(
            response_body["extensions"]["receipts"]["rooms"][room_id1]["content"][
                room1_event_response1["event_id"]
            ][ReceiptTypes.READ].keys(),
            {user1_id},
            exact=True,
        )
        # User1 did not send a private read receipt in this room and we shouldn't leak
        # others' private read receipts
        self.assertIncludes(
            response_body["extensions"]["receipts"]["rooms"][room_id1]["content"][
                room1_event_response1["event_id"]
            ]
            .get(ReceiptTypes.READ_PRIVATE, {})
            .keys(),
            set(),
            exact=True,
        )
        # No events in the timeline since they were sent before the `from_token`
        self.assertNotIn(room_id1, response_body["rooms"])

        # Check room3:
        #
        # Sanity check that it's the correct ephemeral event type
        self.assertEqual(
            response_body["extensions"]["receipts"]["rooms"][room_id3]["type"],
            EduTypes.RECEIPT,
        )
        # We only see that user3 has read something in room3 since the `from_token`
        self.assertIncludes(
            response_body["extensions"]["receipts"]["rooms"][room_id3]["content"][
                room3_event_response1["event_id"]
            ][ReceiptTypes.READ].keys(),
            {user3_id},
            exact=True,
        )
        # User1 did not send a private read receipt in this room and we shouldn't leak
        # others' private read receipts
        self.assertIncludes(
            response_body["extensions"]["receipts"]["rooms"][room_id3]["content"][
                room3_event_response1["event_id"]
            ]
            .get(ReceiptTypes.READ_PRIVATE, {})
            .keys(),
            set(),
            exact=True,
        )
        # No events in the timeline since they were sent before the `from_token`
        self.assertNotIn(room_id3, response_body["rooms"])

    def test_receipts_incremental_sync_all_live_receipts(self) -> None:
        """
        On incremental sync, we return all receipts in the token range for a given room
        even if they are not in the timeline.
        """

        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")
        user2_id = self.register_user("user2", "pass")
        user2_tok = self.login(user2_id, "pass")

        # Create room1
        room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
        self.helper.join(room_id1, user1_id, tok=user1_tok)

        sync_body = {
            "lists": {},
            "room_subscriptions": {
                room_id1: {
                    "required_state": [],
                    # The timeline will only include event2
                    "timeline_limit": 1,
                },
            },
            "extensions": {
                "receipts": {
                    "enabled": True,
                    "rooms": [room_id1],
                }
            },
        }
        _, from_token = self.do_sync(sync_body, tok=user1_tok)

        room1_event_response1 = self.helper.send(
            room_id1, body="new event1", tok=user2_tok
        )
        room1_event_response2 = self.helper.send(
            room_id1, body="new event2", tok=user2_tok
        )

        # User1 reads event1
        channel = self.make_request(
            "POST",
            f"/rooms/{room_id1}/receipt/{ReceiptTypes.READ}/{room1_event_response1['event_id']}",
            {},
            access_token=user1_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)
        # User2 reads event2
        channel = self.make_request(
            "POST",
            f"/rooms/{room_id1}/receipt/{ReceiptTypes.READ}/{room1_event_response2['event_id']}",
            {},
            access_token=user2_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)

        # Make an incremental Sliding Sync request with the receipts extension enabled
        response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)

        # We should see room1 because it has receipts in the token range
        self.assertIncludes(
            response_body["extensions"]["receipts"].get("rooms").keys(),
            {room_id1},
            exact=True,
        )
        # Sanity check that it's the correct ephemeral event type
        self.assertEqual(
            response_body["extensions"]["receipts"]["rooms"][room_id1]["type"],
            EduTypes.RECEIPT,
        )
        # We should see all receipts in the token range regardless of whether the events
        # are in the timeline
        self.assertIncludes(
            response_body["extensions"]["receipts"]["rooms"][room_id1]["content"][
                room1_event_response1["event_id"]
            ][ReceiptTypes.READ].keys(),
            {user1_id},
            exact=True,
        )
        self.assertIncludes(
            response_body["extensions"]["receipts"]["rooms"][room_id1]["content"][
                room1_event_response2["event_id"]
            ][ReceiptTypes.READ].keys(),
            {user2_id},
            exact=True,
        )
        # Only the latest event in the timeline because the `timeline_limit` is 1
        self.assertIncludes(
            {
                event["event_id"]
                for event in response_body["rooms"][room_id1].get("timeline", [])
            },
            {room1_event_response2["event_id"]},
            exact=True,
            message=str(response_body["rooms"][room_id1]),
        )

    def test_wait_for_new_data(self) -> None:
        """
        Test to make sure that the Sliding Sync request waits for new data to arrive.

        (Only applies to incremental syncs with a `timeout` specified)
        """
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")
        user2_id = self.register_user("user2", "pass")
        user2_tok = self.login(user2_id, "pass")

        room_id = self.helper.create_room_as(user2_id, tok=user2_tok)
        self.helper.join(room_id, user1_id, tok=user1_tok)
        event_response = self.helper.send(room_id, body="new event", tok=user2_tok)

        sync_body = {
            "lists": {},
            "room_subscriptions": {
                room_id: {
                    "required_state": [],
                    "timeline_limit": 0,
                },
            },
            "extensions": {
                "receipts": {
                    "enabled": True,
                    "rooms": [room_id],
                }
            },
        }
        _, from_token = self.do_sync(sync_body, tok=user1_tok)

        # Make an incremental Sliding Sync request with the receipts extension enabled
        channel = self.make_request(
            "POST",
            self.sync_endpoint + f"?timeout=10000&pos={from_token}",
            content=sync_body,
            access_token=user1_tok,
            await_result=False,
        )
        # Block for 5 seconds to make sure we are in `notifier.wait_for_events(...)`
        with self.assertRaises(TimedOutException):
            channel.await_result(timeout_ms=5000)
        # Bump the receipts to trigger new results
        receipt_channel = self.make_request(
            "POST",
            f"/rooms/{room_id}/receipt/{ReceiptTypes.READ}/{event_response['event_id']}",
            {},
            access_token=user2_tok,
        )
        self.assertEqual(receipt_channel.code, 200, receipt_channel.json_body)
        # Should respond before the 10 second timeout
        channel.await_result(timeout_ms=3000)
        self.assertEqual(channel.code, 200, channel.json_body)

        # We should see the new receipt
        self.assertIncludes(
            channel.json_body.get("extensions", {})
            .get("receipts", {})
            .get("rooms", {})
            .keys(),
            {room_id},
            exact=True,
            message=str(channel.json_body),
        )
        self.assertIncludes(
            channel.json_body["extensions"]["receipts"]["rooms"][room_id]["content"][
                event_response["event_id"]
            ][ReceiptTypes.READ].keys(),
            {user2_id},
            exact=True,
        )
        # User1 did not send a private read receipt in this room and we shouldn't leak
        # others' private read receipts
        self.assertIncludes(
            channel.json_body["extensions"]["receipts"]["rooms"][room_id]["content"][
                event_response["event_id"]
            ]
            .get(ReceiptTypes.READ_PRIVATE, {})
            .keys(),
            set(),
            exact=True,
        )
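
    # Timing pattern used by the wait tests in this class (and in the e2ee tests
    # above): issue the request with `?timeout=10000`, assert that it is still
    # blocked after 5 s, then either trigger relevant new data (the request should
    # return well before the 10 s timeout) or let it run out
    # (5000 + 4000 + 1200 ms > 10000 ms), in which case an empty but valid
    # response is expected.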

    def test_wait_for_new_data_timeout(self) -> None:
        """
        Test to make sure that the Sliding Sync request waits for new data to arrive but
        no data ever arrives, so we time out. We're also making sure that the default data
        from the receipts extension doesn't trigger a false-positive for new data.
        """
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")

        sync_body = {
            "lists": {},
            "extensions": {
                "receipts": {
                    "enabled": True,
                }
            },
        }
        _, from_token = self.do_sync(sync_body, tok=user1_tok)

        # Make the Sliding Sync request
        channel = self.make_request(
            "POST",
            self.sync_endpoint + f"?timeout=10000&pos={from_token}",
            content=sync_body,
            access_token=user1_tok,
            await_result=False,
        )
        # Block for 5 seconds to make sure we are in `notifier.wait_for_events(...)`
        with self.assertRaises(TimedOutException):
            channel.await_result(timeout_ms=5000)
        # Wake up `notifier.wait_for_events(...)`, which will cause us to test
        # `SlidingSyncResult.__bool__` for new results.
        self._bump_notifier_wait_for_events(
            user1_id, wake_stream_key=StreamKeyType.ACCOUNT_DATA
        )
        # Block for a little bit more to ensure we don't see any new results.
        with self.assertRaises(TimedOutException):
            channel.await_result(timeout_ms=4000)
        # Wait for the sync to complete (wait for the rest of the 10 second timeout,
        # 5000 + 4000 + 1200 > 10000)
        channel.await_result(timeout_ms=1200)
        self.assertEqual(channel.code, 200, channel.json_body)

        self.assertIncludes(
            channel.json_body["extensions"]["receipts"].get("rooms").keys(),
            set(),
            exact=True,
        )

    def test_receipts_incremental_sync_out_of_range(self) -> None:
        """Tests that we don't return read receipts for rooms that fall out of
        range, but then do send all read receipts once they're back in range.
        """

        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")
        user2_id = self.register_user("user2", "pass")
        user2_tok = self.login(user2_id, "pass")

        room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
        self.helper.join(room_id1, user1_id, tok=user1_tok)
        room_id2 = self.helper.create_room_as(user2_id, tok=user2_tok)
        self.helper.join(room_id2, user1_id, tok=user1_tok)

        # Send a message and read receipt into room2
        event_response = self.helper.send(room_id2, body="new event", tok=user2_tok)
        room2_event_id = event_response["event_id"]

        self.helper.send_read_receipt(room_id2, room2_event_id, tok=user1_tok)

        # Now send a message into room1 so that it is at the top of the list
        self.helper.send(room_id1, body="new event", tok=user2_tok)

        # Make a SS request for only the top room.
        sync_body = {
            "lists": {
                "main": {
                    "ranges": [[0, 0]],
                    "required_state": [],
                    "timeline_limit": 5,
                }
            },
            "extensions": {
                "receipts": {
                    "enabled": True,
                }
            },
        }
        response_body, from_token = self.do_sync(sync_body, tok=user1_tok)

        # The receipt is in room2, but only room1 is returned, so we don't
        # expect to get the receipt.
        self.assertIncludes(
            response_body["extensions"]["receipts"].get("rooms").keys(),
            set(),
            exact=True,
        )

        # Move room2 into range.
        self.helper.send(room_id2, body="new event", tok=user2_tok)

        response_body, from_token = self.do_sync(
            sync_body, since=from_token, tok=user1_tok
        )

        # We expect to see the read receipt of room2, as that has the most
        # recent update.
        self.assertIncludes(
            response_body["extensions"]["receipts"].get("rooms").keys(),
            {room_id2},
            exact=True,
        )
        receipt = response_body["extensions"]["receipts"]["rooms"][room_id2]
        self.assertIncludes(
            receipt["content"][room2_event_id][ReceiptTypes.READ].keys(),
            {user1_id},
            exact=True,
        )

        # Send a message into room1 to bump it to the top, but also send a
        # receipt in room2
        self.helper.send(room_id1, body="new event", tok=user2_tok)
        self.helper.send_read_receipt(room_id2, room2_event_id, tok=user2_tok)

        # We don't expect to see the new read receipt.
        response_body, from_token = self.do_sync(
            sync_body, since=from_token, tok=user1_tok
        )
        self.assertIncludes(
            response_body["extensions"]["receipts"].get("rooms").keys(),
            set(),
            exact=True,
        )

        # But if we send a new message into room2, we expect to get the missing receipts
        self.helper.send(room_id2, body="new event", tok=user2_tok)

        response_body, from_token = self.do_sync(
            sync_body, since=from_token, tok=user1_tok
        )
        self.assertIncludes(
            response_body["extensions"]["receipts"].get("rooms").keys(),
            {room_id2},
            exact=True,
        )

        # We should only see the new receipt
        receipt = response_body["extensions"]["receipts"]["rooms"][room_id2]
        self.assertIncludes(
            receipt["content"][room2_event_id][ReceiptTypes.READ].keys(),
            {user2_id},
            exact=True,
        )
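
    # Note: `send_read_receipt` used in the test above is assumed to be a test-helper
    # wrapper around the same `POST /rooms/{room_id}/receipt/m.read/{event_id}`
    # request that the earlier tests in this file make explicitly via
    # `self.make_request(...)`.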
Some files were not shown because too many files have changed in this diff