Mirror of https://github.com/element-hq/synapse.git (synced 2025-12-07 01:20:16 +00:00)

Compare commits: erikj/dock ... dependabot (42 commits)
Commits in this comparison (SHA1):

970cdaee16, ab94bce02c, 17d6c28285, 4a7c58642c, ce9385819b, a963f579de, 3f06bbc0ac, fcbc79bb87,
aabf577166, 7d8f0ef351, eab0b548e4, 81cef38d4b, e2f8476044, 18c1196893, 8a3270075b, f458dff16d,
6b709c512d, 5c2a837e3c, 64f5a4a353, 7dd14fadb1, 5624c8b961, 4e3868dc46, d16910ca02, 225f378ffa,
8bd9ff0783, 466f344547, 726006cdf2, 967b6948b0, d7198dfb95, 94ef2f4f5d, bb5a692946, ad179b0136,
5147ce294a, f35bc08d39, f2616edb73, 86a2a0258f, 0893ee9af8, 887f773472, 9edb725ebc, c97251d5ba,
7e2412265d, 7ef00b7628
CHANGES.md (61 changed lines)
@@ -1,3 +1,64 @@
# Synapse 1.109.0rc1 (2024-06-04)

### Features

- Add the ability to auto-accept invites on behalf of users. See the [`auto_accept_invites`](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#auto-accept-invites) config option for details. ([\#17147](https://github.com/element-hq/synapse/issues/17147))
- Add experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync/e2ee` endpoint for to-device messages and device encryption info. ([\#17167](https://github.com/element-hq/synapse/issues/17167))
- Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/issues/3916) by adding unstable media endpoints to `/_matrix/client`. ([\#17213](https://github.com/element-hq/synapse/issues/17213))
- Add logging to tasks managed by the task scheduler, showing CPU and database usage. ([\#17219](https://github.com/element-hq/synapse/issues/17219))

### Bugfixes

- Fix deduplicating of membership events to not create unused state groups. ([\#17164](https://github.com/element-hq/synapse/issues/17164))
- Fix bug where duplicate events could be sent down sync when using workers that are overloaded. ([\#17215](https://github.com/element-hq/synapse/issues/17215))
- Ignore attempts to send to-device messages to bad users, to avoid log spam when we try to connect to the bad server. ([\#17240](https://github.com/element-hq/synapse/issues/17240))
- Fix handling of duplicate concurrent uploading of device one-time-keys. ([\#17241](https://github.com/element-hq/synapse/issues/17241))
- Fix reporting of default tags to Sentry, such as worker name. Broke in v1.108.0. ([\#17251](https://github.com/element-hq/synapse/issues/17251))
- Fix bug where typing updates would not be sent when using workers after a restart. ([\#17252](https://github.com/element-hq/synapse/issues/17252))

### Improved Documentation

- Update the LemonLDAP documentation to say that claims should be explicitly included in the returned `id_token`, as Synapse won't request them. ([\#17204](https://github.com/element-hq/synapse/issues/17204))

### Internal Changes

- Improve DB usage when fetching related events. ([\#17083](https://github.com/element-hq/synapse/issues/17083))
- Log exceptions when failing to auto-join a new user according to the `auto_join_rooms` option. ([\#17176](https://github.com/element-hq/synapse/issues/17176))
- Reduce work of calculating outbound device lists updates. ([\#17211](https://github.com/element-hq/synapse/issues/17211))
- Improve performance of calculating device lists changes in `/sync`. ([\#17216](https://github.com/element-hq/synapse/issues/17216))
- Move towards using `MultiWriterIdGenerator` everywhere. ([\#17226](https://github.com/element-hq/synapse/issues/17226))
- Replace all usages of `StreamIdGenerator` with `MultiWriterIdGenerator`. ([\#17229](https://github.com/element-hq/synapse/issues/17229))
- Change the `allow_unsafe_locale` config option to also apply when setting up new databases. ([\#17238](https://github.com/element-hq/synapse/issues/17238))
- Fix errors in logs about closing incorrect logging contexts when media gets rejected by a module. ([\#17239](https://github.com/element-hq/synapse/issues/17239), [\#17246](https://github.com/element-hq/synapse/issues/17246))
- Clean out invalid destinations from the `device_federation_outbox` table. ([\#17242](https://github.com/element-hq/synapse/issues/17242))
- Stop logging errors when receiving invalid User IDs in key query requests. ([\#17250](https://github.com/element-hq/synapse/issues/17250))

### Updates to locked dependencies

* Bump anyhow from 1.0.83 to 1.0.86. ([\#17220](https://github.com/element-hq/synapse/issues/17220))
* Bump bcrypt from 4.1.2 to 4.1.3. ([\#17224](https://github.com/element-hq/synapse/issues/17224))
* Bump lxml from 5.2.1 to 5.2.2. ([\#17261](https://github.com/element-hq/synapse/issues/17261))
* Bump mypy-zope from 1.0.3 to 1.0.4. ([\#17262](https://github.com/element-hq/synapse/issues/17262))
* Bump phonenumbers from 8.13.35 to 8.13.37. ([\#17235](https://github.com/element-hq/synapse/issues/17235))
* Bump prometheus-client from 0.19.0 to 0.20.0. ([\#17233](https://github.com/element-hq/synapse/issues/17233))
* Bump pyasn1 from 0.5.1 to 0.6.0. ([\#17223](https://github.com/element-hq/synapse/issues/17223))
* Bump pyicu from 2.13 to 2.13.1. ([\#17236](https://github.com/element-hq/synapse/issues/17236))
* Bump pyopenssl from 24.0.0 to 24.1.0. ([\#17234](https://github.com/element-hq/synapse/issues/17234))
* Bump serde from 1.0.201 to 1.0.202. ([\#17221](https://github.com/element-hq/synapse/issues/17221))
* Bump serde from 1.0.202 to 1.0.203. ([\#17232](https://github.com/element-hq/synapse/issues/17232))
* Bump twine from 5.0.0 to 5.1.0. ([\#17225](https://github.com/element-hq/synapse/issues/17225))
* Bump types-psycopg2 from 2.9.21.20240311 to 2.9.21.20240417. ([\#17222](https://github.com/element-hq/synapse/issues/17222))
* Bump types-pyopenssl from 24.0.0.20240311 to 24.1.0.20240425. ([\#17260](https://github.com/element-hq/synapse/issues/17260))

# Synapse 1.108.0 (2024-05-28)

No significant changes since 1.108.0rc1.


# Synapse 1.108.0rc1 (2024-05-21)

### Features
Cargo.lock (generated, 12 changed lines)
@@ -444,9 +444,9 @@ dependencies = [

[[package]]
name = "regex"
version = "1.10.4"
version = "1.10.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c117dbdfde9c8308975b6a18d71f3f385c89461f7b3fb054288ecf2a2058ba4c"
checksum = "b91213439dad192326a0d7c6ee3955910425f441d7038e0d6933b0aec5c4517f"
dependencies = [
 "aho-corasick",
 "memchr",

@@ -485,18 +485,18 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"

[[package]]
name = "serde"
version = "1.0.202"
version = "1.0.203"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "226b61a0d411b2ba5ff6d7f73a476ac4f8bb900373459cd00fab8512828ba395"
checksum = "7253ab4de971e72fb7be983802300c30b5a7f0c2e56fab8abfc6a214307c0094"
dependencies = [
 "serde_derive",
]

[[package]]
name = "serde_derive"
version = "1.0.202"
version = "1.0.203"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6048858004bcff69094cd972ed40a32500f153bd3be9f716b2eed2e8217c4838"
checksum = "500cbc0ebeb6f46627f50f3f5811ccf6bf00643be300b4c3eabc0ef55dc5b5ba"
dependencies = [
 "proc-macro2",
 "quote",
@@ -1 +0,0 @@
Add the ability to auto-accept invites on behalf of users. See the [`auto_accept_invites`](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#auto-accept-invites) config option for details.

changelog.d/17172.feature (new file, 2 lines)
@@ -0,0 +1,2 @@
Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/rav/authentication-for-media/proposals/3916-authentication-for-media.md)
by adding a federation /download endpoint (#17172).

changelog.d/17187.feature (new file, 1 line)
@@ -0,0 +1 @@
Add initial implementation of an experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

@@ -1 +0,0 @@
Update OIDC documentation: by default Matrix doesn't query the userinfo endpoint, so claims should be put in the id_token.

@@ -1 +0,0 @@
Reduce work of calculating outbound device lists updates.

@@ -1 +0,0 @@
Improve performance of calculating device lists changes in `/sync`.

changelog.d/17254.bugfix (new file, 1 line)
@@ -0,0 +1 @@
Fix searching for users with their exact localpart whose ID includes a hyphen.

changelog.d/17256.feature (new file, 1 line)
@@ -0,0 +1 @@
Improve ratelimiting in Synapse (#17256).

changelog.d/17265.misc (new file, 1 line)
@@ -0,0 +1 @@
Use fully-qualified `PersistedEventPosition` when returning `RoomsForUser` to facilitate proper comparisons and `RoomStreamToken` generation.

changelog.d/17266.misc (new file, 1 line)
@@ -0,0 +1 @@
Add debug logging for when room keys are uploaded, including whether they are replacing other room keys.

changelog.d/17271.misc (new file, 1 line)
@@ -0,0 +1 @@
Handle OTK uploads off master.

changelog.d/17273.misc (new file, 1 line)
@@ -0,0 +1 @@
Don't try and resync devices for remote users whose servers are marked as down.

changelog.d/17275.bugfix (new file, 1 line)
@@ -0,0 +1 @@
Fix bug where OTKs were not always included in `/sync` response when using workers.
debian/changelog (vendored, 12 changed lines)
@@ -1,3 +1,15 @@
matrix-synapse-py3 (1.109.0~rc1) stable; urgency=medium

  * New Synapse release 1.109.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 04 Jun 2024 09:42:46 +0100

matrix-synapse-py3 (1.108.0) stable; urgency=medium

  * New Synapse release 1.108.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 28 May 2024 11:54:22 +0100

matrix-synapse-py3 (1.108.0~rc1) stable; urgency=medium

  * New Synapse release 1.108.0rc1.
@@ -242,12 +242,11 @@ host all all ::1/128 ident

### Fixing incorrect `COLLATE` or `CTYPE`

Synapse will refuse to set up a new database if it has the wrong values of `COLLATE` and `CTYPE` set. Synapse will also refuse to start an existing database with incorrect values of `COLLATE` and `CTYPE` unless the config flag `allow_unsafe_locale`, found in the `database` section of the config, is set to true. Using different locales can cause issues if the locale library is updated from underneath the database, or if a different version of the locale is used on any replicas.

Synapse will refuse to start when using a database with incorrect values of `COLLATE` and `CTYPE` unless the config flag `allow_unsafe_locale`, found in the `database` section of the config, is set to true. Using different locales can cause issues if the locale library is updated from underneath the database, or if a different version of the locale is used on any replicas.
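As a minimal sketch (not part of the documentation excerpt above, and assuming the standard `psycopg2` settings shown elsewhere in Synapse's config docs), enabling the flag in `homeserver.yaml` would look roughly like this:

```yaml
# Sketch only: the connection details below are placeholders.
database:
  name: psycopg2
  allow_unsafe_locale: true   # accept the COLLATE/CTYPE mismatch described above
  args:
    user: synapse_user
    password: secretpassword
    dbname: synapse
    host: localhost
```

Only set this if you accept the corruption risks described above; dumping and recreating the database with the correct locale remains the safer fix.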
If you have a database with an unsafe locale, the safest way to fix the issue is to dump the database and recreate it with the correct locale parameter (as shown above). It is also possible to change the
@@ -1946,6 +1946,24 @@ Example configuration:
max_image_pixels: 35M
```
---
### `remote_media_download_burst_count`

Remote media downloads are ratelimited using a [leaky bucket algorithm](https://en.wikipedia.org/wiki/Leaky_bucket), where a given "bucket" is keyed to the IP address of the requester when requesting remote media downloads. This configuration option sets the size of the bucket against which the size in bytes of downloads is penalized - if the bucket is full, i.e. a given number of bytes have already been downloaded, further downloads will be denied until the bucket drains. Defaults to 500MiB. See also `remote_media_download_per_second`, which determines the rate at which the "bucket" is emptied and thus has available space to authorize new requests.

Example configuration:
```yaml
remote_media_download_burst_count: 200M
```
---
### `remote_media_download_per_second`

Works in conjunction with `remote_media_download_burst_count` to ratelimit remote media downloads - this configuration option determines the rate at which the "bucket" (see above) leaks, in bytes per second. As requests are made to download remote media, the size of those requests in bytes is added to the bucket, and once the bucket has reached its capacity, no more requests will be allowed until a number of bytes has "drained" from the bucket. This setting determines the rate at which bytes drain from the bucket, with the practical effect that the larger the number, the faster the bucket leaks, allowing more bytes to be downloaded over a shorter period of time. Defaults to 87KiB per second. See also `remote_media_download_burst_count`.

Example configuration:
```yaml
remote_media_download_per_second: 40K
```
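As a rough worked example (illustrative numbers only, reusing the example values above and assuming `M`/`K` denote MiB/KiB as in the stated defaults), the two options combine like this:

```yaml
# Illustrative only: combines the two example values shown above.
remote_media_download_burst_count: 200M   # bucket capacity: about 200 MiB of downloaded bytes
remote_media_download_per_second: 40K     # bucket drains at about 40 KiB per second
# A completely full bucket would take roughly
#   (200 * 1024 KiB) / (40 KiB/s) = 5120 s, i.e. about 85 minutes,
# to drain completely.
```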
---
### `prevent_media_downloads_from`

A list of domains to never download media from. Media from these
poetry.lock (generated, 361 changed lines)
@@ -1005,165 +1005,153 @@ pyasn1 = ">=0.4.6"

[[package]]
name = "lxml"
version = "5.2.1"
version = "5.2.2"
description = "Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API."
optional = true
python-versions = ">=3.6"
files = [
{file = "lxml-5.2.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:1f7785f4f789fdb522729ae465adcaa099e2a3441519df750ebdccc481d961a1"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:6cc6ee342fb7fa2471bd9b6d6fdfc78925a697bf5c2bcd0a302e98b0d35bfad3"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:794f04eec78f1d0e35d9e0c36cbbb22e42d370dda1609fb03bcd7aeb458c6377"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c817d420c60a5183953c783b0547d9eb43b7b344a2c46f69513d5952a78cddf3"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2213afee476546a7f37c7a9b4ad4d74b1e112a6fafffc9185d6d21f043128c81"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b070bbe8d3f0f6147689bed981d19bbb33070225373338df755a46893528104a"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e02c5175f63effbd7c5e590399c118d5db6183bbfe8e0d118bdb5c2d1b48d937"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:3dc773b2861b37b41a6136e0b72a1a44689a9c4c101e0cddb6b854016acc0aa8"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-manylinux_2_28_ppc64le.whl", hash = "sha256:d7520db34088c96cc0e0a3ad51a4fd5b401f279ee112aa2b7f8f976d8582606d"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-manylinux_2_28_s390x.whl", hash = "sha256:bcbf4af004f98793a95355980764b3d80d47117678118a44a80b721c9913436a"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:a2b44bec7adf3e9305ce6cbfa47a4395667e744097faed97abb4728748ba7d47"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:1c5bb205e9212d0ebddf946bc07e73fa245c864a5f90f341d11ce7b0b854475d"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:2c9d147f754b1b0e723e6afb7ba1566ecb162fe4ea657f53d2139bbf894d050a"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:3545039fa4779be2df51d6395e91a810f57122290864918b172d5dc7ca5bb433"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:a91481dbcddf1736c98a80b122afa0f7296eeb80b72344d7f45dc9f781551f56"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:2ddfe41ddc81f29a4c44c8ce239eda5ade4e7fc305fb7311759dd6229a080052"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:a7baf9ffc238e4bf401299f50e971a45bfcc10a785522541a6e3179c83eabf0a"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:31e9a882013c2f6bd2f2c974241bf4ba68c85eba943648ce88936d23209a2e01"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:0a15438253b34e6362b2dc41475e7f80de76320f335e70c5528b7148cac253a1"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-win32.whl", hash = "sha256:6992030d43b916407c9aa52e9673612ff39a575523c5f4cf72cdef75365709a5"},
|
||||
{file = "lxml-5.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:da052e7962ea2d5e5ef5bc0355d55007407087392cf465b7ad84ce5f3e25fe0f"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:70ac664a48aa64e5e635ae5566f5227f2ab7f66a3990d67566d9907edcbbf867"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:1ae67b4e737cddc96c99461d2f75d218bdf7a0c3d3ad5604d1f5e7464a2f9ffe"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f18a5a84e16886898e51ab4b1d43acb3083c39b14c8caeb3589aabff0ee0b270"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c6f2c8372b98208ce609c9e1d707f6918cc118fea4e2c754c9f0812c04ca116d"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:394ed3924d7a01b5bd9a0d9d946136e1c2f7b3dc337196d99e61740ed4bc6fe1"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5d077bc40a1fe984e1a9931e801e42959a1e6598edc8a3223b061d30fbd26bbc"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:764b521b75701f60683500d8621841bec41a65eb739b8466000c6fdbc256c240"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:3a6b45da02336895da82b9d472cd274b22dc27a5cea1d4b793874eead23dd14f"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-manylinux_2_28_ppc64le.whl", hash = "sha256:5ea7b6766ac2dfe4bcac8b8595107665a18ef01f8c8343f00710b85096d1b53a"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-manylinux_2_28_s390x.whl", hash = "sha256:e196a4ff48310ba62e53a8e0f97ca2bca83cdd2fe2934d8b5cb0df0a841b193a"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:200e63525948e325d6a13a76ba2911f927ad399ef64f57898cf7c74e69b71095"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:dae0ed02f6b075426accbf6b2863c3d0a7eacc1b41fb40f2251d931e50188dad"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:ab31a88a651039a07a3ae327d68ebdd8bc589b16938c09ef3f32a4b809dc96ef"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:df2e6f546c4df14bc81f9498bbc007fbb87669f1bb707c6138878c46b06f6510"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5dd1537e7cc06efd81371f5d1a992bd5ab156b2b4f88834ca852de4a8ea523fa"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:9b9ec9c9978b708d488bec36b9e4c94d88fd12ccac3e62134a9d17ddba910ea9"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:8e77c69d5892cb5ba71703c4057091e31ccf534bd7f129307a4d084d90d014b8"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:a8d5c70e04aac1eda5c829a26d1f75c6e5286c74743133d9f742cda8e53b9c2f"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:c94e75445b00319c1fad60f3c98b09cd63fe1134a8a953dcd48989ef42318534"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-win32.whl", hash = "sha256:4951e4f7a5680a2db62f7f4ab2f84617674d36d2d76a729b9a8be4b59b3659be"},
|
||||
{file = "lxml-5.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:5c670c0406bdc845b474b680b9a5456c561c65cf366f8db5a60154088c92d102"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:abc25c3cab9ec7fcd299b9bcb3b8d4a1231877e425c650fa1c7576c5107ab851"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:6935bbf153f9a965f1e07c2649c0849d29832487c52bb4a5c5066031d8b44fd5"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d793bebb202a6000390a5390078e945bbb49855c29c7e4d56a85901326c3b5d9"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afd5562927cdef7c4f5550374acbc117fd4ecc05b5007bdfa57cc5355864e0a4"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0e7259016bc4345a31af861fdce942b77c99049d6c2107ca07dc2bba2435c1d9"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:530e7c04f72002d2f334d5257c8a51bf409db0316feee7c87e4385043be136af"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:59689a75ba8d7ffca577aefd017d08d659d86ad4585ccc73e43edbfc7476781a"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:f9737bf36262046213a28e789cc82d82c6ef19c85a0cf05e75c670a33342ac2c"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-manylinux_2_28_ppc64le.whl", hash = "sha256:3a74c4f27167cb95c1d4af1c0b59e88b7f3e0182138db2501c353555f7ec57f4"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-manylinux_2_28_s390x.whl", hash = "sha256:68a2610dbe138fa8c5826b3f6d98a7cfc29707b850ddcc3e21910a6fe51f6ca0"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:f0a1bc63a465b6d72569a9bba9f2ef0334c4e03958e043da1920299100bc7c08"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:c2d35a1d047efd68027817b32ab1586c1169e60ca02c65d428ae815b593e65d4"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:79bd05260359170f78b181b59ce871673ed01ba048deef4bf49a36ab3e72e80b"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:865bad62df277c04beed9478fe665b9ef63eb28fe026d5dedcb89b537d2e2ea6"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:44f6c7caff88d988db017b9b0e4ab04934f11e3e72d478031efc7edcac6c622f"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:71e97313406ccf55d32cc98a533ee05c61e15d11b99215b237346171c179c0b0"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:057cdc6b86ab732cf361f8b4d8af87cf195a1f6dc5b0ff3de2dced242c2015e0"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:f3bbbc998d42f8e561f347e798b85513ba4da324c2b3f9b7969e9c45b10f6169"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:491755202eb21a5e350dae00c6d9a17247769c64dcf62d8c788b5c135e179dc4"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-win32.whl", hash = "sha256:8de8f9d6caa7f25b204fc861718815d41cbcf27ee8f028c89c882a0cf4ae4134"},
|
||||
{file = "lxml-5.2.1-cp312-cp312-win_amd64.whl", hash = "sha256:f2a9efc53d5b714b8df2b4b3e992accf8ce5bbdfe544d74d5c6766c9e1146a3a"},
|
||||
{file = "lxml-5.2.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:70a9768e1b9d79edca17890175ba915654ee1725975d69ab64813dd785a2bd5c"},
|
||||
{file = "lxml-5.2.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c38d7b9a690b090de999835f0443d8aa93ce5f2064035dfc48f27f02b4afc3d0"},
|
||||
{file = "lxml-5.2.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5670fb70a828663cc37552a2a85bf2ac38475572b0e9b91283dc09efb52c41d1"},
|
||||
{file = "lxml-5.2.1-cp36-cp36m-manylinux_2_28_x86_64.whl", hash = "sha256:958244ad566c3ffc385f47dddde4145088a0ab893504b54b52c041987a8c1863"},
|
||||
{file = "lxml-5.2.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:b6241d4eee5f89453307c2f2bfa03b50362052ca0af1efecf9fef9a41a22bb4f"},
|
||||
{file = "lxml-5.2.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:2a66bf12fbd4666dd023b6f51223aed3d9f3b40fef06ce404cb75bafd3d89536"},
|
||||
{file = "lxml-5.2.1-cp36-cp36m-musllinux_1_1_ppc64le.whl", hash = "sha256:9123716666e25b7b71c4e1789ec829ed18663152008b58544d95b008ed9e21e9"},
|
||||
{file = "lxml-5.2.1-cp36-cp36m-musllinux_1_1_s390x.whl", hash = "sha256:0c3f67e2aeda739d1cc0b1102c9a9129f7dc83901226cc24dd72ba275ced4218"},
|
||||
{file = "lxml-5.2.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:5d5792e9b3fb8d16a19f46aa8208987cfeafe082363ee2745ea8b643d9cc5b45"},
|
||||
{file = "lxml-5.2.1-cp36-cp36m-musllinux_1_2_aarch64.whl", hash = "sha256:88e22fc0a6684337d25c994381ed8a1580a6f5ebebd5ad41f89f663ff4ec2885"},
|
||||
{file = "lxml-5.2.1-cp36-cp36m-musllinux_1_2_ppc64le.whl", hash = "sha256:21c2e6b09565ba5b45ae161b438e033a86ad1736b8c838c766146eff8ceffff9"},
|
||||
{file = "lxml-5.2.1-cp36-cp36m-musllinux_1_2_s390x.whl", hash = "sha256:afbbdb120d1e78d2ba8064a68058001b871154cc57787031b645c9142b937a62"},
|
||||
{file = "lxml-5.2.1-cp36-cp36m-musllinux_1_2_x86_64.whl", hash = "sha256:627402ad8dea044dde2eccde4370560a2b750ef894c9578e1d4f8ffd54000461"},
|
||||
{file = "lxml-5.2.1-cp36-cp36m-win32.whl", hash = "sha256:e89580a581bf478d8dcb97d9cd011d567768e8bc4095f8557b21c4d4c5fea7d0"},
|
||||
{file = "lxml-5.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:59565f10607c244bc4c05c0c5fa0c190c990996e0c719d05deec7030c2aa8289"},
|
||||
{file = "lxml-5.2.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:857500f88b17a6479202ff5fe5f580fc3404922cd02ab3716197adf1ef628029"},
|
||||
{file = "lxml-5.2.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:56c22432809085b3f3ae04e6e7bdd36883d7258fcd90e53ba7b2e463efc7a6af"},
|
||||
{file = "lxml-5.2.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a55ee573116ba208932e2d1a037cc4b10d2c1cb264ced2184d00b18ce585b2c0"},
|
||||
{file = "lxml-5.2.1-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:6cf58416653c5901e12624e4013708b6e11142956e7f35e7a83f1ab02f3fe456"},
|
||||
{file = "lxml-5.2.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:64c2baa7774bc22dd4474248ba16fe1a7f611c13ac6123408694d4cc93d66dbd"},
|
||||
{file = "lxml-5.2.1-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:74b28c6334cca4dd704e8004cba1955af0b778cf449142e581e404bd211fb619"},
|
||||
{file = "lxml-5.2.1-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:7221d49259aa1e5a8f00d3d28b1e0b76031655ca74bb287123ef56c3db92f213"},
|
||||
{file = "lxml-5.2.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:3dbe858ee582cbb2c6294dc85f55b5f19c918c2597855e950f34b660f1a5ede6"},
|
||||
{file = "lxml-5.2.1-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:04ab5415bf6c86e0518d57240a96c4d1fcfc3cb370bb2ac2a732b67f579e5a04"},
|
||||
{file = "lxml-5.2.1-cp37-cp37m-musllinux_1_2_ppc64le.whl", hash = "sha256:6ab833e4735a7e5533711a6ea2df26459b96f9eec36d23f74cafe03631647c41"},
|
||||
{file = "lxml-5.2.1-cp37-cp37m-musllinux_1_2_s390x.whl", hash = "sha256:f443cdef978430887ed55112b491f670bba6462cea7a7742ff8f14b7abb98d75"},
|
||||
{file = "lxml-5.2.1-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:9e2addd2d1866fe112bc6f80117bcc6bc25191c5ed1bfbcf9f1386a884252ae8"},
|
||||
{file = "lxml-5.2.1-cp37-cp37m-win32.whl", hash = "sha256:f51969bac61441fd31f028d7b3b45962f3ecebf691a510495e5d2cd8c8092dbd"},
|
||||
{file = "lxml-5.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:b0b58fbfa1bf7367dde8a557994e3b1637294be6cf2169810375caf8571a085c"},
|
||||
{file = "lxml-5.2.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:804f74efe22b6a227306dd890eecc4f8c59ff25ca35f1f14e7482bbce96ef10b"},
|
||||
{file = "lxml-5.2.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:08802f0c56ed150cc6885ae0788a321b73505d2263ee56dad84d200cab11c07a"},
|
||||
{file = "lxml-5.2.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0f8c09ed18ecb4ebf23e02b8e7a22a05d6411911e6fabef3a36e4f371f4f2585"},
|
||||
{file = "lxml-5.2.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e3d30321949861404323c50aebeb1943461a67cd51d4200ab02babc58bd06a86"},
|
||||
{file = "lxml-5.2.1-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:b560e3aa4b1d49e0e6c847d72665384db35b2f5d45f8e6a5c0072e0283430533"},
|
||||
{file = "lxml-5.2.1-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:058a1308914f20784c9f4674036527e7c04f7be6fb60f5d61353545aa7fcb739"},
|
||||
{file = "lxml-5.2.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:adfb84ca6b87e06bc6b146dc7da7623395db1e31621c4785ad0658c5028b37d7"},
|
||||
{file = "lxml-5.2.1-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:417d14450f06d51f363e41cace6488519038f940676ce9664b34ebf5653433a5"},
|
||||
{file = "lxml-5.2.1-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:a2dfe7e2473f9b59496247aad6e23b405ddf2e12ef0765677b0081c02d6c2c0b"},
|
||||
{file = "lxml-5.2.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:bf2e2458345d9bffb0d9ec16557d8858c9c88d2d11fed53998512504cd9df49b"},
|
||||
{file = "lxml-5.2.1-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:58278b29cb89f3e43ff3e0c756abbd1518f3ee6adad9e35b51fb101c1c1daaec"},
|
||||
{file = "lxml-5.2.1-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:64641a6068a16201366476731301441ce93457eb8452056f570133a6ceb15fca"},
|
||||
{file = "lxml-5.2.1-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:78bfa756eab503673991bdcf464917ef7845a964903d3302c5f68417ecdc948c"},
|
||||
{file = "lxml-5.2.1-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:11a04306fcba10cd9637e669fd73aa274c1c09ca64af79c041aa820ea992b637"},
|
||||
{file = "lxml-5.2.1-cp38-cp38-win32.whl", hash = "sha256:66bc5eb8a323ed9894f8fa0ee6cb3e3fb2403d99aee635078fd19a8bc7a5a5da"},
|
||||
{file = "lxml-5.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:9676bfc686fa6a3fa10cd4ae6b76cae8be26eb5ec6811d2a325636c460da1806"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:cf22b41fdae514ee2f1691b6c3cdeae666d8b7fa9434de445f12bbeee0cf48dd"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ec42088248c596dbd61d4ae8a5b004f97a4d91a9fd286f632e42e60b706718d7"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cd53553ddad4a9c2f1f022756ae64abe16da1feb497edf4d9f87f99ec7cf86bd"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:feaa45c0eae424d3e90d78823f3828e7dc42a42f21ed420db98da2c4ecf0a2cb"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ddc678fb4c7e30cf830a2b5a8d869538bc55b28d6c68544d09c7d0d8f17694dc"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:853e074d4931dbcba7480d4dcab23d5c56bd9607f92825ab80ee2bd916edea53"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cc4691d60512798304acb9207987e7b2b7c44627ea88b9d77489bbe3e6cc3bd4"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:beb72935a941965c52990f3a32d7f07ce869fe21c6af8b34bf6a277b33a345d3"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_28_ppc64le.whl", hash = "sha256:6588c459c5627fefa30139be4d2e28a2c2a1d0d1c265aad2ba1935a7863a4913"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_28_s390x.whl", hash = "sha256:588008b8497667f1ddca7c99f2f85ce8511f8f7871b4a06ceede68ab62dff64b"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:b6787b643356111dfd4032b5bffe26d2f8331556ecb79e15dacb9275da02866e"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:7c17b64b0a6ef4e5affae6a3724010a7a66bda48a62cfe0674dabd46642e8b54"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:27aa20d45c2e0b8cd05da6d4759649170e8dfc4f4e5ef33a34d06f2d79075d57"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:d4f2cc7060dc3646632d7f15fe68e2fa98f58e35dd5666cd525f3b35d3fed7f8"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ff46d772d5f6f73564979cd77a4fffe55c916a05f3cb70e7c9c0590059fb29ef"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:96323338e6c14e958d775700ec8a88346014a85e5de73ac7967db0367582049b"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:52421b41ac99e9d91934e4d0d0fe7da9f02bfa7536bb4431b4c05c906c8c6919"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:7a7efd5b6d3e30d81ec68ab8a88252d7c7c6f13aaa875009fe3097eb4e30b84c"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:0ed777c1e8c99b63037b91f9d73a6aad20fd035d77ac84afcc205225f8f41188"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-win32.whl", hash = "sha256:644df54d729ef810dcd0f7732e50e5ad1bd0a135278ed8d6bcb06f33b6b6f708"},
|
||||
{file = "lxml-5.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:9ca66b8e90daca431b7ca1408cae085d025326570e57749695d6a01454790e95"},
|
||||
{file = "lxml-5.2.1-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:9b0ff53900566bc6325ecde9181d89afadc59c5ffa39bddf084aaedfe3b06a11"},
|
||||
{file = "lxml-5.2.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fd6037392f2d57793ab98d9e26798f44b8b4da2f2464388588f48ac52c489ea1"},
|
||||
{file = "lxml-5.2.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8b9c07e7a45bb64e21df4b6aa623cb8ba214dfb47d2027d90eac197329bb5e94"},
|
||||
{file = "lxml-5.2.1-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:3249cc2989d9090eeac5467e50e9ec2d40704fea9ab72f36b034ea34ee65ca98"},
|
||||
{file = "lxml-5.2.1-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:f42038016852ae51b4088b2862126535cc4fc85802bfe30dea3500fdfaf1864e"},
|
||||
{file = "lxml-5.2.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:533658f8fbf056b70e434dff7e7aa611bcacb33e01f75de7f821810e48d1bb66"},
|
||||
{file = "lxml-5.2.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:622020d4521e22fb371e15f580d153134bfb68d6a429d1342a25f051ec72df1c"},
|
||||
{file = "lxml-5.2.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:efa7b51824aa0ee957ccd5a741c73e6851de55f40d807f08069eb4c5a26b2baa"},
|
||||
{file = "lxml-5.2.1-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9c6ad0fbf105f6bcc9300c00010a2ffa44ea6f555df1a2ad95c88f5656104817"},
|
||||
{file = "lxml-5.2.1-pp37-pypy37_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:e233db59c8f76630c512ab4a4daf5a5986da5c3d5b44b8e9fc742f2a24dbd460"},
|
||||
{file = "lxml-5.2.1-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:6a014510830df1475176466b6087fc0c08b47a36714823e58d8b8d7709132a96"},
|
||||
{file = "lxml-5.2.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:d38c8f50ecf57f0463399569aa388b232cf1a2ffb8f0a9a5412d0db57e054860"},
|
||||
{file = "lxml-5.2.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:5aea8212fb823e006b995c4dda533edcf98a893d941f173f6c9506126188860d"},
|
||||
{file = "lxml-5.2.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ff097ae562e637409b429a7ac958a20aab237a0378c42dabaa1e3abf2f896e5f"},
|
||||
{file = "lxml-5.2.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0f5d65c39f16717a47c36c756af0fb36144069c4718824b7533f803ecdf91138"},
|
||||
{file = "lxml-5.2.1-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:3d0c3dd24bb4605439bf91068598d00c6370684f8de4a67c2992683f6c309d6b"},
|
||||
{file = "lxml-5.2.1-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e32be23d538753a8adb6c85bd539f5fd3b15cb987404327c569dfc5fd8366e85"},
|
||||
{file = "lxml-5.2.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:cc518cea79fd1e2f6c90baafa28906d4309d24f3a63e801d855e7424c5b34144"},
|
||||
{file = "lxml-5.2.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a0af35bd8ebf84888373630f73f24e86bf016642fb8576fba49d3d6b560b7cbc"},
|
||||
{file = "lxml-5.2.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8aca2e3a72f37bfc7b14ba96d4056244001ddcc18382bd0daa087fd2e68a354"},
|
||||
{file = "lxml-5.2.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5ca1e8188b26a819387b29c3895c47a5e618708fe6f787f3b1a471de2c4a94d9"},
|
||||
{file = "lxml-5.2.1-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:c8ba129e6d3b0136a0f50345b2cb3db53f6bda5dd8c7f5d83fbccba97fb5dcb5"},
|
||||
{file = "lxml-5.2.1-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e998e304036198b4f6914e6a1e2b6f925208a20e2042563d9734881150c6c246"},
|
||||
{file = "lxml-5.2.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:d3be9b2076112e51b323bdf6d5a7f8a798de55fb8d95fcb64bd179460cdc0704"},
|
||||
{file = "lxml-5.2.1.tar.gz", hash = "sha256:3f7765e69bbce0906a7c74d5fe46d2c7a7596147318dbc08e4a2431f3060e306"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:364d03207f3e603922d0d3932ef363d55bbf48e3647395765f9bfcbdf6d23632"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:50127c186f191b8917ea2fb8b206fbebe87fd414a6084d15568c27d0a21d60db"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:74e4f025ef3db1c6da4460dd27c118d8cd136d0391da4e387a15e48e5c975147"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:981a06a3076997adf7c743dcd0d7a0415582661e2517c7d961493572e909aa1d"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:aef5474d913d3b05e613906ba4090433c515e13ea49c837aca18bde190853dff"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1e275ea572389e41e8b039ac076a46cb87ee6b8542df3fff26f5baab43713bca"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f5b65529bb2f21ac7861a0e94fdbf5dc0daab41497d18223b46ee8515e5ad297"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:bcc98f911f10278d1daf14b87d65325851a1d29153caaf146877ec37031d5f36"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_28_ppc64le.whl", hash = "sha256:b47633251727c8fe279f34025844b3b3a3e40cd1b198356d003aa146258d13a2"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_28_s390x.whl", hash = "sha256:fbc9d316552f9ef7bba39f4edfad4a734d3d6f93341232a9dddadec4f15d425f"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:13e69be35391ce72712184f69000cda04fc89689429179bc4c0ae5f0b7a8c21b"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:3b6a30a9ab040b3f545b697cb3adbf3696c05a3a68aad172e3fd7ca73ab3c835"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:a233bb68625a85126ac9f1fc66d24337d6e8a0f9207b688eec2e7c880f012ec0"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:dfa7c241073d8f2b8e8dbc7803c434f57dbb83ae2a3d7892dd068d99e96efe2c"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:1a7aca7964ac4bb07680d5c9d63b9d7028cace3e2d43175cb50bba8c5ad33316"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:ae4073a60ab98529ab8a72ebf429f2a8cc612619a8c04e08bed27450d52103c0"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:ffb2be176fed4457e445fe540617f0252a72a8bc56208fd65a690fdb1f57660b"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:e290d79a4107d7d794634ce3e985b9ae4f920380a813717adf61804904dc4393"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:96e85aa09274955bb6bd483eaf5b12abadade01010478154b0ec70284c1b1526"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-win32.whl", hash = "sha256:f956196ef61369f1685d14dad80611488d8dc1ef00be57c0c5a03064005b0f30"},
|
||||
{file = "lxml-5.2.2-cp310-cp310-win_amd64.whl", hash = "sha256:875a3f90d7eb5c5d77e529080d95140eacb3c6d13ad5b616ee8095447b1d22e7"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:45f9494613160d0405682f9eee781c7e6d1bf45f819654eb249f8f46a2c22545"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b0b3f2df149efb242cee2ffdeb6674b7f30d23c9a7af26595099afaf46ef4e88"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d28cb356f119a437cc58a13f8135ab8a4c8ece18159eb9194b0d269ec4e28083"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:657a972f46bbefdbba2d4f14413c0d079f9ae243bd68193cb5061b9732fa54c1"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b74b9ea10063efb77a965a8d5f4182806fbf59ed068b3c3fd6f30d2ac7bee734"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:07542787f86112d46d07d4f3c4e7c760282011b354d012dc4141cc12a68cef5f"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:303f540ad2dddd35b92415b74b900c749ec2010e703ab3bfd6660979d01fd4ed"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:2eb2227ce1ff998faf0cd7fe85bbf086aa41dfc5af3b1d80867ecfe75fb68df3"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_28_ppc64le.whl", hash = "sha256:1d8a701774dfc42a2f0b8ccdfe7dbc140500d1049e0632a611985d943fcf12df"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_28_s390x.whl", hash = "sha256:56793b7a1a091a7c286b5f4aa1fe4ae5d1446fe742d00cdf2ffb1077865db10d"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:eb00b549b13bd6d884c863554566095bf6fa9c3cecb2e7b399c4bc7904cb33b5"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:1a2569a1f15ae6c8c64108a2cd2b4a858fc1e13d25846be0666fc144715e32ab"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:8cf85a6e40ff1f37fe0f25719aadf443686b1ac7652593dc53c7ef9b8492b115"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:d237ba6664b8e60fd90b8549a149a74fcc675272e0e95539a00522e4ca688b04"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:0b3f5016e00ae7630a4b83d0868fca1e3d494c78a75b1c7252606a3a1c5fc2ad"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:23441e2b5339bc54dc949e9e675fa35efe858108404ef9aa92f0456929ef6fe8"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:2fb0ba3e8566548d6c8e7dd82a8229ff47bd8fb8c2da237607ac8e5a1b8312e5"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:79d1fb9252e7e2cfe4de6e9a6610c7cbb99b9708e2c3e29057f487de5a9eaefa"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:6dcc3d17eac1df7859ae01202e9bb11ffa8c98949dcbeb1069c8b9a75917e01b"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-win32.whl", hash = "sha256:4c30a2f83677876465f44c018830f608fa3c6a8a466eb223535035fbc16f3438"},
|
||||
{file = "lxml-5.2.2-cp311-cp311-win_amd64.whl", hash = "sha256:49095a38eb333aaf44c06052fd2ec3b8f23e19747ca7ec6f6c954ffea6dbf7be"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:7429e7faa1a60cad26ae4227f4dd0459efde239e494c7312624ce228e04f6391"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:50ccb5d355961c0f12f6cf24b7187dbabd5433f29e15147a67995474f27d1776"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dc911208b18842a3a57266d8e51fc3cfaccee90a5351b92079beed912a7914c2"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:33ce9e786753743159799fdf8e92a5da351158c4bfb6f2db0bf31e7892a1feb5"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ec87c44f619380878bd49ca109669c9f221d9ae6883a5bcb3616785fa8f94c97"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:08ea0f606808354eb8f2dfaac095963cb25d9d28e27edcc375d7b30ab01abbf6"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:75a9632f1d4f698b2e6e2e1ada40e71f369b15d69baddb8968dcc8e683839b18"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:74da9f97daec6928567b48c90ea2c82a106b2d500f397eeb8941e47d30b1ca85"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_28_ppc64le.whl", hash = "sha256:0969e92af09c5687d769731e3f39ed62427cc72176cebb54b7a9d52cc4fa3b73"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_28_s390x.whl", hash = "sha256:9164361769b6ca7769079f4d426a41df6164879f7f3568be9086e15baca61466"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:d26a618ae1766279f2660aca0081b2220aca6bd1aa06b2cf73f07383faf48927"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:ab67ed772c584b7ef2379797bf14b82df9aa5f7438c5b9a09624dd834c1c1aaf"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:3d1e35572a56941b32c239774d7e9ad724074d37f90c7a7d499ab98761bd80cf"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:8268cbcd48c5375f46e000adb1390572c98879eb4f77910c6053d25cc3ac2c67"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:e282aedd63c639c07c3857097fc0e236f984ceb4089a8b284da1c526491e3f3d"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:6dfdc2bfe69e9adf0df4915949c22a25b39d175d599bf98e7ddf620a13678585"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:4aefd911793b5d2d7a921233a54c90329bf3d4a6817dc465f12ffdfe4fc7b8fe"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:8b8df03a9e995b6211dafa63b32f9d405881518ff1ddd775db4e7b98fb545e1c"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:f11ae142f3a322d44513de1018b50f474f8f736bc3cd91d969f464b5bfef8836"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-win32.whl", hash = "sha256:16a8326e51fcdffc886294c1e70b11ddccec836516a343f9ed0f82aac043c24a"},
|
||||
{file = "lxml-5.2.2-cp312-cp312-win_amd64.whl", hash = "sha256:bbc4b80af581e18568ff07f6395c02114d05f4865c2812a1f02f2eaecf0bfd48"},
|
||||
{file = "lxml-5.2.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:e3d9d13603410b72787579769469af730c38f2f25505573a5888a94b62b920f8"},
|
||||
{file = "lxml-5.2.2-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:38b67afb0a06b8575948641c1d6d68e41b83a3abeae2ca9eed2ac59892b36706"},
|
||||
{file = "lxml-5.2.2-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c689d0d5381f56de7bd6966a4541bff6e08bf8d3871bbd89a0c6ab18aa699573"},
|
||||
{file = "lxml-5.2.2-cp36-cp36m-manylinux_2_28_x86_64.whl", hash = "sha256:cf2a978c795b54c539f47964ec05e35c05bd045db5ca1e8366988c7f2fe6b3ce"},
|
||||
{file = "lxml-5.2.2-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:739e36ef7412b2bd940f75b278749106e6d025e40027c0b94a17ef7968d55d56"},
|
||||
{file = "lxml-5.2.2-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d8bbcd21769594dbba9c37d3c819e2d5847656ca99c747ddb31ac1701d0c0ed9"},
|
||||
{file = "lxml-5.2.2-cp36-cp36m-musllinux_1_2_x86_64.whl", hash = "sha256:2304d3c93f2258ccf2cf7a6ba8c761d76ef84948d87bf9664e14d203da2cd264"},
|
||||
{file = "lxml-5.2.2-cp36-cp36m-win32.whl", hash = "sha256:02437fb7308386867c8b7b0e5bc4cd4b04548b1c5d089ffb8e7b31009b961dc3"},
|
||||
{file = "lxml-5.2.2-cp36-cp36m-win_amd64.whl", hash = "sha256:edcfa83e03370032a489430215c1e7783128808fd3e2e0a3225deee278585196"},
|
||||
{file = "lxml-5.2.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:28bf95177400066596cdbcfc933312493799382879da504633d16cf60bba735b"},
|
||||
{file = "lxml-5.2.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3a745cc98d504d5bd2c19b10c79c61c7c3df9222629f1b6210c0368177589fb8"},
|
||||
{file = "lxml-5.2.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1b590b39ef90c6b22ec0be925b211298e810b4856909c8ca60d27ffbca6c12e6"},
|
||||
{file = "lxml-5.2.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b336b0416828022bfd5a2e3083e7f5ba54b96242159f83c7e3eebaec752f1716"},
|
||||
{file = "lxml-5.2.2-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:c2faf60c583af0d135e853c86ac2735ce178f0e338a3c7f9ae8f622fd2eb788c"},
|
||||
{file = "lxml-5.2.2-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:4bc6cb140a7a0ad1f7bc37e018d0ed690b7b6520ade518285dc3171f7a117905"},
|
||||
{file = "lxml-5.2.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7ff762670cada8e05b32bf1e4dc50b140790909caa8303cfddc4d702b71ea184"},
|
||||
{file = "lxml-5.2.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:57f0a0bbc9868e10ebe874e9f129d2917750adf008fe7b9c1598c0fbbfdde6a6"},
|
||||
{file = "lxml-5.2.2-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:a6d2092797b388342c1bc932077ad232f914351932353e2e8706851c870bca1f"},
|
||||
{file = "lxml-5.2.2-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:60499fe961b21264e17a471ec296dcbf4365fbea611bf9e303ab69db7159ce61"},
|
||||
{file = "lxml-5.2.2-cp37-cp37m-win32.whl", hash = "sha256:d9b342c76003c6b9336a80efcc766748a333573abf9350f4094ee46b006ec18f"},
|
||||
{file = "lxml-5.2.2-cp37-cp37m-win_amd64.whl", hash = "sha256:b16db2770517b8799c79aa80f4053cd6f8b716f21f8aca962725a9565ce3ee40"},
|
||||
{file = "lxml-5.2.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7ed07b3062b055d7a7f9d6557a251cc655eed0b3152b76de619516621c56f5d3"},
|
||||
{file = "lxml-5.2.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f60fdd125d85bf9c279ffb8e94c78c51b3b6a37711464e1f5f31078b45002421"},
|
||||
{file = "lxml-5.2.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8a7e24cb69ee5f32e003f50e016d5fde438010c1022c96738b04fc2423e61706"},
|
||||
{file = "lxml-5.2.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:23cfafd56887eaed93d07bc4547abd5e09d837a002b791e9767765492a75883f"},
|
||||
{file = "lxml-5.2.2-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:19b4e485cd07b7d83e3fe3b72132e7df70bfac22b14fe4bf7a23822c3a35bff5"},
|
||||
{file = "lxml-5.2.2-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:7ce7ad8abebe737ad6143d9d3bf94b88b93365ea30a5b81f6877ec9c0dee0a48"},
|
||||
{file = "lxml-5.2.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:e49b052b768bb74f58c7dda4e0bdf7b79d43a9204ca584ffe1fb48a6f3c84c66"},
|
||||
{file = "lxml-5.2.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:d14a0d029a4e176795cef99c056d58067c06195e0c7e2dbb293bf95c08f772a3"},
|
||||
{file = "lxml-5.2.2-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:be49ad33819d7dcc28a309b86d4ed98e1a65f3075c6acd3cd4fe32103235222b"},
|
||||
{file = "lxml-5.2.2-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:a6d17e0370d2516d5bb9062c7b4cb731cff921fc875644c3d751ad857ba9c5b1"},
|
||||
{file = "lxml-5.2.2-cp38-cp38-win32.whl", hash = "sha256:5b8c041b6265e08eac8a724b74b655404070b636a8dd6d7a13c3adc07882ef30"},
|
||||
{file = "lxml-5.2.2-cp38-cp38-win_amd64.whl", hash = "sha256:f61efaf4bed1cc0860e567d2ecb2363974d414f7f1f124b1df368bbf183453a6"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:fb91819461b1b56d06fa4bcf86617fac795f6a99d12239fb0c68dbeba41a0a30"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d4ed0c7cbecde7194cd3228c044e86bf73e30a23505af852857c09c24e77ec5d"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:54401c77a63cc7d6dc4b4e173bb484f28a5607f3df71484709fe037c92d4f0ed"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:625e3ef310e7fa3a761d48ca7ea1f9d8718a32b1542e727d584d82f4453d5eeb"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:519895c99c815a1a24a926d5b60627ce5ea48e9f639a5cd328bda0515ea0f10c"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c7079d5eb1c1315a858bbf180000757db8ad904a89476653232db835c3114001"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:343ab62e9ca78094f2306aefed67dcfad61c4683f87eee48ff2fd74902447726"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:cd9e78285da6c9ba2d5c769628f43ef66d96ac3085e59b10ad4f3707980710d3"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_28_ppc64le.whl", hash = "sha256:546cf886f6242dff9ec206331209db9c8e1643ae642dea5fdbecae2453cb50fd"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_28_s390x.whl", hash = "sha256:02f6a8eb6512fdc2fd4ca10a49c341c4e109aa6e9448cc4859af5b949622715a"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:339ee4a4704bc724757cd5dd9dc8cf4d00980f5d3e6e06d5847c1b594ace68ab"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0a028b61a2e357ace98b1615fc03f76eb517cc028993964fe08ad514b1e8892d"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:f90e552ecbad426eab352e7b2933091f2be77115bb16f09f78404861c8322981"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:d83e2d94b69bf31ead2fa45f0acdef0757fa0458a129734f59f67f3d2eb7ef32"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a02d3c48f9bb1e10c7788d92c0c7db6f2002d024ab6e74d6f45ae33e3d0288a3"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:6d68ce8e7b2075390e8ac1e1d3a99e8b6372c694bbe612632606d1d546794207"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:453d037e09a5176d92ec0fd282e934ed26d806331a8b70ab431a81e2fbabf56d"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:3b019d4ee84b683342af793b56bb35034bd749e4cbdd3d33f7d1107790f8c472"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:cb3942960f0beb9f46e2a71a3aca220d1ca32feb5a398656be934320804c0df9"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-win32.whl", hash = "sha256:ac6540c9fff6e3813d29d0403ee7a81897f1d8ecc09a8ff84d2eea70ede1cdbf"},
|
||||
{file = "lxml-5.2.2-cp39-cp39-win_amd64.whl", hash = "sha256:610b5c77428a50269f38a534057444c249976433f40f53e3b47e68349cca1425"},
|
||||
{file = "lxml-5.2.2-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:b537bd04d7ccd7c6350cdaaaad911f6312cbd61e6e6045542f781c7f8b2e99d2"},
|
||||
{file = "lxml-5.2.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4820c02195d6dfb7b8508ff276752f6b2ff8b64ae5d13ebe02e7667e035000b9"},
|
||||
{file = "lxml-5.2.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2a09f6184f17a80897172863a655467da2b11151ec98ba8d7af89f17bf63dae"},
|
||||
{file = "lxml-5.2.2-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:76acba4c66c47d27c8365e7c10b3d8016a7da83d3191d053a58382311a8bf4e1"},
|
||||
{file = "lxml-5.2.2-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:b128092c927eaf485928cec0c28f6b8bead277e28acf56800e972aa2c2abd7a2"},
|
||||
{file = "lxml-5.2.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:ae791f6bd43305aade8c0e22f816b34f3b72b6c820477aab4d18473a37e8090b"},
|
||||
{file = "lxml-5.2.2-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a2f6a1bc2460e643785a2cde17293bd7a8f990884b822f7bca47bee0a82fc66b"},
|
||||
{file = "lxml-5.2.2-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e8d351ff44c1638cb6e980623d517abd9f580d2e53bfcd18d8941c052a5a009"},
|
||||
{file = "lxml-5.2.2-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bec4bd9133420c5c52d562469c754f27c5c9e36ee06abc169612c959bd7dbb07"},
|
||||
{file = "lxml-5.2.2-pp37-pypy37_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:55ce6b6d803890bd3cc89975fca9de1dff39729b43b73cb15ddd933b8bc20484"},
|
||||
{file = "lxml-5.2.2-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:8ab6a358d1286498d80fe67bd3d69fcbc7d1359b45b41e74c4a26964ca99c3f8"},
|
||||
{file = "lxml-5.2.2-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:06668e39e1f3c065349c51ac27ae430719d7806c026fec462e5693b08b95696b"},
|
||||
{file = "lxml-5.2.2-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:9cd5323344d8ebb9fb5e96da5de5ad4ebab993bbf51674259dbe9d7a18049525"},
|
||||
{file = "lxml-5.2.2-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:89feb82ca055af0fe797a2323ec9043b26bc371365847dbe83c7fd2e2f181c34"},
|
||||
{file = "lxml-5.2.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e481bba1e11ba585fb06db666bfc23dbe181dbafc7b25776156120bf12e0d5a6"},
|
||||
{file = "lxml-5.2.2-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:9d6c6ea6a11ca0ff9cd0390b885984ed31157c168565702959c25e2191674a14"},
|
||||
{file = "lxml-5.2.2-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:3d98de734abee23e61f6b8c2e08a88453ada7d6486dc7cdc82922a03968928db"},
|
||||
{file = "lxml-5.2.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:69ab77a1373f1e7563e0fb5a29a8440367dec051da6c7405333699d07444f511"},
|
||||
{file = "lxml-5.2.2-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:34e17913c431f5ae01d8658dbf792fdc457073dcdfbb31dc0cc6ab256e664a8d"},
|
||||
{file = "lxml-5.2.2-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:05f8757b03208c3f50097761be2dea0aba02e94f0dc7023ed73a7bb14ff11eb0"},
|
||||
{file = "lxml-5.2.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a520b4f9974b0a0a6ed73c2154de57cdfd0c8800f4f15ab2b73238ffed0b36e"},
|
||||
{file = "lxml-5.2.2-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:5e097646944b66207023bc3c634827de858aebc226d5d4d6d16f0b77566ea182"},
|
||||
{file = "lxml-5.2.2-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:b5e4ef22ff25bfd4ede5f8fb30f7b24446345f3e79d9b7455aef2836437bc38a"},
|
||||
{file = "lxml-5.2.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:ff69a9a0b4b17d78170c73abe2ab12084bdf1691550c5629ad1fe7849433f324"},
|
||||
{file = "lxml-5.2.2.tar.gz", hash = "sha256:bb2dc4898180bea79863d5487e5f9c7c34297414bad54bcd0f0852aee9cfdb87"},
|
||||
]
|
||||
|
||||
[package.extras]
|
||||
@@ -1454,17 +1442,17 @@ files = [
|
||||
|
||||
[[package]]
|
||||
name = "mypy-zope"
|
||||
version = "1.0.3"
|
||||
version = "1.0.4"
|
||||
description = "Plugin for mypy to support zope interfaces"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
files = [
|
||||
{file = "mypy-zope-1.0.3.tar.gz", hash = "sha256:149081bd2754d947747baefac569bb1c2bc127b4a2cc1fa505492336946bb3b4"},
|
||||
{file = "mypy_zope-1.0.3-py3-none-any.whl", hash = "sha256:7a30ce1a2589173f0be66662c9a9179f75737afc40e4104df4c76fb5a8421c14"},
|
||||
{file = "mypy-zope-1.0.4.tar.gz", hash = "sha256:a9569e73ae85a65247787d98590fa6d4290e76f26aabe035d1c3e94a0b9ab6ee"},
|
||||
{file = "mypy_zope-1.0.4-py3-none-any.whl", hash = "sha256:c7298f93963a84f2b145c2b5cc98709fc2a5be4adf54bfe23fa7fdd8fd19c975"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
mypy = ">=1.0.0,<1.9.0"
|
||||
mypy = ">=1.0.0,<1.10.0"
|
||||
"zope.interface" = "*"
|
||||
"zope.schema" = "*"
|
||||
|
||||
@@ -1536,13 +1524,13 @@ files = [
|
||||
|
||||
[[package]]
|
||||
name = "phonenumbers"
|
||||
version = "8.13.35"
|
||||
version = "8.13.37"
|
||||
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
files = [
|
||||
{file = "phonenumbers-8.13.35-py2.py3-none-any.whl", hash = "sha256:58286a8e617bd75f541e04313b28c36398be6d4443a778c85e9617a93c391310"},
|
||||
{file = "phonenumbers-8.13.35.tar.gz", hash = "sha256:64f061a967dcdae11e1c59f3688649e697b897110a33bb74d5a69c3e35321245"},
|
||||
{file = "phonenumbers-8.13.37-py2.py3-none-any.whl", hash = "sha256:4ea00ef5012422c08c7955c21131e7ae5baa9a3ef52cf2d561e963f023006b80"},
|
||||
{file = "phonenumbers-8.13.37.tar.gz", hash = "sha256:bd315fed159aea0516f7c367231810fe8344d5bec26156b88fa18374c11d1cf2"},
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@@ -1673,13 +1661,13 @@ test = ["appdirs (==1.4.4)", "covdefaults (>=2.2.2)", "pytest (>=7.2.1)", "pytes
|
||||
|
||||
[[package]]
|
||||
name = "prometheus-client"
|
||||
version = "0.19.0"
|
||||
version = "0.20.0"
|
||||
description = "Python client for the Prometheus monitoring system."
|
||||
optional = false
|
||||
python-versions = ">=3.8"
|
||||
files = [
|
||||
{file = "prometheus_client-0.19.0-py3-none-any.whl", hash = "sha256:c88b1e6ecf6b41cd8fb5731c7ae919bf66df6ec6fafa555cd6c0e16ca169ae92"},
|
||||
{file = "prometheus_client-0.19.0.tar.gz", hash = "sha256:4585b0d1223148c27a225b10dbec5ae9bc4c81a99a3fa80774fa6209935324e1"},
|
||||
{file = "prometheus_client-0.20.0-py3-none-any.whl", hash = "sha256:cde524a85bce83ca359cc837f28b8c0db5cac7aa653a588fd7e84ba061c329e7"},
|
||||
{file = "prometheus_client-0.20.0.tar.gz", hash = "sha256:287629d00b147a32dcb2be0b9df905da599b2d82f80377083ec8463309a4bb89"},
|
||||
]
|
||||
|
||||
[package.extras]
|
||||
@@ -1915,12 +1903,12 @@ plugins = ["importlib-metadata"]
|
||||
|
||||
[[package]]
|
||||
name = "pyicu"
|
||||
version = "2.13"
|
||||
version = "2.13.1"
|
||||
description = "Python extension wrapping the ICU C++ API"
|
||||
optional = true
|
||||
python-versions = "*"
|
||||
files = [
|
||||
{file = "PyICU-2.13.tar.gz", hash = "sha256:d481be888975df3097c2790241bbe8518f65c9676a74957cdbe790e559c828f6"},
|
||||
{file = "PyICU-2.13.1.tar.gz", hash = "sha256:d4919085eaa07da12bade8ee721e7bbf7ade0151ca0f82946a26c8f4b98cdceb"},
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@@ -1997,13 +1985,13 @@ tests = ["hypothesis (>=3.27.0)", "pytest (>=3.2.1,!=3.3.0)"]
|
||||
|
||||
[[package]]
|
||||
name = "pyopenssl"
|
||||
version = "24.0.0"
|
||||
version = "24.1.0"
|
||||
description = "Python wrapper module around the OpenSSL library"
|
||||
optional = false
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "pyOpenSSL-24.0.0-py3-none-any.whl", hash = "sha256:ba07553fb6fd6a7a2259adb9b84e12302a9a8a75c44046e8bb5d3e5ee887e3c3"},
|
||||
{file = "pyOpenSSL-24.0.0.tar.gz", hash = "sha256:6aa33039a93fffa4563e655b61d11364d01264be8ccb49906101e02a334530bf"},
|
||||
{file = "pyOpenSSL-24.1.0-py3-none-any.whl", hash = "sha256:17ed5be5936449c5418d1cd269a1a9e9081bc54c17aed272b45856a3d3dc86ad"},
|
||||
{file = "pyOpenSSL-24.1.0.tar.gz", hash = "sha256:cabed4bfaa5df9f1a16c0ef64a0cb65318b5cd077a7eda7d6970131ca2f41a6f"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
@@ -2011,7 +1999,7 @@ cryptography = ">=41.0.5,<43"
|
||||
|
||||
[package.extras]
|
||||
docs = ["sphinx (!=5.2.0,!=5.2.0.post0,!=7.2.5)", "sphinx-rtd-theme"]
|
||||
test = ["flaky", "pretend", "pytest (>=3.0.1)"]
|
||||
test = ["pretend", "pytest (>=3.0.1)", "pytest-rerunfailures"]
|
||||
|
||||
[[package]]
|
||||
name = "pysaml2"
|
||||
@@ -2399,13 +2387,13 @@ doc = ["Sphinx", "sphinx-rtd-theme"]
|
||||
|
||||
[[package]]
|
||||
name = "sentry-sdk"
|
||||
version = "2.1.1"
|
||||
version = "2.3.1"
|
||||
description = "Python client for Sentry (https://sentry.io)"
|
||||
optional = true
|
||||
python-versions = ">=3.6"
|
||||
files = [
|
||||
{file = "sentry_sdk-2.1.1-py2.py3-none-any.whl", hash = "sha256:99aeb78fb76771513bd3b2829d12613130152620768d00cd3e45ac00cb17950f"},
|
||||
{file = "sentry_sdk-2.1.1.tar.gz", hash = "sha256:95d8c0bb41c8b0bc37ab202c2c4a295bb84398ee05f4cdce55051cd75b926ec1"},
|
||||
{file = "sentry_sdk-2.3.1-py2.py3-none-any.whl", hash = "sha256:c5aeb095ba226391d337dd42a6f9470d86c9fc236ecc71cfc7cd1942b45010c6"},
|
||||
{file = "sentry_sdk-2.3.1.tar.gz", hash = "sha256:139a71a19f5e9eb5d3623942491ce03cf8ebc14ea2e39ba3e6fe79560d8a5b1f"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
@@ -2427,7 +2415,7 @@ django = ["django (>=1.8)"]
|
||||
falcon = ["falcon (>=1.4)"]
|
||||
fastapi = ["fastapi (>=0.79.0)"]
|
||||
flask = ["blinker (>=1.1)", "flask (>=0.11)", "markupsafe"]
|
||||
grpcio = ["grpcio (>=1.21.1)"]
|
||||
grpcio = ["grpcio (>=1.21.1)", "protobuf (>=3.8.0)"]
|
||||
httpx = ["httpx (>=0.16.0)"]
|
||||
huey = ["huey (>=2)"]
|
||||
huggingface-hub = ["huggingface-hub (>=0.22)"]
|
||||
@@ -2782,6 +2770,20 @@ files = [
|
||||
[package.dependencies]
|
||||
types-html5lib = "*"
|
||||
|
||||
[[package]]
|
||||
name = "types-cffi"
|
||||
version = "1.16.0.20240331"
|
||||
description = "Typing stubs for cffi"
|
||||
optional = false
|
||||
python-versions = ">=3.8"
|
||||
files = [
|
||||
{file = "types-cffi-1.16.0.20240331.tar.gz", hash = "sha256:b8b20d23a2b89cfed5f8c5bc53b0cb8677c3aac6d970dbc771e28b9c698f5dee"},
|
||||
{file = "types_cffi-1.16.0.20240331-py3-none-any.whl", hash = "sha256:a363e5ea54a4eb6a4a105d800685fde596bc318089b025b27dee09849fe41ff0"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
types-setuptools = "*"
|
||||
|
||||
[[package]]
|
||||
name = "types-commonmark"
|
||||
version = "0.9.2.20240106"
|
||||
@@ -2864,17 +2866,18 @@ files = [
|
||||
|
||||
[[package]]
|
||||
name = "types-pyopenssl"
|
||||
version = "24.0.0.20240311"
|
||||
version = "24.1.0.20240425"
|
||||
description = "Typing stubs for pyOpenSSL"
|
||||
optional = false
|
||||
python-versions = ">=3.8"
|
||||
files = [
|
||||
{file = "types-pyOpenSSL-24.0.0.20240311.tar.gz", hash = "sha256:7bca00cfc4e7ef9c5d2663c6a1c068c35798e59670595439f6296e7ba3d58083"},
|
||||
{file = "types_pyOpenSSL-24.0.0.20240311-py3-none-any.whl", hash = "sha256:6e8e8bfad34924067333232c93f7fc4b369856d8bea0d5c9d1808cb290ab1972"},
|
||||
{file = "types-pyOpenSSL-24.1.0.20240425.tar.gz", hash = "sha256:0a7e82626c1983dc8dc59292bf20654a51c3c3881bcbb9b337c1da6e32f0204e"},
|
||||
{file = "types_pyOpenSSL-24.1.0.20240425-py3-none-any.whl", hash = "sha256:f51a156835555dd2a1f025621e8c4fbe7493470331afeef96884d1d29bf3a473"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
cryptography = ">=35.0.0"
|
||||
types-cffi = "*"
|
||||
|
||||
[[package]]
|
||||
name = "types-pyyaml"
|
||||
@@ -3184,4 +3187,4 @@ user-search = ["pyicu"]
|
||||
[metadata]
|
||||
lock-version = "2.0"
|
||||
python-versions = "^3.8.0"
|
||||
content-hash = "987f8eccaa222367b1a2e15b0d496586ca50d46ca1277e69694922d31c93ce5b"
|
||||
content-hash = "107c8fb5c67360340854fbdba3c085fc5f9c7be24bcb592596a914eea621faea"
|
||||
|
||||
@@ -96,7 +96,7 @@ module-name = "synapse.synapse_rust"

[tool.poetry]
name = "matrix-synapse"
version = "1.108.0rc1"
version = "1.109.0rc1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"
@@ -200,10 +200,8 @@ netaddr = ">=0.7.18"
# add a lower bound to the Jinja2 dependency.
Jinja2 = ">=3.0"
bleach = ">=1.4.3"
# We use `ParamSpec` and `Concatenate`, which were added in `typing-extensions` 3.10.0.0.
# Additionally we need https://github.com/python/typing/pull/817 to allow types to be
# generic over ParamSpecs.
typing-extensions = ">=3.10.0.1"
# We use `Self`, which were added in `typing-extensions` 4.0.
typing-extensions = ">=4.0"
# We enforce that we have a `cryptography` version that bundles an `openssl`
# with the latest security patches.
cryptography = ">=3.4.7"

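For context on the `typing-extensions` bump above: `Self` is exported by `typing_extensions` starting with 4.0, hence the raised lower bound. A minimal, illustrative sketch (the `Builder` class is invented for this example, not Synapse code):

```python
from typing import List

from typing_extensions import Self  # available from typing-extensions 4.0


class Builder:
    """Illustrative builder whose chained methods return the subclass type."""

    def __init__(self) -> None:
        self.parts: List[str] = []

    def add(self, part: str) -> Self:
        # `Self` keeps the return type accurate for subclasses, which a
        # hard-coded "Builder" annotation would not.
        self.parts.append(part)
        return self
```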
@@ -777,22 +777,74 @@ class Porter:
|
||||
await self._setup_events_stream_seqs()
|
||||
await self._setup_sequence(
|
||||
"un_partial_stated_event_stream_sequence",
|
||||
("un_partial_stated_event_stream",),
|
||||
[("un_partial_stated_event_stream", "stream_id")],
|
||||
)
|
||||
await self._setup_sequence(
|
||||
"device_inbox_sequence", ("device_inbox", "device_federation_outbox")
|
||||
"device_inbox_sequence",
|
||||
[
|
||||
("device_inbox", "stream_id"),
|
||||
("device_federation_outbox", "stream_id"),
|
||||
],
|
||||
)
|
||||
await self._setup_sequence(
|
||||
"account_data_sequence",
|
||||
("room_account_data", "room_tags_revisions", "account_data"),
|
||||
[
|
||||
("room_account_data", "stream_id"),
|
||||
("room_tags_revisions", "stream_id"),
|
||||
("account_data", "stream_id"),
|
||||
],
|
||||
)
|
||||
await self._setup_sequence(
|
||||
"receipts_sequence",
|
||||
[
|
||||
("receipts_linearized", "stream_id"),
|
||||
],
|
||||
)
|
||||
await self._setup_sequence(
|
||||
"presence_stream_sequence",
|
||||
[
|
||||
("presence_stream", "stream_id"),
|
||||
],
|
||||
)
|
||||
await self._setup_sequence("receipts_sequence", ("receipts_linearized",))
|
||||
await self._setup_sequence("presence_stream_sequence", ("presence_stream",))
|
||||
await self._setup_auth_chain_sequence()
|
||||
await self._setup_sequence(
|
||||
"application_services_txn_id_seq",
|
||||
("application_services_txns",),
|
||||
"txn_id",
|
||||
[
|
||||
(
|
||||
"application_services_txns",
|
||||
"txn_id",
|
||||
)
|
||||
],
|
||||
)
|
||||
await self._setup_sequence(
|
||||
"device_lists_sequence",
|
||||
[
|
||||
("device_lists_stream", "stream_id"),
|
||||
("user_signature_stream", "stream_id"),
|
||||
("device_lists_outbound_pokes", "stream_id"),
|
||||
("device_lists_changes_in_room", "stream_id"),
|
||||
("device_lists_remote_pending", "stream_id"),
|
||||
("device_lists_changes_converted_stream_position", "stream_id"),
|
||||
],
|
||||
)
|
||||
await self._setup_sequence(
|
||||
"e2e_cross_signing_keys_sequence",
|
||||
[
|
||||
("e2e_cross_signing_keys", "stream_id"),
|
||||
],
|
||||
)
|
||||
await self._setup_sequence(
|
||||
"push_rules_stream_sequence",
|
||||
[
|
||||
("push_rules_stream", "stream_id"),
|
||||
],
|
||||
)
|
||||
await self._setup_sequence(
|
||||
"pushers_sequence",
|
||||
[
|
||||
("pushers", "id"),
|
||||
("deleted_pushers", "stream_id"),
|
||||
],
|
||||
)
|
||||
|
||||
# Step 3. Get tables.
|
||||
@@ -1101,12 +1153,11 @@ class Porter:
async def _setup_sequence(
self,
sequence_name: str,
stream_id_tables: Iterable[str],
column_name: str = "stream_id",
stream_id_tables: Iterable[Tuple[str, str]],
) -> None:
"""Set a sequence to the correct value."""
current_stream_ids = []
for stream_id_table in stream_id_tables:
for stream_id_table, column_name in stream_id_tables:
max_stream_id = cast(
int,
await self.sqlite_store.db_pool.simple_select_one_onecol(

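With the signature change above, callers pass explicit `(table, column)` pairs instead of a single shared `column_name`, which lets one sequence span tables whose ID columns are named differently (as the `pushers`/`deleted_pushers` call earlier in this diff does). A hedged sketch of the new call shape, simplified from the porter code:

```python
from typing import Iterable, Tuple

# Illustrative only: each entry names a table and the column whose maximum
# value should seed the sequence.
PUSHERS_SEQUENCE_TABLES: Iterable[Tuple[str, str]] = [
    ("pushers", "id"),
    ("deleted_pushers", "stream_id"),
]


async def setup_pushers_sequence(porter) -> None:
    # New-style call: per-table column names, no shared column_name argument.
    await porter._setup_sequence("pushers_sequence", PUSHERS_SEQUENCE_TABLES)
```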
@@ -50,7 +50,7 @@ class Membership:
KNOCK: Final = "knock"
LEAVE: Final = "leave"
BAN: Final = "ban"
LIST: Final = (INVITE, JOIN, KNOCK, LEAVE, BAN)
LIST: Final = {INVITE, JOIN, KNOCK, LEAVE, BAN}


class PresenceState:

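Switching `LIST` from a tuple to a set keeps every existing `in` check working (for example the `effective_membership_state not in Membership.LIST` guard later in this diff) while making the lookup constant-time and the collection explicitly unordered. A quick illustration with the literal values:

```python
MEMBERSHIPS = {"invite", "join", "knock", "leave", "ban"}  # mirrors Membership.LIST


def is_valid_membership(value: str) -> bool:
    # Same truth table as the old tuple, but a constant-time lookup.
    return value in MEMBERSHIPS


assert is_valid_membership("join")
assert not is_valid_membership("kick")  # "kick" is an action, not a membership state
```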
@@ -681,17 +681,17 @@ def setup_sentry(hs: "HomeServer") -> None:
)

# We set some default tags that give some context to this instance
with sentry_sdk.configure_scope() as scope:
scope.set_tag("matrix_server_name", hs.config.server.server_name)
global_scope = sentry_sdk.Scope.get_global_scope()
global_scope.set_tag("matrix_server_name", hs.config.server.server_name)

app = (
hs.config.worker.worker_app
if hs.config.worker.worker_app
else "synapse.app.homeserver"
)
name = hs.get_instance_name()
scope.set_tag("worker_app", app)
scope.set_tag("worker_name", name)
app = (
hs.config.worker.worker_app
if hs.config.worker.worker_app
else "synapse.app.homeserver"
)
name = hs.get_instance_name()
global_scope.set_tag("worker_app", app)
global_scope.set_tag("worker_name", name)


def setup_sdnotify(hs: "HomeServer") -> None:

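The hunk above replaces the `configure_scope` context manager (deprecated in sentry-sdk 2.x) with tags set on the global scope, which child scopes inherit, so `worker_app`/`worker_name` still reach captured events. A minimal standalone sketch of the same pattern; the DSN and tag values are placeholders:

```python
import sentry_sdk

sentry_sdk.init(dsn="https://publickey@example.ingest.sentry.io/0")  # placeholder DSN

# Tags on the global scope are inherited by isolation/current scopes, so they
# survive the per-request scope forking that sentry-sdk 2.x performs.
global_scope = sentry_sdk.Scope.get_global_scope()
global_scope.set_tag("worker_app", "synapse.app.generic_worker")  # example value
global_scope.set_tag("worker_name", "worker1")  # example value
```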
@@ -332,6 +332,9 @@ class ExperimentalConfig(Config):
# MSC3391: Removing account data.
self.msc3391_enabled = experimental.get("msc3391_enabled", False)

# MSC3575 (Sliding Sync API endpoints)
self.msc3575_enabled: bool = experimental.get("msc3575_enabled", False)

# MSC3773: Thread notifications
self.msc3773_enabled: bool = experimental.get("msc3773_enabled", False)

@@ -436,3 +439,7 @@ class ExperimentalConfig(Config):
self.msc4115_membership_on_events = experimental.get(
"msc4115_membership_on_events", False
)

self.msc3916_authenticated_media_enabled = experimental.get(
"msc3916_authenticated_media_enabled", False
)

@@ -218,3 +218,13 @@ class RatelimitConfig(Config):
"rc_media_create",
defaults={"per_second": 10, "burst_count": 50},
)

self.remote_media_downloads = RatelimitSettings(
key="rc_remote_media_downloads",
per_second=self.parse_size(
config.get("remote_media_download_per_second", "87K")
),
burst_count=self.parse_size(
config.get("remote_media_download_burst_count", "500M")
),
)

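Note that, unlike the request-count ratelimiters above it, `rc_remote_media_downloads` is parsed with `parse_size`, so `per_second` and `burst_count` are byte sizes ("87K", "500M"). A simplified re-implementation of that suffix arithmetic, assuming the usual 1024-based multipliers (the real helper may differ in detail):

```python
# Simplified stand-in for the size parsing used above; illustration only.
SIZE_MULTIPLIERS = {"K": 1024, "M": 1024 * 1024}


def parse_size(value: str) -> int:
    """Parse "87K" -> 89088 bytes, "500M" -> 524288000 bytes; plain digits pass through."""
    suffix = value[-1].upper()
    if suffix in SIZE_MULTIPLIERS:
        return int(float(value[:-1]) * SIZE_MULTIPLIERS[suffix])
    return int(value)


assert parse_size("87K") == 87 * 1024
assert parse_size("500M") == 500 * 1024 * 1024
```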
@@ -56,6 +56,7 @@ from synapse.api.errors import (
|
||||
SynapseError,
|
||||
UnsupportedRoomVersionError,
|
||||
)
|
||||
from synapse.api.ratelimiting import Ratelimiter
|
||||
from synapse.api.room_versions import (
|
||||
KNOWN_ROOM_VERSIONS,
|
||||
EventFormatVersions,
|
||||
@@ -1877,6 +1878,8 @@ class FederationClient(FederationBase):
|
||||
output_stream: BinaryIO,
|
||||
max_size: int,
|
||||
max_timeout_ms: int,
|
||||
download_ratelimiter: Ratelimiter,
|
||||
ip_address: str,
|
||||
) -> Tuple[int, Dict[bytes, List[bytes]]]:
|
||||
try:
|
||||
return await self.transport_layer.download_media_v3(
|
||||
@@ -1885,6 +1888,8 @@ class FederationClient(FederationBase):
|
||||
output_stream=output_stream,
|
||||
max_size=max_size,
|
||||
max_timeout_ms=max_timeout_ms,
|
||||
download_ratelimiter=download_ratelimiter,
|
||||
ip_address=ip_address,
|
||||
)
|
||||
except HttpResponseException as e:
|
||||
# If an error is received that is due to an unrecognised endpoint,
|
||||
@@ -1905,6 +1910,8 @@ class FederationClient(FederationBase):
|
||||
output_stream=output_stream,
|
||||
max_size=max_size,
|
||||
max_timeout_ms=max_timeout_ms,
|
||||
download_ratelimiter=download_ratelimiter,
|
||||
ip_address=ip_address,
|
||||
)
|
||||
|
||||
|
||||
|
||||
@@ -674,7 +674,7 @@ class FederationServer(FederationBase):
|
||||
# This is in addition to the HS-level rate limiting applied by
|
||||
# BaseFederationServlet.
|
||||
# type-ignore: mypy doesn't seem able to deduce the type of the limiter(!?)
|
||||
await self._room_member_handler._join_rate_per_room_limiter.ratelimit( # type: ignore[has-type]
|
||||
await self._room_member_handler._join_rate_per_room_limiter.ratelimit(
|
||||
requester=None,
|
||||
key=room_id,
|
||||
update=False,
|
||||
@@ -717,7 +717,7 @@ class FederationServer(FederationBase):
|
||||
SynapseTags.SEND_JOIN_RESPONSE_IS_PARTIAL_STATE,
|
||||
caller_supports_partial_state,
|
||||
)
|
||||
await self._room_member_handler._join_rate_per_room_limiter.ratelimit( # type: ignore[has-type]
|
||||
await self._room_member_handler._join_rate_per_room_limiter.ratelimit(
|
||||
requester=None,
|
||||
key=room_id,
|
||||
update=False,
|
||||
|
||||
@@ -43,6 +43,7 @@ import ijson
|
||||
|
||||
from synapse.api.constants import Direction, Membership
|
||||
from synapse.api.errors import Codes, HttpResponseException, SynapseError
|
||||
from synapse.api.ratelimiting import Ratelimiter
|
||||
from synapse.api.room_versions import RoomVersion
|
||||
from synapse.api.urls import (
|
||||
FEDERATION_UNSTABLE_PREFIX,
|
||||
@@ -819,6 +820,8 @@ class TransportLayerClient:
|
||||
output_stream: BinaryIO,
|
||||
max_size: int,
|
||||
max_timeout_ms: int,
|
||||
download_ratelimiter: Ratelimiter,
|
||||
ip_address: str,
|
||||
) -> Tuple[int, Dict[bytes, List[bytes]]]:
|
||||
path = f"/_matrix/media/r0/download/{destination}/{media_id}"
|
||||
|
||||
@@ -834,6 +837,8 @@ class TransportLayerClient:
|
||||
"allow_remote": "false",
|
||||
"timeout_ms": str(max_timeout_ms),
|
||||
},
|
||||
download_ratelimiter=download_ratelimiter,
|
||||
ip_address=ip_address,
|
||||
)
|
||||
|
||||
async def download_media_v3(
|
||||
@@ -843,6 +848,8 @@ class TransportLayerClient:
|
||||
output_stream: BinaryIO,
|
||||
max_size: int,
|
||||
max_timeout_ms: int,
|
||||
download_ratelimiter: Ratelimiter,
|
||||
ip_address: str,
|
||||
) -> Tuple[int, Dict[bytes, List[bytes]]]:
|
||||
path = f"/_matrix/media/v3/download/{destination}/{media_id}"
|
||||
|
||||
@@ -862,6 +869,8 @@ class TransportLayerClient:
|
||||
"allow_redirect": "true",
|
||||
},
|
||||
follow_redirects=True,
|
||||
download_ratelimiter=download_ratelimiter,
|
||||
ip_address=ip_address,
|
||||
)
|
||||
|
||||
|
||||
|
||||
@@ -19,6 +19,7 @@
|
||||
# [This file includes modifications made by New Vector Limited]
|
||||
#
|
||||
#
|
||||
import inspect
|
||||
import logging
|
||||
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple, Type
|
||||
|
||||
@@ -33,6 +34,7 @@ from synapse.federation.transport.server.federation import (
|
||||
FEDERATION_SERVLET_CLASSES,
|
||||
FederationAccountStatusServlet,
|
||||
FederationUnstableClientKeysClaimServlet,
|
||||
FederationUnstableMediaDownloadServlet,
|
||||
)
|
||||
from synapse.http.server import HttpServer, JsonResource
|
||||
from synapse.http.servlet import (
|
||||
@@ -315,6 +317,28 @@ def register_servlets(
|
||||
):
|
||||
continue
|
||||
|
||||
if servletclass == FederationUnstableMediaDownloadServlet:
|
||||
if (
|
||||
not hs.config.server.enable_media_repo
|
||||
or not hs.config.experimental.msc3916_authenticated_media_enabled
|
||||
):
|
||||
continue
|
||||
|
||||
# don't load the endpoint if the storage provider is incompatible
|
||||
media_repo = hs.get_media_repository()
|
||||
load_download_endpoint = True
|
||||
for provider in media_repo.media_storage.storage_providers:
|
||||
signature = inspect.signature(provider.backend.fetch)
|
||||
if "federation" not in signature.parameters:
|
||||
logger.warning(
|
||||
f"Federation media `/download` endpoint will not be enabled as storage provider {provider.backend} is not compatible with this endpoint."
|
||||
)
|
||||
load_download_endpoint = False
|
||||
break
|
||||
|
||||
if not load_download_endpoint:
|
||||
continue
|
||||
|
||||
servletclass(
|
||||
hs=hs,
|
||||
authenticator=authenticator,
|
||||
|
||||
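The provider check above uses `inspect.signature` to decide whether a configured storage provider can serve the unstable federation `/download` endpoint: its `fetch` must accept a `federation` parameter. A hedged sketch of a compatible provider method; the class name and other parameters are illustrative, not the module contract:

```python
import inspect
from typing import Any, Optional


class ExampleStorageProvider:
    """Illustrative third-party storage provider backend."""

    async def fetch(
        self, path: str, file_info: Any, federation: bool = False
    ) -> Optional[Any]:
        # Accepting a `federation` flag is what the registration code above
        # looks for before enabling the federation /download endpoint.
        ...


# Mirrors the check performed in register_servlets:
assert "federation" in inspect.signature(ExampleStorageProvider().fetch).parameters
```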
@@ -360,13 +360,29 @@ class BaseFederationServlet:
|
||||
"request"
|
||||
)
|
||||
return None
|
||||
if (
|
||||
func.__self__.__class__.__name__ # type: ignore
|
||||
== "FederationUnstableMediaDownloadServlet"
|
||||
):
|
||||
response = await func(
|
||||
origin, content, request, *args, **kwargs
|
||||
)
|
||||
else:
|
||||
response = await func(
|
||||
origin, content, request.args, *args, **kwargs
|
||||
)
|
||||
else:
|
||||
if (
|
||||
func.__self__.__class__.__name__ # type: ignore
|
||||
== "FederationUnstableMediaDownloadServlet"
|
||||
):
|
||||
response = await func(
|
||||
origin, content, request, *args, **kwargs
|
||||
)
|
||||
else:
|
||||
response = await func(
|
||||
origin, content, request.args, *args, **kwargs
|
||||
)
|
||||
else:
|
||||
response = await func(
|
||||
origin, content, request.args, *args, **kwargs
|
||||
)
|
||||
finally:
|
||||
# if we used the origin's context as the parent, add a new span using
|
||||
# the servlet span as a parent, so that we have a link
|
||||
|
||||
@@ -44,10 +44,13 @@ from synapse.federation.transport.server._base import (
|
||||
)
|
||||
from synapse.http.servlet import (
|
||||
parse_boolean_from_args,
|
||||
parse_integer,
|
||||
parse_integer_from_args,
|
||||
parse_string_from_args,
|
||||
parse_strings_from_args,
|
||||
)
|
||||
from synapse.http.site import SynapseRequest
|
||||
from synapse.media._base import DEFAULT_MAX_TIMEOUT_MS, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS
|
||||
from synapse.types import JsonDict
|
||||
from synapse.util import SYNAPSE_VERSION
|
||||
from synapse.util.ratelimitutils import FederationRateLimiter
|
||||
@@ -787,6 +790,43 @@ class FederationAccountStatusServlet(BaseFederationServerServlet):
|
||||
return 200, {"account_statuses": statuses, "failures": failures}
|
||||
|
||||
|
||||
class FederationUnstableMediaDownloadServlet(BaseFederationServerServlet):
|
||||
"""
|
||||
Implementation of new federation media `/download` endpoint outlined in MSC3916. Returns
|
||||
a multipart/form-data response consisting of a JSON object and the requested media
|
||||
item. This endpoint only returns local media.
|
||||
"""
|
||||
|
||||
PATH = "/media/download/(?P<media_id>[^/]*)"
|
||||
PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc3916"
|
||||
RATELIMIT = True
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
hs: "HomeServer",
|
||||
ratelimiter: FederationRateLimiter,
|
||||
authenticator: Authenticator,
|
||||
server_name: str,
|
||||
):
|
||||
super().__init__(hs, authenticator, ratelimiter, server_name)
|
||||
self.media_repo = self.hs.get_media_repository()
|
||||
|
||||
async def on_GET(
|
||||
self,
|
||||
origin: Optional[str],
|
||||
content: Literal[None],
|
||||
request: SynapseRequest,
|
||||
media_id: str,
|
||||
) -> None:
|
||||
max_timeout_ms = parse_integer(
|
||||
request, "timeout_ms", default=DEFAULT_MAX_TIMEOUT_MS
|
||||
)
|
||||
max_timeout_ms = min(max_timeout_ms, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS)
|
||||
await self.media_repo.get_local_media(
|
||||
request, media_id, None, max_timeout_ms, federation=True
|
||||
)
|
||||
|
||||
|
||||
FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
|
||||
FederationSendServlet,
|
||||
FederationEventServlet,
|
||||
@@ -818,4 +858,5 @@ FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
|
||||
FederationV1SendKnockServlet,
|
||||
FederationMakeKnockServlet,
|
||||
FederationAccountStatusServlet,
|
||||
FederationUnstableMediaDownloadServlet,
|
||||
)
|
||||
|
||||
@@ -126,13 +126,7 @@ class AdminHandler:
# Get all rooms the user is in or has been in
rooms = await self._store.get_rooms_for_local_user_where_membership_is(
user_id,
membership_list=(
Membership.JOIN,
Membership.LEAVE,
Membership.BAN,
Membership.INVITE,
Membership.KNOCK,
),
membership_list=Membership.LIST,
)

# We only try and fetch events for rooms the user has been in. If
@@ -179,7 +173,7 @@ class AdminHandler:
if room.membership == Membership.JOIN:
stream_ordering = self._store.get_room_max_stream_ordering()
else:
stream_ordering = room.stream_ordering
stream_ordering = room.event_pos.stream

from_key = RoomStreamToken(topological=0, stream=0)
to_key = RoomStreamToken(stream=stream_ordering)

@@ -236,6 +236,13 @@ class DeviceMessageHandler:
local_messages = {}
remote_messages: Dict[str, Dict[str, Dict[str, JsonDict]]] = {}
for user_id, by_device in messages.items():
if not UserID.is_valid(user_id):
logger.warning(
"Ignoring attempt to send device message to invalid user: %r",
user_id,
)
continue

# add an opentracing log entry for each message
for device_id, message_content in by_device.items():
log_kv(

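`UserID.is_valid` gates the loop above, so malformed user IDs in a to-device payload are logged and skipped rather than processed further. A rough stand-in for the structural check, for illustration only (the real validator is stricter):

```python
import re

# Rough stand-in for synapse.types.UserID.is_valid: sigil, localpart, domain.
_USER_ID_RE = re.compile(r"^@[^:]+:.+$")


def looks_like_user_id(value: str) -> bool:
    return bool(_USER_ID_RE.match(value))


assert looks_like_user_id("@alice:example.com")
assert not looks_like_user_id("not a user id")  # now skipped with a warning above
```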
@@ -35,6 +35,7 @@ from synapse.api.errors import CodeMessageException, Codes, NotFoundError, Synap
|
||||
from synapse.handlers.device import DeviceHandler
|
||||
from synapse.logging.context import make_deferred_yieldable, run_in_background
|
||||
from synapse.logging.opentracing import log_kv, set_tag, tag_args, trace
|
||||
from synapse.replication.http.devices import ReplicationUploadKeysForUserRestServlet
|
||||
from synapse.types import (
|
||||
JsonDict,
|
||||
JsonMapping,
|
||||
@@ -45,7 +46,10 @@ from synapse.types import (
|
||||
from synapse.util import json_decoder
|
||||
from synapse.util.async_helpers import Linearizer, concurrently_execute
|
||||
from synapse.util.cancellation import cancellable
|
||||
from synapse.util.retryutils import NotRetryingDestination
|
||||
from synapse.util.retryutils import (
|
||||
NotRetryingDestination,
|
||||
filter_destinations_by_retry_limiter,
|
||||
)
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
@@ -53,6 +57,9 @@ if TYPE_CHECKING:
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
ONE_TIME_KEY_UPLOAD = "one_time_key_upload_lock"
|
||||
|
||||
|
||||
class E2eKeysHandler:
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
self.config = hs.config
|
||||
@@ -62,6 +69,7 @@ class E2eKeysHandler:
|
||||
self._appservice_handler = hs.get_application_service_handler()
|
||||
self.is_mine = hs.is_mine
|
||||
self.clock = hs.get_clock()
|
||||
self._worker_lock_handler = hs.get_worker_locks_handler()
|
||||
|
||||
federation_registry = hs.get_federation_registry()
|
||||
|
||||
@@ -82,6 +90,12 @@ class E2eKeysHandler:
|
||||
edu_updater.incoming_signing_key_update,
|
||||
)
|
||||
|
||||
self.device_key_uploader = self.upload_device_keys_for_user
|
||||
else:
|
||||
self.device_key_uploader = (
|
||||
ReplicationUploadKeysForUserRestServlet.make_client(hs)
|
||||
)
|
||||
|
||||
# doesn't really work as part of the generic query API, because the
|
||||
# query request requires an object POST, but we abuse the
|
||||
# "query handler" interface.
|
||||
@@ -145,6 +159,11 @@ class E2eKeysHandler:
|
||||
remote_queries = {}
|
||||
|
||||
for user_id, device_ids in device_keys_query.items():
|
||||
if not UserID.is_valid(user_id):
|
||||
# Ignore invalid user IDs, which is the same behaviour as if
|
||||
# the user existed but had no keys.
|
||||
continue
|
||||
|
||||
# we use UserID.from_string to catch invalid user ids
|
||||
if self.is_mine(UserID.from_string(user_id)):
|
||||
local_query[user_id] = device_ids
|
||||
@@ -259,10 +278,8 @@ class E2eKeysHandler:
|
||||
"%d destinations to query devices for", len(remote_queries_not_in_cache)
|
||||
)
|
||||
|
||||
async def _query(
|
||||
destination_queries: Tuple[str, Dict[str, Iterable[str]]]
|
||||
) -> None:
|
||||
destination, queries = destination_queries
|
||||
async def _query(destination: str) -> None:
|
||||
queries = remote_queries_not_in_cache[destination]
|
||||
return await self._query_devices_for_destination(
|
||||
results,
|
||||
cross_signing_keys,
|
||||
@@ -272,9 +289,20 @@ class E2eKeysHandler:
|
||||
timeout,
|
||||
)
|
||||
|
||||
# Only try and fetch keys for destinations that are not marked as
|
||||
# down.
|
||||
filtered_destinations = await filter_destinations_by_retry_limiter(
|
||||
remote_queries_not_in_cache.keys(),
|
||||
self.clock,
|
||||
self.store,
|
||||
# Let's give an arbitrary grace period for those hosts that are
|
||||
# only recently down
|
||||
retry_due_within_ms=60 * 1000,
|
||||
)
|
||||
|
||||
await concurrently_execute(
|
||||
_query,
|
||||
remote_queries_not_in_cache.items(),
|
||||
filtered_destinations,
|
||||
10,
|
||||
delay_cancellation=True,
|
||||
)
|
||||
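The change above passes plain destination strings (already filtered by the retry limiter) to `concurrently_execute`, with `_query` looking its queries up by destination. A toy, semaphore-based sketch of that helper's calling convention; this is illustrative and omits details such as `delay_cancellation`:

```python
import asyncio
from typing import Awaitable, Callable, Iterable, TypeVar

T = TypeVar("T")


async def concurrently_execute(
    func: Callable[[T], Awaitable[None]], args: Iterable[T], limit: int
) -> None:
    # Run `func` over `args` with at most `limit` calls in flight at once.
    sem = asyncio.Semaphore(limit)

    async def _wrap(arg: T) -> None:
        async with sem:
            await func(arg)

    await asyncio.gather(*(_wrap(a) for a in args))


async def main() -> None:
    async def _query(destination: str) -> None:
        print("querying", destination)

    await concurrently_execute(_query, ["matrix.org", "example.com"], 10)


asyncio.run(main())
```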
@@ -775,36 +803,17 @@ class E2eKeysHandler:
|
||||
"one_time_keys": A mapping from algorithm to number of keys for that
|
||||
algorithm, including those previously persisted.
|
||||
"""
|
||||
# This can only be called from the main process.
|
||||
assert isinstance(self.device_handler, DeviceHandler)
|
||||
|
||||
time_now = self.clock.time_msec()
|
||||
|
||||
# TODO: Validate the JSON to make sure it has the right keys.
|
||||
device_keys = keys.get("device_keys", None)
|
||||
if device_keys:
|
||||
logger.info(
|
||||
"Updating device_keys for device %r for user %s at %d",
|
||||
device_id,
|
||||
user_id,
|
||||
time_now,
|
||||
await self.device_key_uploader(
|
||||
user_id=user_id,
|
||||
device_id=device_id,
|
||||
keys={"device_keys": device_keys},
|
||||
)
|
||||
log_kv(
|
||||
{
|
||||
"message": "Updating device_keys for user.",
|
||||
"user_id": user_id,
|
||||
"device_id": device_id,
|
||||
}
|
||||
)
|
||||
# TODO: Sign the JSON with the server key
|
||||
changed = await self.store.set_e2e_device_keys(
|
||||
user_id, device_id, time_now, device_keys
|
||||
)
|
||||
if changed:
|
||||
# Only notify about device updates *if* the keys actually changed
|
||||
await self.device_handler.notify_device_update(user_id, [device_id])
|
||||
else:
|
||||
log_kv({"message": "Not updating device_keys for user", "user_id": user_id})
|
||||
|
||||
one_time_keys = keys.get("one_time_keys", None)
|
||||
if one_time_keys:
|
||||
log_kv(
|
||||
@@ -840,6 +849,49 @@ class E2eKeysHandler:
|
||||
{"message": "Did not update fallback_keys", "reason": "no keys given"}
|
||||
)
|
||||
|
||||
result = await self.store.count_e2e_one_time_keys(user_id, device_id)
|
||||
|
||||
set_tag("one_time_key_counts", str(result))
|
||||
return {"one_time_key_counts": result}
|
||||
|
||||
@tag_args
|
||||
async def upload_device_keys_for_user(
|
||||
self, user_id: str, device_id: str, keys: JsonDict
|
||||
) -> None:
|
||||
"""
|
||||
Args:
|
||||
user_id: user whose keys are being uploaded.
|
||||
device_id: device whose keys are being uploaded.
|
||||
device_keys: the `device_keys` of an /keys/upload request.
|
||||
|
||||
"""
|
||||
# This can only be called from the main process.
|
||||
assert isinstance(self.device_handler, DeviceHandler)
|
||||
|
||||
time_now = self.clock.time_msec()
|
||||
|
||||
device_keys = keys["device_keys"]
|
||||
logger.info(
|
||||
"Updating device_keys for device %r for user %s at %d",
|
||||
device_id,
|
||||
user_id,
|
||||
time_now,
|
||||
)
|
||||
log_kv(
|
||||
{
|
||||
"message": "Updating device_keys for user.",
|
||||
"user_id": user_id,
|
||||
"device_id": device_id,
|
||||
}
|
||||
)
|
||||
# TODO: Sign the JSON with the server key
|
||||
changed = await self.store.set_e2e_device_keys(
|
||||
user_id, device_id, time_now, device_keys
|
||||
)
|
||||
if changed:
|
||||
# Only notify about device updates *if* the keys actually changed
|
||||
await self.device_handler.notify_device_update(user_id, [device_id])
|
||||
|
||||
# the device should have been registered already, but it may have been
|
||||
# deleted due to a race with a DELETE request. Or we may be using an
|
||||
# old access_token without an associated device_id. Either way, we
|
||||
@@ -847,53 +899,56 @@ class E2eKeysHandler:
|
||||
# keys without a corresponding device.
|
||||
await self.device_handler.check_device_registered(user_id, device_id)
|
||||
|
||||
result = await self.store.count_e2e_one_time_keys(user_id, device_id)
|
||||
|
||||
set_tag("one_time_key_counts", str(result))
|
||||
return {"one_time_key_counts": result}
|
||||
|
||||
async def _upload_one_time_keys_for_user(
|
||||
self, user_id: str, device_id: str, time_now: int, one_time_keys: JsonDict
|
||||
) -> None:
|
||||
logger.info(
|
||||
"Adding one_time_keys %r for device %r for user %r at %d",
|
||||
one_time_keys.keys(),
|
||||
device_id,
|
||||
user_id,
|
||||
time_now,
|
||||
)
|
||||
# We take out a lock so that we don't have to worry about a client
|
||||
# sending duplicate requests.
|
||||
lock_key = f"{user_id}_{device_id}"
|
||||
async with self._worker_lock_handler.acquire_lock(
|
||||
ONE_TIME_KEY_UPLOAD, lock_key
|
||||
):
|
||||
logger.info(
|
||||
"Adding one_time_keys %r for device %r for user %r at %d",
|
||||
one_time_keys.keys(),
|
||||
device_id,
|
||||
user_id,
|
||||
time_now,
|
||||
)
|
||||
|
||||
# make a list of (alg, id, key) tuples
|
||||
key_list = []
|
||||
for key_id, key_obj in one_time_keys.items():
|
||||
algorithm, key_id = key_id.split(":")
|
||||
key_list.append((algorithm, key_id, key_obj))
|
||||
# make a list of (alg, id, key) tuples
|
||||
key_list = []
|
||||
for key_id, key_obj in one_time_keys.items():
|
||||
algorithm, key_id = key_id.split(":")
|
||||
key_list.append((algorithm, key_id, key_obj))
|
||||
|
||||
# First we check if we have already persisted any of the keys.
|
||||
existing_key_map = await self.store.get_e2e_one_time_keys(
|
||||
user_id, device_id, [k_id for _, k_id, _ in key_list]
|
||||
)
|
||||
# First we check if we have already persisted any of the keys.
|
||||
existing_key_map = await self.store.get_e2e_one_time_keys(
|
||||
user_id, device_id, [k_id for _, k_id, _ in key_list]
|
||||
)
|
||||
|
||||
new_keys = [] # Keys that we need to insert. (alg, id, json) tuples.
|
||||
for algorithm, key_id, key in key_list:
|
||||
ex_json = existing_key_map.get((algorithm, key_id), None)
|
||||
if ex_json:
|
||||
if not _one_time_keys_match(ex_json, key):
|
||||
raise SynapseError(
|
||||
400,
|
||||
(
|
||||
"One time key %s:%s already exists. "
|
||||
"Old key: %s; new key: %r"
|
||||
new_keys = [] # Keys that we need to insert. (alg, id, json) tuples.
|
||||
for algorithm, key_id, key in key_list:
|
||||
ex_json = existing_key_map.get((algorithm, key_id), None)
|
||||
if ex_json:
|
||||
if not _one_time_keys_match(ex_json, key):
|
||||
raise SynapseError(
|
||||
400,
|
||||
(
|
||||
"One time key %s:%s already exists. "
|
||||
"Old key: %s; new key: %r"
|
||||
)
|
||||
% (algorithm, key_id, ex_json, key),
|
||||
)
|
||||
% (algorithm, key_id, ex_json, key),
|
||||
else:
|
||||
new_keys.append(
|
||||
(algorithm, key_id, encode_canonical_json(key).decode("ascii"))
|
||||
)
|
||||
else:
|
||||
new_keys.append(
|
||||
(algorithm, key_id, encode_canonical_json(key).decode("ascii"))
|
||||
)
|
||||
|
||||
log_kv({"message": "Inserting new one_time_keys.", "keys": new_keys})
|
||||
await self.store.add_e2e_one_time_keys(user_id, device_id, time_now, new_keys)
|
||||
log_kv({"message": "Inserting new one_time_keys.", "keys": new_keys})
|
||||
await self.store.add_e2e_one_time_keys(
|
||||
user_id, device_id, time_now, new_keys
|
||||
)
|
||||
|
||||
async def upload_signing_keys_for_user(
|
||||
self, user_id: str, keys: JsonDict
|
||||
|
||||
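The re-indented block above runs the whole check-then-insert of one-time keys under a lock keyed on `f"{user_id}_{device_id}"`, serialising concurrent uploads for the same device. A generic sketch of that guard shape using an in-process lock (the real handler uses the cross-worker lock handler shown in the diff):

```python
import asyncio
from collections import defaultdict
from typing import DefaultDict

# Illustrative per-key locks; Synapse's version coordinates across workers.
_upload_locks: DefaultDict[str, asyncio.Lock] = defaultdict(asyncio.Lock)


async def upload_one_time_keys(user_id: str, device_id: str, keys: dict) -> None:
    lock_key = f"{user_id}_{device_id}"
    async with _upload_locks[lock_key]:
        # The read-compare-insert steps all happen under the lock, so a
        # duplicate concurrent upload cannot interleave between them.
        await _check_existing_and_insert(user_id, device_id, keys)


async def _check_existing_and_insert(user_id: str, device_id: str, keys: dict) -> None:
    ...  # placeholder for the steps shown in the diff above
```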
@@ -247,6 +247,12 @@ class E2eRoomKeysHandler:
|
||||
if current_room_key:
|
||||
if self._should_replace_room_key(current_room_key, room_key):
|
||||
log_kv({"message": "Replacing room key."})
|
||||
logger.debug(
|
||||
"Replacing room key. room=%s session=%s user=%s",
|
||||
room_id,
|
||||
session_id,
|
||||
user_id,
|
||||
)
|
||||
# updates are done one at a time in the DB, so send
|
||||
# updates right away rather than batching them up,
|
||||
# like we do with the inserts
|
||||
@@ -256,6 +262,12 @@ class E2eRoomKeysHandler:
|
||||
changed = True
|
||||
else:
|
||||
log_kv({"message": "Not replacing room_key."})
|
||||
logger.debug(
|
||||
"Not replacing room key. room=%s session=%s user=%s",
|
||||
room_id,
|
||||
session_id,
|
||||
user_id,
|
||||
)
|
||||
else:
|
||||
log_kv(
|
||||
{
|
||||
@@ -265,6 +277,12 @@ class E2eRoomKeysHandler:
|
||||
}
|
||||
)
|
||||
log_kv({"message": "Replacing room key."})
|
||||
logger.debug(
|
||||
"Inserting new room key. room=%s session=%s user=%s",
|
||||
room_id,
|
||||
session_id,
|
||||
user_id,
|
||||
)
|
||||
to_insert.append((room_id, session_id, room_key))
|
||||
changed = True
|
||||
|
||||
|
||||
@@ -199,7 +199,7 @@ class InitialSyncHandler:
|
||||
)
|
||||
elif event.membership == Membership.LEAVE:
|
||||
room_end_token = RoomStreamToken(
|
||||
stream=event.stream_ordering,
|
||||
stream=event.event_pos.stream,
|
||||
)
|
||||
deferred_room_state = run_in_background(
|
||||
self._state_storage_controller.get_state_for_events,
|
||||
|
||||
@@ -496,13 +496,6 @@ class EventCreationHandler:
|
||||
|
||||
self.room_prejoin_state_types = self.hs.config.api.room_prejoin_state
|
||||
|
||||
self.membership_types_to_include_profile_data_in = {
|
||||
Membership.JOIN,
|
||||
Membership.KNOCK,
|
||||
}
|
||||
if self.hs.config.server.include_profile_data_on_invite:
|
||||
self.membership_types_to_include_profile_data_in.add(Membership.INVITE)
|
||||
|
||||
self.send_event = ReplicationSendEventRestServlet.make_client(hs)
|
||||
self.send_events = ReplicationSendEventsRestServlet.make_client(hs)
|
||||
|
||||
@@ -594,8 +587,6 @@ class EventCreationHandler:
|
||||
Creates an FrozenEvent object, filling out auth_events, prev_events,
|
||||
etc.
|
||||
|
||||
Adds display names to Join membership events.
|
||||
|
||||
Args:
|
||||
requester
|
||||
event_dict: An entire event
|
||||
@@ -672,29 +663,6 @@ class EventCreationHandler:
|
||||
|
||||
self.validator.validate_builder(builder)
|
||||
|
||||
if builder.type == EventTypes.Member:
|
||||
membership = builder.content.get("membership", None)
|
||||
target = UserID.from_string(builder.state_key)
|
||||
|
||||
if membership in self.membership_types_to_include_profile_data_in:
|
||||
# If event doesn't include a display name, add one.
|
||||
profile = self.profile_handler
|
||||
content = builder.content
|
||||
|
||||
try:
|
||||
if "displayname" not in content:
|
||||
displayname = await profile.get_displayname(target)
|
||||
if displayname is not None:
|
||||
content["displayname"] = displayname
|
||||
if "avatar_url" not in content:
|
||||
avatar_url = await profile.get_avatar_url(target)
|
||||
if avatar_url is not None:
|
||||
content["avatar_url"] = avatar_url
|
||||
except Exception as e:
|
||||
logger.info(
|
||||
"Failed to get profile information for %r: %s", target, e
|
||||
)
|
||||
|
||||
is_exempt = await self._is_exempt_from_privacy_policy(builder, requester)
|
||||
if require_consent and not is_exempt:
|
||||
await self.assert_accepted_privacy_policy(requester)
|
||||
|
||||
@@ -27,7 +27,6 @@ from synapse.api.constants import Direction, EventTypes, Membership
|
||||
from synapse.api.errors import SynapseError
|
||||
from synapse.api.filtering import Filter
|
||||
from synapse.events.utils import SerializeEventConfig
|
||||
from synapse.handlers.room import ShutdownRoomParams, ShutdownRoomResponse
|
||||
from synapse.handlers.worker_lock import NEW_EVENT_DURING_PURGE_LOCK_NAME
|
||||
from synapse.logging.opentracing import trace
|
||||
from synapse.metrics.background_process_metrics import run_as_background_process
|
||||
@@ -38,6 +37,8 @@ from synapse.types import (
|
||||
JsonMapping,
|
||||
Requester,
|
||||
ScheduledTask,
|
||||
ShutdownRoomParams,
|
||||
ShutdownRoomResponse,
|
||||
StreamKeyType,
|
||||
TaskStatus,
|
||||
)
|
||||
|
||||
@@ -590,7 +590,7 @@ class RegistrationHandler:
# moving away from bare excepts is a good thing to do.
logger.error("Failed to join new user to %r: %r", r, e)
except Exception as e:
logger.error("Failed to join new user to %r: %r", r, e)
logger.error("Failed to join new user to %r: %r", r, e, exc_info=True)

async def _auto_join_rooms(self, user_id: str) -> None:
"""Automatically joins users to auto join rooms - creating the room in the first place

@@ -393,9 +393,9 @@ class RelationsHandler:

# Attempt to find another event to use as the latest event.
potential_events, _ = await self._main_store.get_relations_for_event(
room_id,
event_id,
event,
room_id,
RelationTypes.THREAD,
direction=Direction.FORWARDS,
)

@@ -40,7 +40,6 @@ from typing import (
|
||||
)
|
||||
|
||||
import attr
|
||||
from typing_extensions import TypedDict
|
||||
|
||||
import synapse.events.snapshot
|
||||
from synapse.api.constants import (
|
||||
@@ -81,6 +80,8 @@ from synapse.types import (
|
||||
RoomAlias,
|
||||
RoomID,
|
||||
RoomStreamToken,
|
||||
ShutdownRoomParams,
|
||||
ShutdownRoomResponse,
|
||||
StateMap,
|
||||
StrCollection,
|
||||
StreamKeyType,
|
||||
@@ -1780,63 +1781,6 @@ class RoomEventSource(EventSource[RoomStreamToken, EventBase]):
|
||||
return self.store.get_current_room_stream_token_for_room_id(room_id)
|
||||
|
||||
|
||||
class ShutdownRoomParams(TypedDict):
|
||||
"""
|
||||
Attributes:
|
||||
requester_user_id:
|
||||
User who requested the action. Will be recorded as putting the room on the
|
||||
blocking list.
|
||||
new_room_user_id:
|
||||
If set, a new room will be created with this user ID
|
||||
as the creator and admin, and all users in the old room will be
|
||||
moved into that room. If not set, no new room will be created
|
||||
and the users will just be removed from the old room.
|
||||
new_room_name:
|
||||
A string representing the name of the room that new users will
|
||||
be invited to. Defaults to `Content Violation Notification`
|
||||
message:
|
||||
A string containing the first message that will be sent as
|
||||
`new_room_user_id` in the new room. Ideally this will clearly
|
||||
convey why the original room was shut down.
|
||||
Defaults to `Sharing illegal content on this server is not
|
||||
permitted and rooms in violation will be blocked.`
|
||||
block:
|
||||
If set to `true`, this room will be added to a blocking list,
|
||||
preventing future attempts to join the room. Defaults to `false`.
|
||||
purge:
|
||||
If set to `true`, purge the given room from the database.
|
||||
force_purge:
|
||||
If set to `true`, the room will be purged from database
|
||||
even if there are still users joined to the room.
|
||||
"""
|
||||
|
||||
requester_user_id: Optional[str]
|
||||
new_room_user_id: Optional[str]
|
||||
new_room_name: Optional[str]
|
||||
message: Optional[str]
|
||||
block: bool
|
||||
purge: bool
|
||||
force_purge: bool
|
||||
|
||||
|
||||
class ShutdownRoomResponse(TypedDict):
|
||||
"""
|
||||
Attributes:
|
||||
kicked_users: An array of users (`user_id`) that were kicked.
|
||||
failed_to_kick_users:
|
||||
An array of users (`user_id`) that that were not kicked.
|
||||
local_aliases:
|
||||
An array of strings representing the local aliases that were
|
||||
migrated from the old room to the new.
|
||||
new_room_id: A string representing the room ID of the new room.
|
||||
"""
|
||||
|
||||
kicked_users: List[str]
|
||||
failed_to_kick_users: List[str]
|
||||
local_aliases: List[str]
|
||||
new_room_id: Optional[str]
|
||||
|
||||
|
||||
class RoomShutdownHandler:
|
||||
DEFAULT_MESSAGE = (
|
||||
"Sharing illegal content on this server is not permitted and rooms in"
|
||||
|
||||
@@ -106,6 +106,13 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
|
||||
self.event_auth_handler = hs.get_event_auth_handler()
|
||||
self._worker_lock_handler = hs.get_worker_locks_handler()
|
||||
|
||||
self._membership_types_to_include_profile_data_in = {
|
||||
Membership.JOIN,
|
||||
Membership.KNOCK,
|
||||
}
|
||||
if self.hs.config.server.include_profile_data_on_invite:
|
||||
self._membership_types_to_include_profile_data_in.add(Membership.INVITE)
|
||||
|
||||
self.member_linearizer: Linearizer = Linearizer(name="member")
|
||||
self.member_as_limiter = Linearizer(max_count=10, name="member_as_limiter")
|
||||
|
||||
@@ -785,9 +792,8 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
|
||||
if (
|
||||
not self.allow_per_room_profiles and not is_requester_server_notices_user
|
||||
) or requester.shadow_banned:
|
||||
# Strip profile data, knowing that new profile data will be added to the
|
||||
# event's content in event_creation_handler.create_event() using the target's
|
||||
# global profile.
|
||||
# Strip profile data, knowing that new profile data will be added to
|
||||
# the event's content below using the target's global profile.
|
||||
content.pop("displayname", None)
|
||||
content.pop("avatar_url", None)
|
||||
|
||||
@@ -823,6 +829,29 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
|
||||
if action in ["kick", "unban"]:
|
||||
effective_membership_state = "leave"
|
||||
|
||||
if effective_membership_state not in Membership.LIST:
|
||||
raise SynapseError(400, "Invalid membership key")
|
||||
|
||||
# Add profile data for joins etc, if no per-room profile.
|
||||
if (
|
||||
effective_membership_state
|
||||
in self._membership_types_to_include_profile_data_in
|
||||
):
|
||||
# If event doesn't include a display name, add one.
|
||||
profile = self.profile_handler
|
||||
|
||||
try:
|
||||
if "displayname" not in content:
|
||||
displayname = await profile.get_displayname(target)
|
||||
if displayname is not None:
|
||||
content["displayname"] = displayname
|
||||
if "avatar_url" not in content:
|
||||
avatar_url = await profile.get_avatar_url(target)
|
||||
if avatar_url is not None:
|
||||
content["avatar_url"] = avatar_url
|
||||
except Exception as e:
|
||||
logger.info("Failed to get profile information for %r: %s", target, e)
|
||||
|
||||
# if this is a join with a 3pid signature, we may need to turn a 3pid
|
||||
# invite into a normal invite before we can handle the join.
|
||||
if third_party_signed is not None:
|
||||
|
||||
synapse/handlers/sliding_sync.py (new file, 610 lines)
@@ -0,0 +1,610 @@
|
||||
#
|
||||
# This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
#
|
||||
# Copyright (C) 2024 New Vector, Ltd
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as
|
||||
# published by the Free Software Foundation, either version 3 of the
|
||||
# License, or (at your option) any later version.
|
||||
#
|
||||
# See the GNU Affero General Public License for more details:
|
||||
# <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
#
|
||||
# Originally licensed under the Apache License, Version 2.0:
|
||||
# <http://www.apache.org/licenses/LICENSE-2.0>.
|
||||
#
|
||||
# [This file includes modifications made by New Vector Limited]
|
||||
#
|
||||
#
|
||||
import logging
|
||||
from enum import Enum
|
||||
from typing import TYPE_CHECKING, AbstractSet, Dict, Final, List, Optional, Tuple
|
||||
|
||||
import attr
|
||||
from immutabledict import immutabledict
|
||||
|
||||
from synapse._pydantic_compat import HAS_PYDANTIC_V2
|
||||
|
||||
if TYPE_CHECKING or HAS_PYDANTIC_V2:
|
||||
from pydantic.v1 import Extra
|
||||
else:
|
||||
from pydantic import Extra
|
||||
|
||||
from synapse.api.constants import Membership
|
||||
from synapse.events import EventBase
|
||||
from synapse.rest.client.models import SlidingSyncBody
|
||||
from synapse.types import JsonMapping, Requester, RoomStreamToken, StreamToken, UserID
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def filter_membership_for_sync(*, membership: str, user_id: str, sender: str) -> bool:
|
||||
"""
|
||||
Returns True if the membership event should be included in the sync response,
|
||||
otherwise False.
|
||||
|
||||
Attributes:
|
||||
membership: The membership state of the user in the room.
|
||||
user_id: The user ID that the membership applies to
|
||||
sender: The person who sent the membership event
|
||||
"""
|
||||
|
||||
# Everything except `Membership.LEAVE` because we want everything that's *still*
|
||||
# relevant to the user. There are few more things to include in the sync response
|
||||
# (newly_left) but those are handled separately.
|
||||
#
|
||||
# This logic includes kicks (leave events where the sender is not the same user) and
|
||||
# can be read as "anything that isn't a leave or a leave with a different sender".
|
||||
return membership != Membership.LEAVE or sender != user_id
|
||||
|
||||
|
||||
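A few concrete cases for the helper above (assuming the function is in scope), to make the kick handling explicit:

```python
# Self-initiated leave: excluded from the sliding sync room list.
assert not filter_membership_for_sync(
    membership="leave", user_id="@alice:test", sender="@alice:test"
)

# Kick (leave sent by someone else): still relevant, so it is included.
assert filter_membership_for_sync(
    membership="leave", user_id="@alice:test", sender="@bob:test"
)

# Any non-leave membership is included.
assert filter_membership_for_sync(
    membership="join", user_id="@alice:test", sender="@alice:test"
)
```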
class SlidingSyncConfig(SlidingSyncBody):
|
||||
"""
|
||||
Inherit from `SlidingSyncBody` since we need all of the same fields and add a few
|
||||
extra fields that we need in the handler
|
||||
"""
|
||||
|
||||
user: UserID
|
||||
device_id: Optional[str]
|
||||
|
||||
# Pydantic config
|
||||
class Config:
|
||||
# By default, ignore fields that we don't recognise.
|
||||
extra = Extra.ignore
|
||||
# By default, don't allow fields to be reassigned after parsing.
|
||||
allow_mutation = False
|
||||
# Allow custom types like `UserID` to be used in the model
|
||||
arbitrary_types_allowed = True
|
||||
|
||||
|
||||
class OperationType(Enum):
|
||||
"""
|
||||
Represents the operation types in a Sliding Sync window.
|
||||
|
||||
Attributes:
|
||||
SYNC: Sets a range of entries. Clients SHOULD discard what they previous knew about
|
||||
entries in this range.
|
||||
INSERT: Sets a single entry. If the position is not empty then clients MUST move
|
||||
entries to the left or the right depending on where the closest empty space is.
|
||||
DELETE: Remove a single entry. Often comes before an INSERT to allow entries to move
|
||||
places.
|
||||
INVALIDATE: Remove a range of entries. Clients MAY persist the invalidated range for
|
||||
offline support, but they should be treated as empty when additional operations
|
||||
which concern indexes in the range arrive from the server.
|
||||
"""
|
||||
|
||||
SYNC: Final = "SYNC"
|
||||
INSERT: Final = "INSERT"
|
||||
DELETE: Final = "DELETE"
|
||||
INVALIDATE: Final = "INVALIDATE"
|
||||
|
||||
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
class SlidingSyncResult:
|
||||
"""
|
||||
The Sliding Sync result to be serialized to JSON for a response.
|
||||
|
||||
Attributes:
|
||||
next_pos: The next position token in the sliding window to request (next_batch).
|
||||
lists: Sliding window API. A map of list key to list results.
|
||||
rooms: Room subscription API. A map of room ID to room subscription to room results.
|
||||
extensions: Extensions API. A map of extension key to extension results.
|
||||
"""
|
||||
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
class RoomResult:
    """
    Attributes:
        name: Room name or calculated room name.
        avatar: Room avatar.
        heroes: List of stripped membership events (containing `user_id` and optionally
            `avatar_url` and `displayname`) for the users used to calculate the room name.
        initial: Flag which is set when this is the first time the server is sending this
            data on this connection. Clients can use this flag to replace or update
            their local state. When there is an update, servers MUST omit this flag
            entirely and NOT send "initial":false as this is wasteful on bandwidth. The
            absence of this flag means 'false'.
        required_state: The current state of the room.
        timeline: Latest events in the room. The last event is the most recent.
        is_dm: Flag to specify whether the room is a direct-message room (most likely
            between two people).
        invite_state: Stripped state events. Same as `rooms.invite.$room_id.invite_state`
            in sync v2, absent on joined/left rooms.
        prev_batch: A token that can be passed as a start parameter to the
            `/rooms/<room_id>/messages` API to retrieve earlier messages.
        limited: True if there are more events than fit between the given position and now.
            Sync again to get more.
        joined_count: The number of users with membership of join, including the client's
            own user ID. (same as sync v2 `m.joined_member_count`)
        invited_count: The number of users with membership of invite. (same as sync v2
            `m.invited_member_count`)
        notification_count: The total number of unread notifications for this room. (same
            as sync v2)
        highlight_count: The number of unread notifications for this room with the highlight
            flag set. (same as sync v2)
        num_live: The number of timeline events which have just occurred and are not historical.
            The last N events are 'live' and should be treated as such. This is mostly
            useful to determine whether a given @mention event should make a noise or not.
            Clients cannot rely solely on the absence of `initial: true` to determine live
            events because if a room not in the sliding window bumps into the window because
            of an @mention it will have `initial: true` yet contain a single live event
            (with potentially other old events in the timeline).
    """

    name: str
    avatar: Optional[str]
    heroes: Optional[List[EventBase]]
    initial: bool
    required_state: List[EventBase]
    timeline: List[EventBase]
    is_dm: bool
    invite_state: List[EventBase]
    prev_batch: StreamToken
    limited: bool
    joined_count: int
    invited_count: int
    notification_count: int
    highlight_count: int
    num_live: int
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
class SlidingWindowList:
|
||||
"""
|
||||
Attributes:
|
||||
count: The total number of entries in the list. Always present if this list is present.
|
||||
ops: The sliding list operations to perform.
|
||||
"""
|
||||
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
class Operation:
|
||||
"""
|
||||
Attributes:
|
||||
op: The operation type to perform.
|
||||
range: Which index positions are affected by this operation. These are
|
||||
both inclusive.
|
||||
room_ids: Which room IDs are affected by this operation. These IDs match
up to the positions in the `range`; for example, with a range of `[0, 9]`, the last
room ID in this list corresponds to index 9. The room data is held in a separate object.
|
||||
"""
|
||||
|
||||
op: OperationType
|
||||
range: Tuple[int, int]
|
||||
room_ids: List[str]
|
||||
|
||||
count: int
|
||||
ops: List[Operation]
|
||||
|
||||
next_pos: StreamToken
|
||||
lists: Dict[str, SlidingWindowList]
|
||||
rooms: Dict[str, RoomResult]
|
||||
extensions: JsonMapping
|
||||
|
||||
def __bool__(self) -> bool:
|
||||
"""Make the result appear empty if there are no updates. This is used
|
||||
to tell if the notifier needs to wait for more events when polling for
|
||||
events.
|
||||
"""
|
||||
return bool(self.lists or self.rooms or self.extensions)
|
||||
|
||||
@staticmethod
|
||||
def empty(next_pos: StreamToken) -> "SlidingSyncResult":
|
||||
"Return a new empty result"
|
||||
return SlidingSyncResult(
|
||||
next_pos=next_pos,
|
||||
lists={},
|
||||
rooms={},
|
||||
extensions={},
|
||||
)
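# A simplified, hypothetical sketch of the polling pattern that `__bool__` above exists
# for: recompute the result until it is truthy (i.e. has updates) or the timeout runs
# out, then fall back to an empty result. The busy-wait loop and the callable names are
# stand-ins for illustration; Synapse's actual waiting is done by the notifier.
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def poll_until_truthy(
    compute_result: Callable[[], T],
    make_empty: Callable[[], T],
    timeout_ms: int,
    poll_interval_ms: int = 10,
) -> T:
    deadline = time.monotonic() + timeout_ms / 1000
    while time.monotonic() < deadline:
        result = compute_result()
        if result:  # relies on the result type's __bool__, like SlidingSyncResult's
            return result
        time.sleep(poll_interval_ms / 1000)
    return make_empty()


# Example: the third poll produces something truthy, so we return before the timeout.
polls = iter(["", "", "updates!"])
print(poll_until_truthy(lambda: next(polls, "updates!"), lambda: "", timeout_ms=1000))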
|
||||
|
||||
|
||||
class SlidingSyncHandler:
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
self.clock = hs.get_clock()
|
||||
self.store = hs.get_datastores().main
|
||||
self.auth_blocking = hs.get_auth_blocking()
|
||||
self.notifier = hs.get_notifier()
|
||||
self.event_sources = hs.get_event_sources()
|
||||
self.rooms_to_exclude_globally = hs.config.server.rooms_to_exclude_from_sync
|
||||
|
||||
async def wait_for_sync_for_user(
|
||||
self,
|
||||
requester: Requester,
|
||||
sync_config: SlidingSyncConfig,
|
||||
from_token: Optional[StreamToken] = None,
|
||||
timeout_ms: int = 0,
|
||||
) -> SlidingSyncResult:
|
||||
"""Get the sync for a client if we have new data for it now. Otherwise
|
||||
wait for new data to arrive on the server. If the timeout expires, then
|
||||
return an empty sync result.
|
||||
"""
|
||||
# If the user is not part of the mau group, then check that limits have
|
||||
# not been exceeded (if not part of the group by this point, almost certain
|
||||
# auth_blocking will occur)
|
||||
await self.auth_blocking.check_auth_blocking(requester=requester)
|
||||
|
||||
# TODO: If the To-Device extension is enabled and we have a `from_token`, delete
|
||||
# any to-device messages before that token (since we now know that the device
|
||||
# has received them). (see sync v2 for how to do this)
|
||||
|
||||
# If we're working with a user-provided token, we need to make sure to wait for
|
||||
# this worker to catch up with the token so we don't skip past any incoming
|
||||
# events or future events if the user is nefariously, manually modifying the
|
||||
# token.
|
||||
if from_token is not None:
|
||||
# We need to make sure this worker has caught up with the token. If
|
||||
# this returns false, it means we timed out waiting, and we should
|
||||
# just return an empty response.
|
||||
before_wait_ts = self.clock.time_msec()
|
||||
if not await self.notifier.wait_for_stream_token(from_token):
|
||||
logger.warning(
|
||||
"Timed out waiting for worker to catch up. Returning empty response"
|
||||
)
|
||||
return SlidingSyncResult.empty(from_token)
|
||||
|
||||
# If we've spent significant time waiting to catch up, take it off
|
||||
# the timeout.
|
||||
after_wait_ts = self.clock.time_msec()
|
||||
if after_wait_ts - before_wait_ts > 1_000:
|
||||
timeout_ms -= after_wait_ts - before_wait_ts
|
||||
timeout_ms = max(timeout_ms, 0)
|
||||
|
||||
# We're going to respond immediately if the timeout is 0 or if this is an
|
||||
# initial sync (without a `from_token`) so we can avoid calling
|
||||
# `notifier.wait_for_events()`.
|
||||
if timeout_ms == 0 or from_token is None:
|
||||
now_token = self.event_sources.get_current_token()
|
||||
result = await self.current_sync_for_user(
|
||||
sync_config,
|
||||
from_token=from_token,
|
||||
to_token=now_token,
|
||||
)
|
||||
else:
|
||||
# Otherwise, we wait for something to happen and report it to the user.
|
||||
async def current_sync_callback(
|
||||
before_token: StreamToken, after_token: StreamToken
|
||||
) -> SlidingSyncResult:
|
||||
return await self.current_sync_for_user(
|
||||
sync_config,
|
||||
from_token=from_token,
|
||||
to_token=after_token,
|
||||
)
|
||||
|
||||
result = await self.notifier.wait_for_events(
|
||||
sync_config.user.to_string(),
|
||||
timeout_ms,
|
||||
current_sync_callback,
|
||||
from_token=from_token,
|
||||
)
|
||||
|
||||
return result
|
||||
|
||||
async def current_sync_for_user(
|
||||
self,
|
||||
sync_config: SlidingSyncConfig,
|
||||
to_token: StreamToken,
|
||||
from_token: Optional[StreamToken] = None,
|
||||
) -> SlidingSyncResult:
|
||||
"""
|
||||
Generates the response body of a Sliding Sync result, represented as a
|
||||
`SlidingSyncResult`.
|
||||
"""
|
||||
user_id = sync_config.user.to_string()
|
||||
app_service = self.store.get_app_service_by_user_id(user_id)
|
||||
if app_service:
|
||||
# We no longer support AS users using /sync directly.
|
||||
# See https://github.com/matrix-org/matrix-doc/issues/1144
|
||||
raise NotImplementedError()
|
||||
|
||||
# Get all of the room IDs that the user should be able to see in the sync
|
||||
# response
|
||||
room_id_set = await self.get_sync_room_ids_for_user(
|
||||
sync_config.user,
|
||||
from_token=from_token,
|
||||
to_token=to_token,
|
||||
)
|
||||
|
||||
# Assemble sliding window lists
|
||||
lists: Dict[str, SlidingSyncResult.SlidingWindowList] = {}
|
||||
if sync_config.lists:
|
||||
for list_key, list_config in sync_config.lists.items():
|
||||
# TODO: Apply filters
|
||||
#
|
||||
# TODO: Exclude partially stated rooms unless the `required_state` has
|
||||
# `["m.room.member", "$LAZY"]`
|
||||
filtered_room_ids = room_id_set
|
||||
# TODO: Apply sorts
|
||||
sorted_room_ids = sorted(filtered_room_ids)
|
||||
|
||||
ops: List[SlidingSyncResult.SlidingWindowList.Operation] = []
|
||||
if list_config.ranges:
|
||||
for range in list_config.ranges:
|
||||
ops.append(
|
||||
SlidingSyncResult.SlidingWindowList.Operation(
|
||||
op=OperationType.SYNC,
|
||||
range=range,
|
||||
room_ids=sorted_room_ids[range[0] : range[1]],
|
||||
)
|
||||
)
|
||||
|
||||
lists[list_key] = SlidingSyncResult.SlidingWindowList(
|
||||
count=len(sorted_room_ids),
|
||||
ops=ops,
|
||||
)
|
||||
|
||||
return SlidingSyncResult(
|
||||
next_pos=to_token,
|
||||
lists=lists,
|
||||
# TODO: Gather room data for rooms in lists and `sync_config.room_subscriptions`
|
||||
rooms={},
|
||||
extensions={},
|
||||
)
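# For orientation: roughly the JSON that the `lists` assembled above serialize to under
# MSC3575 -- each list carries a total `count` plus the window operations. The literal
# values (list key, token, room IDs) are invented for illustration, and the actual
# serialization happens in the REST layer rather than here.
example_sliding_sync_lists = {
    "pos": "s1234",  # `next_pos` above
    "lists": {
        "all-rooms": {
            "count": 1337,
            "ops": [
                {
                    "op": "SYNC",
                    "range": [0, 2],
                    "room_ids": ["!a:example.org", "!b:example.org", "!c:example.org"],
                }
            ],
        }
    },
}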
|
||||
|
||||
async def get_sync_room_ids_for_user(
|
||||
self,
|
||||
user: UserID,
|
||||
to_token: StreamToken,
|
||||
from_token: Optional[StreamToken] = None,
|
||||
) -> AbstractSet[str]:
|
||||
"""
|
||||
Fetch room IDs that should be listed for this user in the sync response (the
|
||||
full room list that will be filtered, sorted, and sliced).
|
||||
|
||||
We're looking for rooms where the user has the following state in the token
|
||||
range (> `from_token` and <= `to_token`):
|
||||
|
||||
- `invite`, `join`, `knock`, `ban` membership events
|
||||
- Kicks (`leave` membership events where `sender` is different from the
|
||||
`user_id`/`state_key`)
|
||||
- `newly_left` (rooms that were left during the given token range)
|
||||
- In order for bans/kicks to not show up in sync, you need to `/forget` those
|
||||
rooms. This doesn't modify the event itself though and only adds the
|
||||
`forgotten` flag to the `room_memberships` table in Synapse. There isn't a way
|
||||
to tell when a room was forgotten at the moment so we can't factor it into the
|
||||
from/to range.
|
||||
"""
|
||||
user_id = user.to_string()
|
||||
|
||||
# First grab a current snapshot of rooms for the user
|
||||
# (also handles forgotten rooms)
|
||||
room_for_user_list = await self.store.get_rooms_for_local_user_where_membership_is(
|
||||
user_id=user_id,
|
||||
# We want to fetch any kind of membership (joined and left rooms) in order
|
||||
# to get the `event_pos` of the latest room membership event for the
|
||||
# user.
|
||||
#
|
||||
# We will filter out the rooms that don't belong below (see
|
||||
# `filter_membership_for_sync`)
|
||||
membership_list=Membership.LIST,
|
||||
excluded_rooms=self.rooms_to_exclude_globally,
|
||||
)
|
||||
|
||||
# If the user has never joined any rooms before, we can just return an empty list
|
||||
if not room_for_user_list:
|
||||
return set()
|
||||
|
||||
# Our working list of rooms that can show up in the sync response
|
||||
sync_room_id_set = {
|
||||
room_for_user.room_id
|
||||
for room_for_user in room_for_user_list
|
||||
if filter_membership_for_sync(
|
||||
membership=room_for_user.membership,
|
||||
user_id=user_id,
|
||||
sender=room_for_user.sender,
|
||||
)
|
||||
}
|
||||
|
||||
# Get the `RoomStreamToken` that represents the spot we queried up to when we got
|
||||
# our membership snapshot from `get_rooms_for_local_user_where_membership_is()`.
|
||||
#
|
||||
# First, we need to get the max stream_ordering of each event persister instance
|
||||
# that we queried events from.
|
||||
instance_to_max_stream_ordering_map: Dict[str, int] = {}
|
||||
for room_for_user in room_for_user_list:
|
||||
instance_name = room_for_user.event_pos.instance_name
|
||||
stream_ordering = room_for_user.event_pos.stream
|
||||
|
||||
current_instance_max_stream_ordering = (
|
||||
instance_to_max_stream_ordering_map.get(instance_name)
|
||||
)
|
||||
if (
|
||||
current_instance_max_stream_ordering is None
|
||||
or stream_ordering > current_instance_max_stream_ordering
|
||||
):
|
||||
instance_to_max_stream_ordering_map[instance_name] = stream_ordering
|
||||
|
||||
# Then assemble the `RoomStreamToken`
|
||||
membership_snapshot_token = RoomStreamToken(
|
||||
# Minimum position in the `instance_map`
|
||||
stream=min(instance_to_max_stream_ordering_map.values()),
|
||||
instance_map=immutabledict(instance_to_max_stream_ordering_map),
|
||||
)
|
||||
|
||||
# If our `to_token` is already the same or ahead of the latest room membership
|
||||
# for the user, we can just straight-up return the room list (nothing has
|
||||
# changed)
|
||||
if membership_snapshot_token.is_before_or_eq(to_token.room_key):
|
||||
return sync_room_id_set
|
||||
|
||||
# Since we fetched the users room list at some point in time after the from/to
|
||||
# tokens, we need to revert/rewind some membership changes to match the point in
|
||||
# time of the `to_token`. In particular, we need to make these fixups:
|
||||
#
|
||||
# - 1a) Add back rooms that the user left after the `to_token`
# - 1b) Remove rooms that the user joined after the `to_token`
|
||||
# - 2) Add back newly_left rooms (> `from_token` and <= `to_token`)
|
||||
#
|
||||
# Below, we're doing two separate lookups for membership changes. We could
|
||||
# request everything for both fixups in one range, [`from_token.room_key`,
|
||||
# `membership_snapshot_token`), but we want to avoid raw `stream_ordering`
|
||||
# comparison without `instance_name` (which is flawed). We could refactor
|
||||
# `event.internal_metadata` to include `instance_name` but it might turn out a
|
||||
# little difficult and a bigger, broader Synapse change than we want to make.
|
||||
|
||||
# 1) -----------------------------------------------------
|
||||
|
||||
# 1) Fetch membership changes that fall in the range from `to_token` up to
|
||||
# `membership_snapshot_token`
|
||||
membership_change_events_after_to_token = (
|
||||
await self.store.get_membership_changes_for_user(
|
||||
user_id,
|
||||
from_key=to_token.room_key,
|
||||
to_key=membership_snapshot_token,
|
||||
excluded_rooms=self.rooms_to_exclude_globally,
|
||||
)
|
||||
)
|
||||
|
||||
# 1) Assemble a list of the last membership events in some given ranges. Someone
|
||||
# could have left and joined multiple times during the given range but we only
|
||||
# care about the end result, so we grab the last one.
|
||||
last_membership_change_by_room_id_after_to_token: Dict[str, EventBase] = {}
|
||||
# We also need the first membership event after the `to_token` so we can step
|
||||
# backward to the previous membership that would apply to the from/to range.
|
||||
first_membership_change_by_room_id_after_to_token: Dict[str, EventBase] = {}
|
||||
for event in membership_change_events_after_to_token:
|
||||
last_membership_change_by_room_id_after_to_token[event.room_id] = event
|
||||
# Only set if we haven't already set it
|
||||
first_membership_change_by_room_id_after_to_token.setdefault(
|
||||
event.room_id, event
|
||||
)
|
||||
|
||||
# 1) Fixup
|
||||
for (
|
||||
last_membership_change_after_to_token
|
||||
) in last_membership_change_by_room_id_after_to_token.values():
|
||||
room_id = last_membership_change_after_to_token.room_id
|
||||
|
||||
# We want to find the first membership change after the `to_token` then step
|
||||
# backward to know the membership in the from/to range.
|
||||
first_membership_change_after_to_token = (
|
||||
first_membership_change_by_room_id_after_to_token.get(room_id)
|
||||
)
|
||||
assert first_membership_change_after_to_token is not None, (
|
||||
"If there was a `last_membership_change_after_to_token` that we're iterating over, "
|
||||
+ "then there should be corresponding a first change. For example, even if there "
|
||||
+ "is only one event after the `to_token`, the first and last event will be same event. "
|
||||
+ "This is probably a mistake in assembling the `last_membership_change_by_room_id_after_to_token`"
|
||||
+ "/`first_membership_change_by_room_id_after_to_token` dicts above."
|
||||
)
|
||||
# TODO: Instead of reading from `unsigned`, refactor this to use the
|
||||
# `current_state_delta_stream` table in the future. Probably a new
|
||||
# `get_membership_changes_for_user()` function that uses
|
||||
# `current_state_delta_stream` with a join to `room_memberships`. This would
|
||||
# help in state reset scenarios since `prev_content` is looking at the
|
||||
# current branch vs the current room state. This is all just data given to
|
||||
# the client so no real harm to data integrity, but we'd like to be nice to
|
||||
# the client. Since the `current_state_delta_stream` table is new, it
|
||||
# doesn't have all events in it. Since this is Sliding Sync, if we ever need
|
||||
# to, we can signal the client to throw all of their state away by sending
|
||||
# "operation: RESET".
|
||||
prev_content = first_membership_change_after_to_token.unsigned.get(
|
||||
"prev_content", {}
|
||||
)
|
||||
prev_membership = prev_content.get("membership", None)
|
||||
prev_sender = first_membership_change_after_to_token.unsigned.get(
|
||||
"prev_sender", None
|
||||
)
|
||||
|
||||
# Check if the previous membership (membership that applies to the from/to
|
||||
# range) should be included in our `sync_room_id_set`
|
||||
should_prev_membership_be_included = (
|
||||
prev_membership is not None
|
||||
and prev_sender is not None
|
||||
and filter_membership_for_sync(
|
||||
membership=prev_membership,
|
||||
user_id=user_id,
|
||||
sender=prev_sender,
|
||||
)
|
||||
)
|
||||
|
||||
# Check if the last membership (membership that applies to our snapshot) was
|
||||
# already included in our `sync_room_id_set`
|
||||
was_last_membership_already_included = filter_membership_for_sync(
|
||||
membership=last_membership_change_after_to_token.membership,
|
||||
user_id=user_id,
|
||||
sender=last_membership_change_after_to_token.sender,
|
||||
)
|
||||
|
||||
# 1a) Add back rooms that the user left after the `to_token`
|
||||
#
|
||||
# For example, if the last membership event after the `to_token` is a leave
|
||||
# event, then the room was excluded from `sync_room_id_set` when we first
|
||||
# crafted it above. We should add these rooms back as long as the user also
|
||||
# was part of the room before the `to_token`.
|
||||
if (
|
||||
not was_last_membership_already_included
|
||||
and should_prev_membership_be_included
|
||||
):
|
||||
sync_room_id_set.add(room_id)
|
||||
# 1b) Remove rooms that the user joined (hasn't left) after the `to_token`
|
||||
#
|
||||
# For example, if the last membership event after the `to_token` is a "join"
|
||||
# event, then the room was included `sync_room_id_set` when we first crafted
|
||||
# it above. We should remove these rooms as long as the user also wasn't
|
||||
# part of the room before the `to_token`.
|
||||
elif (
|
||||
was_last_membership_already_included
|
||||
and not should_prev_membership_be_included
|
||||
):
|
||||
sync_room_id_set.discard(room_id)
|
||||
|
||||
# 2) -----------------------------------------------------
|
||||
# We fix up newly_left rooms after the first fixup because it may have removed
# some left rooms that we can figure out are newly_left in the following code
|
||||
|
||||
# 2) Fetch membership changes that fall in the range from `from_token` up to `to_token`
|
||||
membership_change_events_in_from_to_range = []
|
||||
if from_token:
|
||||
membership_change_events_in_from_to_range = (
|
||||
await self.store.get_membership_changes_for_user(
|
||||
user_id,
|
||||
from_key=from_token.room_key,
|
||||
to_key=to_token.room_key,
|
||||
excluded_rooms=self.rooms_to_exclude_globally,
|
||||
)
|
||||
)
|
||||
|
||||
# 2) Assemble a list of the last membership events in some given ranges. Someone
|
||||
# could have left and joined multiple times during the given range but we only
|
||||
# care about the end result, so we grab the last one.
|
||||
last_membership_change_by_room_id_in_from_to_range: Dict[str, EventBase] = {}
|
||||
for event in membership_change_events_in_from_to_range:
|
||||
last_membership_change_by_room_id_in_from_to_range[event.room_id] = event
|
||||
|
||||
# 2) Fixup
|
||||
for (
|
||||
last_membership_change_in_from_to_range
|
||||
) in last_membership_change_by_room_id_in_from_to_range.values():
|
||||
room_id = last_membership_change_in_from_to_range.room_id
|
||||
|
||||
# 2) Add back newly_left rooms (> `from_token` and <= `to_token`). We
|
||||
# include newly_left rooms because the last event that the user should see
|
||||
# is their own leave event
|
||||
if last_membership_change_in_from_to_range.membership == Membership.LEAVE:
|
||||
sync_room_id_set.add(room_id)
|
||||
|
||||
return sync_room_id_set
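# Illustrative only: a toy, self-contained version of the fixups above, using plain
# tuples in place of real membership events and integer stream positions in place of
# tokens. The point is that the membership snapshot is taken *after* `to_token`, so any
# changes that happened after `to_token` must be undone, and rooms left within the
# (from_token, to_token] range are re-added as newly_left. All values are invented.
from typing import List, Set, Tuple

# (stream_position, room_id, membership) -- hypothetical simplified membership changes
changes: List[Tuple[int, str, str]] = [
    (5, "!left-later:x", "leave"),   # left after to_token -> add back
    (6, "!joined-later:x", "join"),  # joined after to_token -> remove
    (3, "!newly-left:x", "leave"),   # left within (from, to] -> add back as newly_left
]
from_pos, to_pos = 2, 4
# The snapshot reflects the *current* memberships, i.e. after all of the changes above.
snapshot: Set[str] = {"!always-joined:x", "!joined-later:x"}

room_ids = set(snapshot)
for pos, room_id, membership in changes:
    if pos > to_pos:
        # Rewind changes that happened after `to_token`.
        if membership == "leave":
            room_ids.add(room_id)
        elif membership == "join":
            room_ids.discard(room_id)
    elif from_pos < pos <= to_pos and membership == "leave":
        # newly_left: the user should still see their own leave event.
        room_ids.add(room_id)
print(sorted(room_ids))  # ['!always-joined:x', '!left-later:x', '!newly-left:x']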
|
||||
@@ -28,11 +28,14 @@ from typing import (
|
||||
Dict,
|
||||
FrozenSet,
|
||||
List,
|
||||
Literal,
|
||||
Mapping,
|
||||
Optional,
|
||||
Sequence,
|
||||
Set,
|
||||
Tuple,
|
||||
Union,
|
||||
overload,
|
||||
)
|
||||
|
||||
import attr
|
||||
@@ -128,6 +131,8 @@ class SyncVersion(Enum):
|
||||
|
||||
# Traditional `/sync` endpoint
|
||||
SYNC_V2 = "sync_v2"
|
||||
# Part of MSC3575 Sliding Sync
|
||||
E2EE_SYNC = "e2ee_sync"
|
||||
|
||||
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
@@ -279,6 +284,47 @@ class SyncResult:
|
||||
or self.device_lists
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def empty(
|
||||
next_batch: StreamToken,
|
||||
device_one_time_keys_count: JsonMapping,
|
||||
device_unused_fallback_key_types: List[str],
|
||||
) -> "SyncResult":
|
||||
"Return a new empty result"
|
||||
return SyncResult(
|
||||
next_batch=next_batch,
|
||||
presence=[],
|
||||
account_data=[],
|
||||
joined=[],
|
||||
invited=[],
|
||||
knocked=[],
|
||||
archived=[],
|
||||
to_device=[],
|
||||
device_lists=DeviceListUpdates(),
|
||||
device_one_time_keys_count=device_one_time_keys_count,
|
||||
device_unused_fallback_key_types=device_unused_fallback_key_types,
|
||||
)
|
||||
|
||||
|
||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||
class E2eeSyncResult:
|
||||
"""
|
||||
Attributes:
|
||||
next_batch: Token for the next sync
|
||||
to_device: List of direct messages for the device.
|
||||
device_lists: List of user_ids whose devices have changed
|
||||
device_one_time_keys_count: Dict of algorithm to count for one time keys
|
||||
for this device
|
||||
device_unused_fallback_key_types: List of key types that have an unused fallback
|
||||
key
|
||||
"""
|
||||
|
||||
next_batch: StreamToken
|
||||
to_device: List[JsonDict]
|
||||
device_lists: DeviceListUpdates
|
||||
device_one_time_keys_count: JsonMapping
|
||||
device_unused_fallback_key_types: List[str]
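# Rough illustration of the response body that an `E2eeSyncResult` maps onto, using the
# sync v2 field names referenced in the attributes above. Treat this as a sketch of the
# shape rather than the authoritative serialization (which lives in the REST layer);
# all of the literal values are invented.
example_e2ee_sync_response = {
    "next_batch": "s72595_4483_1934",
    "to_device": {
        "events": [
            {"type": "m.room_key_request", "sender": "@alice:example.org", "content": {}}
        ]
    },
    "device_lists": {"changed": ["@bob:example.org"], "left": []},
    "device_one_time_keys_count": {"signed_curve25519": 50},
    "device_unused_fallback_key_types": ["signed_curve25519"],
}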
|
||||
|
||||
|
||||
class SyncHandler:
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
@@ -322,6 +368,31 @@ class SyncHandler:
|
||||
|
||||
self.rooms_to_exclude_globally = hs.config.server.rooms_to_exclude_from_sync
|
||||
|
||||
@overload
|
||||
async def wait_for_sync_for_user(
|
||||
self,
|
||||
requester: Requester,
|
||||
sync_config: SyncConfig,
|
||||
sync_version: Literal[SyncVersion.SYNC_V2],
|
||||
request_key: SyncRequestKey,
|
||||
since_token: Optional[StreamToken] = None,
|
||||
timeout: int = 0,
|
||||
full_state: bool = False,
|
||||
) -> SyncResult: ...
|
||||
|
||||
@overload
|
||||
async def wait_for_sync_for_user(
|
||||
self,
|
||||
requester: Requester,
|
||||
sync_config: SyncConfig,
|
||||
sync_version: Literal[SyncVersion.E2EE_SYNC],
|
||||
request_key: SyncRequestKey,
|
||||
since_token: Optional[StreamToken] = None,
|
||||
timeout: int = 0,
|
||||
full_state: bool = False,
|
||||
) -> E2eeSyncResult: ...
|
||||
|
||||
@overload
|
||||
async def wait_for_sync_for_user(
|
||||
self,
|
||||
requester: Requester,
|
||||
@@ -331,7 +402,18 @@ class SyncHandler:
|
||||
since_token: Optional[StreamToken] = None,
|
||||
timeout: int = 0,
|
||||
full_state: bool = False,
|
||||
) -> SyncResult:
|
||||
) -> Union[SyncResult, E2eeSyncResult]: ...
|
||||
|
||||
async def wait_for_sync_for_user(
|
||||
self,
|
||||
requester: Requester,
|
||||
sync_config: SyncConfig,
|
||||
sync_version: SyncVersion,
|
||||
request_key: SyncRequestKey,
|
||||
since_token: Optional[StreamToken] = None,
|
||||
timeout: int = 0,
|
||||
full_state: bool = False,
|
||||
) -> Union[SyncResult, E2eeSyncResult]:
|
||||
"""Get the sync for a client if we have new data for it now. Otherwise
|
||||
wait for new data to arrive on the server. If the timeout expires, then
|
||||
return an empty sync result.
|
||||
@@ -344,8 +426,10 @@ class SyncHandler:
|
||||
since_token: The point in the stream to sync from.
|
||||
timeout: How long to wait for new data to arrive before giving up.
|
||||
full_state: Whether to return the full state for each room.
|
||||
|
||||
Returns:
|
||||
When `SyncVersion.SYNC_V2`, returns a full `SyncResult`.
|
||||
When `SyncVersion.E2EE_SYNC`, returns a `E2eeSyncResult`.
|
||||
"""
|
||||
# If the user is not part of the mau group, then check that limits have
|
||||
# not been exceeded (if not part of the group by this point, almost certain
|
||||
@@ -366,6 +450,29 @@ class SyncHandler:
|
||||
logger.debug("Returning sync response for %s", user_id)
|
||||
return res
|
||||
|
||||
@overload
|
||||
async def _wait_for_sync_for_user(
|
||||
self,
|
||||
sync_config: SyncConfig,
|
||||
sync_version: Literal[SyncVersion.SYNC_V2],
|
||||
since_token: Optional[StreamToken],
|
||||
timeout: int,
|
||||
full_state: bool,
|
||||
cache_context: ResponseCacheContext[SyncRequestKey],
|
||||
) -> SyncResult: ...
|
||||
|
||||
@overload
|
||||
async def _wait_for_sync_for_user(
|
||||
self,
|
||||
sync_config: SyncConfig,
|
||||
sync_version: Literal[SyncVersion.E2EE_SYNC],
|
||||
since_token: Optional[StreamToken],
|
||||
timeout: int,
|
||||
full_state: bool,
|
||||
cache_context: ResponseCacheContext[SyncRequestKey],
|
||||
) -> E2eeSyncResult: ...
|
||||
|
||||
@overload
|
||||
async def _wait_for_sync_for_user(
|
||||
self,
|
||||
sync_config: SyncConfig,
|
||||
@@ -374,7 +481,17 @@ class SyncHandler:
|
||||
timeout: int,
|
||||
full_state: bool,
|
||||
cache_context: ResponseCacheContext[SyncRequestKey],
|
||||
) -> SyncResult:
|
||||
) -> Union[SyncResult, E2eeSyncResult]: ...
|
||||
|
||||
async def _wait_for_sync_for_user(
|
||||
self,
|
||||
sync_config: SyncConfig,
|
||||
sync_version: SyncVersion,
|
||||
since_token: Optional[StreamToken],
|
||||
timeout: int,
|
||||
full_state: bool,
|
||||
cache_context: ResponseCacheContext[SyncRequestKey],
|
||||
) -> Union[SyncResult, E2eeSyncResult]:
|
||||
"""The start of the machinery that produces a /sync response.
|
||||
|
||||
See https://spec.matrix.org/v1.1/client-server-api/#syncing for full details.
|
||||
@@ -401,6 +518,45 @@ class SyncHandler:
|
||||
if context:
|
||||
context.tag = sync_label
|
||||
|
||||
if since_token is not None:
|
||||
# We need to make sure this worker has caught up with the token. If
|
||||
# this returns false it means we timed out waiting, and we should
|
||||
# just return an empty response.
|
||||
start = self.clock.time_msec()
|
||||
if not await self.notifier.wait_for_stream_token(since_token):
|
||||
logger.warning(
|
||||
"Timed out waiting for worker to catch up. Returning empty response"
|
||||
)
|
||||
device_id = sync_config.device_id
|
||||
one_time_keys_count: JsonMapping = {}
|
||||
unused_fallback_key_types: List[str] = []
|
||||
if device_id:
|
||||
user_id = sync_config.user.to_string()
|
||||
# TODO: We should have a way to let clients differentiate between the states of:
|
||||
# * no change in OTK count since the provided since token
|
||||
# * the server has zero OTKs left for this device
|
||||
# Spec issue: https://github.com/matrix-org/matrix-doc/issues/3298
|
||||
one_time_keys_count = await self.store.count_e2e_one_time_keys(
|
||||
user_id, device_id
|
||||
)
|
||||
unused_fallback_key_types = list(
|
||||
await self.store.get_e2e_unused_fallback_key_types(
|
||||
user_id, device_id
|
||||
)
|
||||
)
|
||||
|
||||
cache_context.should_cache = False # Don't cache empty responses
|
||||
return SyncResult.empty(
|
||||
since_token, one_time_keys_count, unused_fallback_key_types
|
||||
)
|
||||
|
||||
# If we've spent significant time waiting to catch up, take it off
|
||||
# the timeout.
|
||||
now = self.clock.time_msec()
|
||||
if now - start > 1_000:
|
||||
timeout -= now - start
|
||||
timeout = max(timeout, 0)
|
||||
|
||||
# if we have a since token, delete any to-device messages before that token
|
||||
# (since we now know that the device has received them)
|
||||
if since_token is not None:
|
||||
@@ -417,14 +573,16 @@ class SyncHandler:
|
||||
if timeout == 0 or since_token is None or full_state:
|
||||
# we are going to return immediately, so don't bother calling
|
||||
# notifier.wait_for_events.
|
||||
result: SyncResult = await self.current_sync_for_user(
|
||||
sync_config, sync_version, since_token, full_state=full_state
|
||||
result: Union[SyncResult, E2eeSyncResult] = (
|
||||
await self.current_sync_for_user(
|
||||
sync_config, sync_version, since_token, full_state=full_state
|
||||
)
|
||||
)
|
||||
else:
|
||||
# Otherwise, we wait for something to happen and report it to the user.
|
||||
async def current_sync_callback(
|
||||
before_token: StreamToken, after_token: StreamToken
|
||||
) -> SyncResult:
|
||||
) -> Union[SyncResult, E2eeSyncResult]:
|
||||
return await self.current_sync_for_user(
|
||||
sync_config, sync_version, since_token
|
||||
)
|
||||
@@ -456,14 +614,43 @@ class SyncHandler:
|
||||
|
||||
return result
|
||||
|
||||
@overload
|
||||
async def current_sync_for_user(
|
||||
self,
|
||||
sync_config: SyncConfig,
|
||||
sync_version: Literal[SyncVersion.SYNC_V2],
|
||||
since_token: Optional[StreamToken] = None,
|
||||
full_state: bool = False,
|
||||
) -> SyncResult: ...
|
||||
|
||||
@overload
|
||||
async def current_sync_for_user(
|
||||
self,
|
||||
sync_config: SyncConfig,
|
||||
sync_version: Literal[SyncVersion.E2EE_SYNC],
|
||||
since_token: Optional[StreamToken] = None,
|
||||
full_state: bool = False,
|
||||
) -> E2eeSyncResult: ...
|
||||
|
||||
@overload
|
||||
async def current_sync_for_user(
|
||||
self,
|
||||
sync_config: SyncConfig,
|
||||
sync_version: SyncVersion,
|
||||
since_token: Optional[StreamToken] = None,
|
||||
full_state: bool = False,
|
||||
) -> SyncResult:
|
||||
"""Generates the response body of a sync result, represented as a SyncResult.
|
||||
) -> Union[SyncResult, E2eeSyncResult]: ...
|
||||
|
||||
async def current_sync_for_user(
|
||||
self,
|
||||
sync_config: SyncConfig,
|
||||
sync_version: SyncVersion,
|
||||
since_token: Optional[StreamToken] = None,
|
||||
full_state: bool = False,
|
||||
) -> Union[SyncResult, E2eeSyncResult]:
|
||||
"""
|
||||
Generates the response body of a sync result, represented as a
|
||||
`SyncResult`/`E2eeSyncResult`.
|
||||
|
||||
This is a wrapper around `generate_sync_result` which starts an open tracing
|
||||
span to track the sync. See `generate_sync_result` for the next part of your
|
||||
@@ -474,15 +661,25 @@ class SyncHandler:
|
||||
sync_version: Determines what kind of sync response to generate.
|
||||
since_token: The point in the stream to sync from.
|
||||
full_state: Whether to return the full state for each room.
|
||||
|
||||
Returns:
|
||||
When `SyncVersion.SYNC_V2`, returns a full `SyncResult`.
|
||||
When `SyncVersion.E2EE_SYNC`, returns a `E2eeSyncResult`.
|
||||
"""
|
||||
with start_active_span("sync.current_sync_for_user"):
|
||||
log_kv({"since_token": since_token})
|
||||
|
||||
# Go through the `/sync` v2 path
|
||||
if sync_version == SyncVersion.SYNC_V2:
|
||||
sync_result: SyncResult = await self.generate_sync_result(
|
||||
sync_config, since_token, full_state
|
||||
sync_result: Union[SyncResult, E2eeSyncResult] = (
|
||||
await self.generate_sync_result(
|
||||
sync_config, since_token, full_state
|
||||
)
|
||||
)
|
||||
# Go through the MSC3575 Sliding Sync `/sync/e2ee` path
|
||||
elif sync_version == SyncVersion.E2EE_SYNC:
|
||||
sync_result = await self.generate_e2ee_sync_result(
|
||||
sync_config, since_token
|
||||
)
|
||||
else:
|
||||
raise Exception(
|
||||
@@ -1691,6 +1888,96 @@ class SyncHandler:
|
||||
next_batch=sync_result_builder.now_token,
|
||||
)
|
||||
|
||||
async def generate_e2ee_sync_result(
|
||||
self,
|
||||
sync_config: SyncConfig,
|
||||
since_token: Optional[StreamToken] = None,
|
||||
) -> E2eeSyncResult:
|
||||
"""
|
||||
Generates the response body of a MSC3575 Sliding Sync `/sync/e2ee` result.
|
||||
|
||||
This is represented by a `E2eeSyncResult` struct, which is built from small
|
||||
pieces using a `SyncResultBuilder`. The `sync_result_builder` is passed as a
|
||||
mutable ("inout") parameter to various helper functions. These retrieve and
|
||||
process the data which forms the sync body, often writing to the
|
||||
`sync_result_builder` to store their output.
|
||||
|
||||
At the end, we transfer data from the `sync_result_builder` to a new `E2eeSyncResult`
|
||||
instance to signify that the sync calculation is complete.
|
||||
"""
|
||||
user_id = sync_config.user.to_string()
|
||||
app_service = self.store.get_app_service_by_user_id(user_id)
|
||||
if app_service:
|
||||
# We no longer support AS users using /sync directly.
|
||||
# See https://github.com/matrix-org/matrix-doc/issues/1144
|
||||
raise NotImplementedError()
|
||||
|
||||
sync_result_builder = await self.get_sync_result_builder(
|
||||
sync_config,
|
||||
since_token,
|
||||
full_state=False,
|
||||
)
|
||||
|
||||
# 1. Calculate `to_device` events
|
||||
await self._generate_sync_entry_for_to_device(sync_result_builder)
|
||||
|
||||
# 2. Calculate `device_lists`
|
||||
# Device list updates are sent if a since token is provided.
|
||||
device_lists = DeviceListUpdates()
|
||||
include_device_list_updates = bool(since_token and since_token.device_list_key)
|
||||
if include_device_list_updates:
|
||||
# Note that _generate_sync_entry_for_rooms sets sync_result_builder.joined, which
|
||||
# is used in calculate_user_changes below.
|
||||
#
|
||||
# TODO: Running `_generate_sync_entry_for_rooms()` is a lot of work just to
|
||||
# figure out the membership changes/derived info needed for
|
||||
# `_generate_sync_entry_for_device_list()`. In the future, we should try to
|
||||
# refactor this away.
|
||||
(
|
||||
newly_joined_rooms,
|
||||
newly_left_rooms,
|
||||
) = await self._generate_sync_entry_for_rooms(sync_result_builder)
|
||||
|
||||
# This uses the sync_result_builder.joined which is set in
|
||||
# `_generate_sync_entry_for_rooms`, if that didn't find any joined
|
||||
# rooms for some reason it is a no-op.
|
||||
(
|
||||
newly_joined_or_invited_or_knocked_users,
|
||||
newly_left_users,
|
||||
) = sync_result_builder.calculate_user_changes()
|
||||
|
||||
device_lists = await self._generate_sync_entry_for_device_list(
|
||||
sync_result_builder,
|
||||
newly_joined_rooms=newly_joined_rooms,
|
||||
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
|
||||
newly_left_rooms=newly_left_rooms,
|
||||
newly_left_users=newly_left_users,
|
||||
)
|
||||
|
||||
# 3. Calculate `device_one_time_keys_count` and `device_unused_fallback_key_types`
|
||||
device_id = sync_config.device_id
|
||||
one_time_keys_count: JsonMapping = {}
|
||||
unused_fallback_key_types: List[str] = []
|
||||
if device_id:
|
||||
# TODO: We should have a way to let clients differentiate between the states of:
|
||||
# * no change in OTK count since the provided since token
|
||||
# * the server has zero OTKs left for this device
|
||||
# Spec issue: https://github.com/matrix-org/matrix-doc/issues/3298
|
||||
one_time_keys_count = await self.store.count_e2e_one_time_keys(
|
||||
user_id, device_id
|
||||
)
|
||||
unused_fallback_key_types = list(
|
||||
await self.store.get_e2e_unused_fallback_key_types(user_id, device_id)
|
||||
)
|
||||
|
||||
return E2eeSyncResult(
|
||||
to_device=sync_result_builder.to_device,
|
||||
device_lists=device_lists,
|
||||
device_one_time_keys_count=one_time_keys_count,
|
||||
device_unused_fallback_key_types=unused_fallback_key_types,
|
||||
next_batch=sync_result_builder.now_token,
|
||||
)
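# A minimal, generic sketch of the "mutable result builder" pattern described in the
# docstring of `generate_e2ee_sync_result` above: helper steps write into a shared
# builder object, and an immutable result is assembled from it at the end. The
# `ToyBuilder`/`ToyResult` names are invented for illustration and are not Synapse APIs.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ToyBuilder:
    to_device: List[str] = field(default_factory=list)
    joined: List[str] = field(default_factory=list)


def calculate_to_device(builder: ToyBuilder) -> None:
    builder.to_device.append("m.room_key_request")  # helpers mutate the builder "inout"


def calculate_rooms(builder: ToyBuilder) -> None:
    builder.joined.append("!room:example.org")


@dataclass(frozen=True)
class ToyResult:
    to_device: Tuple[str, ...]
    joined: Tuple[str, ...]


builder = ToyBuilder()
calculate_to_device(builder)
calculate_rooms(builder)
result = ToyResult(to_device=tuple(builder.to_device), joined=tuple(builder.joined))
print(result)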
|
||||
|
||||
async def get_sync_result_builder(
|
||||
self,
|
||||
sync_config: SyncConfig,
|
||||
@@ -1715,7 +2002,7 @@ class SyncHandler:
|
||||
"""
|
||||
user_id = sync_config.user.to_string()
|
||||
|
||||
# Note: we get the users room list *before* we get the current token, this
|
||||
# Note: we get the users room list *before* we get the `now_token`, this
|
||||
# avoids checking back in history if rooms are joined after the token is fetched.
|
||||
token_before_rooms = self.event_sources.get_current_token()
|
||||
mutable_joined_room_ids = set(await self.store.get_rooms_for_user(user_id))
|
||||
@@ -1727,10 +2014,10 @@ class SyncHandler:
|
||||
now_token = self.event_sources.get_current_token()
|
||||
log_kv({"now_token": now_token})
|
||||
|
||||
# Since we fetched the users room list before the token, there's a small window
|
||||
# during which membership events may have been persisted, so we fetch these now
|
||||
# and modify the joined room list for any changes between the get_rooms_for_user
|
||||
# call and the get_current_token call.
|
||||
# Since we fetched the users room list before calculating the `now_token` (see
|
||||
# above), there's a small window during which membership events may have been
|
||||
# persisted, so we fetch these now and modify the joined room list for any
|
||||
# changes between the get_rooms_for_user call and the get_current_token call.
|
||||
membership_change_events = []
|
||||
if since_token:
|
||||
membership_change_events = await self.store.get_membership_changes_for_user(
|
||||
@@ -1740,16 +2027,19 @@ class SyncHandler:
|
||||
self.rooms_to_exclude_globally,
|
||||
)
|
||||
|
||||
mem_last_change_by_room_id: Dict[str, EventBase] = {}
|
||||
last_membership_change_by_room_id: Dict[str, EventBase] = {}
|
||||
for event in membership_change_events:
|
||||
mem_last_change_by_room_id[event.room_id] = event
|
||||
last_membership_change_by_room_id[event.room_id] = event
|
||||
|
||||
# For the latest membership event in each room found, add/remove the room ID
|
||||
# from the joined room list accordingly. In this case we only care if the
|
||||
# latest change is JOIN.
|
||||
|
||||
for room_id, event in mem_last_change_by_room_id.items():
|
||||
for room_id, event in last_membership_change_by_room_id.items():
|
||||
assert event.internal_metadata.stream_ordering
|
||||
# As a shortcut, skip any events that happened before we got our
|
||||
# `get_rooms_for_user()` snapshot (any changes are already represented
|
||||
# in that list).
|
||||
if (
|
||||
event.internal_metadata.stream_ordering
|
||||
< token_before_rooms.room_key.stream
|
||||
@@ -1889,7 +2179,7 @@ class SyncHandler:
|
||||
users_that_have_changed = (
|
||||
await self._device_handler.get_device_changes_in_shared_rooms(
|
||||
user_id,
|
||||
sync_result_builder.joined_room_ids,
|
||||
joined_room_ids,
|
||||
from_token=since_token,
|
||||
now_token=sync_result_builder.now_token,
|
||||
)
|
||||
@@ -2543,7 +2833,7 @@ class SyncHandler:
|
||||
continue
|
||||
|
||||
leave_token = now_token.copy_and_replace(
|
||||
StreamKeyType.ROOM, RoomStreamToken(stream=event.stream_ordering)
|
||||
StreamKeyType.ROOM, RoomStreamToken(stream=event.event_pos.stream)
|
||||
)
|
||||
room_entries.append(
|
||||
RoomSyncResultBuilder(
|
||||
|
||||
@@ -477,9 +477,9 @@ class TypingWriterHandler(FollowerTypingHandler):
|
||||
|
||||
rows = []
|
||||
for room_id in changed_rooms:
|
||||
serial = self._room_serials[room_id]
|
||||
if last_id < serial <= current_id:
|
||||
typing = self._room_typing[room_id]
|
||||
serial = self._room_serials.get(room_id)
|
||||
if serial and last_id < serial <= current_id:
|
||||
typing = self._room_typing.get(room_id, set())
|
||||
rows.append((serial, [room_id, list(typing)]))
|
||||
rows.sort()
|
||||
|
||||
|
||||
@@ -57,7 +57,7 @@ from twisted.internet.interfaces import IReactorTime
|
||||
from twisted.internet.task import Cooperator
|
||||
from twisted.web.client import ResponseFailed
|
||||
from twisted.web.http_headers import Headers
|
||||
from twisted.web.iweb import IAgent, IBodyProducer, IResponse
|
||||
from twisted.web.iweb import UNKNOWN_LENGTH, IAgent, IBodyProducer, IResponse
|
||||
|
||||
import synapse.metrics
|
||||
import synapse.util.retryutils
|
||||
@@ -68,6 +68,7 @@ from synapse.api.errors import (
|
||||
RequestSendFailed,
|
||||
SynapseError,
|
||||
)
|
||||
from synapse.api.ratelimiting import Ratelimiter
|
||||
from synapse.crypto.context_factory import FederationPolicyForHTTPS
|
||||
from synapse.http import QuieterFileBodyProducer
|
||||
from synapse.http.client import (
|
||||
@@ -1411,9 +1412,11 @@ class MatrixFederationHttpClient:
|
||||
destination: str,
|
||||
path: str,
|
||||
output_stream: BinaryIO,
|
||||
download_ratelimiter: Ratelimiter,
|
||||
ip_address: str,
|
||||
max_size: int,
|
||||
args: Optional[QueryParams] = None,
|
||||
retry_on_dns_fail: bool = True,
|
||||
max_size: Optional[int] = None,
|
||||
ignore_backoff: bool = False,
|
||||
follow_redirects: bool = False,
|
||||
) -> Tuple[int, Dict[bytes, List[bytes]]]:
|
||||
@@ -1422,6 +1425,10 @@ class MatrixFederationHttpClient:
|
||||
destination: The remote server to send the HTTP request to.
|
||||
path: The HTTP path to GET.
|
||||
output_stream: File to write the response body to.
|
||||
download_ratelimiter: a ratelimiter to limit remote media downloads, keyed to
|
||||
requester IP
|
||||
ip_address: IP address of the requester
|
||||
max_size: maximum allowable size in bytes of the file
|
||||
args: Optional dictionary used to create the query string.
|
||||
ignore_backoff: true to ignore the historical backoff data
|
||||
and try the request anyway.
|
||||
@@ -1441,11 +1448,27 @@ class MatrixFederationHttpClient:
|
||||
federation whitelist
|
||||
RequestSendFailed: If there were problems connecting to the
|
||||
remote, due to e.g. DNS failures, connection timeouts etc.
|
||||
SynapseError: If the requested file exceeds ratelimits
|
||||
"""
|
||||
request = MatrixFederationRequest(
|
||||
method="GET", destination=destination, path=path, query=args
|
||||
)
|
||||
|
||||
# check for a minimum balance of 1MiB in ratelimiter before initiating request
|
||||
send_req, _ = await download_ratelimiter.can_do_action(
|
||||
requester=None, key=ip_address, n_actions=1048576, update=False
|
||||
)
|
||||
|
||||
if not send_req:
|
||||
msg = "Requested file size exceeds ratelimits"
|
||||
logger.warning(
|
||||
"{%s} [%s] %s",
|
||||
request.txn_id,
|
||||
request.destination,
|
||||
msg,
|
||||
)
|
||||
raise SynapseError(HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED)
|
||||
|
||||
response = await self._send_request(
|
||||
request,
|
||||
retry_on_dns_fail=retry_on_dns_fail,
|
||||
@@ -1455,12 +1478,36 @@ class MatrixFederationHttpClient:
|
||||
|
||||
headers = dict(response.headers.getAllRawHeaders())
|
||||
|
||||
expected_size = response.length
|
||||
# if we don't get an expected length then use the max length
|
||||
if expected_size == UNKNOWN_LENGTH:
|
||||
expected_size = max_size
|
||||
logger.debug(
|
||||
f"File size unknown, assuming file is max allowable size: {max_size}"
|
||||
)
|
||||
|
||||
read_body, _ = await download_ratelimiter.can_do_action(
|
||||
requester=None,
|
||||
key=ip_address,
|
||||
n_actions=expected_size,
|
||||
)
|
||||
if not read_body:
|
||||
msg = "Requested file size exceeds ratelimits"
|
||||
logger.warning(
|
||||
"{%s} [%s] %s",
|
||||
request.txn_id,
|
||||
request.destination,
|
||||
msg,
|
||||
)
|
||||
raise SynapseError(HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED)
|
||||
|
||||
try:
|
||||
d = read_body_with_max_size(response, output_stream, max_size)
|
||||
# add a byte of headroom to max size as function errs at >=
|
||||
d = read_body_with_max_size(response, output_stream, expected_size + 1)
|
||||
d.addTimeout(self.default_timeout_seconds, self.reactor)
|
||||
length = await make_deferred_yieldable(d)
|
||||
except BodyExceededMaxSize:
|
||||
msg = "Requested file is too large > %r bytes" % (max_size,)
|
||||
msg = "Requested file is too large > %r bytes" % (expected_size,)
|
||||
logger.warning(
|
||||
"{%s} [%s] %s",
|
||||
request.txn_id,
|
||||
|
||||
@@ -25,7 +25,16 @@ import os
|
||||
import urllib
|
||||
from abc import ABC, abstractmethod
|
||||
from types import TracebackType
|
||||
from typing import Awaitable, Dict, Generator, List, Optional, Tuple, Type
|
||||
from typing import (
|
||||
TYPE_CHECKING,
|
||||
Awaitable,
|
||||
Dict,
|
||||
Generator,
|
||||
List,
|
||||
Optional,
|
||||
Tuple,
|
||||
Type,
|
||||
)
|
||||
|
||||
import attr
|
||||
|
||||
@@ -39,6 +48,11 @@ from synapse.http.site import SynapseRequest
|
||||
from synapse.logging.context import make_deferred_yieldable
|
||||
from synapse.util.stringutils import is_ascii
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.media.media_storage import MultipartResponder
|
||||
from synapse.storage.databases.main.media_repository import LocalMedia
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# list all text content types that will have the charset default to UTF-8 when
|
||||
@@ -260,6 +274,53 @@ def _can_encode_filename_as_token(x: str) -> bool:
|
||||
return True
|
||||
|
||||
|
||||
async def respond_with_multipart_responder(
|
||||
request: SynapseRequest,
|
||||
responder: "Optional[MultipartResponder]",
|
||||
media_info: "LocalMedia",
|
||||
) -> None:
|
||||
"""
|
||||
Responds via a Multipart responder to federation media `/download` requests
|
||||
|
||||
Args:
|
||||
request: the federation request to respond to
|
||||
responder: the Multipart responder which will send the response
|
||||
media_info: metadata about the media item
|
||||
"""
|
||||
if not responder:
|
||||
respond_404(request)
|
||||
return
|
||||
|
||||
# If we have a responder we *must* use it as a context manager.
|
||||
with responder:
|
||||
if request._disconnected:
|
||||
logger.warning(
|
||||
"Not sending response to request %s, already disconnected.", request
|
||||
)
|
||||
return
|
||||
|
||||
logger.debug("Responding to media request with responder %s", responder)
|
||||
if media_info.media_length is not None:
|
||||
request.setHeader(b"Content-Length", b"%d" % (media_info.media_length,))
|
||||
request.setHeader(
|
||||
b"Content-Type", b"multipart/mixed; boundary=%s" % responder.boundary
|
||||
)
|
||||
|
||||
try:
|
||||
await responder.write_to_consumer(request)
|
||||
except Exception as e:
|
||||
# The majority of the time this will be due to the client having gone
|
||||
# away. Unfortunately, Twisted simply throws a generic exception at us
|
||||
# in that case.
|
||||
logger.warning("Failed to write to consumer: %s %s", type(e), e)
|
||||
|
||||
# Unregister the producer, if it has one, so Twisted doesn't complain
|
||||
if request.producer:
|
||||
request.unregisterProducer()
|
||||
|
||||
finish_request(request)
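# For orientation: a rough sketch of the multipart/mixed body that the responder above
# streams for federation media downloads (MSC3916) -- a JSON metadata part followed by
# the raw media bytes, delimited by the boundary advertised in the Content-Type header.
# The boundary value and payload below are invented; the real body is produced by
# `MultipartResponder` in `synapse.media.media_storage`.
CRLF = b"\r\n"
boundary = b"6067d4698f8d40a0a794ea7d7379d53a"
media_bytes = b"\x89PNG...truncated..."

body = (
    b"--" + boundary + CRLF
    + b"Content-Type: application/json" + CRLF + CRLF
    + b"{}" + CRLF
    + b"--" + boundary + CRLF
    + b"Content-Type: image/png" + CRLF + CRLF
    + media_bytes + CRLF
    + b"--" + boundary + b"--" + CRLF
)
print(body.decode("latin-1"))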
|
||||
|
||||
|
||||
async def respond_with_responder(
|
||||
request: SynapseRequest,
|
||||
responder: "Optional[Responder]",
|
||||
|
||||
@@ -42,6 +42,7 @@ from synapse.api.errors import (
|
||||
SynapseError,
|
||||
cs_error,
|
||||
)
|
||||
from synapse.api.ratelimiting import Ratelimiter
|
||||
from synapse.config.repository import ThumbnailRequirement
|
||||
from synapse.http.server import respond_with_json
|
||||
from synapse.http.site import SynapseRequest
|
||||
@@ -53,10 +54,11 @@ from synapse.media._base import (
|
||||
ThumbnailInfo,
|
||||
get_filename_from_headers,
|
||||
respond_404,
|
||||
respond_with_multipart_responder,
|
||||
respond_with_responder,
|
||||
)
|
||||
from synapse.media.filepath import MediaFilePaths
|
||||
from synapse.media.media_storage import MediaStorage
|
||||
from synapse.media.media_storage import MediaStorage, MultipartResponder
|
||||
from synapse.media.storage_provider import StorageProviderWrapper
|
||||
from synapse.media.thumbnailer import Thumbnailer, ThumbnailError
|
||||
from synapse.media.url_previewer import UrlPreviewer
|
||||
@@ -111,6 +113,12 @@ class MediaRepository:
|
||||
)
|
||||
self.prevent_media_downloads_from = hs.config.media.prevent_media_downloads_from
|
||||
|
||||
self.download_ratelimiter = Ratelimiter(
|
||||
store=hs.get_storage_controllers().main,
|
||||
clock=hs.get_clock(),
|
||||
cfg=hs.config.ratelimiting.remote_media_downloads,
|
||||
)
|
||||
|
||||
# List of StorageProviders where we should search for media and
|
||||
# potentially upload to.
|
||||
storage_providers = []
|
||||
@@ -422,6 +430,7 @@ class MediaRepository:
|
||||
media_id: str,
|
||||
name: Optional[str],
|
||||
max_timeout_ms: int,
|
||||
federation: bool = False,
|
||||
) -> None:
|
||||
"""Responds to requests for local media, if exists, or returns 404.
|
||||
|
||||
@@ -433,6 +442,7 @@ class MediaRepository:
|
||||
the filename in the Content-Disposition header of the response.
|
||||
max_timeout_ms: the maximum number of milliseconds to wait for the
|
||||
media to be uploaded.
|
||||
federation: whether the local media being fetched is for a federation request
|
||||
|
||||
Returns:
|
||||
Resolves once a response has successfully been written to request
|
||||
@@ -452,10 +462,17 @@ class MediaRepository:
|
||||
|
||||
file_info = FileInfo(None, media_id, url_cache=bool(url_cache))
|
||||
|
||||
responder = await self.media_storage.fetch_media(file_info)
|
||||
await respond_with_responder(
|
||||
request, responder, media_type, media_length, upload_name
|
||||
responder = await self.media_storage.fetch_media(
|
||||
file_info, media_info, federation
|
||||
)
|
||||
if federation:
|
||||
# this really should be a Multipart responder but just in case
|
||||
assert isinstance(responder, MultipartResponder)
|
||||
await respond_with_multipart_responder(request, responder, media_info)
|
||||
else:
|
||||
await respond_with_responder(
|
||||
request, responder, media_type, media_length, upload_name
|
||||
)
|
||||
|
||||
async def get_remote_media(
|
||||
self,
|
||||
@@ -464,6 +481,7 @@ class MediaRepository:
|
||||
media_id: str,
|
||||
name: Optional[str],
|
||||
max_timeout_ms: int,
|
||||
ip_address: str,
|
||||
) -> None:
|
||||
"""Respond to requests for remote media.
|
||||
|
||||
@@ -475,6 +493,7 @@ class MediaRepository:
|
||||
the filename in the Content-Disposition header of the response.
|
||||
max_timeout_ms: the maximum number of milliseconds to wait for the
|
||||
media to be uploaded.
|
||||
ip_address: the IP address of the requester
|
||||
|
||||
Returns:
|
||||
Resolves once a response has successfully been written to request
|
||||
@@ -500,7 +519,11 @@ class MediaRepository:
|
||||
key = (server_name, media_id)
|
||||
async with self.remote_media_linearizer.queue(key):
|
||||
responder, media_info = await self._get_remote_media_impl(
|
||||
server_name, media_id, max_timeout_ms
|
||||
server_name,
|
||||
media_id,
|
||||
max_timeout_ms,
|
||||
self.download_ratelimiter,
|
||||
ip_address,
|
||||
)
|
||||
|
||||
# We deliberately stream the file outside the lock
|
||||
@@ -517,7 +540,7 @@ class MediaRepository:
|
||||
respond_404(request)
|
||||
|
||||
async def get_remote_media_info(
|
||||
self, server_name: str, media_id: str, max_timeout_ms: int
|
||||
self, server_name: str, media_id: str, max_timeout_ms: int, ip_address: str
|
||||
) -> RemoteMedia:
|
||||
"""Gets the media info associated with the remote file, downloading
|
||||
if necessary.
|
||||
@@ -527,6 +550,7 @@ class MediaRepository:
|
||||
media_id: The media ID of the content (as defined by the remote server).
|
||||
max_timeout_ms: the maximum number of milliseconds to wait for the
|
||||
media to be uploaded.
|
||||
ip_address: IP address of the requester
|
||||
|
||||
Returns:
|
||||
The media info of the file
|
||||
@@ -542,7 +566,11 @@ class MediaRepository:
|
||||
key = (server_name, media_id)
|
||||
async with self.remote_media_linearizer.queue(key):
|
||||
responder, media_info = await self._get_remote_media_impl(
|
||||
server_name, media_id, max_timeout_ms
|
||||
server_name,
|
||||
media_id,
|
||||
max_timeout_ms,
|
||||
self.download_ratelimiter,
|
||||
ip_address,
|
||||
)
|
||||
|
||||
# Ensure we actually use the responder so that it releases resources
|
||||
@@ -553,7 +581,12 @@ class MediaRepository:
|
||||
return media_info
|
||||
|
||||
async def _get_remote_media_impl(
|
||||
self, server_name: str, media_id: str, max_timeout_ms: int
|
||||
self,
|
||||
server_name: str,
|
||||
media_id: str,
|
||||
max_timeout_ms: int,
|
||||
download_ratelimiter: Ratelimiter,
|
||||
ip_address: str,
|
||||
) -> Tuple[Optional[Responder], RemoteMedia]:
|
||||
"""Looks for media in local cache, if not there then attempt to
|
||||
download from remote server.
|
||||
@@ -564,6 +597,9 @@ class MediaRepository:
|
||||
remote server).
|
||||
max_timeout_ms: the maximum number of milliseconds to wait for the
|
||||
media to be uploaded.
|
||||
download_ratelimiter: a ratelimiter limiting remote media downloads, keyed to
|
||||
requester IP.
|
||||
ip_address: the IP address of the requester
|
||||
|
||||
Returns:
|
||||
A tuple of responder and the media info of the file.
|
||||
@@ -596,7 +632,7 @@ class MediaRepository:
|
||||
|
||||
try:
|
||||
media_info = await self._download_remote_file(
|
||||
server_name, media_id, max_timeout_ms
|
||||
server_name, media_id, max_timeout_ms, download_ratelimiter, ip_address
|
||||
)
|
||||
except SynapseError:
|
||||
raise
|
||||
@@ -630,6 +666,8 @@ class MediaRepository:
|
||||
server_name: str,
|
||||
media_id: str,
|
||||
max_timeout_ms: int,
|
||||
download_ratelimiter: Ratelimiter,
|
||||
ip_address: str,
|
||||
) -> RemoteMedia:
|
||||
"""Attempt to download the remote file from the given server name,
|
||||
using the given file_id as the local id.
|
||||
@@ -641,6 +679,9 @@ class MediaRepository:
|
||||
locally generated.
|
||||
max_timeout_ms: the maximum number of milliseconds to wait for the
|
||||
media to be uploaded.
|
||||
download_ratelimiter: a ratelimiter limiting remote media downloads, keyed to
|
||||
requester IP
|
||||
ip_address: the IP address of the requester
|
||||
|
||||
Returns:
|
||||
The media info of the file.
|
||||
@@ -650,7 +691,7 @@ class MediaRepository:
|
||||
|
||||
file_info = FileInfo(server_name=server_name, file_id=file_id)
|
||||
|
||||
with self.media_storage.store_into_file(file_info) as (f, fname, finish):
|
||||
async with self.media_storage.store_into_file(file_info) as (f, fname):
|
||||
try:
|
||||
length, headers = await self.client.download_media(
|
||||
server_name,
|
||||
@@ -658,6 +699,8 @@ class MediaRepository:
|
||||
output_stream=f,
|
||||
max_size=self.max_upload_size,
|
||||
max_timeout_ms=max_timeout_ms,
|
||||
download_ratelimiter=download_ratelimiter,
|
||||
ip_address=ip_address,
|
||||
)
|
||||
except RequestSendFailed as e:
|
||||
logger.warning(
|
||||
@@ -693,8 +736,6 @@ class MediaRepository:
|
||||
)
|
||||
raise SynapseError(502, "Failed to fetch remote media")
|
||||
|
||||
await finish()
|
||||
|
||||
if b"Content-Type" in headers:
|
||||
media_type = headers[b"Content-Type"][0].decode("ascii")
|
||||
else:
|
||||
@@ -1045,17 +1086,17 @@ class MediaRepository:
|
||||
),
|
||||
)
|
||||
|
||||
with self.media_storage.store_into_file(file_info) as (
|
||||
f,
|
||||
fname,
|
||||
finish,
|
||||
):
|
||||
async with self.media_storage.store_into_file(file_info) as (f, fname):
|
||||
try:
|
||||
await self.media_storage.write_to_file(t_byte_source, f)
|
||||
await finish()
|
||||
finally:
|
||||
t_byte_source.close()
|
||||
|
||||
# We flush and close the file to ensure that the bytes have
|
||||
# been written before getting the size.
|
||||
f.flush()
|
||||
f.close()
|
||||
|
||||
t_len = os.path.getsize(fname)
|
||||
|
||||
# Write to database
|
||||
|
||||
@@ -19,26 +19,33 @@
#
#
import contextlib
import json
import logging
import os
import shutil
from contextlib import closing
from io import BytesIO
from types import TracebackType
from typing import (
IO,
TYPE_CHECKING,
Any,
Awaitable,
AsyncIterator,
BinaryIO,
Callable,
Generator,
List,
Optional,
Sequence,
Tuple,
Type,
Union,
)
from uuid import uuid4

import attr
from zope.interface import implementer

from twisted.internet import defer, interfaces
from twisted.internet.defer import Deferred
from twisted.internet.interfaces import IConsumer
from twisted.protocols.basic import FileSender
@@ -49,15 +56,19 @@ from synapse.logging.opentracing import start_active_span, trace, trace_with_opn
from synapse.util import Clock
from synapse.util.file_consumer import BackgroundFileConsumer

from ..storage.databases.main.media_repository import LocalMedia
from ..types import JsonDict
from ._base import FileInfo, Responder
from .filepath import MediaFilePaths

if TYPE_CHECKING:
from synapse.media.storage_provider import StorageProvider
from synapse.media.storage_provider import StorageProviderWrapper
from synapse.server import HomeServer

logger = logging.getLogger(__name__)

CRLF = b"\r\n"


class MediaStorage:
"""Responsible for storing/fetching files from local sources.
@@ -74,7 +85,7 @@ class MediaStorage:
hs: "HomeServer",
local_media_directory: str,
filepaths: MediaFilePaths,
storage_providers: Sequence["StorageProvider"],
storage_providers: Sequence["StorageProviderWrapper"],
):
self.hs = hs
self.reactor = hs.get_reactor()
@@ -97,11 +108,9 @@ class MediaStorage:
the file path written to in the primary media store
"""

with self.store_into_file(file_info) as (f, fname, finish_cb):
async with self.store_into_file(file_info) as (f, fname):
# Write to the main media repository
await self.write_to_file(source, f)
# Write to the other storage providers
await finish_cb()

return fname

@@ -111,32 +120,27 @@ class MediaStorage:
await defer_to_thread(self.reactor, _write_file_synchronously, source, output)

@trace_with_opname("MediaStorage.store_into_file")
@contextlib.contextmanager
def store_into_file(
@contextlib.asynccontextmanager
async def store_into_file(
self, file_info: FileInfo
) -> Generator[Tuple[BinaryIO, str, Callable[[], Awaitable[None]]], None, None]:
"""Context manager used to get a file like object to write into, as
) -> AsyncIterator[Tuple[BinaryIO, str]]:
"""Async Context manager used to get a file like object to write into, as
described by file_info.

Actually yields a 3-tuple (file, fname, finish_cb), where file is a file
like object that can be written to, fname is the absolute path of file
on disk, and finish_cb is a function that returns an awaitable.
Actually yields a 2-tuple (file, fname,), where file is a file
like object that can be written to and fname is the absolute path of file
on disk.

fname can be used to read the contents from after upload, e.g. to
generate thumbnails.

finish_cb must be called and waited on after the file has been successfully been
written to. Should not be called if there was an error. Checks for spam and
stores the file into the configured storage providers.

Args:
file_info: Info about the file to store

Example:

with media_storage.store_into_file(info) as (f, fname, finish_cb):
async with media_storage.store_into_file(info) as (f, fname,):
# .. write into f ...
await finish_cb()
"""

path = self._file_info_to_path(file_info)
@@ -145,72 +149,55 @@ class MediaStorage:
dirname = os.path.dirname(fname)
os.makedirs(dirname, exist_ok=True)

finished_called = [False]

main_media_repo_write_trace_scope = start_active_span(
"writing to main media repo"
)
main_media_repo_write_trace_scope.__enter__()

try:
with open(fname, "wb") as f:
with start_active_span("writing to main media repo"):
with open(fname, "wb") as f:
yield f, fname

async def finish() -> None:
# When someone calls finish, we assume they are done writing to the main media repo
main_media_repo_write_trace_scope.__exit__(None, None, None)
with start_active_span("writing to other storage providers"):
spam_check = (
await self._spam_checker_module_callbacks.check_media_file_for_spam(
ReadableFileWrapper(self.clock, fname), file_info
)
)
if spam_check != self._spam_checker_module_callbacks.NOT_SPAM:
logger.info("Blocking media due to spam checker")
# Note that we'll delete the stored media, due to the
# try/except below. The media also won't be stored in
# the DB.
# We currently ignore any additional field returned by
# the spam-check API.
raise SpamMediaException(errcode=spam_check[0])

with start_active_span("writing to other storage providers"):
# Ensure that all writes have been flushed and close the
# file.
f.flush()
f.close()
for provider in self.storage_providers:
with start_active_span(str(provider)):
await provider.store_file(path, file_info)

spam_check = await self._spam_checker_module_callbacks.check_media_file_for_spam(
ReadableFileWrapper(self.clock, fname), file_info
)
if spam_check != self._spam_checker_module_callbacks.NOT_SPAM:
logger.info("Blocking media due to spam checker")
# Note that we'll delete the stored media, due to the
# try/except below. The media also won't be stored in
# the DB.
# We currently ignore any additional field returned by
# the spam-check API.
raise SpamMediaException(errcode=spam_check[0])

for provider in self.storage_providers:
with start_active_span(str(provider)):
await provider.store_file(path, file_info)

finished_called[0] = True

yield f, fname, finish
except Exception as e:
try:
main_media_repo_write_trace_scope.__exit__(
type(e), None, e.__traceback__
)
os.remove(fname)
except Exception:
pass

raise e from None

if not finished_called:
exc = Exception("Finished callback not called")
main_media_repo_write_trace_scope.__exit__(
type(exc), None, exc.__traceback__
)
raise exc

async def fetch_media(self, file_info: FileInfo) -> Optional[Responder]:
async def fetch_media(
self,
file_info: FileInfo,
media_info: Optional[LocalMedia] = None,
federation: bool = False,
) -> Optional[Responder]:
"""Attempts to fetch media described by file_info from the local cache
and configured storage providers.

Args:
file_info
file_info: Metadata about the media file
media_info: Metadata about the media item
federation: Whether this file is being fetched for a federation request

Returns:
Returns a Responder if the file was found, otherwise None.
If the file was found returns a Responder (a Multipart Responder if the requested
file is for the federation /download endpoint), otherwise None.
"""
paths = [self._file_info_to_path(file_info)]

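The rewritten `store_into_file` above drops the manually driven `main_media_repo_write_trace_scope.__enter__()`/`__exit__()` calls and the `finished_called` flag in favour of ordinary `with start_active_span(...)` blocks. A small, self-contained illustration of why that is safer follows; the `span` context manager here is a stand-in for the real opentracing helper:

```python
import contextlib
from typing import Iterator


@contextlib.contextmanager
def span(name: str) -> Iterator[None]:
    # Stand-in for start_active_span(): prints instead of recording a trace.
    print("enter", name)
    try:
        yield
    finally:
        print("exit", name)


def risky_write(fail: bool) -> None:
    # With a `with` block the span is always closed, even on error; the old
    # style had to call __exit__ by hand on every success and failure path.
    with span("writing to main media repo"):
        if fail:
            raise RuntimeError("write failed")


risky_write(fail=False)
try:
    risky_write(fail=True)
except RuntimeError:
    pass
```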
@@ -230,12 +217,19 @@ class MediaStorage:
local_path = os.path.join(self.local_media_directory, path)
if os.path.exists(local_path):
logger.debug("responding with local file %s", local_path)
return FileResponder(open(local_path, "rb"))
if federation:
assert media_info is not None
boundary = uuid4().hex.encode("ascii")
return MultipartResponder(
open(local_path, "rb"), media_info, boundary
)
else:
return FileResponder(open(local_path, "rb"))
logger.debug("local file %s did not exist", local_path)

for provider in self.storage_providers:
for path in paths:
res: Any = await provider.fetch(path, file_info)
res: Any = await provider.fetch(path, file_info, media_info, federation)
if res:
logger.debug("Streaming %s from %s", path, provider)
return res
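The `fetch_media` hunk above first checks the local media directory and only then asks each configured storage provider, wrapping the result differently when `federation=True`. A rough, self-contained sketch of that lookup order; the provider type and the string responders are simplified stand-ins, not Synapse's classes:

```python
from typing import Callable, Dict, Optional, Sequence

# A provider is just "path -> bytes or None" in this sketch.
Provider = Callable[[str], Optional[bytes]]


def fetch_media(
    path: str,
    local_cache: Dict[str, bytes],
    providers: Sequence[Provider],
    federation: bool = False,
) -> Optional[str]:
    data = local_cache.get(path)
    if data is None:
        # Fall back to the storage providers in order; the first hit wins.
        for provider in providers:
            data = provider(path)
            if data is not None:
                break
    if data is None:
        return None
    # The real code returns a MultipartResponder for federation requests and a
    # plain FileResponder otherwise; strings stand in for those here.
    return "multipart response" if federation else "file response"


print(fetch_media("abc", {"abc": b"img"}, providers=[], federation=True))
```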
@@ -349,7 +343,7 @@ class FileResponder(Responder):
"""Wraps an open file that can be sent to a request.

Args:
open_file: A file like object to be streamed ot the client,
open_file: A file like object to be streamed to the client,
is closed when finished streaming.
"""

@@ -370,6 +364,38 @@ class FileResponder(Responder):
self.open_file.close()


class MultipartResponder(Responder):
"""Wraps an open file, formats the response according to MSC3916 and sends it to a
federation request.

Args:
open_file: A file like object to be streamed to the client,
is closed when finished streaming.
media_info: metadata about the media item
boundary: bytes to use for the multipart response boundary
"""

def __init__(self, open_file: IO, media_info: LocalMedia, boundary: bytes) -> None:
self.open_file = open_file
self.media_info = media_info
self.boundary = boundary

def write_to_consumer(self, consumer: IConsumer) -> Deferred:
return make_deferred_yieldable(
MultipartFileSender().beginFileTransfer(
self.open_file, consumer, self.media_info.media_type, {}, self.boundary
)
)

def __exit__(
self,
exc_type: Optional[Type[BaseException]],
exc_val: Optional[BaseException],
exc_tb: Optional[TracebackType],
) -> None:
self.open_file.close()


class SpamMediaException(NotFoundError):
"""The media was blocked by a spam checker, so we simply 404 the request (in
the same way as if it was quarantined).
@@ -403,3 +429,151 @@ class ReadableFileWrapper:

# We yield to the reactor by sleeping for 0 seconds.
await self.clock.sleep(0)


@implementer(interfaces.IProducer)
class MultipartFileSender:
"""
A producer that sends the contents of a file to a federation request in the format
outlined in MSC3916 - a multipart/format-data response where the first field is a
JSON object and the second is the requested file.

This is a slight re-writing of twisted.protocols.basic.FileSender to achieve the format
outlined above.
"""

CHUNK_SIZE = 2**14

lastSent = ""
deferred: Optional[defer.Deferred] = None

def beginFileTransfer(
self,
file: IO,
consumer: IConsumer,
file_content_type: str,
json_object: JsonDict,
boundary: bytes,
) -> Deferred:
"""
Begin transferring a file

Args:
file: The file object to read data from
consumer: The synapse request to write the data to
file_content_type: The content-type of the file
json_object: The JSON object to write to the first field of the response
boundary: bytes to be used as the multipart/form-data boundary

Returns: A deferred whose callback will be invoked when the file has
been completely written to the consumer. The last byte written to the
consumer is passed to the callback.
"""
self.file: Optional[IO] = file
self.consumer = consumer
self.json_field = json_object
self.json_field_written = False
self.content_type_written = False
self.file_content_type = file_content_type
self.boundary = boundary
self.deferred: Deferred = defer.Deferred()
self.consumer.registerProducer(self, False)
# while it's not entirely clear why this assignment is necessary, it mirrors
# the behavior in FileSender.beginFileTransfer and thus is preserved here
deferred = self.deferred
return deferred

def resumeProducing(self) -> None:
# write the first field, which will always be a json field
if not self.json_field_written:
self.consumer.write(CRLF + b"--" + self.boundary + CRLF)

content_type = Header(b"Content-Type", b"application/json")
self.consumer.write(bytes(content_type) + CRLF)

json_field = json.dumps(self.json_field)
json_bytes = json_field.encode("utf-8")
self.consumer.write(json_bytes)
self.consumer.write(CRLF + b"--" + self.boundary + CRLF)

self.json_field_written = True

chunk: Any = ""
if self.file:
# if we haven't written the content type yet, do so
if not self.content_type_written:
type = self.file_content_type.encode("utf-8")
content_type = Header(b"Content-Type", type)
self.consumer.write(bytes(content_type) + CRLF)
self.content_type_written = True

chunk = self.file.read(self.CHUNK_SIZE)

if not chunk:
# we've reached the end of the file
self.consumer.write(CRLF + b"--" + self.boundary + b"--" + CRLF)
self.file = None
self.consumer.unregisterProducer()

if self.deferred:
self.deferred.callback(self.lastSent)
self.deferred = None
return

self.consumer.write(chunk)
self.lastSent = chunk[-1:]

def pauseProducing(self) -> None:
pass

def stopProducing(self) -> None:
if self.deferred:
self.deferred.errback(Exception("Consumer asked us to stop producing"))
self.deferred = None


class Header:
"""
`Header` This class is a tiny wrapper that produces
request headers. We can't use standard python header
class because it encodes unicode fields using =? bla bla ?=
encoding, which is correct, but no one in HTTP world expects
that, everyone wants utf-8 raw bytes. (stolen from treq.multipart)

"""

def __init__(
self,
name: bytes,
value: Any,
params: Optional[List[Tuple[Any, Any]]] = None,
):
self.name = name
self.value = value
self.params = params or []

def add_param(self, name: Any, value: Any) -> None:
self.params.append((name, value))

def __bytes__(self) -> bytes:
with closing(BytesIO()) as h:
h.write(self.name + b": " + escape(self.value).encode("us-ascii"))
if self.params:
for name, val in self.params:
h.write(b"; ")
h.write(escape(name).encode("us-ascii"))
h.write(b"=")
h.write(b'"' + escape(val).encode("utf-8") + b'"')
h.seek(0)
return h.read()


def escape(value: Union[str, bytes]) -> str:
"""
This function prevents header values from corrupting the request,
a newline in the file name parameter makes form-data request unreadable
for a majority of parsers. (stolen from treq.multipart)
"""
if isinstance(value, bytes):
value = value.decode("utf-8")
return value.replace("\r", "").replace("\n", "").replace('"', '\\"')

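For reference, the bytes `MultipartFileSender.resumeProducing` streams come out in this order: an opening boundary, a JSON part, another boundary, the file part, then a closing boundary. This standalone helper assembles the same layout in one go for a small payload; it is illustrative only, the real class streams chunks through a Twisted consumer:

```python
import json
from typing import Any, Dict

CRLF = b"\r\n"


def build_multipart_body(
    boundary: bytes, json_object: Dict[str, Any], file_content_type: bytes, file_bytes: bytes
) -> bytes:
    # First field: the JSON object, as written by resumeProducing().
    out = CRLF + b"--" + boundary + CRLF
    out += b"Content-Type: application/json" + CRLF
    out += json.dumps(json_object).encode("utf-8")
    # Second field: the file itself.
    out += CRLF + b"--" + boundary + CRLF
    out += b"Content-Type: " + file_content_type + CRLF
    out += file_bytes
    # Closing boundary.
    out += CRLF + b"--" + boundary + b"--" + CRLF
    return out


print(build_multipart_body(b"abc123", {}, b"image/png", b"\x89PNG..."))
```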
@@ -24,14 +24,16 @@ import logging
import os
import shutil
from typing import TYPE_CHECKING, Callable, Optional
from uuid import uuid4

from synapse.config._base import Config
from synapse.logging.context import defer_to_thread, run_in_background
from synapse.logging.opentracing import start_active_span, trace_with_opname
from synapse.util.async_helpers import maybe_awaitable

from ..storage.databases.main.media_repository import LocalMedia
from ._base import FileInfo, Responder
from .media_storage import FileResponder
from .media_storage import FileResponder, MultipartResponder

logger = logging.getLogger(__name__)

@@ -55,13 +57,21 @@ class StorageProvider(metaclass=abc.ABCMeta):
"""

@abc.abstractmethod
async def fetch(self, path: str, file_info: FileInfo) -> Optional[Responder]:
async def fetch(
self,
path: str,
file_info: FileInfo,
media_info: Optional[LocalMedia] = None,
federation: bool = False,
) -> Optional[Responder]:
"""Attempt to fetch the file described by file_info and stream it
into writer.

Args:
path: Relative path of file in local cache
file_info: The metadata of the file.
media_info: metadata of the media item
federation: Whether the requested media is for a federation request

Returns:
Returns a Responder if the provider has the file, otherwise returns None.
@@ -124,7 +134,13 @@ class StorageProviderWrapper(StorageProvider):
run_in_background(store)

@trace_with_opname("StorageProviderWrapper.fetch")
async def fetch(self, path: str, file_info: FileInfo) -> Optional[Responder]:
async def fetch(
self,
path: str,
file_info: FileInfo,
media_info: Optional[LocalMedia] = None,
federation: bool = False,
) -> Optional[Responder]:
if file_info.url_cache:
# Files in the URL preview cache definitely aren't stored here,
# so avoid any potentially slow I/O or network access.
@@ -132,7 +148,9 @@ class StorageProviderWrapper(StorageProvider):

# store_file is supposed to return an Awaitable, but guard
# against improper implementations.
return await maybe_awaitable(self.backend.fetch(path, file_info))
return await maybe_awaitable(
self.backend.fetch(path, file_info, media_info, federation)
)


class FileStorageProviderBackend(StorageProvider):
@@ -172,11 +190,23 @@ class FileStorageProviderBackend(StorageProvider):
)

@trace_with_opname("FileStorageProviderBackend.fetch")
async def fetch(self, path: str, file_info: FileInfo) -> Optional[Responder]:
async def fetch(
self,
path: str,
file_info: FileInfo,
media_info: Optional[LocalMedia] = None,
federation: bool = False,
) -> Optional[Responder]:
"""See StorageProvider.fetch"""

backup_fname = os.path.join(self.base_directory, path)
if os.path.isfile(backup_fname):
if federation:
assert media_info is not None
boundary = uuid4().hex.encode("ascii")
return MultipartResponder(
open(backup_fname, "rb"), media_info, boundary
)
return FileResponder(open(backup_fname, "rb"))

return None

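Third-party storage providers need to grow the same two optional arguments to stay compatible with the widened `fetch` signature. A hedged sketch of what a minimal backend might look like after this change; the class below is an illustration, not part of Synapse, and only the signature mirrors the diff:

```python
import asyncio
from typing import Any, Dict, Optional


class InMemoryStorageProvider:
    """Toy backend keyed by relative path; only the fetch() signature mirrors the diff."""

    def __init__(self) -> None:
        self._files: Dict[str, bytes] = {}

    async def store_file(self, path: str, file_info: Any) -> None:
        self._files[path] = b"..."  # a real backend would copy the on-disk file

    async def fetch(
        self,
        path: str,
        file_info: Any,
        media_info: Optional[Any] = None,
        federation: bool = False,
    ) -> Optional[bytes]:
        # media_info/federation only matter when the caller needs a multipart
        # (MSC3916 federation) response; a simple backend can ignore them.
        return self._files.get(path)


async def demo() -> None:
    provider = InMemoryStorageProvider()
    await provider.store_file("remote_content/example", file_info=None)
    print(await provider.fetch("remote_content/example", file_info=None))


asyncio.run(demo())
```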
@@ -22,11 +22,27 @@
import logging
from io import BytesIO
from types import TracebackType
from typing import Optional, Tuple, Type
from typing import TYPE_CHECKING, List, Optional, Tuple, Type

from PIL import Image

from synapse.api.errors import Codes, SynapseError, cs_error
from synapse.config.repository import THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP
from synapse.http.server import respond_with_json
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import trace
from synapse.media._base import (
FileInfo,
ThumbnailInfo,
respond_404,
respond_with_file,
respond_with_responder,
)
from synapse.media.media_storage import MediaStorage

if TYPE_CHECKING:
from synapse.media.media_repository import MediaRepository
from synapse.server import HomeServer

logger = logging.getLogger(__name__)

@@ -231,3 +247,473 @@ class Thumbnailer:
def __del__(self) -> None:
# Make sure we actually do close the image, rather than leak data.
self.close()


class ThumbnailProvider:
def __init__(
self,
hs: "HomeServer",
media_repo: "MediaRepository",
media_storage: MediaStorage,
):
self.hs = hs
self.media_repo = media_repo
self.media_storage = media_storage
self.store = hs.get_datastores().main
self.dynamic_thumbnails = hs.config.media.dynamic_thumbnails

async def respond_local_thumbnail(
self,
request: SynapseRequest,
media_id: str,
width: int,
height: int,
method: str,
m_type: str,
max_timeout_ms: int,
) -> None:
media_info = await self.media_repo.get_local_media_info(
request, media_id, max_timeout_ms
)
if not media_info:
return

thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
await self._select_and_respond_with_thumbnail(
request,
width,
height,
method,
m_type,
thumbnail_infos,
media_id,
media_id,
url_cache=bool(media_info.url_cache),
server_name=None,
)

async def select_or_generate_local_thumbnail(
self,
request: SynapseRequest,
media_id: str,
desired_width: int,
desired_height: int,
desired_method: str,
desired_type: str,
max_timeout_ms: int,
) -> None:
media_info = await self.media_repo.get_local_media_info(
request, media_id, max_timeout_ms
)

if not media_info:
return

thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
for info in thumbnail_infos:
t_w = info.width == desired_width
t_h = info.height == desired_height
t_method = info.method == desired_method
t_type = info.type == desired_type

if t_w and t_h and t_method and t_type:
file_info = FileInfo(
server_name=None,
file_id=media_id,
url_cache=bool(media_info.url_cache),
thumbnail=info,
)

responder = await self.media_storage.fetch_media(file_info)
if responder:
await respond_with_responder(
request, responder, info.type, info.length
)
return

logger.debug("We don't have a thumbnail of that size. Generating")

# Okay, so we generate one.
file_path = await self.media_repo.generate_local_exact_thumbnail(
media_id,
desired_width,
desired_height,
desired_method,
desired_type,
url_cache=bool(media_info.url_cache),
)

if file_path:
await respond_with_file(request, desired_type, file_path)
else:
logger.warning("Failed to generate thumbnail")
raise SynapseError(400, "Failed to generate thumbnail.")

async def select_or_generate_remote_thumbnail(
self,
request: SynapseRequest,
server_name: str,
media_id: str,
desired_width: int,
desired_height: int,
desired_method: str,
desired_type: str,
max_timeout_ms: int,
ip_address: str,
) -> None:
media_info = await self.media_repo.get_remote_media_info(
server_name, media_id, max_timeout_ms, ip_address
)
if not media_info:
respond_404(request)
return

thumbnail_infos = await self.store.get_remote_media_thumbnails(
server_name, media_id
)

file_id = media_info.filesystem_id

for info in thumbnail_infos:
t_w = info.width == desired_width
t_h = info.height == desired_height
t_method = info.method == desired_method
t_type = info.type == desired_type

if t_w and t_h and t_method and t_type:
file_info = FileInfo(
server_name=server_name,
file_id=file_id,
thumbnail=info,
)

responder = await self.media_storage.fetch_media(file_info)
if responder:
await respond_with_responder(
request, responder, info.type, info.length
)
return

logger.debug("We don't have a thumbnail of that size. Generating")

# Okay, so we generate one.
file_path = await self.media_repo.generate_remote_exact_thumbnail(
server_name,
file_id,
media_id,
desired_width,
desired_height,
desired_method,
desired_type,
)

if file_path:
await respond_with_file(request, desired_type, file_path)
else:
logger.warning("Failed to generate thumbnail")
raise SynapseError(400, "Failed to generate thumbnail.")

async def respond_remote_thumbnail(
self,
request: SynapseRequest,
server_name: str,
media_id: str,
width: int,
height: int,
method: str,
m_type: str,
max_timeout_ms: int,
ip_address: str,
) -> None:
# TODO: Don't download the whole remote file
# We should proxy the thumbnail from the remote server instead of
# downloading the remote file and generating our own thumbnails.
media_info = await self.media_repo.get_remote_media_info(
server_name, media_id, max_timeout_ms, ip_address
)
if not media_info:
return

thumbnail_infos = await self.store.get_remote_media_thumbnails(
server_name, media_id
)
await self._select_and_respond_with_thumbnail(
request,
width,
height,
method,
m_type,
thumbnail_infos,
media_id,
media_info.filesystem_id,
url_cache=False,
server_name=server_name,
)

async def _select_and_respond_with_thumbnail(
self,
request: SynapseRequest,
desired_width: int,
desired_height: int,
desired_method: str,
desired_type: str,
thumbnail_infos: List[ThumbnailInfo],
media_id: str,
file_id: str,
url_cache: bool,
server_name: Optional[str] = None,
) -> None:
"""
Respond to a request with an appropriate thumbnail from the previously generated thumbnails.

Args:
request: The incoming request.
desired_width: The desired width, the returned thumbnail may be larger than this.
desired_height: The desired height, the returned thumbnail may be larger than this.
desired_method: The desired method used to generate the thumbnail.
desired_type: The desired content-type of the thumbnail.
thumbnail_infos: A list of thumbnail info of candidate thumbnails.
file_id: The ID of the media that a thumbnail is being requested for.
url_cache: True if this is from a URL cache.
server_name: The server name, if this is a remote thumbnail.
"""
logger.debug(
"_select_and_respond_with_thumbnail: media_id=%s desired=%sx%s (%s) thumbnail_infos=%s",
media_id,
desired_width,
desired_height,
desired_method,
thumbnail_infos,
)

# If `dynamic_thumbnails` is enabled, we expect Synapse to go down a
# different code path to handle it.
assert not self.dynamic_thumbnails

if thumbnail_infos:
file_info = self._select_thumbnail(
desired_width,
desired_height,
desired_method,
desired_type,
thumbnail_infos,
file_id,
url_cache,
server_name,
)
if not file_info:
logger.info("Couldn't find a thumbnail matching the desired inputs")
respond_404(request)
return

# The thumbnail property must exist.
assert file_info.thumbnail is not None

responder = await self.media_storage.fetch_media(file_info)
if responder:
await respond_with_responder(
request,
responder,
file_info.thumbnail.type,
file_info.thumbnail.length,
)
return

# If we can't find the thumbnail we regenerate it. This can happen
# if e.g. we've deleted the thumbnails but still have the original
# image somewhere.
#
# Since we have an entry for the thumbnail in the DB we a) know we
# have have successfully generated the thumbnail in the past (so we
# don't need to worry about repeatedly failing to generate
# thumbnails), and b) have already calculated that appropriate
# width/height/method so we can just call the "generate exact"
# methods.

# First let's check that we do actually have the original image
# still. This will throw a 404 if we don't.
# TODO: We should refetch the thumbnails for remote media.
await self.media_storage.ensure_media_is_in_local_cache(
FileInfo(server_name, file_id, url_cache=url_cache)
)

if server_name:
await self.media_repo.generate_remote_exact_thumbnail(
server_name,
file_id=file_id,
media_id=media_id,
t_width=file_info.thumbnail.width,
t_height=file_info.thumbnail.height,
t_method=file_info.thumbnail.method,
t_type=file_info.thumbnail.type,
)
else:
await self.media_repo.generate_local_exact_thumbnail(
media_id=media_id,
t_width=file_info.thumbnail.width,
t_height=file_info.thumbnail.height,
t_method=file_info.thumbnail.method,
t_type=file_info.thumbnail.type,
url_cache=url_cache,
)

responder = await self.media_storage.fetch_media(file_info)
await respond_with_responder(
request,
responder,
file_info.thumbnail.type,
file_info.thumbnail.length,
)
else:
# This might be because:
# 1. We can't create thumbnails for the given media (corrupted or
# unsupported file type), or
# 2. The thumbnailing process never ran or errored out initially
# when the media was first uploaded (these bugs should be
# reported and fixed).
# Note that we don't attempt to generate a thumbnail now because
# `dynamic_thumbnails` is disabled.
logger.info("Failed to find any generated thumbnails")

assert request.path is not None
respond_with_json(
request,
400,
cs_error(
"Cannot find any thumbnails for the requested media ('%s'). This might mean the media is not a supported_media_format=(%s) or that thumbnailing failed for some other reason. (Dynamic thumbnails are disabled on this server.)"
% (
request.path.decode(),
", ".join(THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP.keys()),
),
code=Codes.UNKNOWN,
),
send_cors=True,
)

def _select_thumbnail(
self,
desired_width: int,
desired_height: int,
desired_method: str,
desired_type: str,
thumbnail_infos: List[ThumbnailInfo],
file_id: str,
url_cache: bool,
server_name: Optional[str],
) -> Optional[FileInfo]:
"""
Choose an appropriate thumbnail from the previously generated thumbnails.

Args:
desired_width: The desired width, the returned thumbnail may be larger than this.
desired_height: The desired height, the returned thumbnail may be larger than this.
desired_method: The desired method used to generate the thumbnail.
desired_type: The desired content-type of the thumbnail.
thumbnail_infos: A list of thumbnail infos of candidate thumbnails.
file_id: The ID of the media that a thumbnail is being requested for.
url_cache: True if this is from a URL cache.
server_name: The server name, if this is a remote thumbnail.

Returns:
The thumbnail which best matches the desired parameters.
"""
desired_method = desired_method.lower()

# The chosen thumbnail.
thumbnail_info = None

d_w = desired_width
d_h = desired_height

if desired_method == "crop":
# Thumbnails that match equal or larger sizes of desired width/height.
crop_info_list: List[
Tuple[int, int, int, bool, Optional[int], ThumbnailInfo]
] = []
# Other thumbnails.
crop_info_list2: List[
Tuple[int, int, int, bool, Optional[int], ThumbnailInfo]
] = []
for info in thumbnail_infos:
# Skip thumbnails generated with different methods.
if info.method != "crop":
continue

t_w = info.width
t_h = info.height
aspect_quality = abs(d_w * t_h - d_h * t_w)
min_quality = 0 if d_w <= t_w and d_h <= t_h else 1
size_quality = abs((d_w - t_w) * (d_h - t_h))
type_quality = desired_type != info.type
length_quality = info.length
if t_w >= d_w or t_h >= d_h:
crop_info_list.append(
(
aspect_quality,
min_quality,
size_quality,
type_quality,
length_quality,
info,
)
)
else:
crop_info_list2.append(
(
aspect_quality,
min_quality,
size_quality,
type_quality,
length_quality,
info,
)
)
# Pick the most appropriate thumbnail. Some values of `desired_width` and
# `desired_height` may result in a tie, in which case we avoid comparing on
# the thumbnail info and pick the thumbnail that appears earlier
# in the list of candidates.
if crop_info_list:
thumbnail_info = min(crop_info_list, key=lambda t: t[:-1])[-1]
elif crop_info_list2:
thumbnail_info = min(crop_info_list2, key=lambda t: t[:-1])[-1]
elif desired_method == "scale":
# Thumbnails that match equal or larger sizes of desired width/height.
info_list: List[Tuple[int, bool, int, ThumbnailInfo]] = []
# Other thumbnails.
info_list2: List[Tuple[int, bool, int, ThumbnailInfo]] = []

for info in thumbnail_infos:
# Skip thumbnails generated with different methods.
if info.method != "scale":
continue

t_w = info.width
t_h = info.height
size_quality = abs((d_w - t_w) * (d_h - t_h))
type_quality = desired_type != info.type
length_quality = info.length
if t_w >= d_w or t_h >= d_h:
info_list.append((size_quality, type_quality, length_quality, info))
else:
info_list2.append(
(size_quality, type_quality, length_quality, info)
)
# Pick the most appropriate thumbnail. Some values of `desired_width` and
# `desired_height` may result in a tie, in which case we avoid comparing on
# the thumbnail info and pick the thumbnail that appears earlier
# in the list of candidates.
if info_list:
thumbnail_info = min(info_list, key=lambda t: t[:-1])[-1]
elif info_list2:
thumbnail_info = min(info_list2, key=lambda t: t[:-1])[-1]

if thumbnail_info:
return FileInfo(
file_id=file_id,
url_cache=url_cache,
server_name=server_name,
thumbnail=thumbnail_info,
)

# No matching thumbnail was found.
return None

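`_select_thumbnail` ranks "crop" candidates by the tuple `(aspect_quality, min_quality, size_quality, type_quality, length_quality)` and takes the minimum. A small worked example of that scoring follows, using plain tuples instead of `ThumbnailInfo`; the candidate values are made up for illustration, and the real code also buckets candidates by whether they cover the requested size before comparing, which is skipped here:

```python
from typing import List, Tuple

# (width, height, content_type, length_in_bytes) for two stored crop thumbnails.
candidates: List[Tuple[int, int, str, int]] = [
    (320, 240, "image/jpeg", 2048),
    (640, 480, "image/png", 8192),
]

d_w, d_h, desired_type = 300, 300, "image/png"

scored = []
for t_w, t_h, t_type, length in candidates:
    aspect_quality = abs(d_w * t_h - d_h * t_w)          # how far off the aspect ratio is
    min_quality = 0 if d_w <= t_w and d_h <= t_h else 1  # prefer thumbnails covering the request
    size_quality = abs((d_w - t_w) * (d_h - t_h))        # prefer sizes close to the request
    type_quality = desired_type != t_type                # prefer the requested content type
    length_quality = length                              # prefer smaller files as a late tiebreak
    scored.append(
        (aspect_quality, min_quality, size_quality, type_quality, length_quality, (t_w, t_h, t_type))
    )

# Same shape as the diff: compare on everything except the trailing info object.
best = min(scored, key=lambda t: t[:-1])[-1]
print(best)  # -> (320, 240, 'image/jpeg'): aspect_quality dominates the comparison here
```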
@@ -592,7 +592,7 @@ class UrlPreviewer:

file_info = FileInfo(server_name=None, file_id=file_id, url_cache=True)

with self.media_storage.store_into_file(file_info) as (f, fname, finish):
async with self.media_storage.store_into_file(file_info) as (f, fname):
if url.startswith("data:"):
if not allow_data_urls:
raise SynapseError(
@@ -603,8 +603,6 @@ class UrlPreviewer:
else:
download_result = await self._download_url(url, f)

await finish()

try:
time_now_ms = self.clock.time_msec()

@@ -763,6 +763,29 @@ class Notifier:

return result

async def wait_for_stream_token(self, stream_token: StreamToken) -> bool:
"""Wait for this worker to catch up with the given stream token."""

start = self.clock.time_msec()
while True:
current_token = self.event_sources.get_current_token()
if stream_token.is_before_or_eq(current_token):
return True

now = self.clock.time_msec()

if now - start > 10_000:
return False

logger.info(
"Waiting for current token to reach %s; currently at %s",
stream_token,
current_token,
)

# TODO: be better
await self.clock.sleep(0.5)

async def _get_room_ids(
self, user: UserID, explicit_room_id: Optional[str]
) -> Tuple[StrCollection, bool]:

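The new `Notifier.wait_for_stream_token` is a simple poll loop: it returns `True` as soon as the worker's current token has caught up, gives up with `False` after ten seconds, and sleeps half a second between checks. The same shape in a standalone form, with the token comparison reduced to integers purely for illustration:

```python
import asyncio
import time
from typing import Callable


async def wait_for_position(
    get_current: Callable[[], int], target: int, timeout_ms: int = 10_000
) -> bool:
    start = time.monotonic() * 1000
    while True:
        if target <= get_current():
            return True
        if time.monotonic() * 1000 - start > timeout_ms:
            return False
        # The real code logs the token it is waiting for; the 0.5s sleep matches the diff.
        await asyncio.sleep(0.5)


async def demo() -> None:
    current = {"pos": 0}

    async def advance() -> None:
        await asyncio.sleep(1)
        current["pos"] = 5

    task = asyncio.create_task(advance())
    print(await wait_for_position(lambda: current["pos"], target=5))  # True
    await task


asyncio.run(demo())
```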
@@ -36,7 +36,6 @@ from synapse.http.servlet import (
)
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import log_kv, set_tag
from synapse.replication.http.devices import ReplicationUploadKeysForUserRestServlet
from synapse.rest.client._base import client_patterns, interactive_auth_handler
from synapse.types import JsonDict, StreamToken
from synapse.util.cancellation import cancellable
@@ -105,13 +104,8 @@ class KeyUploadServlet(RestServlet):
self.auth = hs.get_auth()
self.e2e_keys_handler = hs.get_e2e_keys_handler()
self.device_handler = hs.get_device_handler()

if hs.config.worker.worker_app is None:
# if main process
self.key_uploader = self.e2e_keys_handler.upload_keys_for_user
else:
# then a worker
self.key_uploader = ReplicationUploadKeysForUserRestServlet.make_client(hs)
self._clock = hs.get_clock()
self._store = hs.get_datastores().main

async def on_POST(
self, request: SynapseRequest, device_id: Optional[str]
@@ -151,9 +145,10 @@ class KeyUploadServlet(RestServlet):
400, "To upload keys, you must pass device_id when authenticating"
)

result = await self.key_uploader(
result = await self.e2e_keys_handler.upload_keys_for_user(
user_id=user_id, device_id=device_id, keys=body
)

return 200, result

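With this change `KeyUploadServlet` calls `upload_keys_for_user` on the local worker instead of replicating the call to the main process. For context, an abridged sketch of the kind of request body the servlet's `on_POST` receives; all values are placeholders, not a real device's keys:

```python
# POST /_matrix/client/v3/keys/upload/{device_id}  (abridged, placeholder values)
body = {
    "device_keys": {
        "user_id": "@alice:example.com",
        "device_id": "ABCDEFG",
        "algorithms": ["m.olm.v1.curve25519-aes-sha2", "m.megolm.v1.aes-sha2"],
        "keys": {
            "curve25519:ABCDEFG": "base64+identity+key",
            "ed25519:ABCDEFG": "base64+signing+key",
        },
        "signatures": {"@alice:example.com": {"ed25519:ABCDEFG": "base64+signature"}},
    },
    "one_time_keys": {
        "signed_curve25519:AAAAAQ": {
            "key": "base64+one+time+key",
            "signatures": {"@alice:example.com": {"ed25519:ABCDEFG": "base64+signature"}},
        }
    },
}
```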
synapse/rest/client/media.py (new file, 207 lines)
@@ -0,0 +1,207 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2020 The Matrix.org Foundation C.I.C.
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#

import logging
import re

from synapse.http.server import (
HttpServer,
respond_with_json,
respond_with_json_bytes,
set_corp_headers,
set_cors_headers,
)
from synapse.http.servlet import RestServlet, parse_integer, parse_string
from synapse.http.site import SynapseRequest
from synapse.media._base import (
DEFAULT_MAX_TIMEOUT_MS,
MAXIMUM_ALLOWED_MAX_TIMEOUT_MS,
respond_404,
)
from synapse.media.media_repository import MediaRepository
from synapse.media.media_storage import MediaStorage
from synapse.media.thumbnailer import ThumbnailProvider
from synapse.server import HomeServer
from synapse.util.stringutils import parse_and_validate_server_name

logger = logging.getLogger(__name__)


class UnstablePreviewURLServlet(RestServlet):
"""
Same as `GET /_matrix/media/r0/preview_url`, this endpoint provides a generic preview API
for URLs which outputs Open Graph (https://ogp.me/) responses (with some Matrix
specific additions).

This does have trade-offs compared to other designs:

* Pros:
* Simple and flexible; can be used by any clients at any point
* Cons:
* If each homeserver provides one of these independently, all the homeservers in a
room may needlessly DoS the target URI
* The URL metadata must be stored somewhere, rather than just using Matrix
itself to store the media.
* Matrix cannot be used to distribute the metadata between homeservers.
"""

PATTERNS = [
re.compile(r"^/_matrix/client/unstable/org.matrix.msc3916/media/preview_url$")
]

def __init__(
self,
hs: "HomeServer",
media_repo: "MediaRepository",
media_storage: MediaStorage,
):
super().__init__()

self.auth = hs.get_auth()
self.clock = hs.get_clock()
self.media_repo = media_repo
self.media_storage = media_storage
assert self.media_repo.url_previewer is not None
self.url_previewer = self.media_repo.url_previewer

async def on_GET(self, request: SynapseRequest) -> None:
requester = await self.auth.get_user_by_req(request)
url = parse_string(request, "url", required=True)
ts = parse_integer(request, "ts")
if ts is None:
ts = self.clock.time_msec()

og = await self.url_previewer.preview(url, requester.user, ts)
respond_with_json_bytes(request, 200, og, send_cors=True)


class UnstableMediaConfigResource(RestServlet):
PATTERNS = [
re.compile(r"^/_matrix/client/unstable/org.matrix.msc3916/media/config$")
]

def __init__(self, hs: "HomeServer"):
super().__init__()
config = hs.config
self.clock = hs.get_clock()
self.auth = hs.get_auth()
self.limits_dict = {"m.upload.size": config.media.max_upload_size}

async def on_GET(self, request: SynapseRequest) -> None:
await self.auth.get_user_by_req(request)
respond_with_json(request, 200, self.limits_dict, send_cors=True)

class UnstableThumbnailResource(RestServlet):
PATTERNS = [
re.compile(
"/_matrix/client/unstable/org.matrix.msc3916/media/thumbnail/(?P<server_name>[^/]*)/(?P<media_id>[^/]*)$"
)
]

def __init__(
self,
hs: "HomeServer",
media_repo: "MediaRepository",
media_storage: MediaStorage,
):
super().__init__()

self.store = hs.get_datastores().main
self.media_repo = media_repo
self.media_storage = media_storage
self.dynamic_thumbnails = hs.config.media.dynamic_thumbnails
self._is_mine_server_name = hs.is_mine_server_name
self._server_name = hs.hostname
self.prevent_media_downloads_from = hs.config.media.prevent_media_downloads_from
self.thumbnailer = ThumbnailProvider(hs, media_repo, media_storage)
self.auth = hs.get_auth()

async def on_GET(
self, request: SynapseRequest, server_name: str, media_id: str
) -> None:
# Validate the server name, raising if invalid
parse_and_validate_server_name(server_name)
await self.auth.get_user_by_req(request)

set_cors_headers(request)
set_corp_headers(request)
width = parse_integer(request, "width", required=True)
height = parse_integer(request, "height", required=True)
method = parse_string(request, "method", "scale")
# TODO Parse the Accept header to get an prioritised list of thumbnail types.
m_type = "image/png"
max_timeout_ms = parse_integer(
request, "timeout_ms", default=DEFAULT_MAX_TIMEOUT_MS
)
max_timeout_ms = min(max_timeout_ms, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS)

if self._is_mine_server_name(server_name):
if self.dynamic_thumbnails:
await self.thumbnailer.select_or_generate_local_thumbnail(
request, media_id, width, height, method, m_type, max_timeout_ms
)
else:
await self.thumbnailer.respond_local_thumbnail(
request, media_id, width, height, method, m_type, max_timeout_ms
)
self.media_repo.mark_recently_accessed(None, media_id)
else:
# Don't let users download media from configured domains, even if it
# is already downloaded. This is Trust & Safety tooling to make some
# media inaccessible to local users.
# See `prevent_media_downloads_from` config docs for more info.
if server_name in self.prevent_media_downloads_from:
respond_404(request)
return

ip_address = request.getClientAddress().host
remote_resp_function = (
self.thumbnailer.select_or_generate_remote_thumbnail
if self.dynamic_thumbnails
else self.thumbnailer.respond_remote_thumbnail
)
await remote_resp_function(
request,
server_name,
media_id,
width,
height,
method,
m_type,
max_timeout_ms,
ip_address,
)
self.media_repo.mark_recently_accessed(server_name, media_id)


def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
if hs.config.experimental.msc3916_authenticated_media_enabled:
media_repo = hs.get_media_repository()
if hs.config.media.url_preview_enabled:
UnstablePreviewURLServlet(
hs, media_repo, media_repo.media_storage
).register(http_server)
UnstableMediaConfigResource(hs).register(http_server)
UnstableThumbnailResource(hs, media_repo, media_repo.media_storage).register(
http_server
)
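Taken together, these servlets expose authenticated versions of the media endpoints under `/_matrix/client/unstable/org.matrix.msc3916/media/...`. Below is a hedged sketch of fetching a thumbnail from them, using the query parameters parsed in `on_GET` above; the `requests` library, the homeserver URL and the access token are assumptions for illustration only:

```python
import requests  # assumed to be available; any HTTP client works

HOMESERVER = "https://matrix.example.com"  # placeholder
ACCESS_TOKEN = "syt_..."  # placeholder; the endpoint requires authentication

resp = requests.get(
    f"{HOMESERVER}/_matrix/client/unstable/org.matrix.msc3916/media/thumbnail/"
    "matrix.example.com/someMediaId",
    params={
        # Parameters read by UnstableThumbnailResource.on_GET above.
        "width": 96,
        "height": 96,
        "method": "scale",    # or "crop"
        "timeout_ms": 20000,  # capped server-side at MAXIMUM_ALLOWED_MAX_TIMEOUT_MS
    },
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
with open("thumbnail.png", "wb") as f:
    f.write(resp.content)
```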
@@ -18,14 +18,30 @@
# [This file includes modifications made by New Vector Limited]
#
#
from typing import TYPE_CHECKING, Dict, Optional
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union

from synapse._pydantic_compat import HAS_PYDANTIC_V2

if TYPE_CHECKING or HAS_PYDANTIC_V2:
from pydantic.v1 import Extra, StrictInt, StrictStr, constr, validator
from pydantic.v1 import (
Extra,
StrictBool,
StrictInt,
StrictStr,
conint,
constr,
validator,
)
else:
from pydantic import Extra, StrictInt, StrictStr, constr, validator
from pydantic import (
Extra,
StrictBool,
StrictInt,
StrictStr,
conint,
constr,
validator,
)

from synapse.rest.models import RequestBodyModel
from synapse.util.threepids import validate_email
@@ -97,3 +113,172 @@ else:
class MsisdnRequestTokenBody(ThreepidRequestTokenBody):
country: ISO3116_1_Alpha_2
phone_number: StrictStr


class SlidingSyncBody(RequestBodyModel):
"""
Sliding Sync API request body.

Attributes:
lists: Sliding window API. A map of list key to list information
(:class:`SlidingSyncList`). Max lists: 100. The list keys should be
arbitrary strings which the client is using to refer to the list. Keep this
small as it needs to be sent a lot. Max length: 64 bytes.
room_subscriptions: Room subscription API. A map of room ID to room subscription
information. Used to subscribe to a specific room. Sometimes clients know
exactly which room they want to get information about e.g by following a
permalink or by refreshing a webapp currently viewing a specific room. The
sliding window API alone is insufficient for this use case because there's
no way to say "please track this room explicitly".
extensions: Extensions API. A map of extension key to extension config.
"""

class CommonRoomParameters(RequestBodyModel):
"""
Common parameters shared between the sliding window and room subscription APIs.

Attributes:
required_state: Required state for each room returned. An array of event
type and state key tuples. Elements in this array are ORd together to
produce the final set of state events to return. One unique exception is
when you request all state events via `["*", "*"]`. When used, all state
events are returned by default, and additional entries FILTER OUT the
returned set of state events. These additional entries cannot use `*`
themselves. For example, `["*", "*"], ["m.room.member",
"@alice:example.com"]` will *exclude* every `m.room.member` event
*except* for `@alice:example.com`, and include every other state event.
In addition, `["*", "*"], ["m.space.child", "*"]` is an error, the
`m.space.child` filter is not required as it would have been returned
anyway.
timeline_limit: The maximum number of timeline events to return per response.
(Max 1000 messages)
include_old_rooms: Determines if `predecessor` rooms are included in the
`rooms` response. The user MUST be joined to old rooms for them to show up
in the response.
"""

class IncludeOldRooms(RequestBodyModel):
timeline_limit: StrictInt
required_state: List[Tuple[StrictStr, StrictStr]]

required_state: List[Tuple[StrictStr, StrictStr]]
# mypy workaround via https://github.com/pydantic/pydantic/issues/156#issuecomment-1130883884
if TYPE_CHECKING:
timeline_limit: int
else:
timeline_limit: conint(le=1000, strict=True) # type: ignore[valid-type]
include_old_rooms: Optional[IncludeOldRooms] = None

class SlidingSyncList(CommonRoomParameters):
"""
Attributes:
ranges: Sliding window ranges. If this field is missing, no sliding window
is used and all rooms are returned in this list. Integers are
*inclusive*.
sort: How the list should be sorted on the server. The first value is
applied first, then tiebreaks are performed with each subsequent sort
listed.

FIXME: Furthermore, it's not currently defined how servers should behave
if they encounter a filter or sort operation they do not recognise. If
the server rejects the request with an HTTP 400 then that will break
backwards compatibility with new clients vs old servers. However, the
client would be otherwise unaware that only some of the sort/filter
operations have taken effect. We may need to include a "warnings"
section to indicate which sort/filter operations are unrecognised,
allowing for some form of graceful degradation of service.
-- https://github.com/matrix-org/matrix-spec-proposals/blob/kegan/sync-v3/proposals/3575-sync.md#filter-and-sort-extensions

slow_get_all_rooms: Just get all rooms (for clients that don't want to deal with
sliding windows). When true, the `ranges` and `sort` fields are ignored.
required_state: Required state for each room returned. An array of event
type and state key tuples. Elements in this array are ORd together to
produce the final set of state events to return.

One unique exception is when you request all state events via `["*",
"*"]`. When used, all state events are returned by default, and
additional entries FILTER OUT the returned set of state events. These
additional entries cannot use `*` themselves. For example, `["*", "*"],
["m.room.member", "@alice:example.com"]` will *exclude* every
`m.room.member` event *except* for `@alice:example.com`, and include
every other state event. In addition, `["*", "*"], ["m.space.child",
"*"]` is an error, the `m.space.child` filter is not required as it
would have been returned anyway.

Room members can be lazily-loaded by using the special `$LAZY` state key
(`["m.room.member", "$LAZY"]`). Typically, when you view a room, you
want to retrieve all state events except for m.room.member events which
you want to lazily load. To get this behaviour, clients can send the
following::

{
"required_state": [
// activate lazy loading
["m.room.member", "$LAZY"],
// request all state events _except_ for m.room.member
events which are lazily loaded
["*", "*"]
]
}

timeline_limit: The maximum number of timeline events to return per response.
include_old_rooms: Determines if `predecessor` rooms are included in the
`rooms` response. The user MUST be joined to old rooms for them to show up
in the response.
include_heroes: Return a stripped variant of membership events (containing
`user_id` and optionally `avatar_url` and `displayname`) for the users used
to calculate the room name.
filters: Filters to apply to the list before sorting.
bump_event_types: Allowlist of event types which should be considered recent activity
when sorting `by_recency`. By omitting event types from this field,
clients can ensure that uninteresting events (e.g. a profile rename) do
not cause a room to jump to the top of its list(s). Empty or omitted
`bump_event_types` have no effect—all events in a room will be
considered recent activity.
"""

class Filters(RequestBodyModel):
is_dm: Optional[StrictBool] = None
spaces: Optional[List[StrictStr]] = None
is_encrypted: Optional[StrictBool] = None
is_invite: Optional[StrictBool] = None
room_types: Optional[List[Union[StrictStr, None]]] = None
not_room_types: Optional[List[StrictStr]] = None
room_name_like: Optional[StrictStr] = None
tags: Optional[List[StrictStr]] = None
not_tags: Optional[List[StrictStr]] = None

# mypy workaround via https://github.com/pydantic/pydantic/issues/156#issuecomment-1130883884
if TYPE_CHECKING:
ranges: Optional[List[Tuple[int, int]]] = None
else:
ranges: Optional[List[Tuple[conint(ge=0, strict=True), conint(ge=0, strict=True)]]] = None # type: ignore[valid-type]
sort: Optional[List[StrictStr]] = None
slow_get_all_rooms: Optional[StrictBool] = False
include_heroes: Optional[StrictBool] = False
filters: Optional[Filters] = None
bump_event_types: Optional[List[StrictStr]] = None

class RoomSubscription(CommonRoomParameters):
pass

class Extension(RequestBodyModel):
enabled: Optional[StrictBool] = False
lists: Optional[List[StrictStr]] = None
rooms: Optional[List[StrictStr]] = None

# mypy workaround via https://github.com/pydantic/pydantic/issues/156#issuecomment-1130883884
if TYPE_CHECKING:
lists: Optional[Dict[str, SlidingSyncList]] = None
else:
lists: Optional[Dict[constr(max_length=64, strict=True), SlidingSyncList]] = None # type: ignore[valid-type]
room_subscriptions: Optional[Dict[StrictStr, RoomSubscription]] = None
extensions: Optional[Dict[StrictStr, Extension]] = None

@validator("lists")
def lists_length_check(
cls, value: Optional[Dict[str, SlidingSyncList]]
) -> Optional[Dict[str, SlidingSyncList]]:
if value is not None:
assert len(value) <= 100, f"Max lists: 100 but saw {len(value)}"
return value

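As a quick illustration of what the model above accepts, here is a hedged example request body with one list and one room subscription. The field names follow `SlidingSyncBody` as added in this diff; the endpoint path, sort names and extension name are taken from MSC3575 and should be treated as illustrative rather than authoritative:

```python
# Illustrative body shaped to match SlidingSyncBody above (endpoint path per MSC3575).
sliding_sync_body = {
    "lists": {
        "foo-list": {
            "ranges": [[0, 99]],  # inclusive window over the sorted room list
            "sort": ["by_notification_level", "by_recency", "by_name"],
            "required_state": [
                ["m.room.join_rules", ""],
                ["m.room.history_visibility", ""],
                ["m.space.child", "*"],
            ],
            "timeline_limit": 10,  # capped at 1000 by the model
            "filters": {"is_dm": True},
        }
    },
    "room_subscriptions": {
        "!sub1:bar": {
            "required_state": [["org.matrix.foo_state", ""]],
            "timeline_limit": 1,
        }
    },
    "extensions": {"to_device": {"enabled": True}},
}
```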
@@ -292,6 +292,9 @@ class RoomStateEventRestServlet(RestServlet):
|
||||
try:
|
||||
if event_type == EventTypes.Member:
|
||||
membership = content.get("membership", None)
|
||||
if not isinstance(membership, str):
|
||||
raise SynapseError(400, "Invalid membership (must be a string)")
|
||||
|
||||
event_id, _ = await self.room_member_handler.update_membership(
|
||||
requester,
|
||||
target=UserID.from_string(state_key),
|
||||
|
||||
@@ -33,6 +33,7 @@ from synapse.events.utils import (
|
||||
format_event_raw,
|
||||
)
|
||||
from synapse.handlers.presence import format_user_presence_state
|
||||
from synapse.handlers.sliding_sync import SlidingSyncConfig, SlidingSyncResult
|
||||
from synapse.handlers.sync import (
|
||||
ArchivedSyncResult,
|
||||
InvitedSyncResult,
|
||||
@@ -43,9 +44,16 @@ from synapse.handlers.sync import (
|
||||
SyncVersion,
|
||||
)
|
||||
from synapse.http.server import HttpServer
|
||||
from synapse.http.servlet import RestServlet, parse_boolean, parse_integer, parse_string
|
||||
from synapse.http.servlet import (
|
||||
RestServlet,
|
||||
parse_and_validate_json_object_from_request,
|
||||
parse_boolean,
|
||||
parse_integer,
|
||||
parse_string,
|
||||
)
|
||||
from synapse.http.site import SynapseRequest
|
||||
from synapse.logging.opentracing import trace_with_opname
|
||||
from synapse.rest.client.models import SlidingSyncBody
|
||||
from synapse.types import JsonDict, Requester, StreamToken
|
||||
from synapse.util import json_decoder
|
||||
from synapse.util.caches.lrucache import LruCache
|
||||
@@ -567,5 +575,396 @@ class SyncRestServlet(RestServlet):
|
||||
return result
|
||||
|
||||
|
||||
class SlidingSyncE2eeRestServlet(RestServlet):
|
||||
"""
|
||||
API endpoint for MSC3575 Sliding Sync `/sync/e2ee`. This is being introduced as part
|
||||
of Sliding Sync but doesn't have any sliding window component. It's just a way to
|
||||
get E2EE events without having to sit through a big initial sync (`/sync` v2). And
|
||||
we can avoid encryption events being backed up by the main sync response.
|
||||
|
||||
Having To-Device messages split out to this sync endpoint also helps when clients
need to have 2 or more sync streams open at a time, e.g. a push notification process
and a main process. This can cause the two processes to race to fetch the To-Device
events, resulting in the need for complex synchronisation rules to ensure the token
is correctly and atomically exchanged between processes.
|
||||
|
||||
GET parameters::
|
||||
timeout(int): How long to wait for new events in milliseconds.
|
||||
since(batch_token): Batch token when asking for incremental deltas.
|
||||
|
||||
Response JSON::
|
||||
{
|
||||
"next_batch": // batch token for the next /sync
|
||||
"to_device": {
|
||||
// list of to-device events
|
||||
"events": [
|
||||
{
|
||||
"content: { "algorithm": "m.olm.v1.curve25519-aes-sha2", "ciphertext": { ... }, "org.matrix.msgid": "abcd", "session_id": "abcd" },
|
||||
"type": "m.room.encrypted",
|
||||
"sender": "@alice:example.com",
|
||||
}
|
||||
// ...
|
||||
]
|
||||
},
|
||||
"device_lists": {
|
||||
"changed": ["@alice:example.com"],
|
||||
"left": ["@bob:example.com"]
|
||||
},
|
||||
"device_one_time_keys_count": {
|
||||
"signed_curve25519": 50
|
||||
},
|
||||
"device_unused_fallback_key_types": [
|
||||
"signed_curve25519"
|
||||
]
|
||||
}
|
||||
"""
|
||||
|
||||
PATTERNS = client_patterns(
|
||||
"/org.matrix.msc3575/sync/e2ee$", releases=[], v1=False, unstable=True
|
||||
)
|
||||
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
super().__init__()
|
||||
self.hs = hs
|
||||
self.auth = hs.get_auth()
|
||||
self.store = hs.get_datastores().main
|
||||
self.sync_handler = hs.get_sync_handler()
|
||||
|
||||
# Filtering only matters for the `device_lists` because it requires a bunch of
|
||||
# derived information from rooms (see how `_generate_sync_entry_for_rooms()`
|
||||
# prepares a bunch of data for `_generate_sync_entry_for_device_list()`).
|
||||
self.only_member_events_filter_collection = FilterCollection(
|
||||
self.hs,
|
||||
{
|
||||
"room": {
|
||||
# We only care about membership events for the `device_lists`.
|
||||
# Membership will tell us whether a user has joined/left a room and
|
||||
# if there are new devices to encrypt for.
|
||||
"timeline": {
|
||||
"types": ["m.room.member"],
|
||||
},
|
||||
"state": {
|
||||
"types": ["m.room.member"],
|
||||
},
|
||||
# We don't want any extra account_data generated because it's not
|
||||
# returned by this endpoint. This helps us avoid work in
|
||||
# `_generate_sync_entry_for_rooms()`
|
||||
"account_data": {
|
||||
"not_types": ["*"],
|
||||
},
|
||||
# We don't want any extra ephemeral data generated because it's not
|
||||
# returned by this endpoint. This helps us avoid work in
|
||||
# `_generate_sync_entry_for_rooms()`
|
||||
"ephemeral": {
|
||||
"not_types": ["*"],
|
||||
},
|
||||
},
|
||||
# We don't want any extra account_data generated because it's not
|
||||
# returned by this endpoint. (This is just here for good measure)
|
||||
"account_data": {
|
||||
"not_types": ["*"],
|
||||
},
|
||||
# We don't want any extra presence data generated because it's not
|
||||
# returned by this endpoint. (This is just here for good measure)
|
||||
"presence": {
|
||||
"not_types": ["*"],
|
||||
},
|
||||
},
|
||||
)
|
||||
|
||||
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
|
||||
requester = await self.auth.get_user_by_req(request, allow_guest=True)
|
||||
user = requester.user
|
||||
device_id = requester.device_id
|
||||
|
||||
timeout = parse_integer(request, "timeout", default=0)
|
||||
since = parse_string(request, "since")
|
||||
|
||||
sync_config = SyncConfig(
|
||||
user=user,
|
||||
filter_collection=self.only_member_events_filter_collection,
|
||||
is_guest=requester.is_guest,
|
||||
device_id=device_id,
|
||||
)
|
||||
|
||||
since_token = None
|
||||
if since is not None:
|
||||
since_token = await StreamToken.from_string(self.store, since)
|
||||
|
||||
# Request cache key
|
||||
request_key = (
|
||||
SyncVersion.E2EE_SYNC,
|
||||
user,
|
||||
timeout,
|
||||
since,
|
||||
)
|
||||
|
||||
# Gather data for the response
|
||||
sync_result = await self.sync_handler.wait_for_sync_for_user(
|
||||
requester,
|
||||
sync_config,
|
||||
SyncVersion.E2EE_SYNC,
|
||||
request_key,
|
||||
since_token=since_token,
|
||||
timeout=timeout,
|
||||
full_state=False,
|
||||
)
|
||||
|
||||
# The client may have disconnected by now; don't bother to serialize the
|
||||
# response if so.
|
||||
if request._disconnected:
|
||||
logger.info("Client has disconnected; not serializing response.")
|
||||
return 200, {}
|
||||
|
||||
response: JsonDict = defaultdict(dict)
|
||||
response["next_batch"] = await sync_result.next_batch.to_string(self.store)
|
||||
|
||||
if sync_result.to_device:
|
||||
response["to_device"] = {"events": sync_result.to_device}
|
||||
|
||||
if sync_result.device_lists.changed:
|
||||
response["device_lists"]["changed"] = list(sync_result.device_lists.changed)
|
||||
if sync_result.device_lists.left:
|
||||
response["device_lists"]["left"] = list(sync_result.device_lists.left)
|
||||
|
||||
# We always include this because https://github.com/vector-im/element-android/issues/3725
|
||||
# The spec isn't terribly clear on when this can be omitted and how a client would tell
|
||||
# the difference between "no keys present" and "nothing changed" in terms of whole field
|
||||
# absent / individual key type entry absent
|
||||
# Corresponding synapse issue: https://github.com/matrix-org/synapse/issues/10456
|
||||
response["device_one_time_keys_count"] = sync_result.device_one_time_keys_count
|
||||
|
||||
# https://github.com/matrix-org/matrix-doc/blob/54255851f642f84a4f1aaf7bc063eebe3d76752b/proposals/2732-olm-fallback-keys.md
|
||||
# states that this field should always be included, as long as the server supports the feature.
|
||||
response["device_unused_fallback_key_types"] = (
|
||||
sync_result.device_unused_fallback_key_types
|
||||
)
|
||||
|
||||
return 200, response
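
# Illustrative sketch (not from the upstream change): one way a client process
# might poll the endpoint above. The homeserver URL, access token and `since`
# token are placeholders, and `requests` is an assumed third-party dependency;
# only the unstable path and the "timeout"/"since" query parameters come from
# the servlet itself.
def _poll_e2ee_sync_example(base_url: str, access_token: str, since: str = "") -> dict:
    import requests  # assumed dependency, not part of Synapse

    params = {"timeout": "30000"}
    if since:
        params["since"] = since
    resp = requests.get(
        f"{base_url}/_matrix/client/unstable/org.matrix.msc3575/sync/e2ee",
        params=params,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()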
class SlidingSyncRestServlet(RestServlet):
|
||||
"""
|
||||
API endpoint for MSC3575 Sliding Sync `/sync`. Allows for clients to request a
|
||||
subset (sliding window) of rooms, state, and timeline events (just what they need)
|
||||
in order to bootstrap quickly and subscribe to only what the client cares about.
|
||||
Because the client can specify what it cares about, we can respond quickly and skip
|
||||
all of the work we would normally have to do with a sync v2 response.
|
||||
|
||||
Request query parameters:
|
||||
timeout: How long to wait for new events in milliseconds.
|
||||
pos: Stream position token when asking for incremental deltas.
|
||||
|
||||
Request body::
|
||||
{
|
||||
// Sliding Window API
|
||||
"lists": {
|
||||
"foo-list": {
|
||||
"ranges": [ [0, 99] ],
|
||||
"sort": [ "by_notification_level", "by_recency", "by_name" ],
|
||||
"required_state": [
|
||||
["m.room.join_rules", ""],
|
||||
["m.room.history_visibility", ""],
|
||||
["m.space.child", "*"]
|
||||
],
|
||||
"timeline_limit": 10,
|
||||
"filters": {
|
||||
"is_dm": true
|
||||
},
|
||||
"bump_event_types": [ "m.room.message", "m.room.encrypted" ],
|
||||
}
|
||||
},
|
||||
// Room Subscriptions API
|
||||
"room_subscriptions": {
|
||||
"!sub1:bar": {
|
||||
"required_state": [ ["*","*"] ],
|
||||
"timeline_limit": 10,
|
||||
"include_old_rooms": {
|
||||
"timeline_limit": 1,
|
||||
"required_state": [ ["m.room.tombstone", ""], ["m.room.create", ""] ],
|
||||
}
|
||||
}
|
||||
},
|
||||
// Extensions API
|
||||
"extensions": {}
|
||||
}
|
||||
|
||||
Response JSON::
|
||||
{
|
||||
"next_pos": "s58_224_0_13_10_1_1_16_0_1",
|
||||
"lists": {
|
||||
"foo-list": {
|
||||
"count": 1337,
|
||||
"ops": [{
|
||||
"op": "SYNC",
|
||||
"range": [0, 99],
|
||||
"room_ids": [
|
||||
"!foo:bar",
|
||||
// ... 99 more room IDs
|
||||
]
|
||||
}]
|
||||
}
|
||||
},
|
||||
// Aggregated rooms from lists and room subscriptions
|
||||
"rooms": {
|
||||
// Room from room subscription
|
||||
"!sub1:bar": {
|
||||
"name": "Alice and Bob",
|
||||
"avatar": "mxc://...",
|
||||
"initial": true,
|
||||
"required_state": [
|
||||
{"sender":"@alice:example.com","type":"m.room.create", "state_key":"", "content":{"creator":"@alice:example.com"}},
|
||||
{"sender":"@alice:example.com","type":"m.room.join_rules", "state_key":"", "content":{"join_rule":"invite"}},
|
||||
{"sender":"@alice:example.com","type":"m.room.history_visibility", "state_key":"", "content":{"history_visibility":"joined"}},
|
||||
{"sender":"@alice:example.com","type":"m.room.member", "state_key":"@alice:example.com", "content":{"membership":"join"}}
|
||||
],
|
||||
"timeline": [
|
||||
{"sender":"@alice:example.com","type":"m.room.create", "state_key":"", "content":{"creator":"@alice:example.com"}},
|
||||
{"sender":"@alice:example.com","type":"m.room.join_rules", "state_key":"", "content":{"join_rule":"invite"}},
|
||||
{"sender":"@alice:example.com","type":"m.room.history_visibility", "state_key":"", "content":{"history_visibility":"joined"}},
|
||||
{"sender":"@alice:example.com","type":"m.room.member", "state_key":"@alice:example.com", "content":{"membership":"join"}},
|
||||
{"sender":"@alice:example.com","type":"m.room.message", "content":{"body":"A"}},
|
||||
{"sender":"@alice:example.com","type":"m.room.message", "content":{"body":"B"}},
|
||||
],
|
||||
"prev_batch": "t111_222_333",
|
||||
"joined_count": 41,
|
||||
"invited_count": 1,
|
||||
"notification_count": 1,
|
||||
"highlight_count": 0
|
||||
},
|
||||
// rooms from list
|
||||
"!foo:bar": {
|
||||
"name": "The calculated room name",
|
||||
"avatar": "mxc://...",
|
||||
"initial": true,
|
||||
"required_state": [
|
||||
{"sender":"@alice:example.com","type":"m.room.join_rules", "state_key":"", "content":{"join_rule":"invite"}},
|
||||
{"sender":"@alice:example.com","type":"m.room.history_visibility", "state_key":"", "content":{"history_visibility":"joined"}},
|
||||
{"sender":"@alice:example.com","type":"m.space.child", "state_key":"!foo:example.com", "content":{"via":["example.com"]}},
|
||||
{"sender":"@alice:example.com","type":"m.space.child", "state_key":"!bar:example.com", "content":{"via":["example.com"]}},
|
||||
{"sender":"@alice:example.com","type":"m.space.child", "state_key":"!baz:example.com", "content":{"via":["example.com"]}}
|
||||
],
|
||||
"timeline": [
|
||||
{"sender":"@alice:example.com","type":"m.room.join_rules", "state_key":"", "content":{"join_rule":"invite"}},
|
||||
{"sender":"@alice:example.com","type":"m.room.message", "content":{"body":"A"}},
|
||||
{"sender":"@alice:example.com","type":"m.room.message", "content":{"body":"B"}},
|
||||
{"sender":"@alice:example.com","type":"m.room.message", "content":{"body":"C"}},
|
||||
{"sender":"@alice:example.com","type":"m.room.message", "content":{"body":"D"}},
|
||||
],
|
||||
"prev_batch": "t111_222_333",
|
||||
"joined_count": 4,
|
||||
"invited_count": 0,
|
||||
"notification_count": 54,
|
||||
"highlight_count": 3
|
||||
},
|
||||
// ... 99 more items
|
||||
},
|
||||
"extensions": {}
|
||||
}
|
||||
"""
|
||||
|
||||
PATTERNS = client_patterns(
|
||||
"/org.matrix.msc3575/sync$", releases=[], v1=False, unstable=True
|
||||
)
|
||||
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
super().__init__()
|
||||
self.auth = hs.get_auth()
|
||||
self.store = hs.get_datastores().main
|
||||
self.filtering = hs.get_filtering()
|
||||
self.sliding_sync_handler = hs.get_sliding_sync_handler()
|
||||
|
||||
# TODO: Update this to `on_GET` once we figure out how we want to handle params
|
||||
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
|
||||
requester = await self.auth.get_user_by_req(request, allow_guest=True)
|
||||
user = requester.user
|
||||
device_id = requester.device_id
|
||||
|
||||
timeout = parse_integer(request, "timeout", default=0)
|
||||
# Position in the stream
|
||||
from_token_string = parse_string(request, "pos")
|
||||
|
||||
from_token = None
|
||||
if from_token_string is not None:
|
||||
from_token = await StreamToken.from_string(self.store, from_token_string)
|
||||
|
||||
# TODO: We currently don't know whether we're going to use sticky params or
|
||||
# maybe some filters like sync v2 where they are built up once and referenced
|
||||
# by filter ID. For now, we will just prototype with always passing everything
|
||||
# in.
|
||||
body = parse_and_validate_json_object_from_request(request, SlidingSyncBody)
|
||||
logger.info("Sliding sync request: %r", body)
|
||||
|
||||
sync_config = SlidingSyncConfig(
|
||||
user=user,
|
||||
device_id=device_id,
|
||||
# FIXME: Currently, we're just manually copying the fields from the
# `SlidingSyncBody` into the config. How can we guarantee into the future
# that we don't forget any? I would like something more structured like
# `copy_attributes(from=body, to=config)`
|
||||
lists=body.lists,
|
||||
room_subscriptions=body.room_subscriptions,
|
||||
extensions=body.extensions,
|
||||
)
|
||||
|
||||
sliding_sync_results = await self.sliding_sync_handler.wait_for_sync_for_user(
|
||||
requester,
|
||||
sync_config,
|
||||
from_token,
|
||||
timeout,
|
||||
)
|
||||
|
||||
# The client may have disconnected by now; don't bother to serialize the
|
||||
# response if so.
|
||||
if request._disconnected:
|
||||
logger.info("Client has disconnected; not serializing response.")
|
||||
return 200, {}
|
||||
|
||||
response_content = await self.encode_response(sliding_sync_results)
|
||||
|
||||
return 200, response_content
|
||||
|
||||
# TODO: Is there a better way to encode things?
|
||||
async def encode_response(
|
||||
self,
|
||||
sliding_sync_result: SlidingSyncResult,
|
||||
) -> JsonDict:
|
||||
response: JsonDict = defaultdict(dict)
|
||||
|
||||
response["next_pos"] = await sliding_sync_result.next_pos.to_string(self.store)
|
||||
serialized_lists = self.encode_lists(sliding_sync_result.lists)
|
||||
if serialized_lists:
|
||||
response["lists"] = serialized_lists
|
||||
response["rooms"] = {} # TODO: sliding_sync_result.rooms
|
||||
response["extensions"] = {} # TODO: sliding_sync_result.extensions
|
||||
|
||||
return response
|
||||
|
||||
def encode_lists(
|
||||
self, lists: Dict[str, SlidingSyncResult.SlidingWindowList]
|
||||
) -> JsonDict:
|
||||
def encode_operation(
|
||||
operation: SlidingSyncResult.SlidingWindowList.Operation,
|
||||
) -> JsonDict:
|
||||
return {
|
||||
"op": operation.op.value,
|
||||
"range": operation.range,
|
||||
"room_ids": operation.room_ids,
|
||||
}
|
||||
|
||||
serialized_lists = {}
|
||||
for list_key, list_result in lists.items():
|
||||
serialized_lists[list_key] = {
|
||||
"count": list_result.count,
|
||||
"ops": [encode_operation(op) for op in list_result.ops],
|
||||
}
|
||||
|
||||
return serialized_lists
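
# Illustrative sketch (not from the upstream change): driving the endpoint
# above from a client; the servlet is only registered when the experimental
# `msc3575_enabled` option is set (see `register_servlets` below). URL, token
# and body values are placeholders; `requests` is an assumed dependency.
def _sliding_sync_once_example(base_url: str, access_token: str, pos: str = "") -> tuple:
    import requests  # assumed dependency, not part of Synapse

    params = {"timeout": "30000"}
    if pos:
        params["pos"] = pos
    resp = requests.post(
        f"{base_url}/_matrix/client/unstable/org.matrix.msc3575/sync",
        params=params,
        headers={"Authorization": f"Bearer {access_token}"},
        json={"lists": {"foo-list": {"ranges": [[0, 99]], "timeline_limit": 10}}},
        timeout=60,
    )
    resp.raise_for_status()
    body = resp.json()
    # `next_pos` must be echoed back as `pos` on the next request.
    return body["next_pos"], body.get("lists", {})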
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
|
||||
SyncRestServlet(hs).register(http_server)
|
||||
|
||||
if hs.config.experimental.msc3575_enabled:
|
||||
SlidingSyncRestServlet(hs).register(http_server)
|
||||
SlidingSyncE2eeRestServlet(hs).register(http_server)
|
||||
|
||||
@@ -97,6 +97,12 @@ class DownloadResource(RestServlet):
|
||||
respond_404(request)
|
||||
return
|
||||
|
||||
ip_address = request.getClientAddress().host
|
||||
await self.media_repo.get_remote_media(
|
||||
request, server_name, media_id, file_name, max_timeout_ms
|
||||
request,
|
||||
server_name,
|
||||
media_id,
|
||||
file_name,
|
||||
max_timeout_ms,
|
||||
ip_address,
|
||||
)
|
||||
|
||||
@@ -22,23 +22,18 @@
|
||||
|
||||
import logging
|
||||
import re
|
||||
from typing import TYPE_CHECKING, List, Optional, Tuple
|
||||
from typing import TYPE_CHECKING
|
||||
|
||||
from synapse.api.errors import Codes, SynapseError, cs_error
|
||||
from synapse.config.repository import THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP
|
||||
from synapse.http.server import respond_with_json, set_corp_headers, set_cors_headers
|
||||
from synapse.http.server import set_corp_headers, set_cors_headers
|
||||
from synapse.http.servlet import RestServlet, parse_integer, parse_string
|
||||
from synapse.http.site import SynapseRequest
|
||||
from synapse.media._base import (
|
||||
DEFAULT_MAX_TIMEOUT_MS,
|
||||
MAXIMUM_ALLOWED_MAX_TIMEOUT_MS,
|
||||
FileInfo,
|
||||
ThumbnailInfo,
|
||||
respond_404,
|
||||
respond_with_file,
|
||||
respond_with_responder,
|
||||
)
|
||||
from synapse.media.media_storage import MediaStorage
|
||||
from synapse.media.thumbnailer import ThumbnailProvider
|
||||
from synapse.util.stringutils import parse_and_validate_server_name
|
||||
|
||||
if TYPE_CHECKING:
|
||||
@@ -66,10 +61,11 @@ class ThumbnailResource(RestServlet):
|
||||
self.store = hs.get_datastores().main
|
||||
self.media_repo = media_repo
|
||||
self.media_storage = media_storage
|
||||
self.dynamic_thumbnails = hs.config.media.dynamic_thumbnails
|
||||
self._is_mine_server_name = hs.is_mine_server_name
|
||||
self._server_name = hs.hostname
|
||||
self.prevent_media_downloads_from = hs.config.media.prevent_media_downloads_from
|
||||
self.dynamic_thumbnails = hs.config.media.dynamic_thumbnails
|
||||
self.thumbnail_provider = ThumbnailProvider(hs, media_repo, media_storage)
|
||||
|
||||
async def on_GET(
|
||||
self, request: SynapseRequest, server_name: str, media_id: str
|
||||
@@ -91,11 +87,11 @@ class ThumbnailResource(RestServlet):
|
||||
|
||||
if self._is_mine_server_name(server_name):
|
||||
if self.dynamic_thumbnails:
|
||||
await self._select_or_generate_local_thumbnail(
|
||||
await self.thumbnail_provider.select_or_generate_local_thumbnail(
|
||||
request, media_id, width, height, method, m_type, max_timeout_ms
|
||||
)
|
||||
else:
|
||||
await self._respond_local_thumbnail(
|
||||
await self.thumbnail_provider.respond_local_thumbnail(
|
||||
request, media_id, width, height, method, m_type, max_timeout_ms
|
||||
)
|
||||
self.media_repo.mark_recently_accessed(None, media_id)
|
||||
@@ -108,10 +104,11 @@ class ThumbnailResource(RestServlet):
|
||||
respond_404(request)
|
||||
return
|
||||
|
||||
ip_address = request.getClientAddress().host
|
||||
remote_resp_function = (
|
||||
self._select_or_generate_remote_thumbnail
|
||||
self.thumbnail_provider.select_or_generate_remote_thumbnail
|
||||
if self.dynamic_thumbnails
|
||||
else self._respond_remote_thumbnail
|
||||
else self.thumbnail_provider.respond_remote_thumbnail
|
||||
)
|
||||
await remote_resp_function(
|
||||
request,
|
||||
@@ -122,459 +119,6 @@ class ThumbnailResource(RestServlet):
|
||||
method,
|
||||
m_type,
|
||||
max_timeout_ms,
|
||||
ip_address,
|
||||
)
|
||||
self.media_repo.mark_recently_accessed(server_name, media_id)
|
||||
|
||||
async def _respond_local_thumbnail(
|
||||
self,
|
||||
request: SynapseRequest,
|
||||
media_id: str,
|
||||
width: int,
|
||||
height: int,
|
||||
method: str,
|
||||
m_type: str,
|
||||
max_timeout_ms: int,
|
||||
) -> None:
|
||||
media_info = await self.media_repo.get_local_media_info(
|
||||
request, media_id, max_timeout_ms
|
||||
)
|
||||
if not media_info:
|
||||
return
|
||||
|
||||
thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
|
||||
await self._select_and_respond_with_thumbnail(
|
||||
request,
|
||||
width,
|
||||
height,
|
||||
method,
|
||||
m_type,
|
||||
thumbnail_infos,
|
||||
media_id,
|
||||
media_id,
|
||||
url_cache=bool(media_info.url_cache),
|
||||
server_name=None,
|
||||
)
|
||||
|
||||
async def _select_or_generate_local_thumbnail(
|
||||
self,
|
||||
request: SynapseRequest,
|
||||
media_id: str,
|
||||
desired_width: int,
|
||||
desired_height: int,
|
||||
desired_method: str,
|
||||
desired_type: str,
|
||||
max_timeout_ms: int,
|
||||
) -> None:
|
||||
media_info = await self.media_repo.get_local_media_info(
|
||||
request, media_id, max_timeout_ms
|
||||
)
|
||||
|
||||
if not media_info:
|
||||
return
|
||||
|
||||
thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
|
||||
for info in thumbnail_infos:
|
||||
t_w = info.width == desired_width
|
||||
t_h = info.height == desired_height
|
||||
t_method = info.method == desired_method
|
||||
t_type = info.type == desired_type
|
||||
|
||||
if t_w and t_h and t_method and t_type:
|
||||
file_info = FileInfo(
|
||||
server_name=None,
|
||||
file_id=media_id,
|
||||
url_cache=bool(media_info.url_cache),
|
||||
thumbnail=info,
|
||||
)
|
||||
|
||||
responder = await self.media_storage.fetch_media(file_info)
|
||||
if responder:
|
||||
await respond_with_responder(
|
||||
request, responder, info.type, info.length
|
||||
)
|
||||
return
|
||||
|
||||
logger.debug("We don't have a thumbnail of that size. Generating")
|
||||
|
||||
# Okay, so we generate one.
|
||||
file_path = await self.media_repo.generate_local_exact_thumbnail(
|
||||
media_id,
|
||||
desired_width,
|
||||
desired_height,
|
||||
desired_method,
|
||||
desired_type,
|
||||
url_cache=bool(media_info.url_cache),
|
||||
)
|
||||
|
||||
if file_path:
|
||||
await respond_with_file(request, desired_type, file_path)
|
||||
else:
|
||||
logger.warning("Failed to generate thumbnail")
|
||||
raise SynapseError(400, "Failed to generate thumbnail.")
|
||||
|
||||
async def _select_or_generate_remote_thumbnail(
|
||||
self,
|
||||
request: SynapseRequest,
|
||||
server_name: str,
|
||||
media_id: str,
|
||||
desired_width: int,
|
||||
desired_height: int,
|
||||
desired_method: str,
|
||||
desired_type: str,
|
||||
max_timeout_ms: int,
|
||||
) -> None:
|
||||
media_info = await self.media_repo.get_remote_media_info(
|
||||
server_name, media_id, max_timeout_ms
|
||||
)
|
||||
if not media_info:
|
||||
respond_404(request)
|
||||
return
|
||||
|
||||
thumbnail_infos = await self.store.get_remote_media_thumbnails(
|
||||
server_name, media_id
|
||||
)
|
||||
|
||||
file_id = media_info.filesystem_id
|
||||
|
||||
for info in thumbnail_infos:
|
||||
t_w = info.width == desired_width
|
||||
t_h = info.height == desired_height
|
||||
t_method = info.method == desired_method
|
||||
t_type = info.type == desired_type
|
||||
|
||||
if t_w and t_h and t_method and t_type:
|
||||
file_info = FileInfo(
|
||||
server_name=server_name,
|
||||
file_id=file_id,
|
||||
thumbnail=info,
|
||||
)
|
||||
|
||||
responder = await self.media_storage.fetch_media(file_info)
|
||||
if responder:
|
||||
await respond_with_responder(
|
||||
request, responder, info.type, info.length
|
||||
)
|
||||
return
|
||||
|
||||
logger.debug("We don't have a thumbnail of that size. Generating")
|
||||
|
||||
# Okay, so we generate one.
|
||||
file_path = await self.media_repo.generate_remote_exact_thumbnail(
|
||||
server_name,
|
||||
file_id,
|
||||
media_id,
|
||||
desired_width,
|
||||
desired_height,
|
||||
desired_method,
|
||||
desired_type,
|
||||
)
|
||||
|
||||
if file_path:
|
||||
await respond_with_file(request, desired_type, file_path)
|
||||
else:
|
||||
logger.warning("Failed to generate thumbnail")
|
||||
raise SynapseError(400, "Failed to generate thumbnail.")
|
||||
|
||||
async def _respond_remote_thumbnail(
|
||||
self,
|
||||
request: SynapseRequest,
|
||||
server_name: str,
|
||||
media_id: str,
|
||||
width: int,
|
||||
height: int,
|
||||
method: str,
|
||||
m_type: str,
|
||||
max_timeout_ms: int,
|
||||
) -> None:
|
||||
# TODO: Don't download the whole remote file
|
||||
# We should proxy the thumbnail from the remote server instead of
|
||||
# downloading the remote file and generating our own thumbnails.
|
||||
media_info = await self.media_repo.get_remote_media_info(
|
||||
server_name, media_id, max_timeout_ms
|
||||
)
|
||||
if not media_info:
|
||||
return
|
||||
|
||||
thumbnail_infos = await self.store.get_remote_media_thumbnails(
|
||||
server_name, media_id
|
||||
)
|
||||
await self._select_and_respond_with_thumbnail(
|
||||
request,
|
||||
width,
|
||||
height,
|
||||
method,
|
||||
m_type,
|
||||
thumbnail_infos,
|
||||
media_id,
|
||||
media_info.filesystem_id,
|
||||
url_cache=False,
|
||||
server_name=server_name,
|
||||
)
|
||||
|
||||
async def _select_and_respond_with_thumbnail(
|
||||
self,
|
||||
request: SynapseRequest,
|
||||
desired_width: int,
|
||||
desired_height: int,
|
||||
desired_method: str,
|
||||
desired_type: str,
|
||||
thumbnail_infos: List[ThumbnailInfo],
|
||||
media_id: str,
|
||||
file_id: str,
|
||||
url_cache: bool,
|
||||
server_name: Optional[str] = None,
|
||||
) -> None:
|
||||
"""
|
||||
Respond to a request with an appropriate thumbnail from the previously generated thumbnails.
|
||||
|
||||
Args:
|
||||
request: The incoming request.
|
||||
desired_width: The desired width, the returned thumbnail may be larger than this.
|
||||
desired_height: The desired height, the returned thumbnail may be larger than this.
|
||||
desired_method: The desired method used to generate the thumbnail.
|
||||
desired_type: The desired content-type of the thumbnail.
|
||||
thumbnail_infos: A list of thumbnail info of candidate thumbnails.
|
||||
file_id: The ID of the media that a thumbnail is being requested for.
|
||||
url_cache: True if this is from a URL cache.
|
||||
server_name: The server name, if this is a remote thumbnail.
|
||||
"""
|
||||
logger.debug(
|
||||
"_select_and_respond_with_thumbnail: media_id=%s desired=%sx%s (%s) thumbnail_infos=%s",
|
||||
media_id,
|
||||
desired_width,
|
||||
desired_height,
|
||||
desired_method,
|
||||
thumbnail_infos,
|
||||
)
|
||||
|
||||
# If `dynamic_thumbnails` is enabled, we expect Synapse to go down a
|
||||
# different code path to handle it.
|
||||
assert not self.dynamic_thumbnails
|
||||
|
||||
if thumbnail_infos:
|
||||
file_info = self._select_thumbnail(
|
||||
desired_width,
|
||||
desired_height,
|
||||
desired_method,
|
||||
desired_type,
|
||||
thumbnail_infos,
|
||||
file_id,
|
||||
url_cache,
|
||||
server_name,
|
||||
)
|
||||
if not file_info:
|
||||
logger.info("Couldn't find a thumbnail matching the desired inputs")
|
||||
respond_404(request)
|
||||
return
|
||||
|
||||
# The thumbnail property must exist.
|
||||
assert file_info.thumbnail is not None
|
||||
|
||||
responder = await self.media_storage.fetch_media(file_info)
|
||||
if responder:
|
||||
await respond_with_responder(
|
||||
request,
|
||||
responder,
|
||||
file_info.thumbnail.type,
|
||||
file_info.thumbnail.length,
|
||||
)
|
||||
return
|
||||
|
||||
# If we can't find the thumbnail we regenerate it. This can happen
|
||||
# if e.g. we've deleted the thumbnails but still have the original
|
||||
# image somewhere.
|
||||
#
|
||||
# Since we have an entry for the thumbnail in the DB we a) know we
# have successfully generated the thumbnail in the past (so we
|
||||
# don't need to worry about repeatedly failing to generate
|
||||
# thumbnails), and b) have already calculated that appropriate
|
||||
# width/height/method so we can just call the "generate exact"
|
||||
# methods.
|
||||
|
||||
# First let's check that we do actually have the original image
|
||||
# still. This will throw a 404 if we don't.
|
||||
# TODO: We should refetch the thumbnails for remote media.
|
||||
await self.media_storage.ensure_media_is_in_local_cache(
|
||||
FileInfo(server_name, file_id, url_cache=url_cache)
|
||||
)
|
||||
|
||||
if server_name:
|
||||
await self.media_repo.generate_remote_exact_thumbnail(
|
||||
server_name,
|
||||
file_id=file_id,
|
||||
media_id=media_id,
|
||||
t_width=file_info.thumbnail.width,
|
||||
t_height=file_info.thumbnail.height,
|
||||
t_method=file_info.thumbnail.method,
|
||||
t_type=file_info.thumbnail.type,
|
||||
)
|
||||
else:
|
||||
await self.media_repo.generate_local_exact_thumbnail(
|
||||
media_id=media_id,
|
||||
t_width=file_info.thumbnail.width,
|
||||
t_height=file_info.thumbnail.height,
|
||||
t_method=file_info.thumbnail.method,
|
||||
t_type=file_info.thumbnail.type,
|
||||
url_cache=url_cache,
|
||||
)
|
||||
|
||||
responder = await self.media_storage.fetch_media(file_info)
|
||||
await respond_with_responder(
|
||||
request,
|
||||
responder,
|
||||
file_info.thumbnail.type,
|
||||
file_info.thumbnail.length,
|
||||
)
|
||||
else:
|
||||
# This might be because:
|
||||
# 1. We can't create thumbnails for the given media (corrupted or
|
||||
# unsupported file type), or
|
||||
# 2. The thumbnailing process never ran or errored out initially
|
||||
# when the media was first uploaded (these bugs should be
|
||||
# reported and fixed).
|
||||
# Note that we don't attempt to generate a thumbnail now because
|
||||
# `dynamic_thumbnails` is disabled.
|
||||
logger.info("Failed to find any generated thumbnails")
|
||||
|
||||
assert request.path is not None
|
||||
respond_with_json(
|
||||
request,
|
||||
400,
|
||||
cs_error(
|
||||
"Cannot find any thumbnails for the requested media ('%s'). This might mean the media is not a supported_media_format=(%s) or that thumbnailing failed for some other reason. (Dynamic thumbnails are disabled on this server.)"
|
||||
% (
|
||||
request.path.decode(),
|
||||
", ".join(THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP.keys()),
|
||||
),
|
||||
code=Codes.UNKNOWN,
|
||||
),
|
||||
send_cors=True,
|
||||
)
|
||||
|
||||
def _select_thumbnail(
|
||||
self,
|
||||
desired_width: int,
|
||||
desired_height: int,
|
||||
desired_method: str,
|
||||
desired_type: str,
|
||||
thumbnail_infos: List[ThumbnailInfo],
|
||||
file_id: str,
|
||||
url_cache: bool,
|
||||
server_name: Optional[str],
|
||||
) -> Optional[FileInfo]:
|
||||
"""
|
||||
Choose an appropriate thumbnail from the previously generated thumbnails.
|
||||
|
||||
Args:
|
||||
desired_width: The desired width, the returned thumbnail may be larger than this.
|
||||
desired_height: The desired height, the returned thumbnail may be larger than this.
|
||||
desired_method: The desired method used to generate the thumbnail.
|
||||
desired_type: The desired content-type of the thumbnail.
|
||||
thumbnail_infos: A list of thumbnail infos of candidate thumbnails.
|
||||
file_id: The ID of the media that a thumbnail is being requested for.
|
||||
url_cache: True if this is from a URL cache.
|
||||
server_name: The server name, if this is a remote thumbnail.
|
||||
|
||||
Returns:
|
||||
The thumbnail which best matches the desired parameters.
|
||||
"""
|
||||
desired_method = desired_method.lower()
|
||||
|
||||
# The chosen thumbnail.
|
||||
thumbnail_info = None
|
||||
|
||||
d_w = desired_width
|
||||
d_h = desired_height
|
||||
|
||||
if desired_method == "crop":
|
||||
# Thumbnails that match equal or larger sizes of desired width/height.
|
||||
crop_info_list: List[
|
||||
Tuple[int, int, int, bool, Optional[int], ThumbnailInfo]
|
||||
] = []
|
||||
# Other thumbnails.
|
||||
crop_info_list2: List[
|
||||
Tuple[int, int, int, bool, Optional[int], ThumbnailInfo]
|
||||
] = []
|
||||
for info in thumbnail_infos:
|
||||
# Skip thumbnails generated with different methods.
|
||||
if info.method != "crop":
|
||||
continue
|
||||
|
||||
t_w = info.width
|
||||
t_h = info.height
|
||||
aspect_quality = abs(d_w * t_h - d_h * t_w)
|
||||
min_quality = 0 if d_w <= t_w and d_h <= t_h else 1
|
||||
size_quality = abs((d_w - t_w) * (d_h - t_h))
|
||||
type_quality = desired_type != info.type
|
||||
length_quality = info.length
|
||||
if t_w >= d_w or t_h >= d_h:
|
||||
crop_info_list.append(
|
||||
(
|
||||
aspect_quality,
|
||||
min_quality,
|
||||
size_quality,
|
||||
type_quality,
|
||||
length_quality,
|
||||
info,
|
||||
)
|
||||
)
|
||||
else:
|
||||
crop_info_list2.append(
|
||||
(
|
||||
aspect_quality,
|
||||
min_quality,
|
||||
size_quality,
|
||||
type_quality,
|
||||
length_quality,
|
||||
info,
|
||||
)
|
||||
)
|
||||
# Pick the most appropriate thumbnail. Some values of `desired_width` and
|
||||
# `desired_height` may result in a tie, in which case we avoid comparing on
|
||||
# the thumbnail info and pick the thumbnail that appears earlier
|
||||
# in the list of candidates.
|
||||
if crop_info_list:
|
||||
thumbnail_info = min(crop_info_list, key=lambda t: t[:-1])[-1]
|
||||
elif crop_info_list2:
|
||||
thumbnail_info = min(crop_info_list2, key=lambda t: t[:-1])[-1]
|
||||
elif desired_method == "scale":
|
||||
# Thumbnails that match equal or larger sizes of desired width/height.
|
||||
info_list: List[Tuple[int, bool, int, ThumbnailInfo]] = []
|
||||
# Other thumbnails.
|
||||
info_list2: List[Tuple[int, bool, int, ThumbnailInfo]] = []
|
||||
|
||||
for info in thumbnail_infos:
|
||||
# Skip thumbnails generated with different methods.
|
||||
if info.method != "scale":
|
||||
continue
|
||||
|
||||
t_w = info.width
|
||||
t_h = info.height
|
||||
size_quality = abs((d_w - t_w) * (d_h - t_h))
|
||||
type_quality = desired_type != info.type
|
||||
length_quality = info.length
|
||||
if t_w >= d_w or t_h >= d_h:
|
||||
info_list.append((size_quality, type_quality, length_quality, info))
|
||||
else:
|
||||
info_list2.append(
|
||||
(size_quality, type_quality, length_quality, info)
|
||||
)
|
||||
# Pick the most appropriate thumbnail. Some values of `desired_width` and
|
||||
# `desired_height` may result in a tie, in which case we avoid comparing on
|
||||
# the thumbnail info and pick the thumbnail that appears earlier
|
||||
# in the list of candidates.
|
||||
if info_list:
|
||||
thumbnail_info = min(info_list, key=lambda t: t[:-1])[-1]
|
||||
elif info_list2:
|
||||
thumbnail_info = min(info_list2, key=lambda t: t[:-1])[-1]
|
||||
|
||||
if thumbnail_info:
|
||||
return FileInfo(
|
||||
file_id=file_id,
|
||||
url_cache=url_cache,
|
||||
server_name=server_name,
|
||||
thumbnail=thumbnail_info,
|
||||
)
|
||||
|
||||
# No matching thumbnail was found.
|
||||
return None
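
# Illustrative sketch (not from the upstream change): the selection logic above
# ranks candidates by the quality-tuple prefix only (`key=lambda t: t[:-1]`),
# so equal-quality candidates tie and `min()` keeps whichever appears first in
# the candidate list. The flat tuples below are stand-ins for the real ones.
def _thumbnail_tie_break_example() -> str:
    candidates = [
        (0, 0, 100, False, 2048, "thumbnail-a"),
        (0, 0, 100, False, 2048, "thumbnail-b"),  # ties with thumbnail-a
        (0, 1, 400, True, 4096, "thumbnail-c"),
    ]
    chosen = min(candidates, key=lambda t: t[:-1])[-1]
    assert chosen == "thumbnail-a"
    return chosen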
@@ -109,6 +109,7 @@ from synapse.handlers.room_summary import RoomSummaryHandler
|
||||
from synapse.handlers.search import SearchHandler
|
||||
from synapse.handlers.send_email import SendEmailHandler
|
||||
from synapse.handlers.set_password import SetPasswordHandler
|
||||
from synapse.handlers.sliding_sync import SlidingSyncHandler
|
||||
from synapse.handlers.sso import SsoHandler
|
||||
from synapse.handlers.stats import StatsHandler
|
||||
from synapse.handlers.sync import SyncHandler
|
||||
@@ -554,6 +555,9 @@ class HomeServer(metaclass=abc.ABCMeta):
|
||||
def get_sync_handler(self) -> SyncHandler:
|
||||
return SyncHandler(self)
|
||||
|
||||
def get_sliding_sync_handler(self) -> SlidingSyncHandler:
|
||||
return SlidingSyncHandler(self)
|
||||
|
||||
@cache_in_self
|
||||
def get_room_list_handler(self) -> RoomListHandler:
|
||||
return RoomListHandler(self)
|
||||
|
||||
@@ -2461,7 +2461,11 @@ class DatabasePool:
|
||||
|
||||
|
||||
def make_in_list_sql_clause(
|
||||
database_engine: BaseDatabaseEngine, column: str, iterable: Collection[Any]
|
||||
database_engine: BaseDatabaseEngine,
|
||||
column: str,
|
||||
iterable: Collection[Any],
|
||||
*,
|
||||
negative: bool = False,
|
||||
) -> Tuple[str, list]:
|
||||
"""Returns an SQL clause that checks the given column is in the iterable.
|
||||
|
||||
@@ -2474,6 +2478,7 @@ def make_in_list_sql_clause(
|
||||
database_engine
|
||||
column: Name of the column
|
||||
iterable: The values to check the column against.
|
||||
negative: Whether we should check for inequality, i.e. `NOT IN`
|
||||
|
||||
Returns:
|
||||
A tuple of SQL query and the args
|
||||
@@ -2482,9 +2487,19 @@ def make_in_list_sql_clause(
|
||||
if database_engine.supports_using_any_list:
|
||||
# This should hopefully be faster, but also makes postgres query
|
||||
# stats easier to understand.
|
||||
return "%s = ANY(?)" % (column,), [list(iterable)]
|
||||
if not negative:
|
||||
clause = f"{column} = ANY(?)"
|
||||
else:
|
||||
clause = f"{column} != ALL(?)"
|
||||
|
||||
return clause, [list(iterable)]
|
||||
else:
|
||||
return "%s IN (%s)" % (column, ",".join("?" for _ in iterable)), list(iterable)
|
||||
params = ",".join("?" for _ in iterable)
|
||||
if not negative:
|
||||
clause = f"{column} IN ({params})"
|
||||
else:
|
||||
clause = f"{column} NOT IN ({params})"
|
||||
return clause, list(iterable)
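
# Illustrative sketch (not from the upstream change): what the new `negative`
# flag produces; `database_engine` stands in for whatever engine is in scope.
def _not_in_clause_example(database_engine: BaseDatabaseEngine) -> Tuple[str, list]:
    clause, args = make_in_list_sql_clause(
        database_engine, "destination", ["a.example", "b.example"], negative=True
    )
    # SQLite:   clause == "destination NOT IN (?,?)",  args == ["a.example", "b.example"]
    # Postgres: clause == "destination != ALL(?)",     args == [["a.example", "b.example"]]
    return clause, args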
# These overloads ensure that `columns` and `iterable` values have the same length.
|
||||
|
||||
@@ -43,11 +43,9 @@ from synapse.storage.database import (
|
||||
)
|
||||
from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
|
||||
from synapse.storage.databases.main.push_rule import PushRulesWorkerStore
|
||||
from synapse.storage.engines import PostgresEngine
|
||||
from synapse.storage.util.id_generators import (
|
||||
AbstractStreamIdGenerator,
|
||||
MultiWriterIdGenerator,
|
||||
StreamIdGenerator,
|
||||
)
|
||||
from synapse.types import JsonDict, JsonMapping
|
||||
from synapse.util import json_encoder
|
||||
@@ -75,37 +73,20 @@ class AccountDataWorkerStore(PushRulesWorkerStore, CacheInvalidationWorkerStore)
|
||||
|
||||
self._account_data_id_gen: AbstractStreamIdGenerator
|
||||
|
||||
if isinstance(database.engine, PostgresEngine):
|
||||
self._account_data_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="account_data",
|
||||
instance_name=self._instance_name,
|
||||
tables=[
|
||||
("room_account_data", "instance_name", "stream_id"),
|
||||
("room_tags_revisions", "instance_name", "stream_id"),
|
||||
("account_data", "instance_name", "stream_id"),
|
||||
],
|
||||
sequence_name="account_data_sequence",
|
||||
writers=hs.config.worker.writers.account_data,
|
||||
)
|
||||
else:
|
||||
# Multiple writers are not supported for SQLite.
|
||||
#
|
||||
# We shouldn't be running in worker mode with SQLite, but its useful
|
||||
# to support it for unit tests.
|
||||
self._account_data_id_gen = StreamIdGenerator(
|
||||
db_conn,
|
||||
hs.get_replication_notifier(),
|
||||
"room_account_data",
|
||||
"stream_id",
|
||||
extra_tables=[
|
||||
("account_data", "stream_id"),
|
||||
("room_tags_revisions", "stream_id"),
|
||||
],
|
||||
is_writer=self._instance_name in hs.config.worker.writers.account_data,
|
||||
)
|
||||
self._account_data_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="account_data",
|
||||
instance_name=self._instance_name,
|
||||
tables=[
|
||||
("room_account_data", "instance_name", "stream_id"),
|
||||
("room_tags_revisions", "instance_name", "stream_id"),
|
||||
("account_data", "instance_name", "stream_id"),
|
||||
],
|
||||
sequence_name="account_data_sequence",
|
||||
writers=hs.config.worker.writers.account_data,
|
||||
)
|
||||
|
||||
account_max = self.get_max_account_data_stream_id()
|
||||
self._account_data_stream_cache = StreamChangeCache(
|
||||
|
||||
@@ -318,7 +318,13 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
|
||||
self._invalidate_local_get_event_cache(redacts) # type: ignore[attr-defined]
|
||||
# Caches which might leak edits must be invalidated for the event being
|
||||
# redacted.
|
||||
self._attempt_to_invalidate_cache("get_relations_for_event", (redacts,))
|
||||
self._attempt_to_invalidate_cache(
|
||||
"get_relations_for_event",
|
||||
(
|
||||
room_id,
|
||||
redacts,
|
||||
),
|
||||
)
|
||||
self._attempt_to_invalidate_cache("get_applicable_edit", (redacts,))
|
||||
self._attempt_to_invalidate_cache("get_thread_id", (redacts,))
|
||||
self._attempt_to_invalidate_cache("get_thread_id_for_receipts", (redacts,))
|
||||
@@ -345,7 +351,13 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
|
||||
)
|
||||
|
||||
if relates_to:
|
||||
self._attempt_to_invalidate_cache("get_relations_for_event", (relates_to,))
|
||||
self._attempt_to_invalidate_cache(
|
||||
"get_relations_for_event",
|
||||
(
|
||||
room_id,
|
||||
relates_to,
|
||||
),
|
||||
)
|
||||
self._attempt_to_invalidate_cache("get_references_for_event", (relates_to,))
|
||||
self._attempt_to_invalidate_cache("get_applicable_edit", (relates_to,))
|
||||
self._attempt_to_invalidate_cache("get_thread_summary", (relates_to,))
|
||||
@@ -380,9 +392,9 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
|
||||
self._attempt_to_invalidate_cache(
|
||||
"get_unread_event_push_actions_by_room_for_user", (room_id,)
|
||||
)
|
||||
self._attempt_to_invalidate_cache("get_relations_for_event", (room_id,))
|
||||
|
||||
self._attempt_to_invalidate_cache("_get_membership_from_event_id", None)
|
||||
self._attempt_to_invalidate_cache("get_relations_for_event", None)
|
||||
self._attempt_to_invalidate_cache("get_applicable_edit", None)
|
||||
self._attempt_to_invalidate_cache("get_thread_id", None)
|
||||
self._attempt_to_invalidate_cache("get_thread_id_for_receipts", None)
|
||||
|
||||
@@ -50,16 +50,15 @@ from synapse.storage.database import (
|
||||
LoggingTransaction,
|
||||
make_in_list_sql_clause,
|
||||
)
|
||||
from synapse.storage.engines import PostgresEngine
|
||||
from synapse.storage.util.id_generators import (
|
||||
AbstractStreamIdGenerator,
|
||||
MultiWriterIdGenerator,
|
||||
StreamIdGenerator,
|
||||
)
|
||||
from synapse.types import JsonDict
|
||||
from synapse.util import json_encoder
|
||||
from synapse.util.caches.expiringcache import ExpiringCache
|
||||
from synapse.util.caches.stream_change_cache import StreamChangeCache
|
||||
from synapse.util.stringutils import parse_and_validate_server_name
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
@@ -89,35 +88,23 @@ class DeviceInboxWorkerStore(SQLBaseStore):
|
||||
expiry_ms=30 * 60 * 1000,
|
||||
)
|
||||
|
||||
if isinstance(database.engine, PostgresEngine):
|
||||
self._can_write_to_device = (
|
||||
self._instance_name in hs.config.worker.writers.to_device
|
||||
)
|
||||
self._can_write_to_device = (
|
||||
self._instance_name in hs.config.worker.writers.to_device
|
||||
)
|
||||
|
||||
self._to_device_msg_id_gen: AbstractStreamIdGenerator = (
|
||||
MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="to_device",
|
||||
instance_name=self._instance_name,
|
||||
tables=[
|
||||
("device_inbox", "instance_name", "stream_id"),
|
||||
("device_federation_outbox", "instance_name", "stream_id"),
|
||||
],
|
||||
sequence_name="device_inbox_sequence",
|
||||
writers=hs.config.worker.writers.to_device,
|
||||
)
|
||||
)
|
||||
else:
|
||||
self._can_write_to_device = True
|
||||
self._to_device_msg_id_gen = StreamIdGenerator(
|
||||
db_conn,
|
||||
hs.get_replication_notifier(),
|
||||
"device_inbox",
|
||||
"stream_id",
|
||||
extra_tables=[("device_federation_outbox", "stream_id")],
|
||||
)
|
||||
self._to_device_msg_id_gen: AbstractStreamIdGenerator = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="to_device",
|
||||
instance_name=self._instance_name,
|
||||
tables=[
|
||||
("device_inbox", "instance_name", "stream_id"),
|
||||
("device_federation_outbox", "instance_name", "stream_id"),
|
||||
],
|
||||
sequence_name="device_inbox_sequence",
|
||||
writers=hs.config.worker.writers.to_device,
|
||||
)
|
||||
|
||||
max_device_inbox_id = self._to_device_msg_id_gen.get_current_token()
|
||||
device_inbox_prefill, min_device_inbox_id = self.db_pool.get_cache_dict(
|
||||
@@ -978,6 +965,7 @@ class DeviceInboxWorkerStore(SQLBaseStore):
|
||||
class DeviceInboxBackgroundUpdateStore(SQLBaseStore):
|
||||
DEVICE_INBOX_STREAM_ID = "device_inbox_stream_drop"
|
||||
REMOVE_DEAD_DEVICES_FROM_INBOX = "remove_dead_devices_from_device_inbox"
|
||||
CLEANUP_DEVICE_FEDERATION_OUTBOX = "cleanup_device_federation_outbox"
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
@@ -1003,6 +991,11 @@ class DeviceInboxBackgroundUpdateStore(SQLBaseStore):
|
||||
self._remove_dead_devices_from_device_inbox,
|
||||
)
|
||||
|
||||
self.db_pool.updates.register_background_update_handler(
|
||||
self.CLEANUP_DEVICE_FEDERATION_OUTBOX,
|
||||
self._cleanup_device_federation_outbox,
|
||||
)
|
||||
|
||||
async def _background_drop_index_device_inbox(
|
||||
self, progress: JsonDict, batch_size: int
|
||||
) -> int:
|
||||
@@ -1094,6 +1087,75 @@ class DeviceInboxBackgroundUpdateStore(SQLBaseStore):
|
||||
|
||||
return batch_size
|
||||
|
||||
async def _cleanup_device_federation_outbox(
|
||||
self,
|
||||
progress: JsonDict,
|
||||
batch_size: int,
|
||||
) -> int:
|
||||
def _cleanup_device_federation_outbox_txn(
|
||||
txn: LoggingTransaction,
|
||||
) -> bool:
|
||||
if "max_stream_id" in progress:
|
||||
max_stream_id = progress["max_stream_id"]
|
||||
else:
|
||||
txn.execute("SELECT max(stream_id) FROM device_federation_outbox")
|
||||
res = cast(Tuple[Optional[int]], txn.fetchone())
|
||||
if res[0] is None:
|
||||
# this can only happen if the `device_federation_outbox` table is empty, in which
|
||||
# case we have no work to do.
|
||||
return True
|
||||
else:
|
||||
max_stream_id = res[0]
|
||||
|
||||
start = progress.get("stream_id", 0)
|
||||
stop = start + batch_size
|
||||
|
||||
sql = """
|
||||
SELECT destination FROM device_federation_outbox
|
||||
WHERE ? < stream_id AND stream_id <= ?
|
||||
"""
|
||||
|
||||
txn.execute(sql, (start, stop))
|
||||
|
||||
destinations = {d for d, in txn}
|
||||
to_remove = set()
|
||||
for d in destinations:
|
||||
try:
|
||||
parse_and_validate_server_name(d)
|
||||
except ValueError:
|
||||
to_remove.add(d)
|
||||
|
||||
self.db_pool.simple_delete_many_txn(
|
||||
txn,
|
||||
table="device_federation_outbox",
|
||||
column="destination",
|
||||
values=to_remove,
|
||||
keyvalues={},
|
||||
)
|
||||
|
||||
self.db_pool.updates._background_update_progress_txn(
|
||||
txn,
|
||||
self.CLEANUP_DEVICE_FEDERATION_OUTBOX,
|
||||
{
|
||||
"stream_id": stop,
|
||||
"max_stream_id": max_stream_id,
|
||||
},
|
||||
)
|
||||
|
||||
return stop >= max_stream_id
|
||||
|
||||
finished = await self.db_pool.runInteraction(
|
||||
"_cleanup_device_federation_outbox",
|
||||
_cleanup_device_federation_outbox_txn,
|
||||
)
|
||||
|
||||
if finished:
|
||||
await self.db_pool.updates._end_background_update(
|
||||
self.CLEANUP_DEVICE_FEDERATION_OUTBOX,
|
||||
)
|
||||
|
||||
return batch_size
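
# Illustrative sketch (not from the upstream change): the per-destination check
# the background update above applies; anything that fails server-name parsing
# is queued for deletion. Example values are placeholders.
def _is_valid_destination_example(destination: str) -> bool:
    try:
        parse_and_validate_server_name(destination)
        return True  # e.g. "matrix.org"
    except ValueError:
        return False  # e.g. "not a server name"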
class DeviceInboxStore(DeviceInboxWorkerStore, DeviceInboxBackgroundUpdateStore):
|
||||
pass
|
||||
|
||||
@@ -57,10 +57,7 @@ from synapse.storage.database import (
|
||||
from synapse.storage.databases.main.end_to_end_keys import EndToEndKeyWorkerStore
|
||||
from synapse.storage.databases.main.roommember import RoomMemberWorkerStore
|
||||
from synapse.storage.types import Cursor
|
||||
from synapse.storage.util.id_generators import (
|
||||
AbstractStreamIdGenerator,
|
||||
StreamIdGenerator,
|
||||
)
|
||||
from synapse.storage.util.id_generators import MultiWriterIdGenerator
|
||||
from synapse.types import (
|
||||
JsonDict,
|
||||
JsonMapping,
|
||||
@@ -99,19 +96,21 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
|
||||
|
||||
# In the worker store this is an ID tracker which we overwrite in the non-worker
|
||||
# class below that is used on the main process.
|
||||
self._device_list_id_gen = StreamIdGenerator(
|
||||
db_conn,
|
||||
hs.get_replication_notifier(),
|
||||
"device_lists_stream",
|
||||
"stream_id",
|
||||
extra_tables=[
|
||||
("user_signature_stream", "stream_id"),
|
||||
("device_lists_outbound_pokes", "stream_id"),
|
||||
("device_lists_changes_in_room", "stream_id"),
|
||||
("device_lists_remote_pending", "stream_id"),
|
||||
("device_lists_changes_converted_stream_position", "stream_id"),
|
||||
self._device_list_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="device_lists_stream",
|
||||
instance_name=self._instance_name,
|
||||
tables=[
|
||||
("device_lists_stream", "instance_name", "stream_id"),
|
||||
("user_signature_stream", "instance_name", "stream_id"),
|
||||
("device_lists_outbound_pokes", "instance_name", "stream_id"),
|
||||
("device_lists_changes_in_room", "instance_name", "stream_id"),
|
||||
("device_lists_remote_pending", "instance_name", "stream_id"),
|
||||
],
|
||||
is_writer=hs.config.worker.worker_app is None,
|
||||
sequence_name="device_lists_sequence",
|
||||
writers=["master"],
|
||||
)
|
||||
|
||||
device_list_max = self._device_list_id_gen.get_current_token()
|
||||
@@ -762,6 +761,7 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
|
||||
"stream_id": stream_id,
|
||||
"from_user_id": from_user_id,
|
||||
"user_ids": json_encoder.encode(user_ids),
|
||||
"instance_name": self._instance_name,
|
||||
},
|
||||
)
|
||||
|
||||
@@ -1582,6 +1582,8 @@ class DeviceBackgroundUpdateStore(SQLBaseStore):
|
||||
):
|
||||
super().__init__(database, db_conn, hs)
|
||||
|
||||
self._instance_name = hs.get_instance_name()
|
||||
|
||||
self.db_pool.updates.register_background_index_update(
|
||||
"device_lists_stream_idx",
|
||||
index_name="device_lists_stream_user_id",
|
||||
@@ -1694,6 +1696,7 @@ class DeviceBackgroundUpdateStore(SQLBaseStore):
|
||||
"device_lists_outbound_pokes",
|
||||
{
|
||||
"stream_id": stream_id,
|
||||
"instance_name": self._instance_name,
|
||||
"destination": destination,
|
||||
"user_id": user_id,
|
||||
"device_id": device_id,
|
||||
@@ -1730,10 +1733,6 @@ class DeviceBackgroundUpdateStore(SQLBaseStore):
|
||||
|
||||
|
||||
class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
|
||||
# Because we have write access, this will be a StreamIdGenerator
|
||||
# (see DeviceWorkerStore.__init__)
|
||||
_device_list_id_gen: AbstractStreamIdGenerator
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
database: DatabasePool,
|
||||
@@ -2092,9 +2091,9 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
|
||||
self.db_pool.simple_insert_many_txn(
|
||||
txn,
|
||||
table="device_lists_stream",
|
||||
keys=("stream_id", "user_id", "device_id"),
|
||||
keys=("instance_name", "stream_id", "user_id", "device_id"),
|
||||
values=[
|
||||
(stream_id, user_id, device_id)
|
||||
(self._instance_name, stream_id, user_id, device_id)
|
||||
for stream_id, device_id in zip(stream_ids, device_ids)
|
||||
],
|
||||
)
|
||||
@@ -2124,6 +2123,7 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
|
||||
values = [
|
||||
(
|
||||
destination,
|
||||
self._instance_name,
|
||||
next(stream_id_iterator),
|
||||
user_id,
|
||||
device_id,
|
||||
@@ -2139,6 +2139,7 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
|
||||
table="device_lists_outbound_pokes",
|
||||
keys=(
|
||||
"destination",
|
||||
"instance_name",
|
||||
"stream_id",
|
||||
"user_id",
|
||||
"device_id",
|
||||
@@ -2157,7 +2158,7 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
|
||||
device_id,
|
||||
{
|
||||
stream_id: destination
|
||||
for (destination, stream_id, _, _, _, _, _) in values
|
||||
for (destination, _, stream_id, _, _, _, _, _) in values
|
||||
},
|
||||
)
|
||||
|
||||
@@ -2210,6 +2211,7 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
|
||||
"device_id",
|
||||
"room_id",
|
||||
"stream_id",
|
||||
"instance_name",
|
||||
"converted_to_destinations",
|
||||
"opentracing_context",
|
||||
),
|
||||
@@ -2219,6 +2221,7 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
|
||||
device_id,
|
||||
room_id,
|
||||
stream_id,
|
||||
self._instance_name,
|
||||
# We only need to calculate outbound pokes for local users
|
||||
not self.hs.is_mine_id(user_id),
|
||||
encoded_context,
|
||||
@@ -2338,7 +2341,10 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
|
||||
"user_id": user_id,
|
||||
"device_id": device_id,
|
||||
},
|
||||
values={"stream_id": stream_id},
|
||||
values={
|
||||
"stream_id": stream_id,
|
||||
"instance_name": self._instance_name,
|
||||
},
|
||||
desc="add_remote_device_list_to_pending",
|
||||
)
|
||||
|
||||
|
||||
@@ -58,7 +58,7 @@ from synapse.storage.database import (
|
||||
)
|
||||
from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
|
||||
from synapse.storage.engines import PostgresEngine
|
||||
from synapse.storage.util.id_generators import StreamIdGenerator
|
||||
from synapse.storage.util.id_generators import MultiWriterIdGenerator
|
||||
from synapse.types import JsonDict, JsonMapping
|
||||
from synapse.util import json_decoder, json_encoder
|
||||
from synapse.util.caches.descriptors import cached, cachedList
|
||||
@@ -1448,11 +1448,17 @@ class EndToEndKeyStore(EndToEndKeyWorkerStore, SQLBaseStore):
|
||||
):
|
||||
super().__init__(database, db_conn, hs)
|
||||
|
||||
self._cross_signing_id_gen = StreamIdGenerator(
|
||||
db_conn,
|
||||
hs.get_replication_notifier(),
|
||||
"e2e_cross_signing_keys",
|
||||
"stream_id",
|
||||
self._cross_signing_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="e2e_cross_signing_keys",
|
||||
instance_name=self._instance_name,
|
||||
tables=[
|
||||
("e2e_cross_signing_keys", "instance_name", "stream_id"),
|
||||
],
|
||||
sequence_name="e2e_cross_signing_keys_sequence",
|
||||
writers=["master"],
|
||||
)
|
||||
|
||||
async def set_e2e_device_keys(
|
||||
@@ -1627,6 +1633,7 @@ class EndToEndKeyStore(EndToEndKeyWorkerStore, SQLBaseStore):
|
||||
"keytype": key_type,
|
||||
"keydata": json_encoder.encode(key),
|
||||
"stream_id": stream_id,
|
||||
"instance_name": self._instance_name,
|
||||
},
|
||||
)
|
||||
|
||||
|
||||
@@ -95,6 +95,10 @@ class DeltaState:
|
||||
to_insert: StateMap[str]
|
||||
no_longer_in_room: bool = False
|
||||
|
||||
def is_noop(self) -> bool:
|
||||
"""Whether this state delta is actually empty"""
|
||||
return not self.to_delete and not self.to_insert and not self.no_longer_in_room
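
# Illustrative sketch (not from the upstream change), assuming DeltaState is an
# attrs-style class accepting these fields by keyword: an empty delta reports
# itself as a no-op, which the change further below uses to skip the
# current-state update entirely.
def _delta_state_noop_example() -> bool:
    empty = DeltaState(to_delete=[], to_insert={}, no_longer_in_room=False)
    return empty.is_noop()  # True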
class PersistEventsStore:
|
||||
"""Contains all the functions for writing events to the database.
|
||||
@@ -1017,6 +1021,9 @@ class PersistEventsStore:
|
||||
) -> None:
|
||||
"""Update the current state stored in the datatabase for the given room"""
|
||||
|
||||
if state_delta.is_noop():
|
||||
return
|
||||
|
||||
async with self._stream_id_gen.get_next() as stream_ordering:
|
||||
await self.db_pool.runInteraction(
|
||||
"update_current_state",
|
||||
@@ -1923,7 +1930,12 @@ class PersistEventsStore:
|
||||
|
||||
# Any relation information for the related event must be cleared.
|
||||
self.store._invalidate_cache_and_stream(
|
||||
txn, self.store.get_relations_for_event, (redacted_relates_to,)
|
||||
txn,
|
||||
self.store.get_relations_for_event,
|
||||
(
|
||||
room_id,
|
||||
redacted_relates_to,
|
||||
),
|
||||
)
|
||||
if rel_type == RelationTypes.REFERENCE:
|
||||
self.store._invalidate_cache_and_stream(
|
||||
|
||||
@@ -1181,7 +1181,7 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
|
||||
|
||||
results = list(txn)
|
||||
# (event_id, parent_id, rel_type) for each relation
|
||||
relations_to_insert: List[Tuple[str, str, str]] = []
|
||||
relations_to_insert: List[Tuple[str, str, str, str]] = []
|
||||
for event_id, event_json_raw in results:
|
||||
try:
|
||||
event_json = db_to_json(event_json_raw)
|
||||
@@ -1214,7 +1214,8 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
|
||||
if not isinstance(parent_id, str):
|
||||
continue
|
||||
|
||||
relations_to_insert.append((event_id, parent_id, rel_type))
|
||||
room_id = event_json["room_id"]
|
||||
relations_to_insert.append((room_id, event_id, parent_id, rel_type))
|
||||
|
||||
# Insert the missing data, note that we upsert here in case the event
|
||||
# has already been processed.
|
||||
@@ -1223,18 +1224,27 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
|
||||
txn=txn,
|
||||
table="event_relations",
|
||||
key_names=("event_id",),
|
||||
key_values=[(r[0],) for r in relations_to_insert],
|
||||
key_values=[(r[1],) for r in relations_to_insert],
|
||||
value_names=("relates_to_id", "relation_type"),
|
||||
value_values=[r[1:] for r in relations_to_insert],
|
||||
value_values=[r[2:] for r in relations_to_insert],
|
||||
)
|
||||
|
||||
# Iterate the parent IDs and invalidate caches.
|
||||
cache_tuples = {(r[1],) for r in relations_to_insert}
|
||||
self._invalidate_cache_and_stream_bulk( # type: ignore[attr-defined]
|
||||
txn, self.get_relations_for_event, cache_tuples # type: ignore[attr-defined]
|
||||
txn,
|
||||
self.get_relations_for_event, # type: ignore[attr-defined]
|
||||
{
|
||||
(
|
||||
r[0], # room_id
|
||||
r[2], # parent_id
|
||||
)
|
||||
for r in relations_to_insert
|
||||
},
|
||||
)
|
||||
self._invalidate_cache_and_stream_bulk( # type: ignore[attr-defined]
|
||||
txn, self.get_thread_summary, cache_tuples # type: ignore[attr-defined]
|
||||
txn,
|
||||
self.get_thread_summary, # type: ignore[attr-defined]
|
||||
{(r[1],) for r in relations_to_insert},
|
||||
)
|
||||
|
||||
if results:
|
||||
|
||||
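
Because the old and new lines are interleaved above, the new tuple layout is easier to see in isolation. A small illustrative breakdown (values are made up):

```python
# Each entry is now (room_id, event_id, parent_id, rel_type); the upsert is
# still keyed on event_id, and get_relations_for_event is invalidated on
# (room_id, parent_id), as in the diff above.
relations_to_insert = [
    ("!room:example.org", "$event", "$parent", "m.reference"),
]
upsert_keys = [(r[1],) for r in relations_to_insert]    # [("$event",)]
upsert_values = [r[2:] for r in relations_to_insert]    # [("$parent", "m.reference")]
relations_cache_keys = {(r[0], r[2]) for r in relations_to_insert}
# -> {("!room:example.org", "$parent")}
```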
@@ -75,12 +75,10 @@ from synapse.storage.database import (
|
||||
LoggingDatabaseConnection,
|
||||
LoggingTransaction,
|
||||
)
|
||||
from synapse.storage.engines import PostgresEngine
|
||||
from synapse.storage.types import Cursor
|
||||
from synapse.storage.util.id_generators import (
|
||||
AbstractStreamIdGenerator,
|
||||
MultiWriterIdGenerator,
|
||||
StreamIdGenerator,
|
||||
)
|
||||
from synapse.storage.util.sequence import build_sequence_generator
|
||||
from synapse.types import JsonDict, get_domain_from_id
|
||||
@@ -195,51 +193,35 @@ class EventsWorkerStore(SQLBaseStore):
|
||||
|
||||
self._stream_id_gen: AbstractStreamIdGenerator
|
||||
self._backfill_id_gen: AbstractStreamIdGenerator
|
||||
if isinstance(database.engine, PostgresEngine):
|
||||
# If we're using Postgres than we can use `MultiWriterIdGenerator`
|
||||
# regardless of whether this process writes to the streams or not.
|
||||
self._stream_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="events",
|
||||
instance_name=hs.get_instance_name(),
|
||||
tables=[("events", "instance_name", "stream_ordering")],
|
||||
sequence_name="events_stream_seq",
|
||||
writers=hs.config.worker.writers.events,
|
||||
)
|
||||
self._backfill_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="backfill",
|
||||
instance_name=hs.get_instance_name(),
|
||||
tables=[("events", "instance_name", "stream_ordering")],
|
||||
sequence_name="events_backfill_stream_seq",
|
||||
positive=False,
|
||||
writers=hs.config.worker.writers.events,
|
||||
)
|
||||
else:
|
||||
# Multiple writers are not supported for SQLite.
|
||||
#
|
||||
# We shouldn't be running in worker mode with SQLite, but its useful
|
||||
# to support it for unit tests.
|
||||
self._stream_id_gen = StreamIdGenerator(
|
||||
db_conn,
|
||||
hs.get_replication_notifier(),
|
||||
"events",
|
||||
"stream_ordering",
|
||||
is_writer=hs.get_instance_name() in hs.config.worker.writers.events,
|
||||
)
|
||||
self._backfill_id_gen = StreamIdGenerator(
|
||||
db_conn,
|
||||
hs.get_replication_notifier(),
|
||||
"events",
|
||||
"stream_ordering",
|
||||
step=-1,
|
||||
extra_tables=[("ex_outlier_stream", "event_stream_ordering")],
|
||||
is_writer=hs.get_instance_name() in hs.config.worker.writers.events,
|
||||
)
|
||||
|
||||
self._stream_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="events",
|
||||
instance_name=hs.get_instance_name(),
|
||||
tables=[
|
||||
("events", "instance_name", "stream_ordering"),
|
||||
("current_state_delta_stream", "instance_name", "stream_id"),
|
||||
("ex_outlier_stream", "instance_name", "event_stream_ordering"),
|
||||
],
|
||||
sequence_name="events_stream_seq",
|
||||
writers=hs.config.worker.writers.events,
|
||||
)
|
||||
self._backfill_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="backfill",
|
||||
instance_name=hs.get_instance_name(),
|
||||
tables=[
|
||||
("events", "instance_name", "stream_ordering"),
|
||||
("ex_outlier_stream", "instance_name", "event_stream_ordering"),
|
||||
],
|
||||
sequence_name="events_backfill_stream_seq",
|
||||
positive=False,
|
||||
writers=hs.config.worker.writers.events,
|
||||
)
|
||||
|
||||
events_max = self._stream_id_gen.get_current_token()
|
||||
curr_state_delta_prefill, min_curr_state_delta_id = self.db_pool.get_cache_dict(
|
||||
@@ -309,27 +291,17 @@ class EventsWorkerStore(SQLBaseStore):
|
||||
|
||||
self._un_partial_stated_events_stream_id_gen: AbstractStreamIdGenerator
|
||||
|
||||
if isinstance(database.engine, PostgresEngine):
|
||||
self._un_partial_stated_events_stream_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="un_partial_stated_event_stream",
|
||||
instance_name=hs.get_instance_name(),
|
||||
tables=[
|
||||
("un_partial_stated_event_stream", "instance_name", "stream_id")
|
||||
],
|
||||
sequence_name="un_partial_stated_event_stream_sequence",
|
||||
# TODO(faster_joins, multiple writers) Support multiple writers.
|
||||
writers=["master"],
|
||||
)
|
||||
else:
|
||||
self._un_partial_stated_events_stream_id_gen = StreamIdGenerator(
|
||||
db_conn,
|
||||
hs.get_replication_notifier(),
|
||||
"un_partial_stated_event_stream",
|
||||
"stream_id",
|
||||
)
|
||||
self._un_partial_stated_events_stream_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="un_partial_stated_event_stream",
|
||||
instance_name=hs.get_instance_name(),
|
||||
tables=[("un_partial_stated_event_stream", "instance_name", "stream_id")],
|
||||
sequence_name="un_partial_stated_event_stream_sequence",
|
||||
# TODO(faster_joins, multiple writers) Support multiple writers.
|
||||
writers=["master"],
|
||||
)
|
||||
|
||||
def get_un_partial_stated_events_token(self, instance_name: str) -> int:
|
||||
return (
|
||||
|
||||
@@ -40,13 +40,11 @@ from synapse.storage.database import (
|
||||
LoggingTransaction,
|
||||
)
|
||||
from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
|
||||
from synapse.storage.engines import PostgresEngine
|
||||
from synapse.storage.engines._base import IsolationLevel
|
||||
from synapse.storage.types import Connection
|
||||
from synapse.storage.util.id_generators import (
|
||||
AbstractStreamIdGenerator,
|
||||
MultiWriterIdGenerator,
|
||||
StreamIdGenerator,
|
||||
)
|
||||
from synapse.util.caches.descriptors import cached, cachedList
|
||||
from synapse.util.caches.stream_change_cache import StreamChangeCache
|
||||
@@ -91,21 +89,16 @@ class PresenceStore(PresenceBackgroundUpdateStore, CacheInvalidationWorkerStore)
|
||||
self._instance_name in hs.config.worker.writers.presence
|
||||
)
|
||||
|
||||
if isinstance(database.engine, PostgresEngine):
|
||||
self._presence_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="presence_stream",
|
||||
instance_name=self._instance_name,
|
||||
tables=[("presence_stream", "instance_name", "stream_id")],
|
||||
sequence_name="presence_stream_sequence",
|
||||
writers=hs.config.worker.writers.presence,
|
||||
)
|
||||
else:
|
||||
self._presence_id_gen = StreamIdGenerator(
|
||||
db_conn, hs.get_replication_notifier(), "presence_stream", "stream_id"
|
||||
)
|
||||
self._presence_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="presence_stream",
|
||||
instance_name=self._instance_name,
|
||||
tables=[("presence_stream", "instance_name", "stream_id")],
|
||||
sequence_name="presence_stream_sequence",
|
||||
writers=hs.config.worker.writers.presence,
|
||||
)
|
||||
|
||||
self.hs = hs
|
||||
self._presence_on_startup = self._get_active_presence(db_conn)
|
||||
|
||||
@@ -53,7 +53,7 @@ from synapse.storage.databases.main.receipts import ReceiptsWorkerStore
|
||||
from synapse.storage.databases.main.roommember import RoomMemberWorkerStore
|
||||
from synapse.storage.engines import PostgresEngine, Sqlite3Engine
|
||||
from synapse.storage.push_rule import InconsistentRuleException, RuleNotFoundException
|
||||
from synapse.storage.util.id_generators import IdGenerator, StreamIdGenerator
|
||||
from synapse.storage.util.id_generators import IdGenerator, MultiWriterIdGenerator
|
||||
from synapse.synapse_rust.push import FilteredPushRules, PushRule, PushRules
|
||||
from synapse.types import JsonDict
|
||||
from synapse.util import json_encoder, unwrapFirstError
|
||||
@@ -126,7 +126,7 @@ class PushRulesWorkerStore(
|
||||
`get_max_push_rules_stream_id` which can be called in the initializer.
|
||||
"""
|
||||
|
||||
_push_rules_stream_id_gen: StreamIdGenerator
|
||||
_push_rules_stream_id_gen: MultiWriterIdGenerator
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
@@ -140,14 +140,17 @@ class PushRulesWorkerStore(
|
||||
hs.get_instance_name() in hs.config.worker.writers.push_rules
|
||||
)
|
||||
|
||||
# In the worker store this is an ID tracker which we overwrite in the non-worker
|
||||
# class below that is used on the main process.
|
||||
self._push_rules_stream_id_gen = StreamIdGenerator(
|
||||
db_conn,
|
||||
hs.get_replication_notifier(),
|
||||
"push_rules_stream",
|
||||
"stream_id",
|
||||
is_writer=self._is_push_writer,
|
||||
self._push_rules_stream_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="push_rules_stream",
|
||||
instance_name=self._instance_name,
|
||||
tables=[
|
||||
("push_rules_stream", "instance_name", "stream_id"),
|
||||
],
|
||||
sequence_name="push_rules_stream_sequence",
|
||||
writers=hs.config.worker.writers.push_rules,
|
||||
)
|
||||
|
||||
push_rules_prefill, push_rules_id = self.db_pool.get_cache_dict(
|
||||
@@ -880,6 +883,7 @@ class PushRulesWorkerStore(
|
||||
raise Exception("Not a push writer")
|
||||
|
||||
values = {
|
||||
"instance_name": self._instance_name,
|
||||
"stream_id": stream_id,
|
||||
"event_stream_ordering": event_stream_ordering,
|
||||
"user_id": user_id,
|
||||
|
||||
@@ -40,10 +40,7 @@ from synapse.storage.database import (
|
||||
LoggingDatabaseConnection,
|
||||
LoggingTransaction,
|
||||
)
|
||||
from synapse.storage.util.id_generators import (
|
||||
AbstractStreamIdGenerator,
|
||||
StreamIdGenerator,
|
||||
)
|
||||
from synapse.storage.util.id_generators import MultiWriterIdGenerator
|
||||
from synapse.types import JsonDict
|
||||
from synapse.util import json_encoder
|
||||
from synapse.util.caches.descriptors import cached
|
||||
@@ -84,15 +81,20 @@ class PusherWorkerStore(SQLBaseStore):
|
||||
):
|
||||
super().__init__(database, db_conn, hs)
|
||||
|
||||
# In the worker store this is an ID tracker which we overwrite in the non-worker
|
||||
# class below that is used on the main process.
|
||||
self._pushers_id_gen = StreamIdGenerator(
|
||||
db_conn,
|
||||
hs.get_replication_notifier(),
|
||||
"pushers",
|
||||
"id",
|
||||
extra_tables=[("deleted_pushers", "stream_id")],
|
||||
is_writer=hs.config.worker.worker_app is None,
|
||||
self._instance_name = hs.get_instance_name()
|
||||
|
||||
self._pushers_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="pushers",
|
||||
instance_name=self._instance_name,
|
||||
tables=[
|
||||
("pushers", "instance_name", "id"),
|
||||
("deleted_pushers", "instance_name", "stream_id"),
|
||||
],
|
||||
sequence_name="pushers_sequence",
|
||||
writers=["master"],
|
||||
)
|
||||
|
||||
self.db_pool.updates.register_background_update_handler(
|
||||
@@ -655,7 +657,7 @@ class PusherBackgroundUpdatesStore(SQLBaseStore):
|
||||
class PusherStore(PusherWorkerStore, PusherBackgroundUpdatesStore):
|
||||
# Because we have write access, this will be a StreamIdGenerator
|
||||
# (see PusherWorkerStore.__init__)
|
||||
_pushers_id_gen: AbstractStreamIdGenerator
|
||||
_pushers_id_gen: MultiWriterIdGenerator
|
||||
|
||||
async def add_pusher(
|
||||
self,
|
||||
@@ -688,6 +690,7 @@ class PusherStore(PusherWorkerStore, PusherBackgroundUpdatesStore):
|
||||
"last_stream_ordering": last_stream_ordering,
|
||||
"profile_tag": profile_tag,
|
||||
"id": stream_id,
|
||||
"instance_name": self._instance_name,
|
||||
"enabled": enabled,
|
||||
"device_id": device_id,
|
||||
# XXX(quenting): We're only really persisting the access token ID
|
||||
@@ -735,6 +738,7 @@ class PusherStore(PusherWorkerStore, PusherBackgroundUpdatesStore):
|
||||
table="deleted_pushers",
|
||||
values={
|
||||
"stream_id": stream_id,
|
||||
"instance_name": self._instance_name,
|
||||
"app_id": app_id,
|
||||
"pushkey": pushkey,
|
||||
"user_id": user_id,
|
||||
@@ -773,9 +777,15 @@ class PusherStore(PusherWorkerStore, PusherBackgroundUpdatesStore):
|
||||
self.db_pool.simple_insert_many_txn(
|
||||
txn,
|
||||
table="deleted_pushers",
|
||||
keys=("stream_id", "app_id", "pushkey", "user_id"),
|
||||
keys=("stream_id", "instance_name", "app_id", "pushkey", "user_id"),
|
||||
values=[
|
||||
(stream_id, pusher.app_id, pusher.pushkey, user_id)
|
||||
(
|
||||
stream_id,
|
||||
self._instance_name,
|
||||
pusher.app_id,
|
||||
pusher.pushkey,
|
||||
user_id,
|
||||
)
|
||||
for stream_id, pusher in zip(stream_ids, pushers)
|
||||
],
|
||||
)
|
||||
|
||||
@@ -44,12 +44,10 @@ from synapse.storage.database import (
|
||||
LoggingDatabaseConnection,
|
||||
LoggingTransaction,
|
||||
)
|
||||
from synapse.storage.engines import PostgresEngine
|
||||
from synapse.storage.engines._base import IsolationLevel
|
||||
from synapse.storage.util.id_generators import (
|
||||
AbstractStreamIdGenerator,
|
||||
MultiWriterIdGenerator,
|
||||
StreamIdGenerator,
|
||||
)
|
||||
from synapse.types import (
|
||||
JsonDict,
|
||||
@@ -80,35 +78,20 @@ class ReceiptsWorkerStore(SQLBaseStore):
|
||||
# class below that is used on the main process.
|
||||
self._receipts_id_gen: AbstractStreamIdGenerator
|
||||
|
||||
if isinstance(database.engine, PostgresEngine):
|
||||
self._can_write_to_receipts = (
|
||||
self._instance_name in hs.config.worker.writers.receipts
|
||||
)
|
||||
self._can_write_to_receipts = (
|
||||
self._instance_name in hs.config.worker.writers.receipts
|
||||
)
|
||||
|
||||
self._receipts_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="receipts",
|
||||
instance_name=self._instance_name,
|
||||
tables=[("receipts_linearized", "instance_name", "stream_id")],
|
||||
sequence_name="receipts_sequence",
|
||||
writers=hs.config.worker.writers.receipts,
|
||||
)
|
||||
else:
|
||||
self._can_write_to_receipts = True
|
||||
|
||||
# Multiple writers are not supported for SQLite.
|
||||
#
|
||||
# We shouldn't be running in worker mode with SQLite, but its useful
|
||||
# to support it for unit tests.
|
||||
self._receipts_id_gen = StreamIdGenerator(
|
||||
db_conn,
|
||||
hs.get_replication_notifier(),
|
||||
"receipts_linearized",
|
||||
"stream_id",
|
||||
is_writer=hs.get_instance_name() in hs.config.worker.writers.receipts,
|
||||
)
|
||||
self._receipts_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="receipts",
|
||||
instance_name=self._instance_name,
|
||||
tables=[("receipts_linearized", "instance_name", "stream_id")],
|
||||
sequence_name="receipts_sequence",
|
||||
writers=hs.config.worker.writers.receipts,
|
||||
)
|
||||
|
||||
super().__init__(database, db_conn, hs)
|
||||
|
||||
|
||||
@@ -169,9 +169,9 @@ class RelationsWorkerStore(SQLBaseStore):
|
||||
@cached(uncached_args=("event",), tree=True)
|
||||
async def get_relations_for_event(
|
||||
self,
|
||||
room_id: str,
|
||||
event_id: str,
|
||||
event: EventBase,
|
||||
room_id: str,
|
||||
relation_type: Optional[str] = None,
|
||||
event_type: Optional[str] = None,
|
||||
limit: int = 5,
|
||||
|
||||
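
With `room_id` promoted to the first argument, call sites (and the cache key) change shape accordingly. A hedged example of the new call, using placeholder values:

```python
# Assumes `store` is a RelationsWorkerStore, `event` is the EventBase for
# `event_id`, and RelationTypes comes from synapse.api.constants.
relations = await store.get_relations_for_event(
    room_id,
    event_id,
    event,
    relation_type=RelationTypes.REFERENCE,
    limit=5,
)
```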
@@ -58,13 +58,11 @@ from synapse.storage.database import (
|
||||
LoggingTransaction,
|
||||
)
|
||||
from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
|
||||
from synapse.storage.engines import PostgresEngine
|
||||
from synapse.storage.types import Cursor
|
||||
from synapse.storage.util.id_generators import (
|
||||
AbstractStreamIdGenerator,
|
||||
IdGenerator,
|
||||
MultiWriterIdGenerator,
|
||||
StreamIdGenerator,
|
||||
)
|
||||
from synapse.types import JsonDict, RetentionPolicy, StrCollection, ThirdPartyInstanceID
|
||||
from synapse.util import json_encoder
|
||||
@@ -155,27 +153,17 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
|
||||
|
||||
self._un_partial_stated_rooms_stream_id_gen: AbstractStreamIdGenerator
|
||||
|
||||
if isinstance(database.engine, PostgresEngine):
|
||||
self._un_partial_stated_rooms_stream_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="un_partial_stated_room_stream",
|
||||
instance_name=self._instance_name,
|
||||
tables=[
|
||||
("un_partial_stated_room_stream", "instance_name", "stream_id")
|
||||
],
|
||||
sequence_name="un_partial_stated_room_stream_sequence",
|
||||
# TODO(faster_joins, multiple writers) Support multiple writers.
|
||||
writers=["master"],
|
||||
)
|
||||
else:
|
||||
self._un_partial_stated_rooms_stream_id_gen = StreamIdGenerator(
|
||||
db_conn,
|
||||
hs.get_replication_notifier(),
|
||||
"un_partial_stated_room_stream",
|
||||
"stream_id",
|
||||
)
|
||||
self._un_partial_stated_rooms_stream_id_gen = MultiWriterIdGenerator(
|
||||
db_conn=db_conn,
|
||||
db=database,
|
||||
notifier=hs.get_replication_notifier(),
|
||||
stream_name="un_partial_stated_room_stream",
|
||||
instance_name=self._instance_name,
|
||||
tables=[("un_partial_stated_room_stream", "instance_name", "stream_id")],
|
||||
sequence_name="un_partial_stated_room_stream_sequence",
|
||||
# TODO(faster_joins, multiple writers) Support multiple writers.
|
||||
writers=["master"],
|
||||
)
|
||||
|
||||
def process_replication_position(
|
||||
self, stream_name: str, instance_name: str, token: int
|
||||
|
||||
@@ -476,7 +476,7 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
|
||||
)
|
||||
|
||||
sql = """
|
||||
SELECT room_id, e.sender, c.membership, event_id, e.stream_ordering, r.room_version
|
||||
SELECT room_id, e.sender, c.membership, event_id, e.instance_name, e.stream_ordering, r.room_version
|
||||
FROM local_current_membership AS c
|
||||
INNER JOIN events AS e USING (room_id, event_id)
|
||||
INNER JOIN rooms AS r USING (room_id)
|
||||
@@ -488,7 +488,17 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
|
||||
)
|
||||
|
||||
txn.execute(sql, (user_id, *args))
|
||||
results = [RoomsForUser(*r) for r in txn]
|
||||
results = [
|
||||
RoomsForUser(
|
||||
room_id=room_id,
|
||||
sender=sender,
|
||||
membership=membership,
|
||||
event_id=event_id,
|
||||
event_pos=PersistedEventPosition(instance_name, stream_ordering),
|
||||
room_version_id=room_version,
|
||||
)
|
||||
for room_id, sender, membership, event_id, instance_name, stream_ordering, room_version in txn
|
||||
]
|
||||
|
||||
return results
|
||||
|
||||
|
||||
@@ -1281,7 +1281,7 @@ def _parse_words_with_regex(search_term: str) -> List[str]:
    Break down search term into words, when we don't have ICU available.
    See: `_parse_words`
    """
    return re.findall(r"([\w\-]+)", search_term, re.UNICODE)
    return re.findall(r"([\w-]+)", search_term, re.UNICODE)
|
||||
|
||||
|
||||
def _parse_words_with_icu(search_term: str) -> List[str]:
|
||||
@@ -1303,15 +1303,69 @@ def _parse_words_with_icu(search_term: str) -> List[str]:
|
||||
if j < 0:
|
||||
break
|
||||
|
||||
result = search_term[i:j]
|
||||
# We want to make sure that we split on `@` and `:` specifically, as
|
||||
# they occur in user IDs.
|
||||
for result in re.split(r"[@:]+", search_term[i:j]):
|
||||
results.append(result.strip())
|
||||
|
||||
i = j
|
||||
|
||||
# libicu will break up words that have punctuation in them, but to handle
|
||||
# cases where user IDs have '-', '.' and '_' in them we want to *not* break
|
||||
# those into words and instead allow the DB to tokenise them how it wants.
|
||||
#
|
||||
# In particular, user-71 in postgres gets tokenised to "user, -71", and this
|
||||
# will not match a query for "user, 71".
|
||||
new_results: List[str] = []
|
||||
i = 0
|
||||
while i < len(results):
|
||||
curr = results[i]
|
||||
|
||||
prev = None
|
||||
next = None
|
||||
if i > 0:
|
||||
prev = results[i - 1]
|
||||
if i + 1 < len(results):
|
||||
next = results[i + 1]
|
||||
|
||||
i += 1
|
||||
|
||||
# libicu considers spaces and punctuation between words as words, but we don't
|
||||
# want to include those in results as they would result in syntax errors in SQL
|
||||
# queries (e.g. "foo bar" would result in the search query including "foo & &
|
||||
# bar").
|
||||
if len(re.findall(r"([\w\-]+)", result, re.UNICODE)):
|
||||
results.append(result)
|
||||
if not curr:
|
||||
continue
|
||||
|
||||
i = j
|
||||
if curr in ["-", ".", "_"]:
|
||||
prefix = ""
|
||||
suffix = ""
|
||||
|
||||
return results
|
||||
# Check if the next item is a word, and if so use it as the suffix.
|
||||
# We check whether it's a word, as we don't want to concatenate
|
||||
# multiple punctuation marks.
|
||||
if next is not None and re.match(r"\w", next):
|
||||
suffix = next
|
||||
i += 1 # We're using next, so we skip it in the outer loop.
|
||||
else:
|
||||
# We want to avoid creating terms like "user-", as we should
|
||||
# strip trailing punctuation.
|
||||
continue
|
||||
|
||||
if prev and re.match(r"\w", prev) and new_results:
|
||||
prefix = new_results[-1]
|
||||
new_results.pop()
|
||||
|
||||
# We might not have a prefix here, but that's fine as we want to
|
||||
# ensure that we don't strip preceding punctuation e.g. '-71'
|
||||
# shouldn't be converted to '71'.
|
||||
|
||||
new_results.append(f"{prefix}{curr}{suffix}")
|
||||
continue
|
||||
elif not re.match(r"\w", curr):
|
||||
# Ignore other punctuation
|
||||
continue
|
||||
|
||||
new_results.append(curr)
|
||||
|
||||
return new_results
|
||||
|
||||
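
The merging pass that replaces the old one-liner is easier to follow as a standalone function. The sketch below assumes an upstream tokeniser (ICU in the real code) has already produced the token list; the logic mirrors the diff above:

```python
import re
from typing import List

def merge_punctuated_tokens(tokens: List[str]) -> List[str]:
    """Rejoin tokens that were split on '-', '.' or '_' (e.g. user IDs), drop
    other punctuation, and never strip *leading* separators ('-71' stays)."""
    merged: List[str] = []
    i = 0
    while i < len(tokens):
        curr = tokens[i]
        prev = tokens[i - 1] if i > 0 else None
        nxt = tokens[i + 1] if i + 1 < len(tokens) else None
        i += 1

        if not curr:
            continue

        if curr in ("-", ".", "_"):
            prefix = ""
            suffix = ""
            # Only glue onto a following word, so trailing separators
            # ("user-") are dropped rather than kept.
            if nxt is not None and re.match(r"\w", nxt):
                suffix = nxt
                i += 1  # the suffix has been consumed
            else:
                continue
            if prev is not None and re.match(r"\w", prev) and merged:
                prefix = merged.pop()
            merged.append(f"{prefix}{curr}{suffix}")
            continue
        elif not re.match(r"\w", curr):
            # Spaces and other punctuation between words are not search terms.
            continue

        merged.append(curr)
    return merged

assert merge_punctuated_tokens(["user", "-", "71"]) == ["user-71"]
assert merge_punctuated_tokens(["-", "71"]) == ["-71"]
assert merge_punctuated_tokens(["user", "-"]) == ["user"]
```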
@@ -142,6 +142,10 @@ class PostgresEngine(
|
||||
apply stricter checks on new databases versus existing database.
|
||||
"""
|
||||
|
||||
allow_unsafe_locale = self.config.get("allow_unsafe_locale", False)
|
||||
if allow_unsafe_locale:
|
||||
return
|
||||
|
||||
collation, ctype = self.get_db_locale(txn)
|
||||
|
||||
errors = []
|
||||
@@ -155,7 +159,9 @@ class PostgresEngine(
|
||||
if errors:
|
||||
raise IncorrectDatabaseSetup(
|
||||
"Database is incorrectly configured:\n\n%s\n\n"
|
||||
"See docs/postgres.md for more information." % ("\n".join(errors))
|
||||
"See docs/postgres.md for more information. You can override this check by"
|
||||
"setting 'allow_unsafe_locale' to true in the database config.",
|
||||
"\n".join(errors),
|
||||
)
|
||||
|
||||
def convert_param_style(self, sql: str) -> str:
|
||||
|
||||
@@ -35,7 +35,7 @@ class RoomsForUser:
    sender: str
    membership: str
    event_id: str
    stream_ordering: int
    event_pos: PersistedEventPosition
    room_version_id: str
|
||||
|
||||
|
||||
|
||||
@@ -0,0 +1,27 @@
|
||||
--
|
||||
-- This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
--
|
||||
-- Copyright (C) 2024 New Vector, Ltd
|
||||
--
|
||||
-- This program is free software: you can redistribute it and/or modify
|
||||
-- it under the terms of the GNU Affero General Public License as
|
||||
-- published by the Free Software Foundation, either version 3 of the
|
||||
-- License, or (at your option) any later version.
|
||||
--
|
||||
-- See the GNU Affero General Public License for more details:
|
||||
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
|
||||
-- Add `instance_name` columns to stream tables to allow them to be used with
|
||||
-- `MultiWriterIdGenerator`
|
||||
ALTER TABLE device_lists_stream ADD COLUMN instance_name TEXT;
|
||||
ALTER TABLE user_signature_stream ADD COLUMN instance_name TEXT;
|
||||
ALTER TABLE device_lists_outbound_pokes ADD COLUMN instance_name TEXT;
|
||||
ALTER TABLE device_lists_changes_in_room ADD COLUMN instance_name TEXT;
|
||||
ALTER TABLE device_lists_remote_pending ADD COLUMN instance_name TEXT;
|
||||
|
||||
ALTER TABLE e2e_cross_signing_keys ADD COLUMN instance_name TEXT;
|
||||
|
||||
ALTER TABLE push_rules_stream ADD COLUMN instance_name TEXT;
|
||||
|
||||
ALTER TABLE pushers ADD COLUMN instance_name TEXT;
|
||||
ALTER TABLE deleted_pushers ADD COLUMN instance_name TEXT;
|
||||
@@ -0,0 +1,54 @@
|
||||
--
|
||||
-- This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
--
|
||||
-- Copyright (C) 2024 New Vector, Ltd
|
||||
--
|
||||
-- This program is free software: you can redistribute it and/or modify
|
||||
-- it under the terms of the GNU Affero General Public License as
|
||||
-- published by the Free Software Foundation, either version 3 of the
|
||||
-- License, or (at your option) any later version.
|
||||
--
|
||||
-- See the GNU Affero General Public License for more details:
|
||||
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
|
||||
-- Add sequences for stream tables to allow them to be used with
|
||||
-- `MultiWriterIdGenerator`
|
||||
CREATE SEQUENCE IF NOT EXISTS device_lists_sequence;
|
||||
|
||||
-- We need to take the max across all the device lists tables as they share the
|
||||
-- ID generator
|
||||
SELECT setval('device_lists_sequence', (
|
||||
SELECT GREATEST(
|
||||
(SELECT COALESCE(MAX(stream_id), 1) FROM device_lists_stream),
|
||||
(SELECT COALESCE(MAX(stream_id), 1) FROM user_signature_stream),
|
||||
(SELECT COALESCE(MAX(stream_id), 1) FROM device_lists_outbound_pokes),
|
||||
(SELECT COALESCE(MAX(stream_id), 1) FROM device_lists_changes_in_room),
|
||||
(SELECT COALESCE(MAX(stream_id), 1) FROM device_lists_remote_pending),
|
||||
(SELECT COALESCE(MAX(stream_id), 1) FROM device_lists_changes_converted_stream_position)
|
||||
)
|
||||
));
|
||||
|
||||
CREATE SEQUENCE IF NOT EXISTS e2e_cross_signing_keys_sequence;
|
||||
|
||||
SELECT setval('e2e_cross_signing_keys_sequence', (
|
||||
SELECT COALESCE(MAX(stream_id), 1) FROM e2e_cross_signing_keys
|
||||
));
|
||||
|
||||
|
||||
CREATE SEQUENCE IF NOT EXISTS push_rules_stream_sequence;
|
||||
|
||||
SELECT setval('push_rules_stream_sequence', (
|
||||
SELECT COALESCE(MAX(stream_id), 1) FROM push_rules_stream
|
||||
));
|
||||
|
||||
|
||||
CREATE SEQUENCE IF NOT EXISTS pushers_sequence;
|
||||
|
||||
-- We need to take the max across all the pusher tables as they share the
|
||||
-- ID generator
|
||||
SELECT setval('pushers_sequence', (
|
||||
SELECT GREATEST(
|
||||
(SELECT COALESCE(MAX(id), 1) FROM pushers),
|
||||
(SELECT COALESCE(MAX(stream_id), 1) FROM deleted_pushers)
|
||||
)
|
||||
));
|
||||
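
In Python terms, each `setval()` above seeds its sequence with the largest ID already used by any table that shares the ID generator, treating empty tables as 1. An illustrative restatement:

```python
from typing import List, Optional

def sequence_seed(per_table_max_ids: List[Optional[int]]) -> int:
    """What GREATEST(COALESCE(MAX(id), 1), ...) computes across the tables."""
    return max([m for m in per_table_max_ids if m is not None] + [1])

assert sequence_seed([None, None]) == 1        # no rows anywhere yet
assert sequence_seed([17, None, 42]) == 42     # e.g. pushers vs deleted_pushers
```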
@@ -0,0 +1,15 @@
|
||||
--
|
||||
-- This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
--
|
||||
-- Copyright (C) 2024 New Vector, Ltd
|
||||
--
|
||||
-- This program is free software: you can redistribute it and/or modify
|
||||
-- it under the terms of the GNU Affero General Public License as
|
||||
-- published by the Free Software Foundation, either version 3 of the
|
||||
-- License, or (at your option) any later version.
|
||||
--
|
||||
-- See the GNU Affero General Public License for more details:
|
||||
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
|
||||
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
|
||||
(8504, 'cleanup_device_federation_outbox', '{}');
|
||||
@@ -23,15 +23,12 @@ import abc
|
||||
import heapq
|
||||
import logging
|
||||
import threading
|
||||
from collections import OrderedDict
|
||||
from contextlib import contextmanager
|
||||
from types import TracebackType
|
||||
from typing import (
|
||||
TYPE_CHECKING,
|
||||
AsyncContextManager,
|
||||
ContextManager,
|
||||
Dict,
|
||||
Generator,
|
||||
Generic,
|
||||
Iterable,
|
||||
List,
|
||||
@@ -53,9 +50,11 @@ from synapse.storage.database import (
|
||||
DatabasePool,
|
||||
LoggingDatabaseConnection,
|
||||
LoggingTransaction,
|
||||
make_in_list_sql_clause,
|
||||
)
|
||||
from synapse.storage.engines import PostgresEngine
|
||||
from synapse.storage.types import Cursor
|
||||
from synapse.storage.util.sequence import PostgresSequenceGenerator
|
||||
from synapse.storage.util.sequence import build_sequence_generator
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.notifier import ReplicationNotifier
|
||||
@@ -177,161 +176,6 @@ class AbstractStreamIdGenerator(metaclass=abc.ABCMeta):
|
||||
raise NotImplementedError()
|
||||
|
||||
|
||||
class StreamIdGenerator(AbstractStreamIdGenerator):
|
||||
"""Generates and tracks stream IDs for a stream with a single writer.
|
||||
|
||||
This class must only be used when the current Synapse process is the sole
|
||||
writer for a stream.
|
||||
|
||||
Args:
|
||||
db_conn(connection): A database connection to use to fetch the
|
||||
initial value of the generator from.
|
||||
table(str): A database table to read the initial value of the id
|
||||
generator from.
|
||||
column(str): The column of the database table to read the initial
|
||||
value from the id generator from.
|
||||
extra_tables(list): List of pairs of database tables and columns to
|
||||
use to source the initial value of the generator from. The value
|
||||
with the largest magnitude is used.
|
||||
step(int): which direction the stream ids grow in. +1 to grow
|
||||
upwards, -1 to grow downwards.
|
||||
|
||||
Usage:
|
||||
async with stream_id_gen.get_next() as stream_id:
|
||||
# ... persist event ...
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
db_conn: LoggingDatabaseConnection,
|
||||
notifier: "ReplicationNotifier",
|
||||
table: str,
|
||||
column: str,
|
||||
extra_tables: Iterable[Tuple[str, str]] = (),
|
||||
step: int = 1,
|
||||
is_writer: bool = True,
|
||||
) -> None:
|
||||
assert step != 0
|
||||
self._lock = threading.Lock()
|
||||
self._step: int = step
|
||||
self._current: int = _load_current_id(db_conn, table, column, step)
|
||||
self._is_writer = is_writer
|
||||
for table, column in extra_tables:
|
||||
self._current = (max if step > 0 else min)(
|
||||
self._current, _load_current_id(db_conn, table, column, step)
|
||||
)
|
||||
|
||||
# We use this as an ordered set, as we want to efficiently append items,
|
||||
# remove items and get the first item. Since we insert IDs in order, the
|
||||
# insertion ordering will ensure its in the correct ordering.
|
||||
#
|
||||
# The key and values are the same, but we never look at the values.
|
||||
self._unfinished_ids: OrderedDict[int, int] = OrderedDict()
|
||||
|
||||
self._notifier = notifier
|
||||
|
||||
def advance(self, instance_name: str, new_id: int) -> None:
|
||||
# Advance should never be called on a writer instance, only over replication
|
||||
if self._is_writer:
|
||||
raise Exception("Replication is not supported by writer StreamIdGenerator")
|
||||
|
||||
self._current = (max if self._step > 0 else min)(self._current, new_id)
|
||||
|
||||
def get_next(self) -> AsyncContextManager[int]:
|
||||
with self._lock:
|
||||
self._current += self._step
|
||||
next_id = self._current
|
||||
|
||||
self._unfinished_ids[next_id] = next_id
|
||||
|
||||
@contextmanager
|
||||
def manager() -> Generator[int, None, None]:
|
||||
try:
|
||||
yield next_id
|
||||
finally:
|
||||
with self._lock:
|
||||
self._unfinished_ids.pop(next_id)
|
||||
|
||||
self._notifier.notify_replication()
|
||||
|
||||
return _AsyncCtxManagerWrapper(manager())
|
||||
|
||||
def get_next_mult(self, n: int) -> AsyncContextManager[Sequence[int]]:
|
||||
with self._lock:
|
||||
next_ids = range(
|
||||
self._current + self._step,
|
||||
self._current + self._step * (n + 1),
|
||||
self._step,
|
||||
)
|
||||
self._current += n * self._step
|
||||
|
||||
for next_id in next_ids:
|
||||
self._unfinished_ids[next_id] = next_id
|
||||
|
||||
@contextmanager
|
||||
def manager() -> Generator[Sequence[int], None, None]:
|
||||
try:
|
||||
yield next_ids
|
||||
finally:
|
||||
with self._lock:
|
||||
for next_id in next_ids:
|
||||
self._unfinished_ids.pop(next_id)
|
||||
|
||||
self._notifier.notify_replication()
|
||||
|
||||
return _AsyncCtxManagerWrapper(manager())
|
||||
|
||||
def get_next_txn(self, txn: LoggingTransaction) -> int:
|
||||
"""
|
||||
Retrieve the next stream ID from within a database transaction.
|
||||
|
||||
Clean-up functions will be called when the transaction finishes.
|
||||
|
||||
Args:
|
||||
txn: The database transaction object.
|
||||
|
||||
Returns:
|
||||
The next stream ID.
|
||||
"""
|
||||
if not self._is_writer:
|
||||
raise Exception("Tried to allocate stream ID on non-writer")
|
||||
|
||||
# Get the next stream ID.
|
||||
with self._lock:
|
||||
self._current += self._step
|
||||
next_id = self._current
|
||||
|
||||
self._unfinished_ids[next_id] = next_id
|
||||
|
||||
def clear_unfinished_id(id_to_clear: int) -> None:
|
||||
"""A function to mark processing this ID as finished"""
|
||||
with self._lock:
|
||||
self._unfinished_ids.pop(id_to_clear)
|
||||
|
||||
# Mark this ID as finished once the database transaction itself finishes.
|
||||
txn.call_after(clear_unfinished_id, next_id)
|
||||
txn.call_on_exception(clear_unfinished_id, next_id)
|
||||
|
||||
# Return the new ID.
|
||||
return next_id
|
||||
|
||||
def get_current_token(self) -> int:
|
||||
if not self._is_writer:
|
||||
return self._current
|
||||
|
||||
with self._lock:
|
||||
if self._unfinished_ids:
|
||||
return next(iter(self._unfinished_ids)) - self._step
|
||||
|
||||
return self._current
|
||||
|
||||
def get_current_token_for_writer(self, instance_name: str) -> int:
|
||||
return self.get_current_token()
|
||||
|
||||
def get_minimal_local_current_token(self) -> int:
|
||||
return self.get_current_token()
|
||||
|
||||
|
||||
class MultiWriterIdGenerator(AbstractStreamIdGenerator):
|
||||
"""Generates and tracks stream IDs for a stream with multiple writers.
|
||||
|
||||
@@ -432,7 +276,22 @@ class MultiWriterIdGenerator(AbstractStreamIdGenerator):
|
||||
# no active writes in progress.
|
||||
self._max_position_of_local_instance = self._max_seen_allocated_stream_id
|
||||
|
||||
self._sequence_gen = PostgresSequenceGenerator(sequence_name)
|
||||
# This goes and fills out the above state from the database.
|
||||
self._load_current_ids(db_conn, tables)
|
||||
|
||||
self._sequence_gen = build_sequence_generator(
|
||||
db_conn=db_conn,
|
||||
database_engine=db.engine,
|
||||
get_first_callback=lambda _: self._persisted_upto_position,
|
||||
sequence_name=sequence_name,
|
||||
# We only need to set the below if we want it to call
|
||||
# `check_consistency`, but we do that ourselves below so we can
|
||||
# leave them blank.
|
||||
table=None,
|
||||
id_column=None,
|
||||
stream_name=None,
|
||||
positive=positive,
|
||||
)
|
||||
|
||||
# We check that the table and sequence haven't diverged.
|
||||
for table, _, id_column in tables:
|
||||
@@ -444,9 +303,6 @@ class MultiWriterIdGenerator(AbstractStreamIdGenerator):
|
||||
positive=positive,
|
||||
)
|
||||
|
||||
# This goes and fills out the above state from the database.
|
||||
self._load_current_ids(db_conn, tables)
|
||||
|
||||
self._max_seen_allocated_stream_id = max(
|
||||
self._current_positions.values(), default=1
|
||||
)
|
||||
@@ -480,13 +336,17 @@ class MultiWriterIdGenerator(AbstractStreamIdGenerator):
|
||||
# important if we add back a writer after a long time; we want to
|
||||
# consider that a "new" writer, rather than using the old stale
|
||||
# entry here.
|
||||
sql = """
|
||||
clause, args = make_in_list_sql_clause(
|
||||
self._db.engine, "instance_name", self._writers, negative=True
|
||||
)
|
||||
|
||||
sql = f"""
|
||||
DELETE FROM stream_positions
|
||||
WHERE
|
||||
stream_name = ?
|
||||
AND instance_name != ALL(?)
|
||||
AND {clause}
|
||||
"""
|
||||
cur.execute(sql, (self._stream_name, self._writers))
|
||||
cur.execute(sql, [self._stream_name] + args)
|
||||
|
||||
sql = """
|
||||
SELECT instance_name, stream_id FROM stream_positions
|
||||
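
Since the old Postgres-only `!= ALL(?)` form is interleaved with its replacement above, here is the new, engine-portable query in one piece (a restatement of the diff, not new behaviour):

```python
clause, args = make_in_list_sql_clause(
    self._db.engine, "instance_name", self._writers, negative=True
)
sql = f"""
    DELETE FROM stream_positions
    WHERE stream_name = ? AND {clause}
"""
cur.execute(sql, [self._stream_name] + args)
```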
@@ -508,12 +368,16 @@ class MultiWriterIdGenerator(AbstractStreamIdGenerator):
|
||||
# We add a GREATEST here to ensure that the result is always
|
||||
# positive. (This can be a problem for e.g. backfill streams where
|
||||
# the server has never backfilled).
|
||||
greatest_func = (
|
||||
"GREATEST" if isinstance(self._db.engine, PostgresEngine) else "MAX"
|
||||
)
|
||||
max_stream_id = 1
|
||||
for table, _, id_column in tables:
|
||||
sql = """
|
||||
SELECT GREATEST(COALESCE(%(agg)s(%(id)s), 1), 1)
|
||||
SELECT %(greatest_func)s(COALESCE(%(agg)s(%(id)s), 1), 1)
|
||||
FROM %(table)s
|
||||
""" % {
|
||||
"greatest_func": greatest_func,
|
||||
"id": id_column,
|
||||
"table": table,
|
||||
"agg": "MAX" if self._positive else "-MIN",
|
||||
@@ -913,6 +777,11 @@ class MultiWriterIdGenerator(AbstractStreamIdGenerator):
|
||||
|
||||
# We upsert the value, ensuring on conflict that we always increase the
|
||||
# value (or decrease if stream goes backwards).
|
||||
if isinstance(self._db.engine, PostgresEngine):
|
||||
agg = "GREATEST" if self._positive else "LEAST"
|
||||
else:
|
||||
agg = "MAX" if self._positive else "MIN"
|
||||
|
||||
sql = """
|
||||
INSERT INTO stream_positions (stream_name, instance_name, stream_id)
|
||||
VALUES (?, ?, ?)
|
||||
@@ -920,10 +789,10 @@ class MultiWriterIdGenerator(AbstractStreamIdGenerator):
|
||||
DO UPDATE SET
|
||||
stream_id = %(agg)s(stream_positions.stream_id, EXCLUDED.stream_id)
|
||||
""" % {
|
||||
"agg": "GREATEST" if self._positive else "LEAST",
|
||||
"agg": agg,
|
||||
}
|
||||
|
||||
pos = (self.get_current_token_for_writer(self._instance_name),)
|
||||
pos = self.get_current_token_for_writer(self._instance_name)
|
||||
txn.execute(sql, (self._stream_name, self._instance_name, pos))
|
||||
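
Likewise, the upsert hunk above reads more clearly with only the new lines kept: SQLite has two-argument MAX/MIN but no GREATEST/LEAST, and the position is now bound as a bare integer rather than a 1-tuple.

```python
# Restatement of the new code path shown above.
if isinstance(self._db.engine, PostgresEngine):
    agg = "GREATEST" if self._positive else "LEAST"
else:
    agg = "MAX" if self._positive else "MIN"

# `sql` is the INSERT ... ON CONFLICT ... DO UPDATE statement shown above,
# with %(agg)s substituted accordingly.

pos = self.get_current_token_for_writer(self._instance_name)
txn.execute(sql, (self._stream_name, self._instance_name, pos))
```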
|
||||
|
||||
|
||||
@@ -48,7 +48,7 @@ import attr
|
||||
from immutabledict import immutabledict
|
||||
from signedjson.key import decode_verify_key_bytes
|
||||
from signedjson.types import VerifyKey
|
||||
from typing_extensions import TypedDict
|
||||
from typing_extensions import Self, TypedDict
|
||||
from unpaddedbase64 import decode_base64
|
||||
from zope.interface import Interface
|
||||
|
||||
@@ -515,6 +515,27 @@ class AbstractMultiWriterStreamToken(metaclass=abc.ABCMeta):
|
||||
# at `self.stream`.
|
||||
return self.instance_map.get(instance_name, self.stream)
|
||||
|
||||
def is_before_or_eq(self, other_token: Self) -> bool:
|
||||
"""Wether this token is before the other token, i.e. every constituent
|
||||
part is before the other.
|
||||
|
||||
Essentially it is `self <= other`.
|
||||
|
||||
Note: if `self.is_before_or_eq(other_token) is False` then that does not
|
||||
imply that the reverse is True.
|
||||
"""
|
||||
if self.stream > other_token.stream:
|
||||
return False
|
||||
|
||||
instances = self.instance_map.keys() | other_token.instance_map.keys()
|
||||
for instance in instances:
|
||||
if self.instance_map.get(
|
||||
instance, self.stream
|
||||
) > other_token.instance_map.get(instance, other_token.stream):
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
|
||||
@attr.s(frozen=True, slots=True, order=False)
|
||||
class RoomStreamToken(AbstractMultiWriterStreamToken):
|
||||
@@ -1008,6 +1029,41 @@ class StreamToken:
|
||||
"""Returns the stream ID for the given key."""
|
||||
return getattr(self, key.value)
|
||||
|
||||
def is_before_or_eq(self, other_token: "StreamToken") -> bool:
|
||||
"""Wether this token is before the other token, i.e. every constituent
|
||||
part is before the other.
|
||||
|
||||
Essentially it is `self <= other`.
|
||||
|
||||
Note: if `self.is_before_or_eq(other_token) is False` then that does not
|
||||
imply that the reverse is True.
|
||||
"""
|
||||
|
||||
for _, key in StreamKeyType.__members__.items():
|
||||
if key == StreamKeyType.TYPING:
|
||||
# Typing stream is allowed to "reset", and so comparisons don't
|
||||
# really make sense as is.
|
||||
# TODO: Figure out a better way of tracking resets.
|
||||
continue
|
||||
|
||||
self_value = self.get_field(key)
|
||||
other_value = other_token.get_field(key)
|
||||
|
||||
if isinstance(self_value, RoomStreamToken):
|
||||
assert isinstance(other_value, RoomStreamToken)
|
||||
if not self_value.is_before_or_eq(other_value):
|
||||
return False
|
||||
elif isinstance(self_value, MultiWriterStreamToken):
|
||||
assert isinstance(other_value, MultiWriterStreamToken)
|
||||
if not self_value.is_before_or_eq(other_value):
|
||||
return False
|
||||
else:
|
||||
assert isinstance(other_value, int)
|
||||
if self_value > other_value:
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
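
The docstring's caveat that a `False` result does not imply the reverse comparison is `True` reflects that this is a partial order over per-writer positions. A self-contained illustration, using a simplified stand-in for a multi-writer token:

```python
from typing import Mapping

def is_before_or_eq(
    stream: int,
    instance_map: Mapping[str, int],
    other_stream: int,
    other_map: Mapping[str, int],
) -> bool:
    """Same comparison as above, over (global position, per-writer positions)."""
    if stream > other_stream:
        return False
    for instance in instance_map.keys() | other_map.keys():
        if instance_map.get(instance, stream) > other_map.get(instance, other_stream):
            return False
    return True

# Neither token dominates the other, so both directions are False:
a = (5, {"worker1": 7, "worker2": 3})
b = (5, {"worker1": 4, "worker2": 6})
assert not is_before_or_eq(*a, *b)
assert not is_before_or_eq(*b, *a)
```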
|
||||
StreamToken.START = StreamToken(
|
||||
RoomStreamToken(stream=0), 0, 0, MultiWriterStreamToken(stream=0), 0, 0, 0, 0, 0, 0
|
||||
@@ -1223,3 +1279,60 @@ class ScheduledTask:
|
||||
result: Optional[JsonMapping]
|
||||
# Optional error that should be assigned a value when the status is FAILED
|
||||
error: Optional[str]
|
||||
|
||||
|
||||
class ShutdownRoomParams(TypedDict):
|
||||
"""
|
||||
Attributes:
|
||||
requester_user_id:
|
||||
User who requested the action. Will be recorded as putting the room on the
|
||||
blocking list.
|
||||
new_room_user_id:
|
||||
If set, a new room will be created with this user ID
|
||||
as the creator and admin, and all users in the old room will be
|
||||
moved into that room. If not set, no new room will be created
|
||||
and the users will just be removed from the old room.
|
||||
new_room_name:
|
||||
A string representing the name of the room that new users will
|
||||
be invited to. Defaults to `Content Violation Notification`
|
||||
message:
|
||||
A string containing the first message that will be sent as
|
||||
`new_room_user_id` in the new room. Ideally this will clearly
|
||||
convey why the original room was shut down.
|
||||
Defaults to `Sharing illegal content on this server is not
|
||||
permitted and rooms in violation will be blocked.`
|
||||
block:
|
||||
If set to `true`, this room will be added to a blocking list,
|
||||
preventing future attempts to join the room. Defaults to `false`.
|
||||
purge:
|
||||
If set to `true`, purge the given room from the database.
|
||||
force_purge:
|
||||
If set to `true`, the room will be purged from database
|
||||
even if there are still users joined to the room.
|
||||
"""
|
||||
|
||||
requester_user_id: Optional[str]
|
||||
new_room_user_id: Optional[str]
|
||||
new_room_name: Optional[str]
|
||||
message: Optional[str]
|
||||
block: bool
|
||||
purge: bool
|
||||
force_purge: bool
|
||||
|
||||
|
||||
class ShutdownRoomResponse(TypedDict):
|
||||
"""
|
||||
Attributes:
|
||||
kicked_users: An array of users (`user_id`) that were kicked.
|
||||
failed_to_kick_users:
|
||||
An array of users (`user_id`) that were not kicked.
|
||||
local_aliases:
|
||||
An array of strings representing the local aliases that were
|
||||
migrated from the old room to the new.
|
||||
new_room_id: A string representing the room ID of the new room.
|
||||
"""
|
||||
|
||||
kicked_users: List[str]
|
||||
failed_to_kick_users: List[str]
|
||||
local_aliases: List[str]
|
||||
new_room_id: Optional[str]
|
||||
|
||||
@@ -24,7 +24,12 @@ from typing import TYPE_CHECKING, Awaitable, Callable, Dict, List, Optional, Set
|
||||
|
||||
from twisted.python.failure import Failure
|
||||
|
||||
from synapse.logging.context import nested_logging_context
|
||||
from synapse.logging.context import (
|
||||
ContextResourceUsage,
|
||||
LoggingContext,
|
||||
nested_logging_context,
|
||||
set_current_context,
|
||||
)
|
||||
from synapse.metrics import LaterGauge
|
||||
from synapse.metrics.background_process_metrics import (
|
||||
run_as_background_process,
|
||||
@@ -81,6 +86,8 @@ class TaskScheduler:
|
||||
MAX_CONCURRENT_RUNNING_TASKS = 5
|
||||
# Time from the last task update after which we will log a warning
|
||||
LAST_UPDATE_BEFORE_WARNING_MS = 24 * 60 * 60 * 1000 # 24hrs
|
||||
# Report a running task's status and usage every so often.
|
||||
OCCASIONAL_REPORT_INTERVAL_MS = 5 * 60 * 1000 # 5 minutes
|
||||
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
self._hs = hs
|
||||
@@ -346,6 +353,33 @@ class TaskScheduler:
|
||||
assert task.id not in self._running_tasks
|
||||
await self._store.delete_scheduled_task(task.id)
|
||||
|
||||
@staticmethod
|
||||
def _log_task_usage(
|
||||
state: str, task: ScheduledTask, usage: ContextResourceUsage, active_time: float
|
||||
) -> None:
|
||||
"""
|
||||
Log a line describing the state and usage of a task.
|
||||
The log line is inspired by / a copy of the request log line format,
|
||||
but with irrelevant fields removed.
|
||||
|
||||
active_time: Time that the task has been running for, in seconds.
|
||||
"""
|
||||
|
||||
logger.info(
|
||||
"Task %s: %.3fsec (%.3fsec, %.3fsec) (%.3fsec/%.3fsec/%d)"
|
||||
" [%d dbevts] %r, %r",
|
||||
state,
|
||||
active_time,
|
||||
usage.ru_utime,
|
||||
usage.ru_stime,
|
||||
usage.db_sched_duration_sec,
|
||||
usage.db_txn_duration_sec,
|
||||
int(usage.db_txn_count),
|
||||
usage.evt_db_fetch_count,
|
||||
task.resource_id,
|
||||
task.params,
|
||||
)
|
||||
|
||||
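
For reference, the format string above renders task log lines of the following shape (the values below are invented purely to show the layout):

```python
line = (
    "Task %s: %.3fsec (%.3fsec, %.3fsec) (%.3fsec/%.3fsec/%d) [%d dbevts] %r, %r"
    % ("complete", 12.345, 1.2, 0.3, 0.05, 0.8, 42, 7, "some-resource", {"param": 1})
)
# -> "Task complete: 12.345sec (1.200sec, 0.300sec) (0.050sec/0.800sec/42)
#     [7 dbevts] 'some-resource', {'param': 1}"
```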
async def _launch_task(self, task: ScheduledTask) -> None:
|
||||
"""Launch a scheduled task now.
|
||||
|
||||
@@ -360,8 +394,32 @@ class TaskScheduler:
|
||||
)
|
||||
function = self._actions[task.action]
|
||||
|
||||
def _occasional_report(
|
||||
task_log_context: LoggingContext, start_time: float
|
||||
) -> None:
|
||||
"""
|
||||
Helper to log a 'Task continuing' line every so often.
|
||||
"""
|
||||
|
||||
current_time = self._clock.time()
|
||||
calling_context = set_current_context(task_log_context)
|
||||
try:
|
||||
usage = task_log_context.get_resource_usage()
|
||||
TaskScheduler._log_task_usage(
|
||||
"continuing", task, usage, current_time - start_time
|
||||
)
|
||||
finally:
|
||||
set_current_context(calling_context)
|
||||
|
||||
async def wrapper() -> None:
|
||||
with nested_logging_context(task.id):
|
||||
with nested_logging_context(task.id) as log_context:
|
||||
start_time = self._clock.time()
|
||||
occasional_status_call = self._clock.looping_call(
|
||||
_occasional_report,
|
||||
TaskScheduler.OCCASIONAL_REPORT_INTERVAL_MS,
|
||||
log_context,
|
||||
start_time,
|
||||
)
|
||||
try:
|
||||
(status, result, error) = await function(task)
|
||||
except Exception:
|
||||
@@ -383,6 +441,13 @@ class TaskScheduler:
|
||||
)
|
||||
self._running_tasks.remove(task.id)
|
||||
|
||||
current_time = self._clock.time()
|
||||
usage = log_context.get_resource_usage()
|
||||
TaskScheduler._log_task_usage(
|
||||
status.value, task, usage, current_time - start_time
|
||||
)
|
||||
occasional_status_call.stop()
|
||||
|
||||
# Try launch a new task since we've finished with this one.
|
||||
self._clock.call_later(0.1, self._launch_scheduled_tasks)
|
||||
|
||||
|
||||
tests/federation/test_federation_media.py (new file, 234 lines)
@@ -0,0 +1,234 @@
|
||||
#
|
||||
# This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
#
|
||||
# Copyright (C) 2024 New Vector, Ltd
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as
|
||||
# published by the Free Software Foundation, either version 3 of the
|
||||
# License, or (at your option) any later version.
|
||||
#
|
||||
# See the GNU Affero General Public License for more details:
|
||||
# <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
#
|
||||
# Originally licensed under the Apache License, Version 2.0:
|
||||
# <http://www.apache.org/licenses/LICENSE-2.0>.
|
||||
#
|
||||
# [This file includes modifications made by New Vector Limited]
|
||||
#
|
||||
#
|
||||
import io
|
||||
import os
|
||||
import shutil
|
||||
import tempfile
|
||||
from typing import Optional
|
||||
|
||||
from twisted.test.proto_helpers import MemoryReactor
|
||||
|
||||
from synapse.media._base import FileInfo, Responder
|
||||
from synapse.media.filepath import MediaFilePaths
|
||||
from synapse.media.media_storage import MediaStorage
|
||||
from synapse.media.storage_provider import (
|
||||
FileStorageProviderBackend,
|
||||
StorageProviderWrapper,
|
||||
)
|
||||
from synapse.server import HomeServer
|
||||
from synapse.storage.databases.main.media_repository import LocalMedia
|
||||
from synapse.types import JsonDict, UserID
|
||||
from synapse.util import Clock
|
||||
|
||||
from tests import unittest
|
||||
from tests.test_utils import SMALL_PNG
|
||||
from tests.unittest import override_config
|
||||
|
||||
|
||||
class FederationUnstableMediaDownloadsTest(unittest.FederatingHomeserverTestCase):
|
||||
|
||||
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
|
||||
super().prepare(reactor, clock, hs)
|
||||
self.test_dir = tempfile.mkdtemp(prefix="synapse-tests-")
|
||||
self.addCleanup(shutil.rmtree, self.test_dir)
|
||||
self.primary_base_path = os.path.join(self.test_dir, "primary")
|
||||
self.secondary_base_path = os.path.join(self.test_dir, "secondary")
|
||||
|
||||
hs.config.media.media_store_path = self.primary_base_path
|
||||
|
||||
storage_providers = [
|
||||
StorageProviderWrapper(
|
||||
FileStorageProviderBackend(hs, self.secondary_base_path),
|
||||
store_local=True,
|
||||
store_remote=False,
|
||||
store_synchronous=True,
|
||||
)
|
||||
]
|
||||
|
||||
self.filepaths = MediaFilePaths(self.primary_base_path)
|
||||
self.media_storage = MediaStorage(
|
||||
hs, self.primary_base_path, self.filepaths, storage_providers
|
||||
)
|
||||
self.media_repo = hs.get_media_repository()
|
||||
|
||||
@override_config(
|
||||
{"experimental_features": {"msc3916_authenticated_media_enabled": True}}
|
||||
)
|
||||
def test_file_download(self) -> None:
|
||||
content = io.BytesIO(b"file_to_stream")
|
||||
content_uri = self.get_success(
|
||||
self.media_repo.create_content(
|
||||
"text/plain",
|
||||
"test_upload",
|
||||
content,
|
||||
46,
|
||||
UserID.from_string("@user_id:whatever.org"),
|
||||
)
|
||||
)
|
||||
# test with a text file
|
||||
channel = self.make_signed_federation_request(
|
||||
"GET",
|
||||
f"/_matrix/federation/unstable/org.matrix.msc3916/media/download/{content_uri.media_id}",
|
||||
)
|
||||
self.pump()
|
||||
self.assertEqual(200, channel.code)
|
||||
|
||||
content_type = channel.headers.getRawHeaders("content-type")
|
||||
assert content_type is not None
|
||||
assert "multipart/mixed" in content_type[0]
|
||||
assert "boundary" in content_type[0]
|
||||
|
||||
# extract boundary
|
||||
boundary = content_type[0].split("boundary=")[1]
|
||||
# split on boundary and check that json field and expected value exist
|
||||
stripped = channel.text_body.split("\r\n" + "--" + boundary)
|
||||
# TODO: the json object expected will change once MSC3911 is implemented, currently
|
||||
# {} is returned for all requests as a placeholder (per MSC3916)
|
||||
found_json = any(
|
||||
"\r\nContent-Type: application/json\r\n{}" in field for field in stripped
|
||||
)
|
||||
self.assertTrue(found_json)
|
||||
|
||||
# check that text file and expected value exist
|
||||
found_file = any(
|
||||
"\r\nContent-Type: text/plain\r\nfile_to_stream" in field
|
||||
for field in stripped
|
||||
)
|
||||
self.assertTrue(found_file)
|
||||
|
||||
content = io.BytesIO(SMALL_PNG)
|
||||
content_uri = self.get_success(
|
||||
self.media_repo.create_content(
|
||||
"image/png",
|
||||
"test_png_upload",
|
||||
content,
|
||||
67,
|
||||
UserID.from_string("@user_id:whatever.org"),
|
||||
)
|
||||
)
|
||||
# test with an image file
|
||||
channel = self.make_signed_federation_request(
|
||||
"GET",
|
||||
f"/_matrix/federation/unstable/org.matrix.msc3916/media/download/{content_uri.media_id}",
|
||||
)
|
||||
self.pump()
|
||||
self.assertEqual(200, channel.code)
|
||||
|
||||
content_type = channel.headers.getRawHeaders("content-type")
|
||||
assert content_type is not None
|
||||
assert "multipart/mixed" in content_type[0]
|
||||
assert "boundary" in content_type[0]
|
||||
|
||||
# extract boundary
|
||||
boundary = content_type[0].split("boundary=")[1]
|
||||
# split on boundary and check that json field and expected value exist
|
||||
body = channel.result.get("body")
|
||||
assert body is not None
|
||||
stripped_bytes = body.split(b"\r\n" + b"--" + boundary.encode("utf-8"))
|
||||
found_json = any(
|
||||
b"\r\nContent-Type: application/json\r\n{}" in field
|
||||
for field in stripped_bytes
|
||||
)
|
||||
self.assertTrue(found_json)
|
||||
|
||||
# check that png file exists and matches what was uploaded
|
||||
found_file = any(SMALL_PNG in field for field in stripped_bytes)
|
||||
self.assertTrue(found_file)
|
||||
|
||||
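
The manual boundary splitting above is fine for the test, but the same multipart/mixed body can also be decomposed with the standard library's email parser. A hedged alternative sketch (function and variable names are illustrative):

```python
from email.parser import BytesParser
from email.policy import default
from typing import List, Tuple

def split_multipart(content_type_header: str, raw_body: bytes) -> List[Tuple[str, bytes]]:
    """Parse a multipart/mixed federation media response into
    (content-type, payload) pairs."""
    msg = BytesParser(policy=default).parsebytes(
        b"Content-Type: " + content_type_header.encode("utf-8") + b"\r\n\r\n" + raw_body
    )
    return [
        (part.get_content_type(), part.get_payload(decode=True))
        for part in msg.iter_parts()
    ]

# e.g. split_multipart(content_type[0], body) would yield the application/json
# part ({} for now) followed by the text/plain or image/png file part.
```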
@override_config(
|
||||
{"experimental_features": {"msc3916_authenticated_media_enabled": False}}
|
||||
)
|
||||
def test_disable_config(self) -> None:
|
||||
content = io.BytesIO(b"file_to_stream")
|
||||
content_uri = self.get_success(
|
||||
self.media_repo.create_content(
|
||||
"text/plain",
|
||||
"test_upload",
|
||||
content,
|
||||
46,
|
||||
UserID.from_string("@user_id:whatever.org"),
|
||||
)
|
||||
)
|
||||
channel = self.make_signed_federation_request(
|
||||
"GET",
|
||||
f"/_matrix/federation/unstable/org.matrix.msc3916/media/download/{content_uri.media_id}",
|
||||
)
|
||||
self.pump()
|
||||
self.assertEqual(404, channel.code)
|
||||
self.assertEqual(channel.json_body.get("errcode"), "M_UNRECOGNIZED")
|
||||
|
||||
|
||||
class FakeFileStorageProviderBackend:
|
||||
"""
|
||||
Fake storage provider stub with incompatible `fetch` signature for testing
|
||||
"""
|
||||
|
||||
def __init__(self, hs: "HomeServer", config: str):
|
||||
self.hs = hs
|
||||
self.cache_directory = hs.config.media.media_store_path
|
||||
self.base_directory = config
|
||||
|
||||
def __str__(self) -> str:
|
||||
return "FakeFileStorageProviderBackend[%s]" % (self.base_directory,)
|
||||
|
||||
async def fetch(
|
||||
self, path: str, file_info: FileInfo, media_info: Optional[LocalMedia] = None
|
||||
) -> Optional[Responder]:
|
||||
pass
|
||||
|
||||
|
||||
TEST_DIR = tempfile.mkdtemp(prefix="synapse-tests-")
|
||||
|
||||
|
||||
class FederationUnstableMediaEndpointCompatibilityTest(
|
||||
unittest.FederatingHomeserverTestCase
|
||||
):
|
||||
|
||||
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
|
||||
super().prepare(reactor, clock, hs)
|
||||
self.test_dir = TEST_DIR
|
||||
self.addCleanup(shutil.rmtree, self.test_dir)
|
||||
self.media_repo = hs.get_media_repository()
|
||||
|
||||
def default_config(self) -> JsonDict:
|
||||
config = super().default_config()
|
||||
primary_base_path = os.path.join(TEST_DIR, "primary")
|
||||
config["media_storage_providers"] = [
|
||||
{
|
||||
"module": "tests.federation.test_federation_media.FakeFileStorageProviderBackend",
|
||||
"store_local": "True",
|
||||
"store_remote": "False",
|
||||
"store_synchronous": "False",
|
||||
"config": {"directory": primary_base_path},
|
||||
}
|
||||
]
|
||||
return config
|
||||
|
||||
@override_config(
|
||||
{"experimental_features": {"msc3916_authenticated_media_enabled": True}}
|
||||
)
|
||||
def test_incompatible_storage_provider_fails_to_load_endpoint(self) -> None:
|
||||
channel = self.make_signed_federation_request(
|
||||
"GET",
|
||||
"/_matrix/federation/unstable/org.matrix.msc3916/media/download/xyz",
|
||||
)
|
||||
self.pump()
|
||||
self.assertEqual(404, channel.code)
|
||||
self.assertEqual(channel.json_body.get("errcode"), "M_UNRECOGNIZED")
|
||||
@@ -407,3 +407,24 @@ class RoomMemberMasterHandlerTestCase(HomeserverTestCase):
        self.assertFalse(
            self.get_success(self.store.did_forget(self.alice, self.room_id))
        )

    def test_deduplicate_joins(self) -> None:
        """
        Test that calling /join multiple times does not store a new state group.
        """

        self.helper.join(self.room_id, user=self.bob, tok=self.bob_token)

        sql = "SELECT COUNT(*) FROM state_groups WHERE room_id = ?"
        rows = self.get_success(
            self.store.db_pool.execute("test_deduplicate_joins", sql, self.room_id)
        )
        initial_count = rows[0][0]

        self.helper.join(self.room_id, user=self.bob, tok=self.bob_token)
        rows = self.get_success(
            self.store.db_pool.execute("test_deduplicate_joins", sql, self.room_id)
        )
        new_count = rows[0][0]

        self.assertEqual(initial_count, new_count)
tests/handlers/test_sliding_sync.py (new file, 1118 lines; diff suppressed because it is too large)
@@ -32,7 +32,7 @@ from twisted.web.resource import Resource
|
||||
from synapse.api.constants import EduTypes
|
||||
from synapse.api.errors import AuthError
|
||||
from synapse.federation.transport.server import TransportLayerServer
|
||||
from synapse.handlers.typing import TypingWriterHandler
|
||||
from synapse.handlers.typing import FORGET_TIMEOUT, TypingWriterHandler
|
||||
from synapse.http.federation.matrix_federation_agent import MatrixFederationAgent
|
||||
from synapse.server import HomeServer
|
||||
from synapse.types import JsonDict, Requester, StreamKeyType, UserID, create_requester
|
||||
@@ -501,3 +501,54 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
|
||||
}
|
||||
],
|
||||
)
|
||||
|
||||
def test_prune_typing_replication(self) -> None:
|
||||
"""Regression test for `get_all_typing_updates` breaking when we prune
|
||||
old updates
|
||||
"""
|
||||
self.room_members = [U_APPLE, U_BANANA]
|
||||
|
||||
instance_name = self.hs.get_instance_name()
|
||||
|
||||
self.get_success(
|
||||
self.handler.started_typing(
|
||||
target_user=U_APPLE,
|
||||
requester=create_requester(U_APPLE),
|
||||
room_id=ROOM_ID,
|
||||
timeout=10000,
|
||||
)
|
||||
)
|
||||
|
||||
rows, _, _ = self.get_success(
|
||||
self.handler.get_all_typing_updates(
|
||||
instance_name=instance_name,
|
||||
last_id=0,
|
||||
current_id=self.handler.get_current_token(),
|
||||
limit=100,
|
||||
)
|
||||
)
|
||||
self.assertEqual(rows, [(1, [ROOM_ID, [U_APPLE.to_string()]])])
|
||||
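        # As the expected values in this test suggest (this is an inference from
        # the assertions rather than from the handler code itself), each
        # replication row is a (stream_id, [room_id, list of currently-typing
        # user IDs]) pair, and an empty user list means typing was cleared.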
|
||||
self.reactor.advance(20000)
|
||||
|
||||
rows, _, _ = self.get_success(
|
||||
self.handler.get_all_typing_updates(
|
||||
instance_name=instance_name,
|
||||
last_id=1,
|
||||
current_id=self.handler.get_current_token(),
|
||||
limit=100,
|
||||
)
|
||||
)
|
||||
self.assertEqual(rows, [(2, [ROOM_ID, []])])
|
||||
|
||||
self.reactor.advance(FORGET_TIMEOUT)
|
||||
|
||||
rows, _, _ = self.get_success(
|
||||
self.handler.get_all_typing_updates(
|
||||
instance_name=instance_name,
|
||||
last_id=1,
|
||||
current_id=self.handler.get_current_token(),
|
||||
limit=100,
|
||||
)
|
||||
)
|
||||
self.assertEqual(rows, [])
|
||||
|
||||
@@ -1061,6 +1061,45 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
|
||||
{alice: ProfileInfo(display_name=None, avatar_url=MXC_DUMMY)},
|
||||
)
|
||||
|
||||
def test_search_punctuation(self) -> None:
|
||||
"""Test that you can search for a user that includes punctuation"""
|
||||
|
||||
searching_user = self.register_user("searcher", "password")
|
||||
searching_user_tok = self.login("searcher", "password")
|
||||
|
||||
room_id = self.helper.create_room_as(
|
||||
searching_user,
|
||||
room_version=RoomVersions.V1.identifier,
|
||||
tok=searching_user_tok,
|
||||
)
|
||||
|
||||
# We want to test searching for users of the form e.g. "user-1", with
|
||||
# various punctuation. We also test both where the prefix is numeric and
|
||||
# alphanumeric, as e.g. postgres tokenises "user-1" as "user" and "-1".
|
||||
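        # For reference, the loop below ends up registering localparts such as
        # "user-a1"/"user-a2", then "user-3"/"user-4", "user.a5", "user.7",
        # "user_a9", "user_11", and so on (derived from the prefix/char
        # combinations; the exact values here are illustrative).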
i = 1
|
||||
for char in ["-", ".", "_"]:
|
||||
for use_numeric in [False, True]:
|
||||
if use_numeric:
|
||||
prefix1 = f"{i}"
|
||||
prefix2 = f"{i+1}"
|
||||
else:
|
||||
prefix1 = f"a{i}"
|
||||
prefix2 = f"a{i+1}"
|
||||
|
||||
local_user_1 = self.register_user(f"user{char}{prefix1}", "password")
|
||||
local_user_2 = self.register_user(f"user{char}{prefix2}", "password")
|
||||
|
||||
self._add_user_to_room(room_id, RoomVersions.V1, local_user_1)
|
||||
self._add_user_to_room(room_id, RoomVersions.V1, local_user_2)
|
||||
|
||||
results = self.get_success(
|
||||
self.handler.search_users(searching_user, local_user_1, 20)
|
||||
)["results"]
|
||||
received_user_id_ordering = [result["user_id"] for result in results]
|
||||
self.assertSequenceEqual(received_user_id_ordering[:1], [local_user_1])
|
||||
|
||||
i += 2
|
||||
|
||||
|
||||
class TestUserDirSearchDisabled(unittest.HomeserverTestCase):
|
||||
servlets = [
|
||||
|
||||
@@ -18,13 +18,14 @@
|
||||
# [This file includes modifications made by New Vector Limited]
|
||||
#
|
||||
#
|
||||
import itertools
|
||||
import os
|
||||
import shutil
|
||||
import tempfile
|
||||
from binascii import unhexlify
|
||||
from io import BytesIO
|
||||
from typing import Any, BinaryIO, ClassVar, Dict, List, Optional, Tuple, Union
|
||||
from unittest.mock import Mock
|
||||
from unittest.mock import MagicMock, Mock, patch
|
||||
from urllib import parse
|
||||
|
||||
import attr
|
||||
@@ -36,21 +37,27 @@ from twisted.internet import defer
|
||||
from twisted.internet.defer import Deferred
|
||||
from twisted.python.failure import Failure
|
||||
from twisted.test.proto_helpers import MemoryReactor
|
||||
from twisted.web.http_headers import Headers
|
||||
from twisted.web.iweb import UNKNOWN_LENGTH, IResponse
|
||||
from twisted.web.resource import Resource
|
||||
|
||||
from synapse.api.errors import Codes, HttpResponseException
|
||||
from synapse.api.ratelimiting import Ratelimiter
|
||||
from synapse.events import EventBase
|
||||
from synapse.http.types import QueryParams
|
||||
from synapse.logging.context import make_deferred_yieldable
|
||||
from synapse.media._base import FileInfo, ThumbnailInfo
|
||||
from synapse.media.filepath import MediaFilePaths
|
||||
from synapse.media.media_storage import MediaStorage, ReadableFileWrapper
|
||||
from synapse.media.storage_provider import FileStorageProviderBackend
|
||||
from synapse.media.storage_provider import (
|
||||
FileStorageProviderBackend,
|
||||
StorageProviderWrapper,
|
||||
)
|
||||
from synapse.media.thumbnailer import ThumbnailProvider
|
||||
from synapse.module_api import ModuleApi
|
||||
from synapse.module_api.callbacks.spamchecker_callbacks import load_legacy_spam_checkers
|
||||
from synapse.rest import admin
|
||||
from synapse.rest.client import login
|
||||
from synapse.rest.media.thumbnail_resource import ThumbnailResource
|
||||
from synapse.rest.client import login, media
|
||||
from synapse.server import HomeServer
|
||||
from synapse.types import JsonDict, RoomAlias
|
||||
from synapse.util import Clock
|
||||
@@ -58,6 +65,7 @@ from synapse.util import Clock
|
||||
from tests import unittest
|
||||
from tests.server import FakeChannel
|
||||
from tests.test_utils import SMALL_PNG
|
||||
from tests.unittest import override_config
|
||||
from tests.utils import default_config
|
||||
|
||||
|
||||
@@ -73,7 +81,14 @@ class MediaStorageTests(unittest.HomeserverTestCase):
|
||||
|
||||
hs.config.media.media_store_path = self.primary_base_path
|
||||
|
||||
storage_providers = [FileStorageProviderBackend(hs, self.secondary_base_path)]
|
||||
storage_providers = [
|
||||
StorageProviderWrapper(
|
||||
FileStorageProviderBackend(hs, self.secondary_base_path),
|
||||
store_local=True,
|
||||
store_remote=False,
|
||||
store_synchronous=True,
|
||||
)
|
||||
]
|
||||
|
||||
self.filepaths = MediaFilePaths(self.primary_base_path)
|
||||
self.media_storage = MediaStorage(
|
||||
@@ -153,68 +168,54 @@ class _TestImage:
|
||||
is_inline: bool = True
|
||||
|
||||
|
||||
@parameterized_class(
|
||||
("test_image",),
|
||||
[
|
||||
# small png
|
||||
(
|
||||
_TestImage(
|
||||
SMALL_PNG,
|
||||
b"image/png",
|
||||
b".png",
|
||||
unhexlify(
|
||||
b"89504e470d0a1a0a0000000d4948445200000020000000200806"
|
||||
b"000000737a7af40000001a49444154789cedc101010000008220"
|
||||
b"ffaf6e484001000000ef0610200001194334ee0000000049454e"
|
||||
b"44ae426082"
|
||||
),
|
||||
unhexlify(
|
||||
b"89504e470d0a1a0a0000000d4948445200000001000000010806"
|
||||
b"0000001f15c4890000000d49444154789c636060606000000005"
|
||||
b"0001a5f645400000000049454e44ae426082"
|
||||
),
|
||||
),
|
||||
),
|
||||
# small png with transparency.
|
||||
(
|
||||
_TestImage(
|
||||
unhexlify(
|
||||
b"89504e470d0a1a0a0000000d49484452000000010000000101000"
|
||||
b"00000376ef9240000000274524e5300010194fdae0000000a4944"
|
||||
b"4154789c636800000082008177cd72b60000000049454e44ae426"
|
||||
b"082"
|
||||
),
|
||||
b"image/png",
|
||||
b".png",
|
||||
# Note that we don't check the output since it varies across
|
||||
# different versions of Pillow.
|
||||
),
|
||||
),
|
||||
# small lossless webp
|
||||
(
|
||||
_TestImage(
|
||||
unhexlify(
|
||||
b"524946461a000000574542505650384c0d0000002f0000001007"
|
||||
b"1011118888fe0700"
|
||||
),
|
||||
b"image/webp",
|
||||
b".webp",
|
||||
),
|
||||
),
|
||||
# an empty file
|
||||
(
|
||||
_TestImage(
|
||||
b"",
|
||||
b"image/gif",
|
||||
b".gif",
|
||||
expected_found=False,
|
||||
unable_to_thumbnail=True,
|
||||
),
|
||||
),
|
||||
# An SVG.
|
||||
(
|
||||
_TestImage(
|
||||
b"""<?xml version="1.0"?>
|
||||
small_png = _TestImage(
|
||||
SMALL_PNG,
|
||||
b"image/png",
|
||||
b".png",
|
||||
unhexlify(
|
||||
b"89504e470d0a1a0a0000000d4948445200000020000000200806"
|
||||
b"000000737a7af40000001a49444154789cedc101010000008220"
|
||||
b"ffaf6e484001000000ef0610200001194334ee0000000049454e"
|
||||
b"44ae426082"
|
||||
),
|
||||
unhexlify(
|
||||
b"89504e470d0a1a0a0000000d4948445200000001000000010806"
|
||||
b"0000001f15c4890000000d49444154789c636060606000000005"
|
||||
b"0001a5f645400000000049454e44ae426082"
|
||||
),
|
||||
)
|
||||
|
||||
small_png_with_transparency = _TestImage(
|
||||
unhexlify(
|
||||
b"89504e470d0a1a0a0000000d49484452000000010000000101000"
|
||||
b"00000376ef9240000000274524e5300010194fdae0000000a4944"
|
||||
b"4154789c636800000082008177cd72b60000000049454e44ae426"
|
||||
b"082"
|
||||
),
|
||||
b"image/png",
|
||||
b".png",
|
||||
# Note that we don't check the output since it varies across
|
||||
# different versions of Pillow.
|
||||
)
|
||||
|
||||
small_lossless_webp = _TestImage(
|
||||
unhexlify(
|
||||
b"524946461a000000574542505650384c0d0000002f0000001007" b"1011118888fe0700"
|
||||
),
|
||||
b"image/webp",
|
||||
b".webp",
|
||||
)
|
||||
|
||||
empty_file = _TestImage(
|
||||
b"",
|
||||
b"image/gif",
|
||||
b".gif",
|
||||
expected_found=False,
|
||||
unable_to_thumbnail=True,
|
||||
)
|
||||
|
||||
SVG = _TestImage(
|
||||
b"""<?xml version="1.0"?>
|
||||
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
|
||||
"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
|
||||
|
||||
@@ -223,19 +224,32 @@ class _TestImage:
|
||||
<circle cx="100" cy="100" r="50" stroke="black"
|
||||
stroke-width="5" fill="red" />
|
||||
</svg>""",
|
||||
b"image/svg",
|
||||
b".svg",
|
||||
expected_found=False,
|
||||
unable_to_thumbnail=True,
|
||||
is_inline=False,
|
||||
),
|
||||
),
|
||||
],
|
||||
b"image/svg",
|
||||
b".svg",
|
||||
expected_found=False,
|
||||
unable_to_thumbnail=True,
|
||||
is_inline=False,
|
||||
)
|
||||
test_images = [
|
||||
small_png,
|
||||
small_png_with_transparency,
|
||||
small_lossless_webp,
|
||||
empty_file,
|
||||
SVG,
|
||||
]
|
||||
urls = [
|
||||
"_matrix/media/r0/thumbnail",
|
||||
"_matrix/client/unstable/org.matrix.msc3916/media/thumbnail",
|
||||
]
|
||||
|
||||
|
||||
@parameterized_class(("test_image", "url"), itertools.product(test_images, urls))
|
||||
class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
servlets = [media.register_servlets]
|
||||
test_image: ClassVar[_TestImage]
|
||||
hijack_auth = True
|
||||
user_id = "@test:user"
|
||||
url: ClassVar[str]
|
||||
|
||||
def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
|
||||
self.fetches: List[
|
||||
@@ -251,9 +265,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
destination: str,
|
||||
path: str,
|
||||
output_stream: BinaryIO,
|
||||
download_ratelimiter: Ratelimiter,
|
||||
ip_address: Any,
|
||||
max_size: int,
|
||||
args: Optional[QueryParams] = None,
|
||||
retry_on_dns_fail: bool = True,
|
||||
max_size: Optional[int] = None,
|
||||
ignore_backoff: bool = False,
|
||||
follow_redirects: bool = False,
|
||||
) -> "Deferred[Tuple[int, Dict[bytes, List[bytes]]]]":
|
||||
@@ -298,6 +314,7 @@ class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
"config": {"directory": self.storage_path},
|
||||
}
|
||||
config["media_storage_providers"] = [provider_config]
|
||||
config["experimental_features"] = {"msc3916_authenticated_media_enabled": True}
|
||||
|
||||
hs = self.setup_test_homeserver(config=config, federation_http_client=client)
|
||||
|
||||
@@ -502,7 +519,7 @@ class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
params = "?width=32&height=32&method=scale"
|
||||
channel = self.make_request(
|
||||
"GET",
|
||||
f"/_matrix/media/v3/thumbnail/{self.media_id}{params}",
|
||||
f"/{self.url}/{self.media_id}{params}",
|
||||
shorthand=False,
|
||||
await_result=False,
|
||||
)
|
||||
@@ -530,7 +547,7 @@ class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
|
||||
channel = self.make_request(
|
||||
"GET",
|
||||
f"/_matrix/media/v3/thumbnail/{self.media_id}{params}",
|
||||
f"/{self.url}/{self.media_id}{params}",
|
||||
shorthand=False,
|
||||
await_result=False,
|
||||
)
|
||||
@@ -566,12 +583,11 @@ class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
params = "?width=32&height=32&method=" + method
|
||||
channel = self.make_request(
|
||||
"GET",
|
||||
f"/_matrix/media/r0/thumbnail/{self.media_id}{params}",
|
||||
f"/{self.url}/{self.media_id}{params}",
|
||||
shorthand=False,
|
||||
await_result=False,
|
||||
)
|
||||
self.pump()
|
||||
|
||||
headers = {
|
||||
b"Content-Length": [b"%d" % (len(self.test_image.data))],
|
||||
b"Content-Type": [self.test_image.content_type],
|
||||
@@ -580,7 +596,6 @@ class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
(self.test_image.data, (len(self.test_image.data), headers))
|
||||
)
|
||||
self.pump()
|
||||
|
||||
if expected_found:
|
||||
self.assertEqual(channel.code, 200)
|
||||
|
||||
@@ -603,7 +618,7 @@ class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
channel.json_body,
|
||||
{
|
||||
"errcode": "M_UNKNOWN",
|
||||
"error": "Cannot find any thumbnails for the requested media ('/_matrix/media/r0/thumbnail/example.com/12345'). This might mean the media is not a supported_media_format=(image/jpeg, image/jpg, image/webp, image/gif, image/png) or that thumbnailing failed for some other reason. (Dynamic thumbnails are disabled on this server.)",
|
||||
"error": f"Cannot find any thumbnails for the requested media ('/{self.url}/example.com/12345'). This might mean the media is not a supported_media_format=(image/jpeg, image/jpg, image/webp, image/gif, image/png) or that thumbnailing failed for some other reason. (Dynamic thumbnails are disabled on this server.)",
|
||||
},
|
||||
)
|
||||
else:
|
||||
@@ -613,7 +628,7 @@ class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
channel.json_body,
|
||||
{
|
||||
"errcode": "M_NOT_FOUND",
|
||||
"error": "Not found '/_matrix/media/r0/thumbnail/example.com/12345'",
|
||||
"error": f"Not found '/{self.url}/example.com/12345'",
|
||||
},
|
||||
)
|
||||
|
||||
@@ -625,12 +640,12 @@ class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
|
||||
content_type = self.test_image.content_type.decode()
|
||||
media_repo = self.hs.get_media_repository()
|
||||
thumbnail_resouce = ThumbnailResource(
|
||||
thumbnail_provider = ThumbnailProvider(
|
||||
self.hs, media_repo, media_repo.media_storage
|
||||
)
|
||||
|
||||
self.assertIsNotNone(
|
||||
thumbnail_resouce._select_thumbnail(
|
||||
thumbnail_provider._select_thumbnail(
|
||||
desired_width=desired_size,
|
||||
desired_height=desired_size,
|
||||
desired_method=method,
|
||||
@@ -879,3 +894,218 @@ class SpamCheckerTestCase(unittest.HomeserverTestCase):
|
||||
tok=self.tok,
|
||||
expect_code=400,
|
||||
)
|
||||
|
||||
|
||||
class RemoteDownloadLimiterTestCase(unittest.HomeserverTestCase):
|
||||
def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
|
||||
config = self.default_config()
|
||||
|
||||
self.storage_path = self.mktemp()
|
||||
self.media_store_path = self.mktemp()
|
||||
os.mkdir(self.storage_path)
|
||||
os.mkdir(self.media_store_path)
|
||||
config["media_store_path"] = self.media_store_path
|
||||
|
||||
provider_config = {
|
||||
"module": "synapse.media.storage_provider.FileStorageProviderBackend",
|
||||
"store_local": True,
|
||||
"store_synchronous": False,
|
||||
"store_remote": True,
|
||||
"config": {"directory": self.storage_path},
|
||||
}
|
||||
|
||||
config["media_storage_providers"] = [provider_config]
|
||||
|
||||
return self.setup_test_homeserver(config=config)
|
||||
|
||||
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
|
||||
self.repo = hs.get_media_repository()
|
||||
self.client = hs.get_federation_http_client()
|
||||
self.store = hs.get_datastores().main
|
||||
|
||||
def create_resource_dict(self) -> Dict[str, Resource]:
|
||||
# We need to manually set the resource tree to include media, the
|
||||
# default only does `/_matrix/client` APIs.
|
||||
return {"/_matrix/media": self.hs.get_media_repository_resource()}
|
||||
|
||||
# mock actually reading file body
|
||||
def read_body_with_max_size_30MiB(*args: Any, **kwargs: Any) -> Deferred:
|
||||
d: Deferred = defer.Deferred()
|
||||
d.callback(31457280)
|
||||
return d
|
||||
|
||||
def read_body_with_max_size_50MiB(*args: Any, **kwargs: Any) -> Deferred:
|
||||
d: Deferred = defer.Deferred()
|
||||
d.callback(52428800)
|
||||
return d
|
||||
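    # (Unit conversion for the magic numbers above, stated here as an aside:
    # 31,457,280 bytes is 30 MiB and 52,428,800 bytes is 50 MiB, matching the
    # helper names.)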
|
||||
@patch(
|
||||
"synapse.http.matrixfederationclient.read_body_with_max_size",
|
||||
read_body_with_max_size_30MiB,
|
||||
)
|
||||
def test_download_ratelimit_default(self) -> None:
|
||||
"""
|
||||
Test remote media download ratelimiting against default configuration - 500MB bucket
|
||||
and 87kb/second drain rate
|
||||
"""
|
||||
|
||||
# mock out actually sending the request, returns a 30MiB response
|
||||
async def _send_request(*args: Any, **kwargs: Any) -> IResponse:
|
||||
resp = MagicMock(spec=IResponse)
|
||||
resp.code = 200
|
||||
resp.length = 31457280
|
||||
resp.headers = Headers({"Content-Type": ["application/octet-stream"]})
|
||||
resp.phrase = b"OK"
|
||||
return resp
|
||||
|
||||
self.client._send_request = _send_request # type: ignore
|
||||
|
||||
# first request should go through
|
||||
channel = self.make_request(
|
||||
"GET",
|
||||
"/_matrix/media/v3/download/remote.org/abcdefghijklmnopqrstuvwxyz",
|
||||
shorthand=False,
|
||||
)
|
||||
assert channel.code == 200
|
||||
|
||||
# next 15 should go through
|
||||
for i in range(15):
|
||||
channel2 = self.make_request(
|
||||
"GET",
|
||||
f"/_matrix/media/v3/download/remote.org/abcdefghijklmnopqrstuvwxy{i}",
|
||||
shorthand=False,
|
||||
)
|
||||
assert channel2.code == 200
|
||||
|
||||
# 17th will hit ratelimit
|
||||
channel3 = self.make_request(
|
||||
"GET",
|
||||
"/_matrix/media/v3/download/remote.org/abcdefghijklmnopqrstuvwxyx",
|
||||
shorthand=False,
|
||||
)
|
||||
assert channel3.code == 429
|
||||
|
||||
# however, a request from a different IP will go through
|
||||
channel4 = self.make_request(
|
||||
"GET",
|
||||
"/_matrix/media/v3/download/remote.org/abcdefghijklmnopqrstuvwxyz",
|
||||
shorthand=False,
|
||||
client_ip="187.233.230.159",
|
||||
)
|
||||
assert channel4.code == 200
|
||||
|
||||
# at 87Kib/s it should take about 2 minutes for enough to drain from bucket that another
|
||||
# 30MiB download is authorized - The last download was blocked at 503,316,480.
|
||||
# The next download will be authorized when bucket hits 492,830,720
|
||||
# (524,288,000 total capacity - 31,457,280 download size) so 503,316,480 - 492,830,720 ~= 10,485,760
|
||||
# needs to drain before another download will be authorized, that will take ~=
|
||||
# 2 minutes (10,485,760/89,088/60)
|
||||
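        # Worked numbers for the comment above (derived from the stated figures,
        # as a sanity check rather than from the ratelimiter implementation):
        #   bucket capacity        = 524,288,000 bytes (500 MiB)
        #   used by 16 downloads   = 16 * 31,457,280 = 503,316,480 bytes
        #   room needed for next   = 524,288,000 - 31,457,280 = 492,830,720 bytes
        #   must drain             = 503,316,480 - 492,830,720 = 10,485,760 bytes
        #   at 89,088 bytes/second = 10,485,760 / 89,088 ~= 118 s ~= 2 minutes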
self.reactor.pump([2.0 * 60.0])
|
||||
|
||||
# enough has drained and next request goes through
|
||||
channel5 = self.make_request(
|
||||
"GET",
|
||||
"/_matrix/media/v3/download/remote.org/abcdefghijklmnopqrstuvwxyb",
|
||||
shorthand=False,
|
||||
)
|
||||
assert channel5.code == 200
|
||||
|
||||
@override_config(
|
||||
{
|
||||
"remote_media_download_per_second": "50M",
|
||||
"remote_media_download_burst_count": "50M",
|
||||
}
|
||||
)
|
||||
@patch(
|
||||
"synapse.http.matrixfederationclient.read_body_with_max_size",
|
||||
read_body_with_max_size_50MiB,
|
||||
)
|
||||
def test_download_rate_limit_config(self) -> None:
|
||||
"""
|
||||
Test that download rate limit config options are correctly picked up and applied
|
||||
"""
|
||||
|
||||
async def _send_request(*args: Any, **kwargs: Any) -> IResponse:
|
||||
resp = MagicMock(spec=IResponse)
|
||||
resp.code = 200
|
||||
resp.length = 52428800
|
||||
resp.headers = Headers({"Content-Type": ["application/octet-stream"]})
|
||||
resp.phrase = b"OK"
|
||||
return resp
|
||||
|
||||
self.client._send_request = _send_request # type: ignore
|
||||
|
||||
# first request should go through
|
||||
channel = self.make_request(
|
||||
"GET",
|
||||
"/_matrix/media/v3/download/remote.org/abcdefghijklmnopqrstuvwxyz",
|
||||
shorthand=False,
|
||||
)
|
||||
assert channel.code == 200
|
||||
|
||||
# immediate second request should fail
|
||||
channel = self.make_request(
|
||||
"GET",
|
||||
"/_matrix/media/v3/download/remote.org/abcdefghijklmnopqrstuvwxy1",
|
||||
shorthand=False,
|
||||
)
|
||||
assert channel.code == 429
|
||||
|
||||
# advance half a second
|
||||
self.reactor.pump([0.5])
|
||||
|
||||
# request still fails
|
||||
channel = self.make_request(
|
||||
"GET",
|
||||
"/_matrix/media/v3/download/remote.org/abcdefghijklmnopqrstuvwxy2",
|
||||
shorthand=False,
|
||||
)
|
||||
assert channel.code == 429
|
||||
|
||||
# advance another half second
|
||||
self.reactor.pump([0.5])
|
||||
|
||||
# enough has drained from bucket and request is successful
|
||||
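        # (With both `remote_media_download_per_second` and
        # `remote_media_download_burst_count` set to "50M", roughly one 50 MiB
        # download's worth of allowance returns per second, so after the two
        # 0.5 s advances above the bucket has room again. This reading follows
        # from the config values rather than from the ratelimiter internals.)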
channel = self.make_request(
|
||||
"GET",
|
||||
"/_matrix/media/v3/download/remote.org/abcdefghijklmnopqrstuvwxy3",
|
||||
shorthand=False,
|
||||
)
|
||||
assert channel.code == 200
|
||||
|
||||
@patch(
|
||||
"synapse.http.matrixfederationclient.read_body_with_max_size",
|
||||
read_body_with_max_size_30MiB,
|
||||
)
|
||||
def test_download_ratelimit_max_size_sub(self) -> None:
|
||||
"""
|
||||
Test that if no content-length is provided, the default max size is applied instead
|
||||
"""
|
||||
|
||||
# mock out actually sending the request
|
||||
async def _send_request(*args: Any, **kwargs: Any) -> IResponse:
|
||||
resp = MagicMock(spec=IResponse)
|
||||
resp.code = 200
|
||||
resp.length = UNKNOWN_LENGTH
|
||||
resp.headers = Headers({"Content-Type": ["application/octet-stream"]})
|
||||
resp.phrase = b"OK"
|
||||
return resp
|
||||
|
||||
self.client._send_request = _send_request # type: ignore
|
||||
|
||||
# ten requests should go through using the max size (500MB/50MB)
|
||||
for i in range(10):
|
||||
channel2 = self.make_request(
|
||||
"GET",
|
||||
f"/_matrix/media/v3/download/remote.org/abcdefghijklmnopqrstuvwxy{i}",
|
||||
shorthand=False,
|
||||
)
|
||||
assert channel2.code == 200
|
||||
|
||||
# eleventh will hit ratelimit
|
||||
channel3 = self.make_request(
|
||||
"GET",
|
||||
"/_matrix/media/v3/download/remote.org/abcdefghijklmnopqrstuvwxyx",
|
||||
shorthand=False,
|
||||
)
|
||||
assert channel3.code == 429
|
||||
|
||||
@@ -154,7 +154,10 @@ class EventsWorkerStoreTestCase(BaseWorkerStoreTestCase):
|
||||
USER_ID,
|
||||
"invite",
|
||||
event.event_id,
|
||||
event.internal_metadata.stream_ordering,
|
||||
PersistedEventPosition(
|
||||
self.hs.get_instance_name(),
|
||||
event.internal_metadata.stream_ordering,
|
||||
),
|
||||
RoomVersions.V1.identifier,
|
||||
)
|
||||
],
|
||||
|
||||
@@ -24,8 +24,8 @@ from twisted.internet.defer import ensureDeferred
|
||||
from twisted.test.proto_helpers import MemoryReactor
|
||||
|
||||
from synapse.api.errors import NotFoundError
|
||||
from synapse.rest import admin, devices, room, sync
|
||||
from synapse.rest.client import account, keys, login, register
|
||||
from synapse.rest import admin, devices, sync
|
||||
from synapse.rest.client import keys, login, register
|
||||
from synapse.server import HomeServer
|
||||
from synapse.types import JsonDict, UserID, create_requester
|
||||
from synapse.util import Clock
|
||||
@@ -33,146 +33,6 @@ from synapse.util import Clock
|
||||
from tests import unittest
|
||||
|
||||
|
||||
class DeviceListsTestCase(unittest.HomeserverTestCase):
|
||||
"""Tests regarding device list changes."""
|
||||
|
||||
servlets = [
|
||||
admin.register_servlets_for_client_rest_resource,
|
||||
login.register_servlets,
|
||||
register.register_servlets,
|
||||
account.register_servlets,
|
||||
room.register_servlets,
|
||||
sync.register_servlets,
|
||||
devices.register_servlets,
|
||||
]
|
||||
|
||||
def test_receiving_local_device_list_changes(self) -> None:
|
||||
"""Tests that a local users that share a room receive each other's device list
|
||||
changes.
|
||||
"""
|
||||
# Register two users
|
||||
test_device_id = "TESTDEVICE"
|
||||
alice_user_id = self.register_user("alice", "correcthorse")
|
||||
alice_access_token = self.login(
|
||||
alice_user_id, "correcthorse", device_id=test_device_id
|
||||
)
|
||||
|
||||
bob_user_id = self.register_user("bob", "ponyponypony")
|
||||
bob_access_token = self.login(bob_user_id, "ponyponypony")
|
||||
|
||||
# Create a room for them to coexist peacefully in
|
||||
new_room_id = self.helper.create_room_as(
|
||||
alice_user_id, is_public=True, tok=alice_access_token
|
||||
)
|
||||
self.assertIsNotNone(new_room_id)
|
||||
|
||||
# Have Bob join the room
|
||||
self.helper.invite(
|
||||
new_room_id, alice_user_id, bob_user_id, tok=alice_access_token
|
||||
)
|
||||
self.helper.join(new_room_id, bob_user_id, tok=bob_access_token)
|
||||
|
||||
# Now have Bob initiate an initial sync (in order to get a since token)
|
||||
channel = self.make_request(
|
||||
"GET",
|
||||
"/sync",
|
||||
access_token=bob_access_token,
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.json_body)
|
||||
next_batch_token = channel.json_body["next_batch"]
|
||||
|
||||
# ...and then an incremental sync. This should block until the sync stream is woken up,
|
||||
# which we hope will happen as a result of Alice updating their device list.
|
||||
bob_sync_channel = self.make_request(
|
||||
"GET",
|
||||
f"/sync?since={next_batch_token}&timeout=30000",
|
||||
access_token=bob_access_token,
|
||||
# Start the request, then continue on.
|
||||
await_result=False,
|
||||
)
|
||||
|
||||
# Have alice update their device list
|
||||
channel = self.make_request(
|
||||
"PUT",
|
||||
f"/devices/{test_device_id}",
|
||||
{
|
||||
"display_name": "New Device Name",
|
||||
},
|
||||
access_token=alice_access_token,
|
||||
)
|
||||
self.assertEqual(channel.code, HTTPStatus.OK, channel.json_body)
|
||||
|
||||
# Check that bob's incremental sync contains the updated device list.
|
||||
# If not, the client would only receive the device list update on the
|
||||
# *next* sync.
|
||||
bob_sync_channel.await_result()
|
||||
self.assertEqual(bob_sync_channel.code, 200, bob_sync_channel.json_body)
|
||||
|
||||
changed_device_lists = bob_sync_channel.json_body.get("device_lists", {}).get(
|
||||
"changed", []
|
||||
)
|
||||
self.assertIn(alice_user_id, changed_device_lists, bob_sync_channel.json_body)
|
||||
|
||||
def test_not_receiving_local_device_list_changes(self) -> None:
|
||||
"""Tests a local users DO NOT receive device updates from each other if they do not
|
||||
share a room.
|
||||
"""
|
||||
# Register two users
|
||||
test_device_id = "TESTDEVICE"
|
||||
alice_user_id = self.register_user("alice", "correcthorse")
|
||||
alice_access_token = self.login(
|
||||
alice_user_id, "correcthorse", device_id=test_device_id
|
||||
)
|
||||
|
||||
bob_user_id = self.register_user("bob", "ponyponypony")
|
||||
bob_access_token = self.login(bob_user_id, "ponyponypony")
|
||||
|
||||
# These users do not share a room. They are lonely.
|
||||
|
||||
# Have Bob initiate an initial sync (in order to get a since token)
|
||||
channel = self.make_request(
|
||||
"GET",
|
||||
"/sync",
|
||||
access_token=bob_access_token,
|
||||
)
|
||||
self.assertEqual(channel.code, HTTPStatus.OK, channel.json_body)
|
||||
next_batch_token = channel.json_body["next_batch"]
|
||||
|
||||
# ...and then an incremental sync. This should block until the sync stream is woken up,
|
||||
# which we hope will happen as a result of Alice updating their device list.
|
||||
bob_sync_channel = self.make_request(
|
||||
"GET",
|
||||
f"/sync?since={next_batch_token}&timeout=1000",
|
||||
access_token=bob_access_token,
|
||||
# Start the request, then continue on.
|
||||
await_result=False,
|
||||
)
|
||||
|
||||
# Have alice update their device list
|
||||
channel = self.make_request(
|
||||
"PUT",
|
||||
f"/devices/{test_device_id}",
|
||||
{
|
||||
"display_name": "New Device Name",
|
||||
},
|
||||
access_token=alice_access_token,
|
||||
)
|
||||
self.assertEqual(channel.code, HTTPStatus.OK, channel.json_body)
|
||||
|
||||
# Check that bob's incremental sync does not contain the updated device list.
|
||||
bob_sync_channel.await_result()
|
||||
self.assertEqual(
|
||||
bob_sync_channel.code, HTTPStatus.OK, bob_sync_channel.json_body
|
||||
)
|
||||
|
||||
changed_device_lists = bob_sync_channel.json_body.get("device_lists", {}).get(
|
||||
"changed", []
|
||||
)
|
||||
self.assertNotIn(
|
||||
alice_user_id, changed_device_lists, bob_sync_channel.json_body
|
||||
)
|
||||
|
||||
|
||||
class DevicesTestCase(unittest.HomeserverTestCase):
|
||||
servlets = [
|
||||
admin.register_servlets,
|
||||
|
||||
tests/rest/client/test_media.py (new file, 1609 lines; diff suppressed because it is too large)
@@ -18,15 +18,39 @@
|
||||
# [This file includes modifications made by New Vector Limited]
|
||||
#
|
||||
#
|
||||
from parameterized import parameterized_class
|
||||
|
||||
from synapse.api.constants import EduTypes
|
||||
from synapse.rest import admin
|
||||
from synapse.rest.client import login, sendtodevice, sync
|
||||
from synapse.types import JsonDict
|
||||
|
||||
from tests.unittest import HomeserverTestCase, override_config
|
||||
|
||||
|
||||
@parameterized_class(
|
||||
("sync_endpoint", "experimental_features"),
|
||||
[
|
||||
("/sync", {}),
|
||||
(
|
||||
"/_matrix/client/unstable/org.matrix.msc3575/sync/e2ee",
|
||||
# Enable sliding sync
|
||||
{"msc3575_enabled": True},
|
||||
),
|
||||
],
|
||||
)
|
||||
class SendToDeviceTestCase(HomeserverTestCase):
|
||||
"""
|
||||
Test that `/sendToDevice` delivers messages to people receiving them over `/sync`.
|
||||
|
||||
Attributes:
|
||||
sync_endpoint: The endpoint under test to use for syncing.
|
||||
experimental_features: The experimental features homeserver config to use.
|
||||
"""
|
||||
|
||||
sync_endpoint: str
|
||||
experimental_features: JsonDict
|
||||
|
||||
servlets = [
|
||||
admin.register_servlets,
|
||||
login.register_servlets,
|
||||
@@ -34,6 +58,11 @@ class SendToDeviceTestCase(HomeserverTestCase):
|
||||
sync.register_servlets,
|
||||
]
|
||||
|
||||
def default_config(self) -> JsonDict:
|
||||
config = super().default_config()
|
||||
config["experimental_features"] = self.experimental_features
|
||||
return config
|
||||
|
||||
def test_user_to_user(self) -> None:
|
||||
"""A to-device message from one user to another should get delivered"""
|
||||
|
||||
@@ -54,7 +83,7 @@ class SendToDeviceTestCase(HomeserverTestCase):
|
||||
self.assertEqual(chan.code, 200, chan.result)
|
||||
|
||||
# check it appears
|
||||
channel = self.make_request("GET", "/sync", access_token=user2_tok)
|
||||
channel = self.make_request("GET", self.sync_endpoint, access_token=user2_tok)
|
||||
self.assertEqual(channel.code, 200, channel.result)
|
||||
expected_result = {
|
||||
"events": [
|
||||
@@ -67,15 +96,19 @@ class SendToDeviceTestCase(HomeserverTestCase):
|
||||
}
|
||||
self.assertEqual(channel.json_body["to_device"], expected_result)
|
||||
|
||||
# it should re-appear if we do another sync
|
||||
channel = self.make_request("GET", "/sync", access_token=user2_tok)
|
||||
# it should re-appear if we do another sync because the to-device message is not
|
||||
# deleted until we acknowledge it by sending a `?since=...` parameter in the
|
||||
# next sync request corresponding to the `next_batch` value from the response.
|
||||
channel = self.make_request("GET", self.sync_endpoint, access_token=user2_tok)
|
||||
self.assertEqual(channel.code, 200, channel.result)
|
||||
self.assertEqual(channel.json_body["to_device"], expected_result)
|
||||
|
||||
# it should *not* appear if we do an incremental sync
|
||||
sync_token = channel.json_body["next_batch"]
|
||||
channel = self.make_request(
|
||||
"GET", f"/sync?since={sync_token}", access_token=user2_tok
|
||||
"GET",
|
||||
f"{self.sync_endpoint}?since={sync_token}",
|
||||
access_token=user2_tok,
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.result)
|
||||
self.assertEqual(channel.json_body.get("to_device", {}).get("events", []), [])
|
||||
@@ -99,15 +132,19 @@ class SendToDeviceTestCase(HomeserverTestCase):
|
||||
)
|
||||
self.assertEqual(chan.code, 200, chan.result)
|
||||
|
||||
# now sync: we should get two of the three
|
||||
channel = self.make_request("GET", "/sync", access_token=user2_tok)
|
||||
# now sync: we should get two of the three (because burst_count=2)
|
||||
channel = self.make_request("GET", self.sync_endpoint, access_token=user2_tok)
|
||||
self.assertEqual(channel.code, 200, channel.result)
|
||||
msgs = channel.json_body["to_device"]["events"]
|
||||
self.assertEqual(len(msgs), 2)
|
||||
for i in range(2):
|
||||
self.assertEqual(
|
||||
msgs[i],
|
||||
{"sender": user1, "type": "m.room_key_request", "content": {"idx": i}},
|
||||
{
|
||||
"sender": user1,
|
||||
"type": "m.room_key_request",
|
||||
"content": {"idx": i},
|
||||
},
|
||||
)
|
||||
sync_token = channel.json_body["next_batch"]
|
||||
|
||||
@@ -125,7 +162,9 @@ class SendToDeviceTestCase(HomeserverTestCase):
|
||||
|
||||
# ... which should arrive
|
||||
channel = self.make_request(
|
||||
"GET", f"/sync?since={sync_token}", access_token=user2_tok
|
||||
"GET",
|
||||
f"{self.sync_endpoint}?since={sync_token}",
|
||||
access_token=user2_tok,
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.result)
|
||||
msgs = channel.json_body["to_device"]["events"]
|
||||
@@ -159,7 +198,7 @@ class SendToDeviceTestCase(HomeserverTestCase):
|
||||
)
|
||||
|
||||
# now sync: we should get two of the three
|
||||
channel = self.make_request("GET", "/sync", access_token=user2_tok)
|
||||
channel = self.make_request("GET", self.sync_endpoint, access_token=user2_tok)
|
||||
self.assertEqual(channel.code, 200, channel.result)
|
||||
msgs = channel.json_body["to_device"]["events"]
|
||||
self.assertEqual(len(msgs), 2)
|
||||
@@ -193,7 +232,9 @@ class SendToDeviceTestCase(HomeserverTestCase):
|
||||
|
||||
# ... which should arrive
|
||||
channel = self.make_request(
|
||||
"GET", f"/sync?since={sync_token}", access_token=user2_tok
|
||||
"GET",
|
||||
f"{self.sync_endpoint}?since={sync_token}",
|
||||
access_token=user2_tok,
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.result)
|
||||
msgs = channel.json_body["to_device"]["events"]
|
||||
@@ -217,7 +258,7 @@ class SendToDeviceTestCase(HomeserverTestCase):
|
||||
user2_tok = self.login("u2", "pass", "d2")
|
||||
|
||||
# Do an initial sync
|
||||
channel = self.make_request("GET", "/sync", access_token=user2_tok)
|
||||
channel = self.make_request("GET", self.sync_endpoint, access_token=user2_tok)
|
||||
self.assertEqual(channel.code, 200, channel.result)
|
||||
sync_token = channel.json_body["next_batch"]
|
||||
|
||||
@@ -233,7 +274,9 @@ class SendToDeviceTestCase(HomeserverTestCase):
|
||||
self.assertEqual(chan.code, 200, chan.result)
|
||||
|
||||
channel = self.make_request(
|
||||
"GET", f"/sync?since={sync_token}&timeout=300000", access_token=user2_tok
|
||||
"GET",
|
||||
f"{self.sync_endpoint}?since={sync_token}&timeout=300000",
|
||||
access_token=user2_tok,
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.result)
|
||||
messages = channel.json_body.get("to_device", {}).get("events", [])
|
||||
@@ -241,7 +284,9 @@ class SendToDeviceTestCase(HomeserverTestCase):
|
||||
sync_token = channel.json_body["next_batch"]
|
||||
|
||||
channel = self.make_request(
|
||||
"GET", f"/sync?since={sync_token}&timeout=300000", access_token=user2_tok
|
||||
"GET",
|
||||
f"{self.sync_endpoint}?since={sync_token}&timeout=300000",
|
||||
access_token=user2_tok,
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.result)
|
||||
messages = channel.json_body.get("to_device", {}).get("events", [])
|
||||
|
||||
@@ -21,7 +21,7 @@
|
||||
import json
|
||||
from typing import List
|
||||
|
||||
from parameterized import parameterized
|
||||
from parameterized import parameterized, parameterized_class
|
||||
|
||||
from twisted.test.proto_helpers import MemoryReactor
|
||||
|
||||
@@ -34,7 +34,7 @@ from synapse.api.constants import (
|
||||
)
|
||||
from synapse.rest.client import devices, knock, login, read_marker, receipts, room, sync
|
||||
from synapse.server import HomeServer
|
||||
from synapse.types import JsonDict
|
||||
from synapse.types import JsonDict, RoomStreamToken, StreamKeyType
|
||||
from synapse.util import Clock
|
||||
|
||||
from tests import unittest
|
||||
@@ -688,24 +688,180 @@ class SyncCacheTestCase(unittest.HomeserverTestCase):
|
||||
self.assertEqual(channel.code, 200, channel.json_body)
|
||||
|
||||
|
||||
@parameterized_class(
|
||||
("sync_endpoint", "experimental_features"),
|
||||
[
|
||||
("/sync", {}),
|
||||
(
|
||||
"/_matrix/client/unstable/org.matrix.msc3575/sync/e2ee",
|
||||
# Enable sliding sync
|
||||
{"msc3575_enabled": True},
|
||||
),
|
||||
],
|
||||
)
|
||||
class DeviceListSyncTestCase(unittest.HomeserverTestCase):
|
||||
"""
|
||||
Tests regarding device list (`device_lists`) changes.
|
||||
|
||||
Attributes:
|
||||
sync_endpoint: The endpoint under test to use for syncing.
|
||||
experimental_features: The experimental features homeserver config to use.
|
||||
"""
|
||||
|
||||
sync_endpoint: str
|
||||
experimental_features: JsonDict
|
||||
|
||||
servlets = [
|
||||
synapse.rest.admin.register_servlets,
|
||||
login.register_servlets,
|
||||
room.register_servlets,
|
||||
sync.register_servlets,
|
||||
devices.register_servlets,
|
||||
]
|
||||
|
||||
def default_config(self) -> JsonDict:
|
||||
config = super().default_config()
|
||||
config["experimental_features"] = self.experimental_features
|
||||
return config
|
||||
|
||||
def test_receiving_local_device_list_changes(self) -> None:
|
||||
"""Tests that a local users that share a room receive each other's device list
|
||||
changes.
|
||||
"""
|
||||
# Register two users
|
||||
test_device_id = "TESTDEVICE"
|
||||
alice_user_id = self.register_user("alice", "correcthorse")
|
||||
alice_access_token = self.login(
|
||||
alice_user_id, "correcthorse", device_id=test_device_id
|
||||
)
|
||||
|
||||
bob_user_id = self.register_user("bob", "ponyponypony")
|
||||
bob_access_token = self.login(bob_user_id, "ponyponypony")
|
||||
|
||||
# Create a room for them to coexist peacefully in
|
||||
new_room_id = self.helper.create_room_as(
|
||||
alice_user_id, is_public=True, tok=alice_access_token
|
||||
)
|
||||
self.assertIsNotNone(new_room_id)
|
||||
|
||||
# Have Bob join the room
|
||||
self.helper.invite(
|
||||
new_room_id, alice_user_id, bob_user_id, tok=alice_access_token
|
||||
)
|
||||
self.helper.join(new_room_id, bob_user_id, tok=bob_access_token)
|
||||
|
||||
# Now have Bob initiate an initial sync (in order to get a since token)
|
||||
channel = self.make_request(
|
||||
"GET",
|
||||
self.sync_endpoint,
|
||||
access_token=bob_access_token,
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.json_body)
|
||||
next_batch_token = channel.json_body["next_batch"]
|
||||
|
||||
# ...and then an incremental sync. This should block until the sync stream is woken up,
|
||||
# which we hope will happen as a result of Alice updating their device list.
|
||||
bob_sync_channel = self.make_request(
|
||||
"GET",
|
||||
f"{self.sync_endpoint}?since={next_batch_token}&timeout=30000",
|
||||
access_token=bob_access_token,
|
||||
# Start the request, then continue on.
|
||||
await_result=False,
|
||||
)
|
||||
|
||||
# Have alice update their device list
|
||||
channel = self.make_request(
|
||||
"PUT",
|
||||
f"/devices/{test_device_id}",
|
||||
{
|
||||
"display_name": "New Device Name",
|
||||
},
|
||||
access_token=alice_access_token,
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.json_body)
|
||||
|
||||
# Check that bob's incremental sync contains the updated device list.
|
||||
# If not, the client would only receive the device list update on the
|
||||
# *next* sync.
|
||||
bob_sync_channel.await_result()
|
||||
self.assertEqual(bob_sync_channel.code, 200, bob_sync_channel.json_body)
|
||||
|
||||
changed_device_lists = bob_sync_channel.json_body.get("device_lists", {}).get(
|
||||
"changed", []
|
||||
)
|
||||
self.assertIn(alice_user_id, changed_device_lists, bob_sync_channel.json_body)
|
||||
|
||||
def test_not_receiving_local_device_list_changes(self) -> None:
|
||||
"""Tests a local users DO NOT receive device updates from each other if they do not
|
||||
share a room.
|
||||
"""
|
||||
# Register two users
|
||||
test_device_id = "TESTDEVICE"
|
||||
alice_user_id = self.register_user("alice", "correcthorse")
|
||||
alice_access_token = self.login(
|
||||
alice_user_id, "correcthorse", device_id=test_device_id
|
||||
)
|
||||
|
||||
bob_user_id = self.register_user("bob", "ponyponypony")
|
||||
bob_access_token = self.login(bob_user_id, "ponyponypony")
|
||||
|
||||
# These users do not share a room. They are lonely.
|
||||
|
||||
# Have Bob initiate an initial sync (in order to get a since token)
|
||||
channel = self.make_request(
|
||||
"GET",
|
||||
self.sync_endpoint,
|
||||
access_token=bob_access_token,
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.json_body)
|
||||
next_batch_token = channel.json_body["next_batch"]
|
||||
|
||||
# ...and then an incremental sync. This should block until the sync stream is woken up,
|
||||
# which we hope will happen as a result of Alice updating their device list.
|
||||
bob_sync_channel = self.make_request(
|
||||
"GET",
|
||||
f"{self.sync_endpoint}?since={next_batch_token}&timeout=1000",
|
||||
access_token=bob_access_token,
|
||||
# Start the request, then continue on.
|
||||
await_result=False,
|
||||
)
|
||||
|
||||
# Have alice update their device list
|
||||
channel = self.make_request(
|
||||
"PUT",
|
||||
f"/devices/{test_device_id}",
|
||||
{
|
||||
"display_name": "New Device Name",
|
||||
},
|
||||
access_token=alice_access_token,
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.json_body)
|
||||
|
||||
# Check that bob's incremental sync does not contain the updated device list.
|
||||
bob_sync_channel.await_result()
|
||||
self.assertEqual(bob_sync_channel.code, 200, bob_sync_channel.json_body)
|
||||
|
||||
changed_device_lists = bob_sync_channel.json_body.get("device_lists", {}).get(
|
||||
"changed", []
|
||||
)
|
||||
self.assertNotIn(
|
||||
alice_user_id, changed_device_lists, bob_sync_channel.json_body
|
||||
)
|
||||
|
||||
def test_user_with_no_rooms_receives_self_device_list_updates(self) -> None:
|
||||
"""Tests that a user with no rooms still receives their own device list updates"""
|
||||
device_id = "TESTDEVICE"
|
||||
test_device_id = "TESTDEVICE"
|
||||
|
||||
# Register a user and login, creating a device
|
||||
self.user_id = self.register_user("kermit", "monkey")
|
||||
self.tok = self.login("kermit", "monkey", device_id=device_id)
|
||||
alice_user_id = self.register_user("alice", "correcthorse")
|
||||
alice_access_token = self.login(
|
||||
alice_user_id, "correcthorse", device_id=test_device_id
|
||||
)
|
||||
|
||||
# Request an initial sync
|
||||
channel = self.make_request("GET", "/sync", access_token=self.tok)
|
||||
channel = self.make_request(
|
||||
"GET", self.sync_endpoint, access_token=alice_access_token
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.json_body)
|
||||
next_batch = channel.json_body["next_batch"]
|
||||
|
||||
@@ -713,19 +869,19 @@ class DeviceListSyncTestCase(unittest.HomeserverTestCase):
|
||||
# It won't return until something has happened
|
||||
incremental_sync_channel = self.make_request(
|
||||
"GET",
|
||||
f"/sync?since={next_batch}&timeout=30000",
|
||||
access_token=self.tok,
|
||||
f"{self.sync_endpoint}?since={next_batch}&timeout=30000",
|
||||
access_token=alice_access_token,
|
||||
await_result=False,
|
||||
)
|
||||
|
||||
# Change our device's display name
|
||||
channel = self.make_request(
|
||||
"PUT",
|
||||
f"devices/{device_id}",
|
||||
f"devices/{test_device_id}",
|
||||
{
|
||||
"display_name": "freeze ray",
|
||||
},
|
||||
access_token=self.tok,
|
||||
access_token=alice_access_token,
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.json_body)
|
||||
|
||||
@@ -739,7 +895,230 @@ class DeviceListSyncTestCase(unittest.HomeserverTestCase):
|
||||
).get("changed", [])
|
||||
|
||||
self.assertIn(
|
||||
self.user_id, device_list_changes, incremental_sync_channel.json_body
|
||||
alice_user_id, device_list_changes, incremental_sync_channel.json_body
|
||||
)
|
||||
|
||||
|
||||
@parameterized_class(
|
||||
("sync_endpoint", "experimental_features"),
|
||||
[
|
||||
("/sync", {}),
|
||||
(
|
||||
"/_matrix/client/unstable/org.matrix.msc3575/sync/e2ee",
|
||||
# Enable sliding sync
|
||||
{"msc3575_enabled": True},
|
||||
),
|
||||
],
|
||||
)
|
||||
class DeviceOneTimeKeysSyncTestCase(unittest.HomeserverTestCase):
|
||||
"""
|
||||
Tests regarding device one time keys (`device_one_time_keys_count`) changes.
|
||||
|
||||
Attributes:
|
||||
sync_endpoint: The endpoint under test to use for syncing.
|
||||
experimental_features: The experimental features homeserver config to use.
|
||||
"""
|
||||
|
||||
sync_endpoint: str
|
||||
experimental_features: JsonDict
|
||||
|
||||
servlets = [
|
||||
synapse.rest.admin.register_servlets,
|
||||
login.register_servlets,
|
||||
sync.register_servlets,
|
||||
devices.register_servlets,
|
||||
]
|
||||
|
||||
def default_config(self) -> JsonDict:
|
||||
config = super().default_config()
|
||||
config["experimental_features"] = self.experimental_features
|
||||
return config
|
||||
|
||||
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
|
||||
self.e2e_keys_handler = hs.get_e2e_keys_handler()
|
||||
|
||||
def test_no_device_one_time_keys(self) -> None:
|
||||
"""
|
||||
Tests that when no one-time keys are set, the response still has the default `signed_curve25519` in
|
||||
`device_one_time_keys_count`
|
||||
"""
|
||||
test_device_id = "TESTDEVICE"
|
||||
|
||||
alice_user_id = self.register_user("alice", "correcthorse")
|
||||
alice_access_token = self.login(
|
||||
alice_user_id, "correcthorse", device_id=test_device_id
|
||||
)
|
||||
|
||||
# Request an initial sync
|
||||
channel = self.make_request(
|
||||
"GET", self.sync_endpoint, access_token=alice_access_token
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.json_body)
|
||||
|
||||
# Check for those one time key counts
|
||||
self.assertDictEqual(
|
||||
channel.json_body["device_one_time_keys_count"],
|
||||
# Note that "signed_curve25519" is always returned in key count responses
|
||||
# regardless of whether we uploaded any keys for it. This is necessary until
|
||||
# https://github.com/matrix-org/matrix-doc/issues/3298 is fixed.
|
||||
{"signed_curve25519": 0},
|
||||
channel.json_body["device_one_time_keys_count"],
|
||||
)
|
||||
|
||||
def test_returns_device_one_time_keys(self) -> None:
|
||||
"""
|
||||
Tests that one time keys for the device/user are counted correctly in the `/sync`
|
||||
response
|
||||
"""
|
||||
test_device_id = "TESTDEVICE"
|
||||
|
||||
alice_user_id = self.register_user("alice", "correcthorse")
|
||||
alice_access_token = self.login(
|
||||
alice_user_id, "correcthorse", device_id=test_device_id
|
||||
)
|
||||
|
||||
# Upload one time keys for the user/device
|
||||
keys: JsonDict = {
|
||||
"alg1:k1": "key1",
|
||||
"alg2:k2": {"key": "key2", "signatures": {"k1": "sig1"}},
|
||||
"alg2:k3": {"key": "key3"},
|
||||
}
|
||||
res = self.get_success(
|
||||
self.e2e_keys_handler.upload_keys_for_user(
|
||||
alice_user_id, test_device_id, {"one_time_keys": keys}
|
||||
)
|
||||
)
|
||||
# Note that "signed_curve25519" is always returned in key count responses
|
||||
# regardless of whether we uploaded any keys for it. This is necessary until
|
||||
# https://github.com/matrix-org/matrix-doc/issues/3298 is fixed.
|
||||
self.assertDictEqual(
|
||||
res,
|
||||
{"one_time_key_counts": {"alg1": 1, "alg2": 2, "signed_curve25519": 0}},
|
||||
)
|
||||
|
||||
# Request an initial sync
|
||||
channel = self.make_request(
|
||||
"GET", self.sync_endpoint, access_token=alice_access_token
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.json_body)
|
||||
|
||||
# Check for those one time key counts
|
||||
self.assertDictEqual(
|
||||
channel.json_body["device_one_time_keys_count"],
|
||||
{"alg1": 1, "alg2": 2, "signed_curve25519": 0},
|
||||
channel.json_body["device_one_time_keys_count"],
|
||||
)
|
||||
|
||||
|
||||
@parameterized_class(
|
||||
("sync_endpoint", "experimental_features"),
|
||||
[
|
||||
("/sync", {}),
|
||||
(
|
||||
"/_matrix/client/unstable/org.matrix.msc3575/sync/e2ee",
|
||||
# Enable sliding sync
|
||||
{"msc3575_enabled": True},
|
||||
),
|
||||
],
|
||||
)
|
||||
class DeviceUnusedFallbackKeySyncTestCase(unittest.HomeserverTestCase):
|
||||
"""
|
||||
Tests regarding device one time keys (`device_unused_fallback_key_types`) changes.
|
||||
|
||||
Attributes:
|
||||
sync_endpoint: The endpoint under test to use for syncing.
|
||||
experimental_features: The experimental features homeserver config to use.
|
||||
"""
|
||||
|
||||
sync_endpoint: str
|
||||
experimental_features: JsonDict
|
||||
|
||||
servlets = [
|
||||
synapse.rest.admin.register_servlets,
|
||||
login.register_servlets,
|
||||
sync.register_servlets,
|
||||
devices.register_servlets,
|
||||
]
|
||||
|
||||
def default_config(self) -> JsonDict:
|
||||
config = super().default_config()
|
||||
config["experimental_features"] = self.experimental_features
|
||||
return config
|
||||
|
||||
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
|
||||
self.store = self.hs.get_datastores().main
|
||||
self.e2e_keys_handler = hs.get_e2e_keys_handler()
|
||||
|
||||
def test_no_device_unused_fallback_key(self) -> None:
|
||||
"""
|
||||
Test when no unused fallback key is set, it just returns an empty list. The MSC
|
||||
says "The device_unused_fallback_key_types parameter must be present if the
|
||||
server supports fallback keys.",
|
||||
https://github.com/matrix-org/matrix-spec-proposals/blob/54255851f642f84a4f1aaf7bc063eebe3d76752b/proposals/2732-olm-fallback-keys.md
|
||||
"""
|
||||
test_device_id = "TESTDEVICE"
|
||||
|
||||
alice_user_id = self.register_user("alice", "correcthorse")
|
||||
alice_access_token = self.login(
|
||||
alice_user_id, "correcthorse", device_id=test_device_id
|
||||
)
|
||||
|
||||
# Request an initial sync
|
||||
channel = self.make_request(
|
||||
"GET", self.sync_endpoint, access_token=alice_access_token
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.json_body)
|
||||
|
||||
# Check for those one time key counts
|
||||
self.assertListEqual(
|
||||
channel.json_body["device_unused_fallback_key_types"],
|
||||
[],
|
||||
channel.json_body["device_unused_fallback_key_types"],
|
||||
)
|
||||
|
||||
def test_returns_device_one_time_keys(self) -> None:
|
||||
"""
|
||||
Tests that device unused fallback key type is returned correctly in the `/sync`
|
||||
"""
|
||||
test_device_id = "TESTDEVICE"
|
||||
|
||||
alice_user_id = self.register_user("alice", "correcthorse")
|
||||
alice_access_token = self.login(
|
||||
alice_user_id, "correcthorse", device_id=test_device_id
|
||||
)
|
||||
|
||||
# We shouldn't have any unused fallback keys yet
|
||||
res = self.get_success(
|
||||
self.store.get_e2e_unused_fallback_key_types(alice_user_id, test_device_id)
|
||||
)
|
||||
self.assertEqual(res, [])
|
||||
|
||||
# Upload a fallback key for the user/device
|
||||
fallback_key = {"alg1:k1": "fallback_key1"}
|
||||
self.get_success(
|
||||
self.e2e_keys_handler.upload_keys_for_user(
|
||||
alice_user_id,
|
||||
test_device_id,
|
||||
{"fallback_keys": fallback_key},
|
||||
)
|
||||
)
|
||||
# We should now have an unused alg1 key
|
||||
fallback_res = self.get_success(
|
||||
self.store.get_e2e_unused_fallback_key_types(alice_user_id, test_device_id)
|
||||
)
|
||||
self.assertEqual(fallback_res, ["alg1"], fallback_res)
|
||||
|
||||
# Request an initial sync
|
||||
channel = self.make_request(
|
||||
"GET", self.sync_endpoint, access_token=alice_access_token
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.json_body)
|
||||
|
||||
# Check for the unused fallback key types
|
||||
self.assertListEqual(
|
||||
channel.json_body["device_unused_fallback_key_types"],
|
||||
["alg1"],
|
||||
channel.json_body["device_unused_fallback_key_types"],
|
||||
)
|
||||
|
||||
|
||||
@@ -825,3 +1204,135 @@ class ExcludeRoomTestCase(unittest.HomeserverTestCase):
|
||||
|
||||
self.assertNotIn(self.excluded_room_id, channel.json_body["rooms"]["join"])
|
||||
self.assertIn(self.included_room_id, channel.json_body["rooms"]["join"])
|
||||
|
||||
|
||||
class SlidingSyncTestCase(unittest.HomeserverTestCase):
|
||||
"""
|
||||
Tests regarding MSC3575 Sliding Sync `/sync` endpoint.
|
||||
"""
|
||||
|
||||
servlets = [
|
||||
synapse.rest.admin.register_servlets,
|
||||
login.register_servlets,
|
||||
room.register_servlets,
|
||||
sync.register_servlets,
|
||||
devices.register_servlets,
|
||||
]
|
||||
|
||||
def default_config(self) -> JsonDict:
|
||||
config = super().default_config()
|
||||
# Enable sliding sync
|
||||
config["experimental_features"] = {"msc3575_enabled": True}
|
||||
return config
|
||||
|
||||
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
|
||||
self.sync_endpoint = "/_matrix/client/unstable/org.matrix.msc3575/sync"
|
||||
self.store = hs.get_datastores().main
|
||||
self.event_sources = hs.get_event_sources()
|
||||
|
||||
def test_sync_list(self) -> None:
|
||||
"""
|
||||
Test that room IDs show up in the Sliding Sync lists
|
||||
"""
|
||||
alice_user_id = self.register_user("alice", "correcthorse")
|
||||
alice_access_token = self.login(alice_user_id, "correcthorse")
|
||||
|
||||
room_id = self.helper.create_room_as(
|
||||
alice_user_id, tok=alice_access_token, is_public=True
|
||||
)
|
||||
|
||||
# Make the Sliding Sync request
|
||||
channel = self.make_request(
|
||||
"POST",
|
||||
self.sync_endpoint,
|
||||
{
|
||||
"lists": {
|
||||
"foo-list": {
|
||||
"ranges": [[0, 99]],
|
||||
"sort": ["by_notification_level", "by_recency", "by_name"],
|
||||
"required_state": [
|
||||
["m.room.join_rules", ""],
|
||||
["m.room.history_visibility", ""],
|
||||
["m.space.child", "*"],
|
||||
],
|
||||
"timeline_limit": 1,
|
||||
}
|
||||
}
|
||||
},
|
||||
access_token=alice_access_token,
|
||||
)
|
||||
self.assertEqual(channel.code, 200, channel.json_body)
|
||||
|
||||
# Make sure it has the foo-list we requested
|
||||
self.assertListEqual(
|
||||
list(channel.json_body["lists"].keys()),
|
||||
["foo-list"],
|
||||
channel.json_body["lists"].keys(),
|
||||
)
|
||||
|
||||
# Make sure the list includes the room we are joined to
|
||||
self.assertListEqual(
|
||||
list(channel.json_body["lists"]["foo-list"]["ops"]),
|
||||
[
|
||||
{
|
||||
"op": "SYNC",
|
||||
"range": [0, 99],
|
||||
"room_ids": [room_id],
|
||||
}
|
||||
],
|
||||
channel.json_body["lists"]["foo-list"],
|
||||
)

    def test_wait_for_sync_token(self) -> None:
        """
        Test that the worker will wait until it catches up to the given token
        """
        alice_user_id = self.register_user("alice", "correcthorse")
        alice_access_token = self.login(alice_user_id, "correcthorse")

        # Create a future token that will cause us to wait. Since we never send a new
        # event to reach that future stream_ordering, the worker will wait until the
        # full timeout.
        current_token = self.event_sources.get_current_token()
        future_position_token = current_token.copy_and_replace(
            StreamKeyType.ROOM,
            RoomStreamToken(stream=current_token.room_key.stream + 1),
        )

        future_position_token_serialized = self.get_success(
            future_position_token.to_string(self.store)
        )

        # Make the Sliding Sync request
        channel = self.make_request(
            "POST",
            self.sync_endpoint + f"?pos={future_position_token_serialized}",
            {
                "lists": {
                    "foo-list": {
                        "ranges": [[0, 99]],
                        "sort": ["by_notification_level", "by_recency", "by_name"],
                        "required_state": [
                            ["m.room.join_rules", ""],
                            ["m.room.history_visibility", ""],
                            ["m.space.child", "*"],
                        ],
                        "timeline_limit": 1,
                    }
                }
            },
            access_token=alice_access_token,
            await_result=False,
        )
        # Block for 10 seconds to make `notifier.wait_for_stream_token(from_token)`
        # time out
        with self.assertRaises(TimedOutException):
            channel.await_result(timeout_ms=9900)
        channel.await_result(timeout_ms=200)
        self.assertEqual(channel.code, 200, channel.json_body)

        # We expect the `next_pos` in the result to be the same as what we requested
        # with, because we weren't able to find anything new yet.
        self.assertEqual(
            channel.json_body["next_pos"], future_position_token_serialized
        )
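
    # Editor's note: an illustrative sketch, not part of the diff. The test above
    # pins down the long-polling contract of the endpoint: a client passes the
    # `next_pos` it last received back as the `?pos=` query parameter, and the
    # server blocks until it has caught up to that position or times out. A
    # client-side loop (using hypothetical `send_request`/`handle_updates` helpers
    # and the request body shown in the tests) would look roughly like:
    #
    #     pos = None
    #     while True:
    #         path = sync_endpoint + (f"?pos={pos}" if pos else "")
    #         body = send_request("POST", path, request_body)
    #         handle_updates(body)
    #         pos = body["next_pos"]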

@@ -330,9 +330,12 @@ class RestHelper:
            data,
        )

        assert channel.code == expect_code, "Expected: %d, got: %d, resp: %r" % (
        assert (
            channel.code == expect_code
        ), "Expected: %d, got: %d, PUT %s -> resp: %r" % (
            expect_code,
            channel.code,
            path,
            channel.result["body"],
        )
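
        # Editor's note: an illustrative aside, not part of the diff. The reworked
        # assertion adds the request path to the failure message, so a failing
        # helper call now reports something like:
        #
        #     AssertionError: Expected: 200, got: 403, PUT /some/request/path -> resp: b'{"errcode":"M_FORBIDDEN"}'
        #
        # (The path and body here are invented examples; only the message format
        # comes from the code above.)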


@@ -44,13 +44,13 @@ class MediaDomainBlockingTests(unittest.HomeserverTestCase):
        # from a regular 404.
        file_id = "abcdefg12345"
        file_info = FileInfo(server_name=self.remote_server_name, file_id=file_id)
        with hs.get_media_repository().media_storage.store_into_file(file_info) as (
            f,
            fname,
            finish,
        ):
            f.write(SMALL_PNG)
            self.get_success(finish())

        media_storage = hs.get_media_repository().media_storage

        ctx = media_storage.store_into_file(file_info)
        (f, fname) = self.get_success(ctx.__aenter__())
        f.write(SMALL_PNG)
        self.get_success(ctx.__aexit__(None, None, None))
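
        # Editor's note: an illustrative sketch, not part of the diff. The explicit
        # `__aenter__`/`__aexit__` calls above are just the test-friendly way of
        # driving what is now an async context manager; in ordinary async code the
        # equivalent would read roughly:
        #
        #     async def write_small_png() -> None:
        #         async with media_storage.store_into_file(file_info) as (f, fname):
        #             f.write(SMALL_PNG)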

        self.get_success(
            self.store.store_cached_remote_media(

@@ -30,163 +30,34 @@ from synapse.storage.database import (
)
from synapse.storage.engines import IncorrectDatabaseSetup
from synapse.storage.types import Cursor
from synapse.storage.util.id_generators import MultiWriterIdGenerator, StreamIdGenerator
from synapse.storage.util.id_generators import MultiWriterIdGenerator
from synapse.storage.util.sequence import (
    LocalSequenceGenerator,
    PostgresSequenceGenerator,
    SequenceGenerator,
)
from synapse.util import Clock

from tests.unittest import HomeserverTestCase
from tests.utils import USE_POSTGRES_FOR_TESTS


class StreamIdGeneratorTestCase(HomeserverTestCase):
class MultiWriterIdGeneratorBase(HomeserverTestCase):
    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
        self.store = hs.get_datastores().main
        self.db_pool: DatabasePool = self.store.db_pool

        self.get_success(self.db_pool.runInteraction("_setup_db", self._setup_db))

    def _setup_db(self, txn: LoggingTransaction) -> None:
        txn.execute(
            """
            CREATE TABLE foobar (
                stream_id BIGINT NOT NULL,
                data TEXT
            );
            """
        )
        txn.execute("INSERT INTO foobar VALUES (123, 'hello world');")

    def _create_id_generator(self) -> StreamIdGenerator:
        def _create(conn: LoggingDatabaseConnection) -> StreamIdGenerator:
            return StreamIdGenerator(
                db_conn=conn,
                notifier=self.hs.get_replication_notifier(),
                table="foobar",
                column="stream_id",
            )

        return self.get_success_or_raise(self.db_pool.runWithConnection(_create))

    def test_initial_value(self) -> None:
        """Check that we read the current token from the DB."""
        id_gen = self._create_id_generator()
        self.assertEqual(id_gen.get_current_token(), 123)

    def test_single_gen_next(self) -> None:
        """Check that we correctly increment the current token from the DB."""
        id_gen = self._create_id_generator()

        async def test_gen_next() -> None:
            async with id_gen.get_next() as next_id:
                # We haven't persisted `next_id` yet; current token is still 123
                self.assertEqual(id_gen.get_current_token(), 123)
                # But we did learn what the next value is
                self.assertEqual(next_id, 124)

            # Once the context manager closes we assume that the `next_id` has been
            # written to the DB.
            self.assertEqual(id_gen.get_current_token(), 124)

        self.get_success(test_gen_next())
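
    # Editor's note: an illustrative sketch, not part of the diff. The contract the
    # test above pins down is that callers allocate an ID and persist their row
    # *inside* the `get_next()` context, and the advertised current token only moves
    # once the context exits. A typical caller (with a hypothetical
    # `store.insert_row` helper) looks roughly like:
    #
    #     async def persist_thing(id_gen, store, data) -> None:
    #         async with id_gen.get_next() as stream_id:
    #             # Readers do not see `stream_id` via get_current_token() yet.
    #             await store.insert_row(stream_id, data)
    #         # Only now does get_current_token() advance to cover `stream_id`.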

    def test_multiple_gen_nexts(self) -> None:
        """Check that we handle overlapping calls to gen_next sensibly."""
        id_gen = self._create_id_generator()

        async def test_gen_next() -> None:
            ctx1 = id_gen.get_next()
            ctx2 = id_gen.get_next()
            ctx3 = id_gen.get_next()

            # Request three new stream IDs.
            self.assertEqual(await ctx1.__aenter__(), 124)
            self.assertEqual(await ctx2.__aenter__(), 125)
            self.assertEqual(await ctx3.__aenter__(), 126)

            # None are persisted: current token unchanged.
            self.assertEqual(id_gen.get_current_token(), 123)

            # Persist each in turn.
            await ctx1.__aexit__(None, None, None)
            self.assertEqual(id_gen.get_current_token(), 124)
            await ctx2.__aexit__(None, None, None)
            self.assertEqual(id_gen.get_current_token(), 125)
            await ctx3.__aexit__(None, None, None)
            self.assertEqual(id_gen.get_current_token(), 126)

        self.get_success(test_gen_next())

    def test_multiple_gen_nexts_closed_in_different_order(self) -> None:
        """Check that we handle overlapping calls to gen_next, even when their IDs
        are created and persisted in different orders."""
        id_gen = self._create_id_generator()

        async def test_gen_next() -> None:
            ctx1 = id_gen.get_next()
            ctx2 = id_gen.get_next()
            ctx3 = id_gen.get_next()

            # Request three new stream IDs.
            self.assertEqual(await ctx1.__aenter__(), 124)
            self.assertEqual(await ctx2.__aenter__(), 125)
            self.assertEqual(await ctx3.__aenter__(), 126)

            # None are persisted: current token unchanged.
            self.assertEqual(id_gen.get_current_token(), 123)

            # Persist them in a different order, starting with 126 from ctx3.
            await ctx3.__aexit__(None, None, None)
            # We haven't persisted 124 from ctx1 yet---current token is still 123.
            self.assertEqual(id_gen.get_current_token(), 123)

            # Now persist 124 from ctx1.
            await ctx1.__aexit__(None, None, None)
            # Current token is then 124, waiting for 125 to be persisted.
            self.assertEqual(id_gen.get_current_token(), 124)

            # Finally persist 125 from ctx2.
            await ctx2.__aexit__(None, None, None)
            # Current token is then 126 (skipping over 125).
            self.assertEqual(id_gen.get_current_token(), 126)

        self.get_success(test_gen_next())
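
    # Editor's note: an illustrative model, not part of the diff. The behaviour the
    # two tests above exercise can be summarised as: the current token is the
    # highest ID below which nothing is still in flight. A simplified standalone
    # model of that rule (names invented for illustration):
    #
    #     def current_token(last_allocated: int, in_flight: set) -> int:
    #         # The token stops just short of the oldest unfinished ID, if any.
    #         return min(in_flight) - 1 if in_flight else last_allocated
    #
    #     # Matches the trace above: 124-126 allocated, 126 finished first.
    #     assert current_token(126, {124, 125}) == 123
    #     assert current_token(126, {125}) == 124
    #     assert current_token(126, set()) == 126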

    def test_gen_next_while_still_waiting_for_persistence(self) -> None:
        """Check that we handle overlapping calls to gen_next."""
        id_gen = self._create_id_generator()

        async def test_gen_next() -> None:
            ctx1 = id_gen.get_next()
            ctx2 = id_gen.get_next()
            ctx3 = id_gen.get_next()

            # Request two new stream IDs.
            self.assertEqual(await ctx1.__aenter__(), 124)
            self.assertEqual(await ctx2.__aenter__(), 125)

            # Persist ctx2 first.
            await ctx2.__aexit__(None, None, None)
            # Still waiting on ctx1's ID to be persisted.
            self.assertEqual(id_gen.get_current_token(), 123)

            # Now request a third stream ID. It should be 126 (the smallest ID that
            # we've not yet handed out).
            self.assertEqual(await ctx3.__aenter__(), 126)

        self.get_success(test_gen_next())


class MultiWriterIdGeneratorTestCase(HomeserverTestCase):
    if not USE_POSTGRES_FOR_TESTS:
        skip = "Requires Postgres"

    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
        self.store = hs.get_datastores().main
        self.db_pool: DatabasePool = self.store.db_pool

        self.get_success(self.db_pool.runInteraction("_setup_db", self._setup_db))
        if USE_POSTGRES_FOR_TESTS:
            self.seq_gen: SequenceGenerator = PostgresSequenceGenerator("foobar_seq")
        else:
            self.seq_gen = LocalSequenceGenerator(lambda _: 0)

    def _setup_db(self, txn: LoggingTransaction) -> None:
        txn.execute("CREATE SEQUENCE foobar_seq")
        if USE_POSTGRES_FOR_TESTS:
            txn.execute("CREATE SEQUENCE foobar_seq")

        txn.execute(
            """
            CREATE TABLE foobar (
@@ -221,44 +92,27 @@ class MultiWriterIdGeneratorTestCase(HomeserverTestCase):

        def _insert(txn: LoggingTransaction) -> None:
            for _ in range(number):
                next_val = self.seq_gen.get_next_id_txn(txn)
                txn.execute(
                    "INSERT INTO foobar VALUES (nextval('foobar_seq'), ?)",
                    (instance_name,),
                    "INSERT INTO foobar (stream_id, instance_name) VALUES (?, ?)",
                    (
                        next_val,
                        instance_name,
                    ),
                )

            txn.execute(
                """
                INSERT INTO stream_positions VALUES ('test_stream', ?, lastval())
                ON CONFLICT (stream_name, instance_name) DO UPDATE SET stream_id = lastval()
                INSERT INTO stream_positions VALUES ('test_stream', ?, ?)
                ON CONFLICT (stream_name, instance_name) DO UPDATE SET stream_id = ?
                """,
                (instance_name,),
                (instance_name, next_val, next_val),
            )

        self.get_success(self.db_pool.runInteraction("_insert_rows", _insert))
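
        # Editor's note: commentary, not part of the diff. The switch from
        # `nextval('foobar_seq')`/`lastval()` in SQL to an explicit `next_val`
        # fetched via `self.seq_gen` appears to be what lets this helper run on
        # SQLite as well as Postgres: `PostgresSequenceGenerator` is only chosen
        # when `USE_POSTGRES_FOR_TESTS` is set, and `LocalSequenceGenerator` hands
        # out IDs without any database sequence, so the helper reduces to roughly:
        #
        #     next_val = self.seq_gen.get_next_id_txn(txn)  # engine-agnostic
        #     txn.execute(
        #         "INSERT INTO foobar (stream_id, instance_name) VALUES (?, ?)",
        #         (next_val, instance_name),
        #     )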

    def _insert_row_with_id(self, instance_name: str, stream_id: int) -> None:
        """Insert one row as the given instance with given stream_id, updating
        the postgres sequence position to match.
        """

        def _insert(txn: LoggingTransaction) -> None:
            txn.execute(
                "INSERT INTO foobar VALUES (?, ?)",
                (
                    stream_id,
                    instance_name,
                ),
            )
            txn.execute("SELECT setval('foobar_seq', ?)", (stream_id,))
            txn.execute(
                """
                INSERT INTO stream_positions VALUES ('test_stream', ?, ?)
                ON CONFLICT (stream_name, instance_name) DO UPDATE SET stream_id = ?
                """,
                (instance_name, stream_id, stream_id),
            )

        self.get_success(self.db_pool.runInteraction("_insert_row_with_id", _insert))

class MultiWriterIdGeneratorTestCase(MultiWriterIdGeneratorBase):
    def test_empty(self) -> None:
        """Test that an ID generator against an empty database gives sensible
        current positions.
@@ -347,6 +201,176 @@ class MultiWriterIdGeneratorTestCase(HomeserverTestCase):
        self.assertEqual(id_gen.get_positions(), {"master": 11})
        self.assertEqual(id_gen.get_current_token_for_writer("master"), 11)

    def test_get_next_txn(self) -> None:
        """Test that the `get_next_txn` function works correctly."""

        # Prefill table with 7 rows written by 'master'
        self._insert_rows("master", 7)

        id_gen = self._create_id_generator()

        self.assertEqual(id_gen.get_positions(), {"master": 7})
        self.assertEqual(id_gen.get_current_token_for_writer("master"), 7)

        # Try allocating a new ID and check that we only see the position
        # advance after we leave the context manager.

        def _get_next_txn(txn: LoggingTransaction) -> None:
            stream_id = id_gen.get_next_txn(txn)
            self.assertEqual(stream_id, 8)

            self.assertEqual(id_gen.get_positions(), {"master": 7})
            self.assertEqual(id_gen.get_current_token_for_writer("master"), 7)

        self.get_success(self.db_pool.runInteraction("test", _get_next_txn))

        self.assertEqual(id_gen.get_positions(), {"master": 8})
        self.assertEqual(id_gen.get_current_token_for_writer("master"), 8)

    def test_restart_during_out_of_order_persistence(self) -> None:
        """Test that restarts are handled correctly while another process is
        writing out-of-order updates.
        """

        # Prefill table with 7 rows written by 'master'
        self._insert_rows("master", 7)

        id_gen = self._create_id_generator()

        self.assertEqual(id_gen.get_positions(), {"master": 7})
        self.assertEqual(id_gen.get_current_token_for_writer("master"), 7)

        # Persist two rows at once
        ctx1 = id_gen.get_next()
        ctx2 = id_gen.get_next()

        s1 = self.get_success(ctx1.__aenter__())
        s2 = self.get_success(ctx2.__aenter__())

        self.assertEqual(s1, 8)
        self.assertEqual(s2, 9)

        self.assertEqual(id_gen.get_positions(), {"master": 7})
        self.assertEqual(id_gen.get_current_token_for_writer("master"), 7)

        # We finish persisting the second row before restart
        self.get_success(ctx2.__aexit__(None, None, None))

        # We simulate a restart of another worker by just creating a new ID gen.
        id_gen_worker = self._create_id_generator("worker")

        # Restarted worker should not see the second persisted row
        self.assertEqual(id_gen_worker.get_positions(), {"master": 7})
        self.assertEqual(id_gen_worker.get_current_token_for_writer("master"), 7)

        # Now if we persist the first row then both instances should jump ahead
        # correctly.
        self.get_success(ctx1.__aexit__(None, None, None))

        self.assertEqual(id_gen.get_positions(), {"master": 9})
        id_gen_worker.advance("master", 9)
        self.assertEqual(id_gen_worker.get_positions(), {"master": 9})


class WorkerMultiWriterIdGeneratorTestCase(MultiWriterIdGeneratorBase):
    if not USE_POSTGRES_FOR_TESTS:
        skip = "Requires Postgres"

    def _insert_row_with_id(self, instance_name: str, stream_id: int) -> None:
        """Insert one row as the given instance with given stream_id, updating
        the postgres sequence position to match.
        """

        def _insert(txn: LoggingTransaction) -> None:
            txn.execute(
                "INSERT INTO foobar (stream_id, instance_name) VALUES (?, ?)",
                (
                    stream_id,
                    instance_name,
                ),
            )

            txn.execute("SELECT setval('foobar_seq', ?)", (stream_id,))

            txn.execute(
                """
                INSERT INTO stream_positions VALUES ('test_stream', ?, ?)
                ON CONFLICT (stream_name, instance_name) DO UPDATE SET stream_id = ?
                """,
                (instance_name, stream_id, stream_id),
            )

        self.get_success(self.db_pool.runInteraction("_insert_row_with_id", _insert))

    def test_get_persisted_upto_position(self) -> None:
        """Test that `get_persisted_upto_position` correctly tracks updates to
        positions.
        """

        # The following tests are a bit cheeky in that we notify about new
        # positions via `advance` without *actually* advancing the postgres
        # sequence.

        self._insert_row_with_id("first", 3)
        self._insert_row_with_id("second", 5)

        id_gen = self._create_id_generator("worker", writers=["first", "second"])

        self.assertEqual(id_gen.get_positions(), {"first": 3, "second": 5})

        # Min is 3 and there is a gap between 3 and 5, so we expect it to be 3.
        self.assertEqual(id_gen.get_persisted_upto_position(), 3)

        # We advance "first" straight to 6. Min is now 5 but there is no gap, so
        # we expect it to be 6.
        id_gen.advance("first", 6)
        self.assertEqual(id_gen.get_persisted_upto_position(), 6)

        # No gap, so we expect 7.
        id_gen.advance("second", 7)
        self.assertEqual(id_gen.get_persisted_upto_position(), 7)

        # We haven't seen 8 yet, so we expect 7 still.
        id_gen.advance("second", 9)
        self.assertEqual(id_gen.get_persisted_upto_position(), 7)

        # Now that we've seen 7, 8 and 9 we can go straight to 9.
        id_gen.advance("first", 8)
        self.assertEqual(id_gen.get_persisted_upto_position(), 9)

        # Jump forward with gaps. The minimum is 11; even though we haven't seen
        # 10, we know that everything before 11 must be persisted.
        id_gen.advance("first", 11)
        id_gen.advance("second", 15)
        self.assertEqual(id_gen.get_persisted_upto_position(), 11)
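
    # Editor's note: an illustrative model, not part of the diff. The rule the test
    # above encodes is: everything up to the slowest writer's position must be
    # persisted, and beyond that only positions we have explicitly been told about
    # count, contiguously. A simplified standalone model (names invented for
    # illustration):
    #
    #     def persisted_upto(positions: dict, seen: set) -> int:
    #         upto = min(positions.values())
    #         while upto + 1 in seen:
    #             upto += 1
    #         return upto
    #
    #     # Matches the first two steps of the trace above.
    #     assert persisted_upto({"first": 3, "second": 5}, {3, 5}) == 3
    #     assert persisted_upto({"first": 6, "second": 5}, {3, 5, 6}) == 6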

    def test_get_persisted_upto_position_get_next(self) -> None:
        """Test that `get_persisted_upto_position` correctly tracks updates to
        positions when `get_next` is called.
        """

        self._insert_row_with_id("first", 3)
        self._insert_row_with_id("second", 5)

        id_gen = self._create_id_generator("first", writers=["first", "second"])

        self.assertEqual(id_gen.get_positions(), {"first": 3, "second": 5})

        self.assertEqual(id_gen.get_persisted_upto_position(), 5)

        async def _get_next_async() -> None:
            async with id_gen.get_next() as stream_id:
                self.assertEqual(stream_id, 6)
                self.assertEqual(id_gen.get_persisted_upto_position(), 5)

        self.get_success(_get_next_async())

        self.assertEqual(id_gen.get_persisted_upto_position(), 6)

        # We assume that so long as `get_next` does correctly advance the
        # `persisted_upto_position` in this case, then it will be correct in the
        # other cases that are tested above (since they'll hit the same code).

    def test_multi_instance(self) -> None:
        """Test that reads and writes from multiple processes are handled
        correctly.
@@ -453,145 +477,6 @@ class MultiWriterIdGeneratorTestCase(HomeserverTestCase):
            third_id_gen.get_positions(), {"first": 3, "second": 7, "third": 8}
        )

    def test_get_next_txn(self) -> None:
        """Test that the `get_next_txn` function works correctly."""

        # Prefill table with 7 rows written by 'master'
        self._insert_rows("master", 7)

        id_gen = self._create_id_generator()

        self.assertEqual(id_gen.get_positions(), {"master": 7})
        self.assertEqual(id_gen.get_current_token_for_writer("master"), 7)

        # Try allocating a new ID gen and check that we only see position
        # advanced after we leave the context manager.

        def _get_next_txn(txn: LoggingTransaction) -> None:
            stream_id = id_gen.get_next_txn(txn)
            self.assertEqual(stream_id, 8)

            self.assertEqual(id_gen.get_positions(), {"master": 7})
            self.assertEqual(id_gen.get_current_token_for_writer("master"), 7)

        self.get_success(self.db_pool.runInteraction("test", _get_next_txn))

        self.assertEqual(id_gen.get_positions(), {"master": 8})
        self.assertEqual(id_gen.get_current_token_for_writer("master"), 8)

    def test_get_persisted_upto_position(self) -> None:
        """Test that `get_persisted_upto_position` correctly tracks updates to
        positions.
        """

        # The following tests are a bit cheeky in that we notify about new
        # positions via `advance` without *actually* advancing the postgres
        # sequence.

        self._insert_row_with_id("first", 3)
        self._insert_row_with_id("second", 5)

        id_gen = self._create_id_generator("worker", writers=["first", "second"])

        self.assertEqual(id_gen.get_positions(), {"first": 3, "second": 5})

        # Min is 3 and there is a gap between 5, so we expect it to be 3.
        self.assertEqual(id_gen.get_persisted_upto_position(), 3)

        # We advance "first" straight to 6. Min is now 5 but there is no gap so
        # we expect it to be 6
        id_gen.advance("first", 6)
        self.assertEqual(id_gen.get_persisted_upto_position(), 6)

        # No gap, so we expect 7.
        id_gen.advance("second", 7)
        self.assertEqual(id_gen.get_persisted_upto_position(), 7)

        # We haven't seen 8 yet, so we expect 7 still.
        id_gen.advance("second", 9)
        self.assertEqual(id_gen.get_persisted_upto_position(), 7)

        # Now that we've seen 7, 8 and 9 we can got straight to 9.
        id_gen.advance("first", 8)
        self.assertEqual(id_gen.get_persisted_upto_position(), 9)

        # Jump forward with gaps. The minimum is 11, even though we haven't seen
        # 10 we know that everything before 11 must be persisted.
        id_gen.advance("first", 11)
        id_gen.advance("second", 15)
        self.assertEqual(id_gen.get_persisted_upto_position(), 11)

    def test_get_persisted_upto_position_get_next(self) -> None:
        """Test that `get_persisted_upto_position` correctly tracks updates to
        positions when `get_next` is called.
        """

        self._insert_row_with_id("first", 3)
        self._insert_row_with_id("second", 5)

        id_gen = self._create_id_generator("first", writers=["first", "second"])

        self.assertEqual(id_gen.get_positions(), {"first": 3, "second": 5})

        self.assertEqual(id_gen.get_persisted_upto_position(), 5)

        async def _get_next_async() -> None:
            async with id_gen.get_next() as stream_id:
                self.assertEqual(stream_id, 6)
                self.assertEqual(id_gen.get_persisted_upto_position(), 5)

        self.get_success(_get_next_async())

        self.assertEqual(id_gen.get_persisted_upto_position(), 6)

        # We assume that so long as `get_next` does correctly advance the
        # `persisted_upto_position` in this case, then it will be correct in the
        # other cases that are tested above (since they'll hit the same code).

    def test_restart_during_out_of_order_persistence(self) -> None:
        """Test that restarting a process while another process is writing out
        of order updates are handled correctly.
        """

        # Prefill table with 7 rows written by 'master'
        self._insert_rows("master", 7)

        id_gen = self._create_id_generator()

        self.assertEqual(id_gen.get_positions(), {"master": 7})
        self.assertEqual(id_gen.get_current_token_for_writer("master"), 7)

        # Persist two rows at once
        ctx1 = id_gen.get_next()
        ctx2 = id_gen.get_next()

        s1 = self.get_success(ctx1.__aenter__())
        s2 = self.get_success(ctx2.__aenter__())

        self.assertEqual(s1, 8)
        self.assertEqual(s2, 9)

        self.assertEqual(id_gen.get_positions(), {"master": 7})
        self.assertEqual(id_gen.get_current_token_for_writer("master"), 7)

        # We finish persisting the second row before restart
        self.get_success(ctx2.__aexit__(None, None, None))

        # We simulate a restart of another worker by just creating a new ID gen.
        id_gen_worker = self._create_id_generator("worker")

        # Restarted worker should not see the second persisted row
        self.assertEqual(id_gen_worker.get_positions(), {"master": 7})
        self.assertEqual(id_gen_worker.get_current_token_for_writer("master"), 7)

        # Now if we persist the first row then both instances should jump ahead
        # correctly.
        self.get_success(ctx1.__aexit__(None, None, None))

        self.assertEqual(id_gen.get_positions(), {"master": 9})
        id_gen_worker.advance("master", 9)
        self.assertEqual(id_gen_worker.get_positions(), {"master": 9})

    def test_writer_config_change(self) -> None:
        """Test that changing the writer config correctly works."""