Compare commits


1 commit

Author: Erik Johnston
SHA1: 99cf1e7c1c
Message: Faster sliding sync sorting
Date: 2024-07-11 15:40:46 +01:00
188 changed files with 6689 additions and 27458 deletions


@@ -30,7 +30,7 @@ jobs:
run: docker buildx inspect
- name: Install Cosign
-        uses: sigstore/cosign-installer@v3.6.0
+        uses: sigstore/cosign-installer@v3.5.0
- name: Checkout repository
uses: actions/checkout@v4


@@ -29,9 +29,17 @@ jobs:
with:
install-project: "false"
-      - name: Run ruff
+      - name: Import order (isort)
        continue-on-error: true
-        run: poetry run ruff check --fix .
+        run: poetry run isort .
+      - name: Code style (black)
+        continue-on-error: true
+        run: poetry run black .
+      - name: Semantic checks (ruff)
+        continue-on-error: true
+        run: poetry run ruff --fix .
- run: cargo clippy --all-features --fix -- -D warnings
continue-on-error: true
@@ -41,4 +49,4 @@ jobs:
- uses: stefanzweifel/git-auto-commit-action@v5
with:
-    commit_message: "Attempt to fix linting"
+    commit_message: "Attempt to fix linting"


@@ -131,8 +131,15 @@ jobs:
with:
install-project: "false"
-      - name: Check style
-        run: poetry run ruff check --output-format=github .
+      - name: Import order (isort)
+        run: poetry run isort --check --diff .
+      - name: Code style (black)
+        run: poetry run black --check --diff .
+      - name: Semantic checks (ruff)
+        # --quiet suppresses the update check.
+        run: poetry run ruff check --quiet .
lint-mypy:
runs-on: ubuntu-latest
@@ -298,7 +305,7 @@ jobs:
- lint-readme
runs-on: ubuntu-latest
steps:
-      - uses: matrix-org/done-action@v3
+      - uses: matrix-org/done-action@v2
with:
needs: ${{ toJSON(needs) }}
@@ -730,7 +737,7 @@ jobs:
- linting-done
runs-on: ubuntu-latest
steps:
-      - uses: matrix-org/done-action@v3
+      - uses: matrix-org/done-action@v2
with:
needs: ${{ toJSON(needs) }}


@@ -1,219 +1,3 @@
# Synapse 1.114.0rc1 (2024-08-20)
### Features
- Add a flag to `/versions`, `org.matrix.simplified_msc3575`, to indicate whether experimental sliding sync support has been enabled. ([\#17571](https://github.com/element-hq/synapse/issues/17571))
- Handle changes in `timeline_limit` in experimental sliding sync. ([\#17579](https://github.com/element-hq/synapse/issues/17579))
- Correctly track read receipts that should be sent down in experimental sliding sync. ([\#17575](https://github.com/element-hq/synapse/issues/17575), [\#17589](https://github.com/element-hq/synapse/issues/17589), [\#17592](https://github.com/element-hq/synapse/issues/17592))
### Bugfixes
- Start handlers for new media endpoints when media resource configured. ([\#17483](https://github.com/element-hq/synapse/issues/17483))
- Fix timeline ordering (using `stream_ordering` instead of topological ordering) in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17510](https://github.com/element-hq/synapse/issues/17510))
- Fix experimental sliding sync implementation to remember any updates in rooms that were not sent down immediately. ([\#17535](https://github.com/element-hq/synapse/issues/17535))
- Better exclude partially stated rooms if we must await full state in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17538](https://github.com/element-hq/synapse/issues/17538))
- Handle lower-case http headers in `_Mulitpart_Parser_Protocol`. ([\#17545](https://github.com/element-hq/synapse/issues/17545))
- Fix fetching federation signing keys from servers that omit `old_verify_keys`. Contributed by @tulir @ Beeper. ([\#17568](https://github.com/element-hq/synapse/issues/17568))
- Fix bug where we would respond with an error when a remote server asked for media that had a length of 0, using the new multipart federation media endpoint. ([\#17570](https://github.com/element-hq/synapse/issues/17570))
### Improved Documentation
- Clarify default behaviour of the
[`auto_accept_invites.worker_to_run_on`](https://element-hq.github.io/synapse/develop/usage/configuration/config_documentation.html#auto-accept-invites)
option. ([\#17515](https://github.com/element-hq/synapse/issues/17515))
- Improve docstrings for profile methods. ([\#17559](https://github.com/element-hq/synapse/issues/17559))
### Internal Changes
- Add more tracing to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17514](https://github.com/element-hq/synapse/issues/17514))
- Fixup comment in sliding sync implementation. ([\#17531](https://github.com/element-hq/synapse/issues/17531))
- Replace override of deprecated method `HTTPAdapter.get_connection` with `get_connection_with_tls_context`. ([\#17536](https://github.com/element-hq/synapse/issues/17536))
- Fix performance of device lists in `/key/changes` and sliding sync. ([\#17537](https://github.com/element-hq/synapse/issues/17537), [\#17548](https://github.com/element-hq/synapse/issues/17548))
- Bump setuptools from 67.6.0 to 72.1.0. ([\#17542](https://github.com/element-hq/synapse/issues/17542))
- Add a utility function for generating random event IDs. ([\#17557](https://github.com/element-hq/synapse/issues/17557))
- Speed up responding to media requests. ([\#17558](https://github.com/element-hq/synapse/issues/17558), [\#17561](https://github.com/element-hq/synapse/issues/17561), [\#17564](https://github.com/element-hq/synapse/issues/17564), [\#17566](https://github.com/element-hq/synapse/issues/17566), [\#17567](https://github.com/element-hq/synapse/issues/17567), [\#17569](https://github.com/element-hq/synapse/issues/17569))
- Test github token before running release script steps. ([\#17562](https://github.com/element-hq/synapse/issues/17562))
- Reduce log spam of multipart files. ([\#17563](https://github.com/element-hq/synapse/issues/17563))
- Refactor per-connection state in experimental sliding sync handler. ([\#17574](https://github.com/element-hq/synapse/issues/17574))
- Add histogram metrics for sliding sync processing time. ([\#17593](https://github.com/element-hq/synapse/issues/17593))
### Updates to locked dependencies
* Bump bytes from 1.6.1 to 1.7.1. ([\#17526](https://github.com/element-hq/synapse/issues/17526))
* Bump lxml from 5.2.2 to 5.3.0. ([\#17550](https://github.com/element-hq/synapse/issues/17550))
* Bump phonenumbers from 8.13.42 to 8.13.43. ([\#17551](https://github.com/element-hq/synapse/issues/17551))
* Bump regex from 1.10.5 to 1.10.6. ([\#17527](https://github.com/element-hq/synapse/issues/17527))
* Bump sentry-sdk from 2.10.0 to 2.12.0. ([\#17553](https://github.com/element-hq/synapse/issues/17553))
* Bump serde from 1.0.204 to 1.0.206. ([\#17556](https://github.com/element-hq/synapse/issues/17556))
* Bump serde_json from 1.0.122 to 1.0.124. ([\#17555](https://github.com/element-hq/synapse/issues/17555))
* Bump sigstore/cosign-installer from 3.5.0 to 3.6.0. ([\#17549](https://github.com/element-hq/synapse/issues/17549))
* Bump types-pyyaml from 6.0.12.20240311 to 6.0.12.20240808. ([\#17552](https://github.com/element-hq/synapse/issues/17552))
* Bump types-requests from 2.31.0.20240406 to 2.32.0.20240712. ([\#17524](https://github.com/element-hq/synapse/issues/17524))
# Synapse 1.113.0 (2024-08-13)
No significant changes since 1.113.0rc1.
# Synapse 1.113.0rc1 (2024-08-06)
### Features
- Track which rooms have been sent to clients in the experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17447](https://github.com/element-hq/synapse/issues/17447))
- Add Account Data extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17477](https://github.com/element-hq/synapse/issues/17477))
- Add receipts extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17489](https://github.com/element-hq/synapse/issues/17489))
- Add typing notification extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17505](https://github.com/element-hq/synapse/issues/17505))
### Bugfixes
- Update experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint to handle invite/knock rooms when filtering. ([\#17450](https://github.com/element-hq/synapse/issues/17450))
- Fix a bug introduced in v1.110.0 which caused `/keys/query` to return incomplete results, leading to high network activity and CPU usage on Matrix clients. ([\#17499](https://github.com/element-hq/synapse/issues/17499))
### Improved Documentation
- Update the [`allowed_local_3pids`](https://element-hq.github.io/synapse/v1.112/usage/configuration/config_documentation.html#allowed_local_3pids) config option's msisdn address to a working example. ([\#17476](https://github.com/element-hq/synapse/issues/17476))
### Internal Changes
- Change sliding sync to use their own token format in preparation for storing per-connection state. ([\#17452](https://github.com/element-hq/synapse/issues/17452))
- Ensure we don't send down negative `bump_stamp` in experimental sliding sync endpoint. ([\#17478](https://github.com/element-hq/synapse/issues/17478))
- Do not send down empty room entries down experimental sliding sync endpoint. ([\#17479](https://github.com/element-hq/synapse/issues/17479))
- Refactor Sliding Sync tests to better utilize the `SlidingSyncBase`. ([\#17481](https://github.com/element-hq/synapse/issues/17481), [\#17482](https://github.com/element-hq/synapse/issues/17482))
- Add some opentracing tags and logging to the experimental sliding sync implementation. ([\#17501](https://github.com/element-hq/synapse/issues/17501))
- Split and move Sliding Sync tests so we have some more sane test file sizes. ([\#17504](https://github.com/element-hq/synapse/issues/17504))
- Update the `limited` field description in the Sliding Sync response to accurately describe what it actually represents. ([\#17507](https://github.com/element-hq/synapse/issues/17507))
- Easier to understand `timeline` assertions in Sliding Sync tests. ([\#17511](https://github.com/element-hq/synapse/issues/17511))
- Reset the sliding sync connection if we don't recognize the per-connection state position. ([\#17529](https://github.com/element-hq/synapse/issues/17529))
### Updates to locked dependencies
* Bump bcrypt from 4.1.3 to 4.2.0. ([\#17495](https://github.com/element-hq/synapse/issues/17495))
* Bump black from 24.4.2 to 24.8.0. ([\#17522](https://github.com/element-hq/synapse/issues/17522))
* Bump phonenumbers from 8.13.39 to 8.13.42. ([\#17521](https://github.com/element-hq/synapse/issues/17521))
* Bump ruff from 0.5.4 to 0.5.5. ([\#17494](https://github.com/element-hq/synapse/issues/17494))
* Bump serde_json from 1.0.120 to 1.0.121. ([\#17493](https://github.com/element-hq/synapse/issues/17493))
* Bump serde_json from 1.0.121 to 1.0.122. ([\#17525](https://github.com/element-hq/synapse/issues/17525))
* Bump towncrier from 23.11.0 to 24.7.1. ([\#17523](https://github.com/element-hq/synapse/issues/17523))
* Bump types-pyopenssl from 24.1.0.20240425 to 24.1.0.20240722. ([\#17496](https://github.com/element-hq/synapse/issues/17496))
* Bump types-setuptools from 70.1.0.20240627 to 71.1.0.20240726. ([\#17497](https://github.com/element-hq/synapse/issues/17497))
# Synapse 1.112.0 (2024-07-30)
This security release is to update our locked dependency on Twisted to 24.7.0rc1, which includes a security fix for [CVE-2024-41671 / GHSA-c8m8-j448-xjx7: Disordered HTTP pipeline response in twisted.web, again](https://github.com/twisted/twisted/security/advisories/GHSA-c8m8-j448-xjx7).
Note that this security fix is also available as **Synapse 1.111.1**, which does not include the rest of the changes in Synapse 1.112.0.
This issue means that, if multiple HTTP requests are pipelined in the same TCP connection, Synapse can send responses to the wrong HTTP request.
If a reverse proxy was configured to use HTTP pipelining, this could result in responses being sent to the wrong user, severely harming confidentiality.
With that said, despite being a high severity issue, **we consider it unlikely that Synapse installations will be affected**.
The use of HTTP pipelining in this fashion would cause worse performance for clients (request-response latencies would be increased as users' responses would be artificially blocked behind other users' slow requests). Further, Nginx and Haproxy, two common reverse proxies, do not appear to support configuring their upstreams to use HTTP pipelining and thus would not be affected. For both of these reasons, we consider it unlikely that a Synapse deployment would be set up in such a configuration.
Despite that, we cannot rule out that some installations may exist with this unusual setup and so we are releasing this security update today.
**pip users:** Note that by default, upgrading Synapse using pip will not automatically upgrade Twisted. **Please manually install the new version of Twisted** using `pip install Twisted==24.7.0rc1`. Note also that even the `--upgrade-strategy=eager` flag to `pip install -U matrix-synapse` will not upgrade Twisted to a patched version because it is only a release candidate at this time.
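To make the failure mode concrete: in HTTP/1.1 pipelining, a client sends several requests on one TCP connection before reading any response, and simply assumes response N answers request N. The sketch below only illustrates that assumption (the host and endpoints are placeholders); it is not a reproduction of the Twisted bug.

```python
import socket

HOST = "synapse.example.org"  # placeholder host; illustration only
req = "GET {path} HTTP/1.1\r\nHost: " + HOST + "\r\n\r\n"

# Two HTTP/1.1 requests pipelined on one TCP connection. The server must
# answer in request order; the Twisted bug could pair a response with the
# wrong request, i.e. potentially with the wrong user behind a proxy.
pipelined = (
    req.format(path="/_matrix/client/versions")
    + req.format(path="/_matrix/client/v3/account/whoami")
).encode()

with socket.create_connection((HOST, 8008)) as sock:
    sock.sendall(pipelined)  # both requests are in flight before any response
    # A pipelining client reading here assumes response N answers request N.
    print(sock.recv(65536).decode(errors="replace"))
```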
### Internal Changes
- Upgrade locked dependency on Twisted to 24.7.0rc1. ([\#17502](https://github.com/element-hq/synapse/issues/17502))
# Synapse 1.112.0rc1 (2024-07-23)
Please note that this release candidate does not include the security dependency update
included in version 1.111.1 as this version was released before 1.111.1.
The same security fix can be found in the full release of 1.112.0.
### Features
- Add to-device extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17416](https://github.com/element-hq/synapse/issues/17416))
- Populate `name`/`avatar` fields in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17418](https://github.com/element-hq/synapse/issues/17418))
- Populate `heroes` and room summary fields (`joined_count`, `invited_count`) in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17419](https://github.com/element-hq/synapse/issues/17419))
- Populate `is_dm` room field in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17429](https://github.com/element-hq/synapse/issues/17429))
- Add room subscriptions to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17432](https://github.com/element-hq/synapse/issues/17432))
- Prepare for authenticated media freeze. ([\#17433](https://github.com/element-hq/synapse/issues/17433))
- Add E2EE extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17454](https://github.com/element-hq/synapse/issues/17454))
### Bugfixes
- Add configurable option to always include offline users in presence sync results. Contributed by @Michael-Hollister. ([\#17231](https://github.com/element-hq/synapse/issues/17231))
- Fix bug in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint when using room type filters and the user has one or more remote invites. ([\#17434](https://github.com/element-hq/synapse/issues/17434))
- Order `heroes` by `stream_ordering` as the Matrix specification states (applies to `/sync`). ([\#17435](https://github.com/element-hq/synapse/issues/17435))
- Fix rare bug where `/sync` would break for a user when using workers with multiple stream writers. ([\#17438](https://github.com/element-hq/synapse/issues/17438))
### Improved Documentation
- Update the readme image to have a white background, so that it is readable in dark mode. ([\#17387](https://github.com/element-hq/synapse/issues/17387))
- Add Red Hat Enterprise Linux and Rocky Linux 8 and 9 installation instructions. ([\#17423](https://github.com/element-hq/synapse/issues/17423))
- Improve documentation for the [`default_power_level_content_override`](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#default_power_level_content_override) config option. ([\#17451](https://github.com/element-hq/synapse/issues/17451))
### Internal Changes
- Make sure we always use the right logic for enabling the media repo. ([\#17424](https://github.com/element-hq/synapse/issues/17424))
- Fix argument documentation for method `RateLimiter.record_action`. ([\#17426](https://github.com/element-hq/synapse/issues/17426))
- Reduce volume of 'Waiting for current token' logs, which were introduced in v1.109.0. ([\#17428](https://github.com/element-hq/synapse/issues/17428))
- Limit concurrent remote downloads to 6 per IP address, and decrement remote downloads without a content-length from the ratelimiter after the download is complete. ([\#17439](https://github.com/element-hq/synapse/issues/17439))
- Remove unnecessary call to resume producing in fake channel. ([\#17449](https://github.com/element-hq/synapse/issues/17449))
- Update experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint to bump room when it is created. ([\#17453](https://github.com/element-hq/synapse/issues/17453))
- Speed up generating sliding sync responses. ([\#17458](https://github.com/element-hq/synapse/issues/17458))
- Add cache to `get_rooms_for_local_user_where_membership_is` to speed up sliding sync. ([\#17460](https://github.com/element-hq/synapse/issues/17460))
- Speed up fetching room keys from backup. ([\#17461](https://github.com/element-hq/synapse/issues/17461))
- Speed up sorting of the room list in sliding sync. ([\#17468](https://github.com/element-hq/synapse/issues/17468))
- Implement handling of `$ME` as a state key in sliding sync. ([\#17469](https://github.com/element-hq/synapse/issues/17469))
### Updates to locked dependencies
* Bump bytes from 1.6.0 to 1.6.1. ([\#17441](https://github.com/element-hq/synapse/issues/17441))
* Bump hiredis from 2.3.2 to 3.0.0. ([\#17464](https://github.com/element-hq/synapse/issues/17464))
* Bump jsonschema from 4.22.0 to 4.23.0. ([\#17444](https://github.com/element-hq/synapse/issues/17444))
* Bump matrix-org/done-action from 2 to 3. ([\#17440](https://github.com/element-hq/synapse/issues/17440))
* Bump mypy from 1.9.0 to 1.10.1. ([\#17445](https://github.com/element-hq/synapse/issues/17445))
* Bump pyopenssl from 24.1.0 to 24.2.1. ([\#17465](https://github.com/element-hq/synapse/issues/17465))
* Bump ruff from 0.5.0 to 0.5.4. ([\#17466](https://github.com/element-hq/synapse/issues/17466))
* Bump sentry-sdk from 2.6.0 to 2.8.0. ([\#17456](https://github.com/element-hq/synapse/issues/17456))
* Bump sentry-sdk from 2.8.0 to 2.10.0. ([\#17467](https://github.com/element-hq/synapse/issues/17467))
* Bump setuptools from 67.6.0 to 70.0.0. ([\#17448](https://github.com/element-hq/synapse/issues/17448))
* Bump twine from 5.1.0 to 5.1.1. ([\#17443](https://github.com/element-hq/synapse/issues/17443))
* Bump types-jsonschema from 4.22.0.20240610 to 4.23.0.20240712. ([\#17446](https://github.com/element-hq/synapse/issues/17446))
* Bump ulid from 1.1.2 to 1.1.3. ([\#17442](https://github.com/element-hq/synapse/issues/17442))
* Bump zipp from 3.15.0 to 3.19.1. ([\#17427](https://github.com/element-hq/synapse/issues/17427))
# Synapse 1.111.1 (2024-07-30)
This security release is to update our locked dependency on Twisted to 24.7.0rc1, which includes a security fix for [CVE-2024-41671 / GHSA-c8m8-j448-xjx7: Disordered HTTP pipeline response in twisted.web, again](https://github.com/twisted/twisted/security/advisories/GHSA-c8m8-j448-xjx7).
This issue means that, if multiple HTTP requests are pipelined in the same TCP connection, Synapse can send responses to the wrong HTTP request.
If a reverse proxy was configured to use HTTP pipelining, this could result in responses being sent to the wrong user, severely harming confidentiality.
With that said, despite being a high severity issue, **we consider it unlikely that Synapse installations will be affected**.
The use of HTTP pipelining in this fashion would cause worse performance for clients (request-response latencies would be increased as users' responses would be artificially blocked behind other users' slow requests). Further, Nginx and Haproxy, two common reverse proxies, do not appear to support configuring their upstreams to use HTTP pipelining and thus would not be affected. For both of these reasons, we consider it unlikely that a Synapse deployment would be set up in such a configuration.
Despite that, we cannot rule out that some installations may exist with this unusual setup and so we are releasing this security update today.
**pip users:** Note that by default, upgrading Synapse using pip will not automatically upgrade Twisted. **Please manually install the new version of Twisted** using `pip install Twisted==24.7.0rc1`. Note also that even the `--upgrade-strategy=eager` flag to `pip install -U matrix-synapse` will not upgrade Twisted to a patched version because it is only a release candidate at this time.
### Internal Changes
- Upgrade locked dependency on Twisted to 24.7.0rc1. ([\#17502](https://github.com/element-hq/synapse/issues/17502))
# Synapse 1.111.0 (2024-07-16)
No significant changes since 1.111.0rc2.
# Synapse 1.111.0rc2 (2024-07-10)
### Bugfixes

Cargo.lock (generated, 25 lines changed)

@@ -67,9 +67,9 @@ checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c"
 [[package]]
 name = "bytes"
-version = "1.7.1"
+version = "1.6.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8318a53db07bb3f8dca91a600466bdb3f2eaadeedfdbcf02e1accbad9271ba50"
+checksum = "514de17de45fdb8dc022b1a7975556c53c86f9f0aa5f534b98977b171857c2c9"
 [[package]]
 name = "cfg-if"
@@ -444,9 +444,9 @@ dependencies = [
 [[package]]
 name = "regex"
-version = "1.10.6"
+version = "1.10.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4219d74c6b67a3654a9fbebc4b419e22126d13d2f3c4a07ee0cb61ff79a79619"
+checksum = "b91213439dad192326a0d7c6ee3955910425f441d7038e0d6933b0aec5c4517f"
 dependencies = [
  "aho-corasick",
  "memchr",
@@ -485,18 +485,18 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
 [[package]]
 name = "serde"
-version = "1.0.209"
+version = "1.0.204"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "99fce0ffe7310761ca6bf9faf5115afbc19688edd00171d81b1bb1b116c63e09"
+checksum = "bc76f558e0cbb2a839d37354c575f1dc3fdc6546b5be373ba43d95f231bf7c12"
 dependencies = [
  "serde_derive",
 ]
 [[package]]
 name = "serde_derive"
-version = "1.0.209"
+version = "1.0.204"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a5831b979fd7b5439637af1752d535ff49f4860c0f341d1baeb6faf0f4242170"
+checksum = "e0cd7e117be63d3c3678776753929474f3b04a43a080c744d6b0ae2a8c28e222"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -505,12 +505,11 @@ dependencies = [
 [[package]]
 name = "serde_json"
-version = "1.0.127"
+version = "1.0.120"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8043c06d9f82bd7271361ed64f415fe5e12a77fdb52e573e7f06a516dea329ad"
+checksum = "4e0d21c9a8cae1235ad58a00c11cb40d4b1e5c784f1ef2c537876ed6ffd8b7c5"
 dependencies = [
  "itoa",
- "memchr",
  "ryu",
  "serde",
 ]
@@ -598,9 +597,9 @@ checksum = "42ff0bf0c66b8238c6f3b578df37d0b7848e55df8577b3f74f92a69acceeb825"
 [[package]]
 name = "ulid"
-version = "1.1.3"
+version = "1.1.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "04f903f293d11f31c0c29e4148f6dc0d033a7f80cebc0282bea147611667d289"
+checksum = "34778c17965aa2a08913b57e1f34db9b4a63f5de31768b55bf20d2795f921259"
 dependencies = [
  "getrandom",
  "rand",


@@ -1,4 +1,4 @@
-.. image:: ./docs/element_logo_white_bg.svg
+.. image:: https://github.com/element-hq/product/assets/87339233/7abf477a-5277-47f3-be44-ea44917d8ed7
:height: 60px
**Element Synapse - Matrix homeserver implementation**


@@ -1 +0,0 @@
Fix hierarchy returning 403 when room is accessible through federation. Contributed by Krishan (@kfiven).


@@ -1 +0,0 @@
MSC3861: load the issuer and account management URLs from OIDC discovery.


@@ -0,0 +1 @@
Add to-device extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.


@@ -0,0 +1 @@
Populate `name`/`avatar` fields in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.


@@ -1 +0,0 @@
Improve cross-signing upload when using [MSC3861](https://github.com/matrix-org/matrix-spec-proposals/pull/3861) to use a custom UIA flow stage, with web fallback support.


@@ -1 +0,0 @@
Pre-populate room data used in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint for quick filtering/sorting.


@@ -1 +0,0 @@
Fix content-length on federation /thumbnail responses.


@@ -1 +0,0 @@
Fix authenticated media responses using a wrong limit when following redirects over federation.


@@ -1 +0,0 @@
Clarify that the admin api resource is only loaded on the main process and not workers.


@@ -1 +0,0 @@
Fixed typo in `saml2_config` config [example](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#saml2_config).


@@ -1 +0,0 @@
Refactor sliding sync class into multiple files.


@@ -1 +0,0 @@
Store sliding sync per-connection state in the database.


@@ -1 +0,0 @@
Make the sliding sync `PerConnectionState` class immutable.


@@ -1 +0,0 @@
Add support to `@tag_args` for standalone functions.


@@ -1 +0,0 @@
Speed up incremental syncs in sliding sync by adding some more caching.


@@ -1 +0,0 @@
Return `400 M_BAD_JSON` upon attempting to complete various room actions with a non-local user ID and unknown room ID, rather than an internal server error.


@@ -1 +0,0 @@
Make `hash_password` accept password input from stdin.


@@ -1 +0,0 @@
Always return the user's own read receipts in sliding sync.


@@ -1 +0,0 @@
Replace `isort` and `black` with `ruff`.


@@ -1 +0,0 @@
Refactor sliding sync code to move room list logic out into a separate class.


@@ -1 +0,0 @@
Fix authenticated media responses using a wrong limit when following redirects over federation.


@@ -1 +0,0 @@
Use new database tables for sliding sync.


@@ -1 +0,0 @@
Store sliding sync per-connection state in the database.


@@ -1 +0,0 @@
Pre-populate room data used in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint for quick filtering/sorting.


@@ -1 +0,0 @@
Pre-populate room data used in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint for quick filtering/sorting.


@@ -1 +0,0 @@
Pre-populate room data used in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint for quick filtering/sorting.


@@ -1 +0,0 @@
Pre-populate room data used in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint for quick filtering/sorting.

debian/changelog (vendored, 42 lines changed)

@@ -1,45 +1,3 @@
matrix-synapse-py3 (1.114.0~rc1) stable; urgency=medium
* New synapse release 1.114.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 20 Aug 2024 12:55:28 +0000
matrix-synapse-py3 (1.113.0) stable; urgency=medium
* New Synapse release 1.113.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 13 Aug 2024 14:36:56 +0100
matrix-synapse-py3 (1.113.0~rc1) stable; urgency=medium
* New Synapse release 1.113.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 06 Aug 2024 12:23:23 +0100
matrix-synapse-py3 (1.112.0) stable; urgency=medium
* New Synapse release 1.112.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 30 Jul 2024 17:15:48 +0100
matrix-synapse-py3 (1.112.0~rc1) stable; urgency=medium
* New Synapse release 1.112.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 23 Jul 2024 08:58:55 -0600
matrix-synapse-py3 (1.111.1) stable; urgency=medium
* New Synapse release 1.111.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 30 Jul 2024 16:13:52 +0100
matrix-synapse-py3 (1.111.0) stable; urgency=medium
* New Synapse release 1.111.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 16 Jul 2024 12:42:46 +0200
matrix-synapse-py3 (1.111.0~rc2) stable; urgency=medium
* New synapse release 1.111.0rc2.


@@ -1,13 +1,10 @@
.\" generated with Ronn-NG/v0.10.1
.\" http://github.com/apjanke/ronn-ng/tree/0.10.1
.TH "HASH_PASSWORD" "1" "August 2024" ""
.\" generated with Ronn-NG/v0.8.0
.\" http://github.com/apjanke/ronn-ng/tree/0.8.0
.TH "HASH_PASSWORD" "1" "July 2021" "" ""
.SH "NAME"
\fBhash_password\fR \- Calculate the hash of a new password, so that passwords can be reset
.SH "SYNOPSIS"
-.TS
-allbox;
-\fBhash_password\fR [\fB\-p\fR \fB\-\-password\fR [password]] [\fB\-c\fR \fB\-\-config\fR \fIfile\fR]
-.TE
+\fBhash_password\fR [\fB\-p\fR|\fB\-\-password\fR [password]] [\fB\-c\fR|\fB\-\-config\fR \fIfile\fR]
.SH "DESCRIPTION"
\fBhash_password\fR calculates the hash of a supplied password using bcrypt\.
.P
@@ -23,7 +20,7 @@ bcrypt_rounds: 17 password_config: pepper: "random hashing pepper"
.SH "OPTIONS"
.TP
\fB\-p\fR, \fB\-\-password\fR
-Read the password form the command line if [password] is supplied, or from \fBSTDIN\fR\. If not, prompt the user and read the password from the tty prompt\. It is not recommended to type the password on the command line directly\. Use the STDIN instead\.
+Read the password form the command line if [password] is supplied\. If not, prompt the user and read the password form the \fBSTDIN\fR\. It is not recommended to type the password on the command line directly\. Use the STDIN instead\.
.TP
\fB\-c\fR, \fB\-\-config\fR
Read the supplied YAML \fIfile\fR containing the options \fBbcrypt_rounds\fR and the \fBpassword_config\fR section containing the \fBpepper\fR value\.
@@ -36,17 +33,7 @@ $2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8\.X8fWFpum7SxZ9MFe
.fi
.IP "" 0
.P
-Hash from the stdin:
-.IP "" 4
-.nf
-$ cat password_file | hash_password
-Password:
-Confirm password:
-$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX\.rcuAbM8ErLoUhybG
-.fi
-.IP "" 0
-.P
-Hash from the prompt:
+Hash from the STDIN:
.IP "" 4
.nf
$ hash_password
@@ -66,6 +53,6 @@ $2b$12$CwI\.wBNr\.w3kmiUlV3T5s\.GT2wH7uebDCovDrCOh18dFedlANK99O
.fi
.IP "" 0
.SH "COPYRIGHT"
-This man page was written by Rahul De «rahulde@swecha\.net» for Debian GNU/Linux distribution\.
+This man page was written by Rahul De <\fI\%mailto:rahulde@swecha\.net\fR> for Debian GNU/Linux distribution\.
.SH "SEE ALSO"
synctl(1), synapse_port_db(1), register_new_matrix_user(1), synapse_review_recent_signups(1)


@@ -1,182 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<meta http-equiv='content-type' content='text/html;charset=utf-8'>
<meta name='generator' content='Ronn-NG/v0.10.1 (http://github.com/apjanke/ronn-ng/tree/0.10.1)'>
<title>hash_password(1) - Calculate the hash of a new password, so that passwords can be reset</title>
<style type='text/css' media='all'>
/* style: man */
body#manpage {margin:0}
.mp {max-width:100ex;padding:0 9ex 1ex 4ex}
.mp p,.mp pre,.mp ul,.mp ol,.mp dl {margin:0 0 20px 0}
.mp h2 {margin:10px 0 0 0}
.mp > p,.mp > pre,.mp > ul,.mp > ol,.mp > dl {margin-left:8ex}
.mp h3 {margin:0 0 0 4ex}
.mp dt {margin:0;clear:left}
.mp dt.flush {float:left;width:8ex}
.mp dd {margin:0 0 0 9ex}
.mp h1,.mp h2,.mp h3,.mp h4 {clear:left}
.mp pre {margin-bottom:20px}
.mp pre+h2,.mp pre+h3 {margin-top:22px}
.mp h2+pre,.mp h3+pre {margin-top:5px}
.mp img {display:block;margin:auto}
.mp h1.man-title {display:none}
.mp,.mp code,.mp pre,.mp tt,.mp kbd,.mp samp,.mp h3,.mp h4 {font-family:monospace;font-size:14px;line-height:1.42857142857143}
.mp h2 {font-size:16px;line-height:1.25}
.mp h1 {font-size:20px;line-height:2}
.mp {text-align:justify;background:#fff}
.mp,.mp code,.mp pre,.mp pre code,.mp tt,.mp kbd,.mp samp {color:#131211}
.mp h1,.mp h2,.mp h3,.mp h4 {color:#030201}
.mp u {text-decoration:underline}
.mp code,.mp strong,.mp b {font-weight:bold;color:#131211}
.mp em,.mp var {font-style:italic;color:#232221;text-decoration:none}
.mp a,.mp a:link,.mp a:hover,.mp a code,.mp a pre,.mp a tt,.mp a kbd,.mp a samp {color:#0000ff}
.mp b.man-ref {font-weight:normal;color:#434241}
.mp pre {padding:0 4ex}
.mp pre code {font-weight:normal;color:#434241}
.mp h2+pre,h3+pre {padding-left:0}
ol.man-decor,ol.man-decor li {margin:3px 0 10px 0;padding:0;float:left;width:33%;list-style-type:none;text-transform:uppercase;color:#999;letter-spacing:1px}
ol.man-decor {width:100%}
ol.man-decor li.tl {text-align:left}
ol.man-decor li.tc {text-align:center;letter-spacing:4px}
ol.man-decor li.tr {text-align:right;float:right}
</style>
</head>
<!--
The following styles are deprecated and will be removed at some point:
div#man, div#man ol.man, div#man ol.head, div#man ol.man.
The .man-page, .man-decor, .man-head, .man-foot, .man-title, and
.man-navigation should be used instead.
-->
<body id='manpage'>
<div class='mp' id='man'>
<div class='man-navigation' style='display:none'>
<a href="#NAME">NAME</a>
<a href="#SYNOPSIS">SYNOPSIS</a>
<a href="#DESCRIPTION">DESCRIPTION</a>
<a href="#FILES">FILES</a>
<a href="#OPTIONS">OPTIONS</a>
<a href="#EXAMPLES">EXAMPLES</a>
<a href="#COPYRIGHT">COPYRIGHT</a>
<a href="#SEE-ALSO">SEE ALSO</a>
</div>
<ol class='man-decor man-head man head'>
<li class='tl'>hash_password(1)</li>
<li class='tc'></li>
<li class='tr'>hash_password(1)</li>
</ol>
<h2 id="NAME">NAME</h2>
<p class="man-name">
<code>hash_password</code> - <span class="man-whatis">Calculate the hash of a new password, so that passwords can be reset</span>
</p>
<h2 id="SYNOPSIS">SYNOPSIS</h2>
<table>
<tbody>
<tr>
<td>
<code>hash_password</code> [<code>-p</code>
</td>
<td>
<code>--password</code> [password]] [<code>-c</code>
</td>
<td>
<code>--config</code> <var>file</var>]</td>
</tr>
</tbody>
</table>
<h2 id="DESCRIPTION">DESCRIPTION</h2>
<p><strong>hash_password</strong> calculates the hash of a supplied password using bcrypt.</p>
<p><code>hash_password</code> takes a password as an parameter either on the command line
or the <code>STDIN</code> if not supplied.</p>
<p>It accepts an YAML file which can be used to specify parameters like the
number of rounds for bcrypt and password_config section having the pepper
value used for the hashing. By default <code>bcrypt_rounds</code> is set to <strong>12</strong>.</p>
<p>The hashed password is written on the <code>STDOUT</code>.</p>
<h2 id="FILES">FILES</h2>
<p>A sample YAML file accepted by <code>hash_password</code> is described below:</p>
<p>bcrypt_rounds: 17
password_config:
pepper: "random hashing pepper"</p>
<h2 id="OPTIONS">OPTIONS</h2>
<dl>
<dt>
<code>-p</code>, <code>--password</code>
</dt>
<dd>Read the password form the command line if [password] is supplied, or from <code>STDIN</code>.
If not, prompt the user and read the password from the tty prompt.
It is not recommended to type the password on the command line
directly. Use the STDIN instead.</dd>
<dt>
<code>-c</code>, <code>--config</code>
</dt>
<dd>Read the supplied YAML <var>file</var> containing the options <code>bcrypt_rounds</code>
and the <code>password_config</code> section containing the <code>pepper</code> value.</dd>
</dl>
<h2 id="EXAMPLES">EXAMPLES</h2>
<p>Hash from the command line:</p>
<pre><code>$ hash_password -p "p@ssw0rd"
$2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8.X8fWFpum7SxZ9MFe
</code></pre>
<p>Hash from the stdin:</p>
<pre><code>$ cat password_file | hash_password
Password:
Confirm password:
$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX.rcuAbM8ErLoUhybG
</code></pre>
<p>Hash from the prompt:</p>
<pre><code>$ hash_password
Password:
Confirm password:
$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX.rcuAbM8ErLoUhybG
</code></pre>
<p>Using a config file:</p>
<pre><code>$ hash_password -c config.yml
Password:
Confirm password:
$2b$12$CwI.wBNr.w3kmiUlV3T5s.GT2wH7uebDCovDrCOh18dFedlANK99O
</code></pre>
<h2 id="COPYRIGHT">COPYRIGHT</h2>
<p>This man page was written by Rahul De «rahulde@swecha.net»
for Debian GNU/Linux distribution.</p>
<h2 id="SEE-ALSO">SEE ALSO</h2>
<p><span class="man-ref">synctl<span class="s">(1)</span></span>, <span class="man-ref">synapse_port_db<span class="s">(1)</span></span>, <span class="man-ref">register_new_matrix_user<span class="s">(1)</span></span>, <span class="man-ref">synapse_review_recent_signups<span class="s">(1)</span></span></p>
<ol class='man-decor man-foot man foot'>
<li class='tl'></li>
<li class='tc'>August 2024</li>
<li class='tr'>hash_password(1)</li>
</ol>
</div>
</body>
</html>


@@ -29,8 +29,8 @@ A sample YAML file accepted by `hash_password` is described below:
## OPTIONS
* `-p`, `--password`:
-  Read the password form the command line if [password] is supplied, or from `STDIN`.
-  If not, prompt the user and read the password from the tty prompt.
+  Read the password form the command line if [password] is supplied.
+  If not, prompt the user and read the password form the `STDIN`.
It is not recommended to type the password on the command line
directly. Use the STDIN instead.
@@ -45,14 +45,7 @@ Hash from the command line:
$ hash_password -p "p@ssw0rd"
$2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8.X8fWFpum7SxZ9MFe
-Hash from the stdin:
-$ cat password_file | hash_password
-Password:
-Confirm password:
-$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX.rcuAbM8ErLoUhybG
-Hash from the prompt:
+Hash from the STDIN:
$ hash_password
Password:
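As a rough sketch of the computation the examples above perform (bcrypt with the configured rounds, defaulting to 12, plus the optional pepper; appending the pepper to the password is an assumption of this illustration, not something the man page states):

```python
import bcrypt  # third-party "bcrypt" package

def hash_password(password: str, pepper: str = "", rounds: int = 12) -> str:
    # Assumption of this sketch: the pepper is appended to the password
    # before bcrypt hashing.
    salted = (password + pepper).encode("utf8")
    return bcrypt.hashpw(salted, bcrypt.gensalt(rounds)).decode("ascii")

# Mirrors the sample config above (bcrypt_rounds: 17, a hashing pepper set).
print(hash_password("p@ssw0rd", pepper="random hashing pepper", rounds=17))
```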

debian/templates (vendored, 2 lines changed)

@@ -5,7 +5,7 @@ _Description: Name of the server:
servers via federation. This is normally the public hostname of the
server running synapse, but can be different if you set up delegation.
Please refer to the delegation documentation in this case:
-  https://element-hq.github.io/synapse/latest/delegate.html.
+  https://github.com/element-hq/synapse/blob/master/docs/delegate.md.
Template: matrix-synapse/report-stats
Type: boolean


@@ -27,7 +27,7 @@ ARG PYTHON_VERSION=3.11
###
# We hardcode the use of Debian bookworm here because this could change upstream
# and other Dockerfiles used for testing are expecting bookworm.
-FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm AS requirements
+FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm as requirements
# RUN --mount is specific to buildkit and is documented at
# https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#build-mounts-run---mount.
@@ -87,7 +87,7 @@ RUN if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
###
### Stage 1: builder
###
-FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm AS builder
+FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm as builder
# install the OS build deps
RUN \


@@ -24,7 +24,7 @@ ARG distro=""
# https://launchpad.net/~jyrki-pulliainen/+archive/ubuntu/dh-virtualenv, but
# it's not obviously easier to use that than to build our own.)
-FROM docker.io/library/${distro} AS builder
+FROM docker.io/library/${distro} as builder
RUN apt-get update -qq -o Acquire::Languages=none
RUN env DEBIAN_FRONTEND=noninteractive apt-get install \


@@ -8,7 +8,9 @@ errors in code.
The necessary tools are:
-- [ruff](https://github.com/charliermarsh/ruff), which can spot common errors and enforce a consistent style; and
+- [black](https://black.readthedocs.io/en/stable/), a source code formatter;
+- [isort](https://pycqa.github.io/isort/), which organises each file's imports;
+- [ruff](https://github.com/charliermarsh/ruff), which can spot common errors; and
- [mypy](https://mypy.readthedocs.io/en/stable/), a type checker.
See [the contributing guide](development/contributing_guide.md#run-the-linters) for instructions


@@ -21,10 +21,8 @@ incrementing integer, but backfilled events start with `stream_ordering=-1` and
---
-- Incremental `/sync?since=xxx` returns things in the order they arrive at the server
-  (`stream_ordering`).
-- Initial `/sync`, `/messages` (and `/backfill` in the federation API) return them in
-  the order determined by the event graph `(topological_ordering, stream_ordering)`.
+- `/sync` returns things in the order they arrive at the server (`stream_ordering`).
+- `/messages` (and `/backfill` in the federation API) return them in the order determined by the event graph `(topological_ordering, stream_ordering)`.
The general idea is that, if you're following a room in real-time (i.e.
`/sync`), you probably want to see the messages as they arrive at your server,
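A minimal sketch of the two orderings contrasted above (illustrative only, not Synapse's code; the field values are made up, with the backfilled event given a negative `stream_ordering` as noted at the top of this file):

```python
from dataclasses import dataclass

@dataclass
class Event:
    event_id: str
    topological_ordering: int  # position implied by the event graph
    stream_ordering: int       # arrival order at this server; negative if backfilled

events = [
    Event("$backfilled", topological_ordering=1, stream_ordering=-2),
    Event("$live_late", topological_ordering=2, stream_ordering=5),
    Event("$live_early", topological_ordering=3, stream_ordering=4),
]

# /sync-style: the order events arrived at the server.
sync_order = sorted(events, key=lambda e: e.stream_ordering)

# /messages- and /backfill-style: event-graph order, with stream_ordering
# as the tie-breaker.
messages_order = sorted(events, key=lambda e: (e.topological_ordering, e.stream_ordering))

print([e.event_id for e in sync_order])      # ['$backfilled', '$live_early', '$live_late']
print([e.event_id for e in messages_order])  # ['$backfilled', '$live_late', '$live_early']
```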


@@ -1,94 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="41.440346mm"
height="10.383124mm"
viewBox="0 0 41.440346 10.383125"
version="1.1"
id="svg1"
xml:space="preserve"
sodipodi:docname="element_logo_white_bg.svg"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"><sodipodi:namedview
id="namedview1"
pagecolor="#ffffff"
bordercolor="#000000"
borderopacity="0.25"
inkscape:showpageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
inkscape:deskcolor="#d1d1d1"
inkscape:document-units="mm"
showgrid="false"
inkscape:export-bgcolor="#ffffffff" /><defs
id="defs1" /><g
id="layer1"
transform="translate(-84.803844,-143.2075)"
inkscape:export-filename="element_logo_white_bg.svg"
inkscape:export-xdpi="96"
inkscape:export-ydpi="96"><g
style="fill:none"
id="g1"
transform="matrix(0.26458333,0,0,0.26458333,85.841658,144.26667)"><rect
style="display:inline;fill:#ffffff;fill-opacity:1;stroke:#ffffff;stroke-width:1.31041;stroke-dasharray:none;stroke-opacity:1"
id="rect20"
width="155.31451"
height="37.932892"
x="-3.2672384"
y="-3.3479743"
rx="3.3718522"
ry="3.7915266"
transform="translate(-2.1259843e-6)"
inkscape:label="rect20"
inkscape:export-filename="rect20.svg"
inkscape:export-xdpi="96"
inkscape:export-ydpi="96" /><path
fill-rule="evenodd"
clip-rule="evenodd"
d="M 16,32 C 24.8366,32 32,24.8366 32,16 32,7.16344 24.8366,0 16,0 7.16344,0 0,7.16344 0,16 0,24.8366 7.16344,32 16,32 Z"
fill="#0dbd8b"
id="path1" /><path
fill-rule="evenodd"
clip-rule="evenodd"
d="m 13.0756,7.455 c 0,-0.64584 0.5247,-1.1694 1.1719,-1.1694 4.3864,0 7.9423,3.54853 7.9423,7.9259 0,0.6458 -0.5246,1.1694 -1.1718,1.1694 -0.6472,0 -1.1719,-0.5236 -1.1719,-1.1694 0,-3.0857 -2.5066,-5.58711 -5.5986,-5.58711 -0.6472,0 -1.1719,-0.52355 -1.1719,-1.16939 z"
fill="#ffffff"
id="path2" /><path
fill-rule="evenodd"
clip-rule="evenodd"
d="m 24.5424,13.042 c 0.6472,0 1.1719,0.5235 1.1719,1.1694 0,4.3773 -3.5559,7.9258 -7.9424,7.9258 -0.6472,0 -1.1718,-0.5235 -1.1718,-1.1693 0,-0.6459 0.5246,-1.1694 1.1718,-1.1694 3.0921,0 5.5987,-2.5015 5.5987,-5.5871 0,-0.6459 0.5247,-1.1694 1.1718,-1.1694 z"
fill="#ffffff"
id="path3" /><path
fill-rule="evenodd"
clip-rule="evenodd"
d="m 18.9446,24.5446 c 0,0.6459 -0.5247,1.1694 -1.1718,1.1694 -4.3865,0 -7.94239,-3.5485 -7.94239,-7.9258 0,-0.6459 0.52469,-1.1694 1.17179,-1.1694 0.6472,0 1.1719,0.5235 1.1719,1.1694 0,3.0856 2.5066,5.587 5.5987,5.587 0.6471,0 1.1718,0.5236 1.1718,1.1694 z"
fill="#ffffff"
id="path4" /><path
fill-rule="evenodd"
clip-rule="evenodd"
d="m 7.45823,18.9576 c -0.64718,0 -1.17183,-0.5235 -1.17183,-1.1694 0,-4.3773 3.55591,-7.92581 7.9423,-7.92581 0.6472,0 1.1719,0.52351 1.1719,1.16941 0,0.6458 -0.5247,1.1694 -1.1719,1.1694 -3.092,0 -5.59864,2.5014 -5.59864,5.587 0,0.6459 -0.52465,1.1694 -1.17183,1.1694 z"
fill="#ffffff"
id="path5" /><path
d="M 56.2856,18.1428 H 44.9998 c 0.1334,1.181 0.5619,2.1238 1.2858,2.8286 0.7238,0.6857 1.6761,1.0286 2.8571,1.0286 0.7809,0 1.4857,-0.1905 2.1143,-0.5715 0.6286,-0.3809 1.0762,-0.8952 1.3428,-1.5428 h 3.4286 c -0.4571,1.5047 -1.3143,2.7238 -2.5714,3.6571 -1.2381,0.9143 -2.7048,1.3715 -4.4,1.3715 -2.2095,0 -4,-0.7334 -5.3714,-2.2 -1.3524,-1.4667 -2.0286,-3.3239 -2.0286,-5.5715 0,-2.1905 0.6857,-4.0285 2.0571,-5.5143 1.3715,-1.4857 3.1429,-2.22853 5.3143,-2.22853 2.1714,0 3.9238,0.73333 5.2572,2.20003 1.3523,1.4476 2.0285,3.2762 2.0285,5.4857 z m -7.2572,-5.9714 c -1.0667,0 -1.9524,0.3143 -2.6571,0.9429 -0.7048,0.6285 -1.1429,1.4666 -1.3143,2.5142 h 7.8857 c -0.1524,-1.0476 -0.5714,-1.8857 -1.2571,-2.5142 -0.6858,-0.6286 -1.5715,-0.9429 -2.6572,-0.9429 z"
fill="#000000"
id="path6" /><path
d="M 58.6539,20.1428 V 3.14282 h 3.4 V 20.2 c 0,0.7619 0.419,1.1428 1.2571,1.1428 l 0.6,-0.0285 v 3.2285 c -0.3238,0.0572 -0.6667,0.0857 -1.0286,0.0857 -1.4666,0 -2.5428,-0.3714 -3.2285,-1.1142 -0.6667,-0.7429 -1,-1.8667 -1,-3.3715 z"
fill="#000000"
id="path7" /><path
d="M 79.7454,18.1428 H 68.4597 c 0.1333,1.181 0.5619,2.1238 1.2857,2.8286 0.7238,0.6857 1.6762,1.0286 2.8571,1.0286 0.781,0 1.4857,-0.1905 2.1143,-0.5715 0.6286,-0.3809 1.0762,-0.8952 1.3429,-1.5428 h 3.4285 c -0.4571,1.5047 -1.3143,2.7238 -2.5714,3.6571 -1.2381,0.9143 -2.7048,1.3715 -4.4,1.3715 -2.2095,0 -4,-0.7334 -5.3714,-2.2 -1.3524,-1.4667 -2.0286,-3.3239 -2.0286,-5.5715 0,-2.1905 0.6857,-4.0285 2.0571,-5.5143 1.3715,-1.4857 3.1429,-2.22853 5.3143,-2.22853 2.1715,0 3.9238,0.73333 5.2572,2.20003 1.3524,1.4476 2.0285,3.2762 2.0285,5.4857 z m -7.2572,-5.9714 c -1.0666,0 -1.9524,0.3143 -2.6571,0.9429 -0.7048,0.6285 -1.1429,1.4666 -1.3143,2.5142 h 7.8857 c -0.1524,-1.0476 -0.5714,-1.8857 -1.2571,-2.5142 -0.6857,-0.6286 -1.5715,-0.9429 -2.6572,-0.9429 z"
fill="#000000"
id="path8" /><path
d="m 95.0851,16.0571 v 8.5143 h -3.4 v -8.8857 c 0,-2.2476 -0.9333,-3.3714 -2.8,-3.3714 -1.0095,0 -1.819,0.3238 -2.4286,0.9714 -0.5904,0.6476 -0.8857,1.5333 -0.8857,2.6571 v 8.6286 h -3.4 V 9.74282 h 3.1429 v 1.97148 c 0.3619,-0.6667 0.9143,-1.2191 1.6571,-1.6572 0.7429,-0.43809 1.6667,-0.65713 2.7714,-0.65713 2.0572,0 3.5429,0.78093 4.4572,2.34283 1.2571,-1.5619 2.9333,-2.34283 5.0286,-2.34283 1.733,0 3.067,0.54285 4,1.62853 0.933,1.0667 1.4,2.4762 1.4,4.2286 v 9.3143 h -3.4 v -8.8857 c 0,-2.2476 -0.933,-3.3714 -2.8,-3.3714 -1.0286,0 -1.8477,0.3333 -2.4572,1 -0.5905,0.6476 -0.8857,1.5619 -0.8857,2.7428 z"
fill="#000000"
id="path9" /><path
d="m 121.537,18.1428 h -11.286 c 0.133,1.181 0.562,2.1238 1.286,2.8286 0.723,0.6857 1.676,1.0286 2.857,1.0286 0.781,0 1.486,-0.1905 2.114,-0.5715 0.629,-0.3809 1.076,-0.8952 1.343,-1.5428 h 3.429 c -0.458,1.5047 -1.315,2.7238 -2.572,3.6571 -1.238,0.9143 -2.705,1.3715 -4.4,1.3715 -2.209,0 -4,-0.7334 -5.371,-2.2 -1.353,-1.4667 -2.029,-3.3239 -2.029,-5.5715 0,-2.1905 0.686,-4.0285 2.057,-5.5143 1.372,-1.4857 3.143,-2.22853 5.315,-2.22853 2.171,0 3.923,0.73333 5.257,2.20003 1.352,1.4476 2.028,3.2762 2.028,5.4857 z m -7.257,-5.9714 c -1.067,0 -1.953,0.3143 -2.658,0.9429 -0.704,0.6285 -1.142,1.4666 -1.314,2.5142 h 7.886 c -0.153,-1.0476 -0.572,-1.8857 -1.257,-2.5142 -0.686,-0.6286 -1.572,-0.9429 -2.657,-0.9429 z"
fill="#000000"
id="path10" /><path
d="m 127.105,9.74282 v 1.97148 c 0.343,-0.6477 0.905,-1.1905 1.686,-1.6286 0.8,-0.45716 1.762,-0.68573 2.885,-0.68573 1.753,0 3.105,0.53333 4.058,1.60003 0.971,1.0666 1.457,2.4857 1.457,4.2571 v 9.3143 h -3.4 v -8.8857 c 0,-1.0476 -0.248,-1.8667 -0.743,-2.4572 -0.476,-0.6095 -1.21,-0.9142 -2.2,-0.9142 -1.086,0 -1.943,0.3238 -2.572,0.9714 -0.609,0.6476 -0.914,1.5428 -0.914,2.6857 v 8.6 h -3.4 V 9.74282 Z"
fill="#000000"
id="path11" /><path
d="m 147.12,21.5428 v 2.9429 c -0.419,0.1143 -1.009,0.1714 -1.771,0.1714 -2.895,0 -4.343,-1.4571 -4.343,-4.3714 v -7.8286 h -2.257 V 9.74282 h 2.257 V 5.88568 h 3.4 v 3.85714 h 2.772 v 2.71428 h -2.772 v 7.4857 c 0,1.1619 0.552,1.7429 1.657,1.7429 z"
fill="#000000"
id="path12" /></g></g></svg>



@@ -67,7 +67,7 @@ in Synapse can be deactivated.
**NOTE**: This has an impact on security and is for testing purposes only!
To deactivate the certificate validation, the following setting must be added to
-your [homeserver.yaml](../usage/configuration/homeserver_sample_config.md).
+your [homserver.yaml](../usage/configuration/homeserver_sample_config.md).
```yaml
use_insecure_ssl_client_just_for_testing_do_not_use: true


@@ -309,62 +309,7 @@ sudo dnf install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
libwebp-devel libxml2-devel libxslt-devel libpq-devel \
python3-virtualenv libffi-devel openssl-devel python3-devel \
libicu-devel
sudo dnf group install "Development Tools"
```
##### Red Hat Enterprise Linux / Rocky Linux
*Note: The term "RHEL" below refers to both Red Hat Enterprise Linux and Rocky Linux. The distributions are 1:1 binary compatible.*
It's recommended to use the latest Python versions.
RHEL 8 in particular ships with Python 3.6 by default which is EOL and therefore no longer supported by Synapse. RHEL 9 ship with Python 3.9 which is still supported by the Python core team as of this writing. However, newer Python versions provide significant performance improvements and they're available in official distributions' repositories. Therefore it's recommended to use them.
Python 3.11 and 3.12 are available for both RHEL 8 and 9.
These commands should be run as root user.
RHEL 8
```bash
# Enable PowerTools repository
dnf config-manager --set-enabled powertools
```
RHEL 9
```bash
# Enable CodeReady Linux Builder repository
crb enable
```
Install new version of Python. You only need one of these:
```bash
# Python 3.11
dnf install python3.11 python3.11-devel
```
```bash
# Python 3.12
dnf install python3.12 python3.12-devel
```
Finally, install common prerequisites
```bash
dnf install libicu libicu-devel libpq5 libpq5-devel lz4 pkgconf
dnf group install "Development Tools"
```
###### Using venv module instead of virtualenv command
It's recommended to use Python venv module directly rather than the virtualenv command.
* On RHEL 9, virtualenv is only available on [EPEL](https://docs.fedoraproject.org/en-US/epel/).
* On RHEL 8, virtualenv is based on Python 3.6. It does not support creating 3.11/3.12 virtual environments.
Here's an example of creating Python 3.12 virtual environment and installing Synapse from PyPI.
```bash
mkdir -p ~/synapse
# To use Python 3.11, simply use the command "python3.11" instead.
python3.12 -m venv ~/synapse/env
source ~/synapse/env/bin/activate
pip install --upgrade pip
pip install --upgrade setuptools
pip install matrix-synapse
sudo dnf groupinstall "Development Tools"
```
##### macOS


@@ -246,7 +246,6 @@ Example configuration:
```yaml
presence:
enabled: false
include_offline_users_on_sync: false
```
`enabled` can also be set to a special value of "untracked" which ignores updates
@@ -255,10 +254,6 @@ received via clients and federation, while still accepting updates from the
*The "untracked" option was added in Synapse 1.96.0.*
When clients perform an initial or `full_state` sync, presence results for offline users are
not included by default. Setting `include_offline_users_on_sync` to `true` will always include
offline users in the results. Defaults to false.
---
### `require_auth_for_profile_requests`
@@ -509,8 +504,7 @@ Unix socket support (_Added in Synapse 1.89.0_):
Valid resource names are:
-* `client`: the client-server API (/_matrix/client). Also implies `media` and `static`.
-  If configuring the main process, the Synapse Admin API (/_synapse/admin) is also implied.
+* `client`: the client-server API (/_matrix/client), and the synapse admin API (/_synapse/admin). Also implies `media` and `static`.
* `consent`: user consent forms (/_matrix/consent). See [here](../../consent_tracking.md) for more.
@@ -1766,7 +1760,7 @@ rc_3pid_validation:
This option sets ratelimiting how often invites can be sent in a room or to a
specific user. `per_room` defaults to `per_second: 0.3`, `burst_count: 10`,
-`per_user` defaults to `per_second: 0.003`, `burst_count: 5`, and `per_issuer`
+`per_user` defaults to `per_second: 0.003`, `burst_count: 5`, and `per_issuer`
defaults to `per_second: 0.3`, `burst_count: 10`.
Client requests that invite user(s) when [creating a
@@ -1869,18 +1863,6 @@ federation_rr_transactions_per_room_per_second: 40
## Media Store
Config options related to Synapse's media store.
---
### `enable_authenticated_media`
When set to true, all subsequent media uploads will be marked as authenticated, and will not be available over legacy
unauthenticated media endpoints (`/_matrix/media/(r0|v3|v1)/download` and `/_matrix/media/(r0|v3|v1)/thumbnail`) - requests for authenticated media over these endpoints will result in a 404. All media, including authenticated media, will be available over the authenticated media endpoints `_matrix/client/v1/media/download` and `_matrix/client/v1/media/thumbnail`. Media uploaded prior to setting this option to true will still be available over the legacy endpoints. Note if the setting is switched to false
after enabling, media marked as authenticated will be available over legacy endpoints. Defaults to false, but
this will change to true in a future Synapse release.
Example configuration:
```yaml
enable_authenticated_media: true
```
---
### `enable_media_repo`
@@ -1967,7 +1949,7 @@ max_image_pixels: 35M
---
### `remote_media_download_burst_count`
-Remote media downloads are ratelimited using a [leaky bucket algorithm](https://en.wikipedia.org/wiki/Leaky_bucket), where a given "bucket" is keyed to the IP address of the requester when requesting remote media downloads. This configuration option sets the size of the bucket against which the size in bytes of downloads are penalized - if the bucket is full, ie a given number of bytes have already been downloaded, further downloads will be denied until the bucket drains. Defaults to 500MiB. See also `remote_media_download_per_second` which determines the rate at which the "bucket" is emptied and thus has available space to authorize new requests.
+Remote media downloads are ratelimited using a [leaky bucket algorithm](https://en.wikipedia.org/wiki/Leaky_bucket), where a given "bucket" is keyed to the IP address of the requester when requesting remote media downloads. This configuration option sets the size of the bucket against which the size in bytes of downloads are penalized - if the bucket is full, ie a given number of bytes have already been downloaded, further downloads will be denied until the bucket drains. Defaults to 500MiB. See also `remote_media_download_per_second` which determines the rate at which the "bucket" is emptied and thus has available space to authorize new requests.
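A toy sketch of the leaky-bucket scheme this option configures (not Synapse's ratelimiter; the drain rate below is an arbitrary stand-in for `remote_media_download_per_second`):

```python
import time

class LeakyBucket:
    """Toy per-IP leaky bucket: downloads add bytes, the bucket drains at a
    fixed rate, and requests are denied while the bucket is full."""

    def __init__(self, capacity_bytes: int, drain_bytes_per_second: float):
        self.capacity = capacity_bytes
        self.drain = drain_bytes_per_second
        self.level = 0.0
        self.last = time.monotonic()

    def allow(self, size_bytes: int) -> bool:
        now = time.monotonic()
        # Drain for the elapsed time, never below empty.
        self.level = max(0.0, self.level - (now - self.last) * self.drain)
        self.last = now
        if self.level + size_bytes > self.capacity:
            return False  # bucket full: deny until it drains
        self.level += size_bytes
        return True

buckets: dict[str, LeakyBucket] = {}  # keyed by requester IP, as described above

def may_download(ip: str, size_bytes: int) -> bool:
    # 500 MiB default capacity from the text; the drain rate is illustrative.
    bucket = buckets.setdefault(ip, LeakyBucket(500 * 1024 * 1024, 87.5 * 1024))
    return bucket.allow(size_bytes)
```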
Example configuration:
```yaml
@@ -2387,7 +2369,7 @@ enable_registration_without_verification: true
---
### `registrations_require_3pid`
If this is set, users must provide all of the specified types of [3PID](https://spec.matrix.org/latest/appendices/#3pid-types) when registering an account.
If this is set, users must provide all of the specified types of 3PID when registering an account.
Note that [`enable_registration`](#enable_registration) must also be set to allow account registration.
@@ -2412,9 +2394,6 @@ disable_msisdn_registration: true
Mandate that users are only allowed to associate certain formats of
3PIDs with accounts on this server, as specified by the `medium` and `pattern` sub-options.
`pattern` is a [Perl-like regular expression](https://docs.python.org/3/library/re.html#module-re).
More information about 3PIDs, allowed `medium` types and their `address` syntax can be found [in the Matrix spec](https://spec.matrix.org/latest/appendices/#3pid-types).
Example configuration:
```yaml
@@ -2424,7 +2403,7 @@ allowed_local_3pids:
- medium: email
pattern: '^[^@]+@vector\.im$'
- medium: msisdn
-    pattern: '^44\d{10}$'
+    pattern: '\+44'
```
---
### `enable_3pid_lookup`
@@ -3303,8 +3282,8 @@ saml2_config:
contact_person:
- given_name: Bob
sur_name: "the Sysadmin"
email_address: ["admin@example.com"]
contact_type: technical
email_address": ["admin@example.com"]
contact_type": technical
saml_session_lifetime: 5m
@@ -4155,38 +4134,6 @@ default_power_level_content_override:
trusted_private_chat: null
public_chat: null
```
The default power levels for each preset are:
```yaml
"m.room.name": 50
"m.room.power_levels": 100
"m.room.history_visibility": 100
"m.room.canonical_alias": 50
"m.room.avatar": 50
"m.room.tombstone": 100
"m.room.server_acl": 100
"m.room.encryption": 100
```
So a complete example that keeps the default power levels for a preset
while also setting the power level for a new key is:
```yaml
default_power_level_content_override:
private_chat:
events:
"com.example.foo": 0
"m.room.name": 50
"m.room.power_levels": 100
"m.room.history_visibility": 100
"m.room.canonical_alias": 50
"m.room.avatar": 50
"m.room.tombstone": 100
"m.room.server_acl": 100
"m.room.encryption": 100
trusted_private_chat: null
public_chat: null
```
---
### `forget_rooms_on_leave`
@@ -4686,9 +4633,7 @@ This setting has the following sub-options:
* `only_for_direct_messages`: Whether invites should be automatically accepted for all room types, or only
for direct messages. Defaults to false.
* `only_from_local_users`: Whether to only automatically accept invites from users on this homeserver. Defaults to false.
* `worker_to_run_on`: Which worker to run this module on. This must match
the "worker_name". If not set or `null`, invites will be accepted on the
main process.
* `worker_to_run_on`: Which worker to run this module on. This must match the "worker_name".
NOTE: Care should be taken not to enable this setting if the `synapse_auto_accept_invite` module is enabled and installed. The two modules will compete to perform the same task, which may result in undesired behaviour. For example, multiple join

poetry.lock (generated, 1092 lines): file diff suppressed because it is too large

pylint.cfg (new file, 280 lines)
View File

@@ -0,0 +1,280 @@
[MASTER]
# Specify a configuration file.
#rcfile=
# Python code to execute, usually for sys.path manipulation such as
# pygtk.require().
#init-hook=
# Profiled execution.
profile=no
# Add files or directories to the blacklist. They should be base names, not
# paths.
ignore=CVS
# Pickle collected data for later comparisons.
persistent=yes
# List of plugins (as comma separated values of python modules names) to load,
# usually to register additional checkers.
load-plugins=
[MESSAGES CONTROL]
# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifier separated by comma (,) or put this option
# multiple time. See also the "--disable" option for examples.
#enable=
# Disable the message, report, category or checker with the given id(s). You
# can either give multiple identifiers separated by comma (,) or put this
# option multiple times (only on the command line, not in the configuration
# file where it should appear only once).You can also use "--disable=all" to
# disable everything first and then reenable specific checks. For example, if
# you want to run only the similarities checker, you can use "--disable=all
# --enable=similarities". If you want to run only the classes checker, but have
# no Warning level messages displayed, use "--disable=all --enable=classes
# --disable=W"
disable=missing-docstring
[REPORTS]
# Set the output format. Available formats are text, parseable, colorized, msvs
# (visual studio) and html. You can also give a reporter class, eg
# mypackage.mymodule.MyReporterClass.
output-format=text
# Put messages in a separate file for each module / package specified on the
# command line instead of printing them on stdout. Reports (if any) will be
# written in a file name "pylint_global.[txt|html]".
files-output=no
# Tells whether to display a full report or only the messages
reports=yes
# Python expression which should return a note less than 10 (10 is the highest
# note). You have access to the variables errors, warning, statement which
# respectively contain the number of errors / warnings messages and the total
# number of statements analyzed. This is used by the global evaluation report
# (RP0004).
evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
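# For example, 2 errors and 3 warnings across 100 statements would score
# 10.0 - ((5*2 + 3 + 0 + 0) / 100 * 10) = 8.7.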
# Add a comment according to your evaluation note. This is used by the global
# evaluation report (RP0004).
comment=no
# Template used to display messages. This is a python new-style format string
# used to format the message information. See doc for all details
#msg-template=
[TYPECHECK]
# Tells whether missing members accessed in mixin class should be ignored. A
# mixin class is detected if its name ends with "mixin" (case insensitive).
ignore-mixin-members=yes
# List of classes names for which member attributes should not be checked
# (useful for classes with attributes dynamically set).
ignored-classes=SQLObject
# When zope mode is activated, add a predefined set of Zope acquired attributes
# to generated-members.
zope=no
# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E0201 when accessed. Python regular
# expressions are accepted.
generated-members=REQUEST,acl_users,aq_parent
[MISCELLANEOUS]
# List of note tags to take in consideration, separated by a comma.
notes=FIXME,XXX,TODO
[SIMILARITIES]
# Minimum lines number of a similarity.
min-similarity-lines=4
# Ignore comments when computing similarities.
ignore-comments=yes
# Ignore docstrings when computing similarities.
ignore-docstrings=yes
# Ignore imports when computing similarities.
ignore-imports=no
[VARIABLES]
# Tells whether we should check for unused import in __init__ files.
init-import=no
# A regular expression matching the beginning of the name of dummy variables
# (i.e. not used).
dummy-variables-rgx=_$|dummy
# List of additional names supposed to be defined in builtins. Remember that
# you should avoid to define new builtins when possible.
additional-builtins=
[BASIC]
# Required attributes for module, separated by a comma
required-attributes=
# List of builtins function names that should not be used, separated by a comma
bad-functions=map,filter,apply,input
# Regular expression which should only match correct module names
module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$
# Regular expression which should only match correct module level names
const-rgx=(([A-Z_][A-Z0-9_]*)|(__.*__))$
# Regular expression which should only match correct class names
class-rgx=[A-Z_][a-zA-Z0-9]+$
# Regular expression which should only match correct function names
function-rgx=[a-z_][a-z0-9_]{2,30}$
# Regular expression which should only match correct method names
method-rgx=[a-z_][a-z0-9_]{2,30}$
# Regular expression which should only match correct instance attribute names
attr-rgx=[a-z_][a-z0-9_]{2,30}$
# Regular expression which should only match correct argument names
argument-rgx=[a-z_][a-z0-9_]{2,30}$
# Regular expression which should only match correct variable names
variable-rgx=[a-z_][a-z0-9_]{2,30}$
# Regular expression which should only match correct attribute names in class
# bodies
class-attribute-rgx=([A-Za-z_][A-Za-z0-9_]{2,30}|(__.*__))$
# Regular expression which should only match correct list comprehension /
# generator expression variable names
inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$
# Good variable names which should always be accepted, separated by a comma
good-names=i,j,k,ex,Run,_
# Bad variable names which should always be refused, separated by a comma
bad-names=foo,bar,baz,toto,tutu,tata
# Regular expression which should only match function or class names that do
# not require a docstring.
no-docstring-rgx=__.*__
# Minimum line length for functions/classes that require docstrings, shorter
# ones are exempt.
docstring-min-length=-1
[FORMAT]
# Maximum number of characters on a single line.
max-line-length=80
# Regexp for a line that is allowed to be longer than the limit.
ignore-long-lines=^\s*(# )?<?https?://\S+>?$
# Allow the body of an if to be on the same line as the test if there is no
# else.
single-line-if-stmt=no
# List of optional constructs for which whitespace checking is disabled
no-space-check=trailing-comma,dict-separator
# Maximum number of lines in a module
max-module-lines=1000
# String used as indentation unit. This is usually " " (4 spaces) or "\t" (1
# tab).
indent-string=' '
[DESIGN]
# Maximum number of arguments for function / method
max-args=5
# Argument names that match this expression will be ignored. Default to name
# with leading underscore
ignored-argument-names=_.*
# Maximum number of locals for function / method body
max-locals=15
# Maximum number of return / yield for function / method body
max-returns=6
# Maximum number of branch for function / method body
max-branches=12
# Maximum number of statements in function / method body
max-statements=50
# Maximum number of parents for a class (see R0901).
max-parents=7
# Maximum number of attributes for a class (see R0902).
max-attributes=7
# Minimum number of public methods for a class (see R0903).
min-public-methods=2
# Maximum number of public methods for a class (see R0904).
max-public-methods=20
[IMPORTS]
# Deprecated modules which should not be used, separated by a comma
deprecated-modules=regsub,TERMIOS,Bastion,rexec
# Create a graph of every (i.e. internal and external) dependencies in the
# given file (report RP0402 must not be disabled)
import-graph=
# Create a graph of external dependencies in the given file (report RP0402 must
# not be disabled)
ext-import-graph=
# Create a graph of internal dependencies in the given file (report RP0402 must
# not be disabled)
int-import-graph=
[CLASSES]
# List of interface methods to ignore, separated by a comma. This is used for
# instance to not check methods defines in Zope's Interface base class.
ignore-iface-methods=isImplementedBy,deferred,extends,names,namesAndDescriptions,queryDescriptionFor,getBases,getDescriptionFor,getDoc,getName,getTaggedValue,getTaggedValueTags,isEqualOrExtendedBy,setTaggedValue,isImplementedByInstancesOf,adaptWith,is_implemented_by
# List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods=__init__,__new__,setUp
# List of valid names for the first argument in a class method.
valid-classmethod-first-arg=cls
# List of valid names for the first argument in a metaclass class method.
valid-metaclass-classmethod-first-arg=mcs
[EXCEPTIONS]
# Exceptions that will emit a warning when being caught. Defaults to
# "Exception"
overgeneral-exceptions=Exception

View File

@@ -34,9 +34,14 @@
name = "Internal Changes"
showcontent = true
[tool.black]
target-version = ['py38', 'py39', 'py310', 'py311']
# black ignores everything in .gitignore by default, see
# https://black.readthedocs.io/en/stable/usage_and_configuration/file_collection_and_discovery.html#gitignore
# Use `extend-exclude` if you want to exclude something in addition to this.
[tool.ruff]
line-length = 88
target-version = "py38"
[tool.ruff.lint]
# See https://beta.ruff.rs/docs/rules/#error-e
@@ -58,8 +63,6 @@ select = [
"W",
# pyflakes
"F",
# isort
"I001",
# flake8-bugbear
"B0",
# flake8-comprehensions
@@ -76,20 +79,17 @@ select = [
"EXE",
]
[tool.ruff.lint.isort]
combine-as-imports = true
section-order = ["future", "standard-library", "third-party", "twisted", "first-party", "testing", "local-folder"]
known-first-party = ["synapse"]
[tool.ruff.lint.isort.sections]
twisted = ["twisted", "OpenSSL"]
testing = ["tests"]
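# With the custom sections above, imports end up grouped in this order:
#   import os                                    (standard library)
#   import attr                                  (third party)
#   from twisted.internet import defer           (twisted)
#   from synapse.api.errors import SynapseError  (first party)
#   from tests import unittest                   (testing)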
[tool.ruff.format]
quote-style = "double"
indent-style = "space"
skip-magic-trailing-comma = false
line-ending = "auto"
[tool.isort]
line_length = 88
sections = ["FUTURE", "STDLIB", "THIRDPARTY", "TWISTED", "FIRSTPARTY", "TESTS", "LOCALFOLDER"]
default_section = "THIRDPARTY"
known_first_party = ["synapse"]
known_tests = ["tests"]
known_twisted = ["twisted", "OpenSSL"]
multi_line_output = 3
include_trailing_comma = true
combine_as_imports = true
skip_gitignore = true
[tool.maturin]
manifest-path = "rust/Cargo.toml"
@@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.114.0rc1"
version = "1.111.0rc2"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"
@@ -201,8 +201,8 @@ netaddr = ">=0.7.18"
# add a lower bound to the Jinja2 dependency.
Jinja2 = ">=3.0"
bleach = ">=1.4.3"
# We use `assert_never`, which was added in `typing-extensions` 4.1.
typing-extensions = ">=4.1"
# We use `Self`, which was added in `typing-extensions` 4.0.
typing-extensions = ">=4.0"
# We enforce that we have a `cryptography` version that bundles an `openssl`
# with the latest security patches.
cryptography = ">=3.4.7"
@@ -320,7 +320,9 @@ all = [
# failing on new releases. Keeping lower bounds loose here means that dependabot
# can bump versions without having to update the content-hash in the lockfile.
# This helps prevents merge conflicts when running a batch of dependabot updates.
ruff = "0.6.2"
isort = ">=5.10.1"
black = ">=22.7.0"
ruff = "0.5.0"
# Type checking only works with the pydantic.v1 compat module from pydantic v2
pydantic = "^2"

View File

@@ -43,7 +43,7 @@ import argparse
import base64
import json
import sys
from typing import Any, Dict, Mapping, Optional, Tuple, Union
from typing import Any, Dict, Optional, Tuple
from urllib import parse as urlparse
import requests
@@ -75,7 +75,7 @@ def encode_canonical_json(value: object) -> bytes:
value,
# Encode code-points outside of ASCII as UTF-8 rather than \u escapes
ensure_ascii=False,
# Remove unnecessary white space.
# Remove unecessary white space.
separators=(",", ":"),
# Sort the keys of dictionaries.
sort_keys=True,
@@ -298,23 +298,12 @@ class MatrixConnectionAdapter(HTTPAdapter):
return super().send(request, *args, **kwargs)
def get_connection_with_tls_context(
self,
request: PreparedRequest,
verify: Optional[Union[bool, str]],
proxies: Optional[Mapping[str, str]] = None,
cert: Optional[Union[Tuple[str, str], str]] = None,
def get_connection(
self, url: str, proxies: Optional[Dict[str, str]] = None
) -> HTTPConnectionPool:
# overrides the get_connection_with_tls_context() method in the base class
parsed = urlparse.urlsplit(request.url)
# Extract the server name from the request URL, and ensure it's a str.
hostname = parsed.netloc
if isinstance(hostname, bytes):
hostname = hostname.decode("utf-8")
assert isinstance(hostname, str)
(host, port, ssl_server_name) = self._lookup(hostname)
# overrides the get_connection() method in the base class
parsed = urlparse.urlsplit(url)
(host, port, ssl_server_name) = self._lookup(parsed.netloc)
print(
f"Connecting to {host}:{port} with SNI {ssl_server_name}", file=sys.stderr
)

View File

@@ -1,9 +1,8 @@
#!/usr/bin/env bash
#
# Runs linting scripts over the local Synapse checkout
# black - opinionated code formatter
# ruff - lints and finds mistakes
# mypy - typechecks python code
# cargo clippy - lints rust code
set -e
@@ -102,6 +101,12 @@ echo
# Print out the commands being run
set -x
# Ensure the sort order of imports.
isort "${files[@]}"
# Ensure Python code conforms to an opinionated style.
python3 -m black "${files[@]}"
# Ensure the sample configuration file conforms to style checks.
./scripts-dev/config-lint.sh

View File

@@ -38,7 +38,6 @@ from mypy.types import (
NoneType,
TupleType,
TypeAliasType,
TypeVarType,
UninhabitedType,
UnionType,
)
@@ -234,7 +233,6 @@ IMMUTABLE_CUSTOM_TYPES = {
"synapse.synapse_rust.push.FilteredPushRules",
# This is technically not immutable, but close enough.
"signedjson.types.VerifyKey",
"synapse.types.StrCollection",
}
# Immutable containers only if the values are also immutable.
@@ -300,7 +298,7 @@ def is_cacheable(
elif rt.type.fullname in MUTABLE_CONTAINER_TYPES:
# Mutable containers are mutable regardless of their underlying type.
return False, f"container {rt.type.fullname} is mutable"
return False, None
elif "attrs" in rt.type.metadata:
# attrs classes are only cacheable iff they are frozen (immutable themselves)
@@ -320,9 +318,6 @@ def is_cacheable(
else:
return False, "non-frozen attrs class"
elif rt.type.is_enum:
# We assume Enum values are immutable
return True, None
else:
# Ensure we fail for unknown types, these generally means that the
# above code is not complete.
@@ -331,18 +326,6 @@ def is_cacheable(
f"Don't know how to handle {rt.type.fullname} return type instance",
)
elif isinstance(rt, TypeVarType):
# We consider TypeVars immutable if they are bound to a set of immutable
# types.
if rt.values:
for value in rt.values:
ok, note = is_cacheable(value, signature, verbose)
if not ok:
return False, f"TypeVar bound not cacheable {value}"
return True, None
return False, "TypeVar is unbound"
elif isinstance(rt, NoneType):
# None is cacheable.
return True, None

View File

@@ -324,11 +324,6 @@ def tag(gh_token: Optional[str]) -> None:
def _tag(gh_token: Optional[str]) -> None:
"""Tags the release and generates a draft GitHub release"""
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
# Make sure we're in a git repo.
repo = get_repo_and_check_clean_checkout()
@@ -423,11 +418,6 @@ def publish(gh_token: str) -> None:
def _publish(gh_token: str) -> None:
"""Publish release on GitHub."""
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
# Make sure we're in a git repo.
get_repo_and_check_clean_checkout()
@@ -470,11 +460,6 @@ def upload(gh_token: Optional[str]) -> None:
def _upload(gh_token: Optional[str]) -> None:
"""Upload release to pypi."""
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
current_version = get_package_version()
tag_name = f"v{current_version}"
@@ -570,11 +555,6 @@ def wait_for_actions(gh_token: Optional[str]) -> None:
def _wait_for_actions(gh_token: Optional[str]) -> None:
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
# Find out the version and tag name.
current_version = get_package_version()
tag_name = f"v{current_version}"
@@ -731,11 +711,6 @@ Ask the designated people to do the blog and tweets."""
@cli.command()
@click.option("--gh-token", envvar=["GH_TOKEN", "GITHUB_TOKEN"], required=True)
def full(gh_token: str) -> None:
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
click.echo("1. If this is a security release, read the security wiki page.")
click.echo("2. Check for any release blockers before proceeding.")
click.echo(" https://github.com/element-hq/synapse/labels/X-Release-Blocker")

View File

@@ -56,9 +56,7 @@ def main() -> None:
password_pepper = password_config.get("pepper", password_pepper)
password = args.password
if not password and not sys.stdin.isatty():
password = sys.stdin.readline().strip()
elif not password:
if not password:
password = prompt_for_pass()
# On Python 2, make sure we decode it to Unicode before we normalise it

View File

@@ -119,24 +119,18 @@ BOOLEAN_COLUMNS = {
"e2e_room_keys": ["is_verified"],
"event_edges": ["is_state"],
"events": ["processed", "outlier", "contains_url"],
"local_media_repository": ["safe_from_quarantine", "authenticated"],
"per_user_experimental_features": ["enabled"],
"local_media_repository": ["safe_from_quarantine"],
"presence_list": ["accepted"],
"presence_stream": ["currently_active"],
"public_room_list_stream": ["visibility"],
"pushers": ["enabled"],
"redactions": ["have_censored"],
"remote_media_cache": ["authenticated"],
"room_stats_state": ["is_federatable"],
"rooms": ["is_public", "has_auth_chain_index"],
"sliding_sync_joined_rooms": ["is_encrypted"],
"sliding_sync_membership_snapshots": [
"has_known_state",
"is_encrypted",
],
"users": ["shadow_banned", "approved", "locked", "suspended"],
"un_partial_stated_event_stream": ["rejection_status_changed"],
"users_who_share_rooms": ["share_private"],
"per_user_experimental_features": ["enabled"],
}
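Conceptually, the port script uses this mapping to turn SQLite's 0/1 integers into real booleans before inserting rows into Postgres. A rough sketch of that conversion (illustrative only; the helper name is invented):
```python
from typing import Any, List

def convert_row(table: str, headers: List[str], row: List[Any]) -> List[Any]:
    # Columns listed in BOOLEAN_COLUMNS arrive from SQLite as 0/1 integers
    # and must become real booleans for Postgres.
    bool_cols = BOOLEAN_COLUMNS.get(table, [])
    return [
        bool(value) if header in bool_cols and value is not None else value
        for header, value in zip(headers, row)
    ]
```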

View File

@@ -121,9 +121,7 @@ class MSC3861DelegatedAuth(BaseAuth):
self._hostname = hs.hostname
self._admin_token = self._config.admin_token
self._issuer_metadata = RetryOnExceptionCachedCall[OpenIDProviderMetadata](
self._load_metadata
)
self._issuer_metadata = RetryOnExceptionCachedCall(self._load_metadata)
if isinstance(auth_method, PrivateKeyJWTWithKid):
# Use the JWK as the client secret when using the private_key_jwt method
@@ -147,33 +145,6 @@ class MSC3861DelegatedAuth(BaseAuth):
# metadata.validate_introspection_endpoint()
return metadata
async def issuer(self) -> str:
"""
Get the configured issuer
This will use the issuer value set in the metadata,
falling back to the one set in the config if not set in the metadata
"""
metadata = await self._issuer_metadata.get()
return metadata.issuer or self._config.issuer
async def account_management_url(self) -> Optional[str]:
"""
Get the configured account management URL
This will discover the account management URL from the issuer if it's not set in the config
"""
if self._config.account_management_url is not None:
return self._config.account_management_url
try:
metadata = await self._issuer_metadata.get()
return metadata.get("account_management_uri", None)
# We don't want to raise here if we can't load the metadata
except Exception:
logger.warning("Failed to load metadata:", exc_info=True)
return None
async def _introspection_endpoint(self) -> str:
"""
Returns the introspection endpoint of the issuer
@@ -183,7 +154,7 @@ class MSC3861DelegatedAuth(BaseAuth):
if self._config.introspection_endpoint is not None:
return self._config.introspection_endpoint
metadata = await self._issuer_metadata.get()
metadata = await self._load_metadata()
return metadata.get("introspection_endpoint")
async def _introspect_token(self, token: str) -> IntrospectionToken:

View File

@@ -50,7 +50,7 @@ class Membership:
KNOCK: Final = "knock"
LEAVE: Final = "leave"
BAN: Final = "ban"
LIST: Final = frozenset((INVITE, JOIN, KNOCK, LEAVE, BAN))
LIST: Final = {INVITE, JOIN, KNOCK, LEAVE, BAN}
class PresenceState:
@@ -225,13 +225,6 @@ class EventContentFields:
# This is deprecated in MSC2175.
ROOM_CREATOR: Final = "creator"
# The version of the room for `m.room.create` events.
ROOM_VERSION: Final = "room_version"
ROOM_NAME: Final = "name"
MEMBERSHIP: Final = "membership"
# Used in m.room.guest_access events.
GUEST_ACCESS: Final = "guest_access"
@@ -244,11 +237,6 @@ class EventContentFields:
# an unspecced field added to to-device messages to identify them uniquely-ish
TO_DEVICE_MSGID: Final = "org.matrix.msgid"
# `m.room.encryption`` algorithm field
ENCRYPTION_ALGORITHM: Final = "algorithm"
TOMBSTONE_SUCCESSOR_ROOM: Final = "replacement_room"
class EventUnsignedContentFields:
"""Fields found inside the 'unsigned' data on events"""

View File

@@ -128,10 +128,6 @@ class Codes(str, Enum):
# MSC2677
DUPLICATE_ANNOTATION = "M_DUPLICATE_ANNOTATION"
# MSC3575 we are telling the client they need to expire their sliding sync
# connection.
UNKNOWN_POS = "M_UNKNOWN_POS"
class CodeMessageException(RuntimeError):
"""An exception with integer code, a message string attributes and optional headers.
@@ -851,17 +847,3 @@ class PartialStateConflictError(SynapseError):
msg=PartialStateConflictError.message(),
errcode=Codes.UNKNOWN,
)
class SlidingSyncUnknownPosition(SynapseError):
"""An error that Synapse can return to signal to the client to expire their
sliding sync connection (i.e. send a new request without a `?since=`
param).
"""
def __init__(self) -> None:
super().__init__(
HTTPStatus.BAD_REQUEST,
msg="Unknown position",
errcode=Codes.UNKNOWN_POS,
)
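For illustration, a client receiving this error is expected to drop its `pos` token and start a fresh connection. A hypothetical client-side sketch (function and variable names invented; uses the MSC3575 unstable endpoint):
```python
from typing import Optional

import requests

def sliding_sync(base_url: str, token: str, pos: Optional[str]) -> dict:
    """Issue one sliding sync request, starting over if the server says
    our position has expired (M_UNKNOWN_POS)."""
    resp = requests.post(
        f"{base_url}/_matrix/client/unstable/org.matrix.msc3575/sync",
        params={"pos": pos} if pos is not None else None,
        headers={"Authorization": f"Bearer {token}"},
        json={},
    )
    if resp.status_code == 400 and resp.json().get("errcode") == "M_UNKNOWN_POS":
        # The connection expired server-side: retry without a `pos`.
        return sliding_sync(base_url, token, None)
    resp.raise_for_status()
    return resp.json()
```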

View File

@@ -236,8 +236,9 @@ class Ratelimiter:
requester: The requester that is doing the action, if any.
key: An arbitrary key used to classify an action. Defaults to the
requester's user ID.
n_actions: The number of times the user performed the action. May be negative
to "refund" the rate limit.
n_actions: The number of times the user wants to do this action. If the user
cannot do all of the actions, the user's action count is not incremented
at all.
_time_now_s: The current time. Optional, defaults to the current time according
to self.clock. Only used by tests.
"""

View File

@@ -98,7 +98,6 @@ from synapse.storage.databases.main.roommember import RoomMemberWorkerStore
from synapse.storage.databases.main.search import SearchStore
from synapse.storage.databases.main.session import SessionStore
from synapse.storage.databases.main.signatures import SignatureWorkerStore
from synapse.storage.databases.main.sliding_sync import SlidingSyncStore
from synapse.storage.databases.main.state import StateGroupWorkerStore
from synapse.storage.databases.main.stats import StatsStore
from synapse.storage.databases.main.stream import StreamWorkerStore
@@ -160,7 +159,6 @@ class GenericWorkerStore(
SessionStore,
TaskSchedulerWorkerStore,
ExperimentalFeaturesStore,
SlidingSyncStore,
):
# Properties that multiple storage classes define. Tell mypy what the
# expected type is.
@@ -208,21 +206,6 @@ class GenericWorkerServer(HomeServer):
"/_synapse/admin": admin_resource,
}
)
if "federation" not in res.names:
# Only load the federation media resource separately if federation
# resource is not specified since federation resource includes media
# resource.
resources[FEDERATION_PREFIX] = TransportLayerServer(
self, servlet_groups=["media"]
)
if "client" not in res.names:
# Only load the client media resource separately if client
# resource is not specified since client resource includes media
# resource.
resources[CLIENT_API_PREFIX] = ClientRestResource(
self, servlet_groups=["media"]
)
else:
logger.warning(
"A 'media' listener is configured but the media"

View File

@@ -101,12 +101,6 @@ class SynapseHomeServer(HomeServer):
# Skip loading openid resource if federation is defined
# since federation resource will include openid
continue
if name == "media" and (
"federation" in res.names or "client" in res.names
):
# Skip loading media resource if federation or client are defined
# since federation & client resources will include media
continue
if name == "health":
# Skip loading, health resource is always included
continue
@@ -223,7 +217,7 @@ class SynapseHomeServer(HomeServer):
)
if name in ["media", "federation", "client"]:
if self.config.media.can_load_media_repo:
if self.config.server.enable_media_repo:
media_repo = self.get_media_repository_resource()
resources.update(
{
@@ -237,14 +231,6 @@ class SynapseHomeServer(HomeServer):
"'media' resource conflicts with enable_media_repo=False"
)
if name == "media":
resources[FEDERATION_PREFIX] = TransportLayerServer(
self, servlet_groups=["media"]
)
resources[CLIENT_API_PREFIX] = ClientRestResource(
self, servlet_groups=["media"]
)
if name in ["keys", "federation"]:
resources[SERVER_KEY_PREFIX] = KeyResource(self)

View File

@@ -126,7 +126,7 @@ class ContentRepositoryConfig(Config):
# Only enable the media repo if either the media repo is enabled or the
# current worker app is the media repo.
if (
config.get("enable_media_repo", True) is False
self.root.server.enable_media_repo is False
and config.get("worker_app") != "synapse.app.media_repository"
):
self.can_load_media_repo = False
@@ -272,10 +272,6 @@ class ContentRepositoryConfig(Config):
remote_media_lifetime
)
self.enable_authenticated_media = config.get(
"enable_authenticated_media", False
)
def generate_config_section(self, data_dir_path: str, **kwargs: Any) -> str:
assert data_dir_path is not None
media_store = os.path.join(data_dir_path, "media_store")

View File

@@ -384,11 +384,6 @@ class ServerConfig(Config):
# Whether to internally track presence, requires that presence is enabled,
self.track_presence = self.presence_enabled and presence_enabled != "untracked"
# Determines if presence results for offline users are included on initial/full sync
self.presence_include_offline_users_on_sync = presence_config.get(
"include_offline_users_on_sync", False
)
# Custom presence router module
# This is the legacy way of configuring it (the config should now be put in the modules section)
self.presence_router_module_class = None
@@ -400,6 +395,12 @@ class ServerConfig(Config):
self.presence_router_config,
) = load_module(presence_router_config, ("presence", "presence_router"))
# whether to enable the media repository endpoints. This should be set
# to false if the media repository is running as a separate endpoint;
# doing so ensures that we will not run cache cleanup jobs on the
# master, potentially causing inconsistency.
self.enable_media_repo = config.get("enable_media_repo", True)
# Whether to require authentication to retrieve profile data (avatars,
# display names) of other users through the client API.
self.require_auth_for_profile_requests = config.get(

View File

@@ -589,7 +589,7 @@ class BaseV2KeyFetcher(KeyFetcher):
% (server_name,)
)
for key_id, key_data in response_json.get("old_verify_keys", {}).items():
for key_id, key_data in response_json["old_verify_keys"].items():
if is_signing_algorithm_supported(key_id):
key_base64 = key_data["key"]
key_bytes = decode_base64(key_base64)

View File

@@ -554,22 +554,3 @@ def relation_from_event(event: EventBase) -> Optional[_EventRelation]:
aggregation_key = None
return _EventRelation(parent_id, rel_type, aggregation_key)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class StrippedStateEvent:
"""
A stripped down state event. Usually used for remote invite/knocks so the user can
make an informed decision on whether they want to join.
Attributes:
type: Event `type`
state_key: Event `state_key`
sender: Event `sender`
content: Event `content`
"""
type: str
state_key: str
sender: str
content: Dict[str, Any]

View File

@@ -49,7 +49,7 @@ from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersion
from synapse.types import JsonDict, Requester
from . import EventBase, StrippedStateEvent, make_event_from_dict
from . import EventBase, make_event_from_dict
if TYPE_CHECKING:
from synapse.handlers.relations import BundledAggregations
@@ -854,30 +854,3 @@ def strip_event(event: EventBase) -> JsonDict:
"content": event.content,
"sender": event.sender,
}
def parse_stripped_state_event(raw_stripped_event: Any) -> Optional[StrippedStateEvent]:
"""
Given a raw value from an event's `unsigned` field, attempt to parse it into a
`StrippedStateEvent`.
"""
if isinstance(raw_stripped_event, dict):
# All of these fields are required
type = raw_stripped_event.get("type")
state_key = raw_stripped_event.get("state_key")
sender = raw_stripped_event.get("sender")
content = raw_stripped_event.get("content")
if (
isinstance(type, str)
and isinstance(state_key, str)
and isinstance(sender, str)
and isinstance(content, dict)
):
return StrippedStateEvent(
type=type,
state_key=state_key,
sender=sender,
content=content,
)
return None
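A quick illustration of the parsing behaviour above (hypothetical inputs):
```python
raw = {
    "type": "m.room.name",
    "state_key": "",
    "sender": "@alice:example.com",
    "content": {"name": "My room"},
}
event = parse_stripped_state_event(raw)
assert event is not None and event.type == "m.room.name"

# Anything malformed (missing fields, wrong types, not a dict) parses to None.
assert parse_stripped_state_event({"type": 123}) is None
assert parse_stripped_state_event("not a dict") is None
```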

View File

@@ -271,10 +271,6 @@ SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
"federation": FEDERATION_SERVLET_CLASSES,
"room_list": (PublicRoomList,),
"openid": (OpenIdUserInfo,),
"media": (
FederationMediaDownloadServlet,
FederationMediaThumbnailServlet,
),
}

View File

@@ -912,4 +912,6 @@ FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
FederationV1SendKnockServlet,
FederationMakeKnockServlet,
FederationAccountStatusServlet,
FederationMediaDownloadServlet,
FederationMediaThumbnailServlet,
)

View File

@@ -197,14 +197,8 @@ class AdminHandler:
# events that we have and then filtering, this isn't the most
# efficient method perhaps but it does guarantee we get everything.
while True:
events, _ = (
await self._store.paginate_room_events_by_topological_ordering(
room_id=room_id,
from_key=from_key,
to_key=to_key,
limit=100,
direction=Direction.FORWARDS,
)
events, _ = await self._store.paginate_room_events(
room_id, from_key, to_key, limit=100, direction=Direction.FORWARDS
)
if not events:
break

View File

@@ -20,20 +20,10 @@
#
#
import logging
from typing import (
TYPE_CHECKING,
AbstractSet,
Dict,
Iterable,
List,
Mapping,
Optional,
Set,
Tuple,
)
from typing import TYPE_CHECKING, Dict, Iterable, List, Mapping, Optional, Set, Tuple
from synapse.api import errors
from synapse.api.constants import EduTypes, EventTypes, Membership
from synapse.api.constants import EduTypes, EventTypes
from synapse.api.errors import (
Codes,
FederationDeniedError,
@@ -48,9 +38,7 @@ from synapse.metrics.background_process_metrics import (
wrap_as_background_process,
)
from synapse.storage.databases.main.client_ips import DeviceLastConnectionInfo
from synapse.storage.databases.main.state_deltas import StateDelta
from synapse.types import (
DeviceListUpdates,
JsonDict,
JsonMapping,
ScheduledTask,
@@ -226,210 +214,138 @@ class DeviceWorkerHandler:
@cancellable
async def get_user_ids_changed(
self, user_id: str, from_token: StreamToken
) -> DeviceListUpdates:
) -> JsonDict:
"""Get list of users that have had the devices updated, or have newly
joined a room, that `user_id` may be interested in.
"""
set_tag("user_id", user_id)
set_tag("from_token", str(from_token))
now_room_key = self.store.get_room_max_token()
now_token = self._event_sources.get_current_token()
room_ids = await self.store.get_rooms_for_user(user_id)
# We need to work out all the different membership changes for the user
# and user they share a room with, to pass to
# `generate_sync_entry_for_device_list`. See its docstring for details
# on the data required.
changed = await self.get_device_changes_in_shared_rooms(
user_id, room_ids, from_token
)
joined_room_ids = await self.store.get_rooms_for_user(user_id)
# Then work out if any users have since joined
rooms_changed = self.store.get_rooms_that_changed(room_ids, from_token.room_key)
# Get the set of rooms that the user has joined/left
membership_changes = (
await self.store.get_current_state_delta_membership_changes_for_user(
user_id, from_key=from_token.room_key, to_key=now_token.room_key
member_events = await self.store.get_membership_changes_for_user(
user_id, from_token.room_key, now_room_key
)
rooms_changed.update(event.room_id for event in member_events)
stream_ordering = from_token.room_key.stream
possibly_changed = set(changed)
possibly_left = set()
for room_id in rooms_changed:
# Check if the forward extremities have changed. If not then we know
# the current state won't have changed, and so we can skip this room.
try:
if not await self.store.have_room_forward_extremities_changed_since(
room_id, stream_ordering
):
continue
except errors.StoreError:
pass
current_state_ids = await self._state_storage.get_current_state_ids(
room_id, await_full_state=False
)
)
# Check for newly joined or left rooms. We need to make sure that we add
# to newly joined in the case membership goes from join -> leave -> join
# again.
newly_joined_rooms: Set[str] = set()
newly_left_rooms: Set[str] = set()
for change in membership_changes:
# We check for changes in "joinedness", i.e. if the membership has
# changed to or from JOIN.
if change.membership == Membership.JOIN:
if change.prev_membership != Membership.JOIN:
newly_joined_rooms.add(change.room_id)
newly_left_rooms.discard(change.room_id)
elif change.prev_membership == Membership.JOIN:
newly_joined_rooms.discard(change.room_id)
newly_left_rooms.add(change.room_id)
# We now work out if any other users have since joined or left the rooms
# the user is currently in.
# List of membership changes per room
room_to_deltas: Dict[str, List[StateDelta]] = {}
# The set of event IDs of membership events (so we can fetch their
# associated membership).
memberships_to_fetch: Set[str] = set()
# TODO: Only pull out membership events?
state_changes = await self.store.get_current_state_deltas_for_rooms(
joined_room_ids, from_token=from_token.room_key, to_token=now_token.room_key
)
for delta in state_changes:
if delta.event_type != EventTypes.Member:
# The user may have left the room
# TODO: Check if they actually did or if we were just invited.
if room_id not in room_ids:
for etype, state_key in current_state_ids.keys():
if etype != EventTypes.Member:
continue
possibly_left.add(state_key)
continue
room_to_deltas.setdefault(delta.room_id, []).append(delta)
if delta.event_id:
memberships_to_fetch.add(delta.event_id)
if delta.prev_event_id:
memberships_to_fetch.add(delta.prev_event_id)
# Fetch the current state at the time.
try:
event_ids = await self.store.get_forward_extremities_for_room_at_stream_ordering(
room_id, stream_ordering=stream_ordering
)
except errors.StoreError:
# we have purged the stream_ordering index since the stream
# ordering: treat it the same as a new room
event_ids = []
# Fetch all the memberships for the membership events
event_id_to_memberships = await self.store.get_membership_from_event_ids(
memberships_to_fetch
)
# special-case for an empty prev state: include all members
# in the changed list
if not event_ids:
log_kv(
{"event": "encountered empty previous state", "room_id": room_id}
)
for etype, state_key in current_state_ids.keys():
if etype != EventTypes.Member:
continue
possibly_changed.add(state_key)
continue
joined_invited_knocked = (
Membership.JOIN,
Membership.INVITE,
Membership.KNOCK,
)
current_member_id = current_state_ids.get((EventTypes.Member, user_id))
if not current_member_id:
continue
# We now want to find any user that have newly joined/invited/knocked,
# or newly left, similarly to above.
newly_joined_or_invited_or_knocked_users: Set[str] = set()
newly_left_users: Set[str] = set()
for _, deltas in room_to_deltas.items():
for delta in deltas:
# Get the prev/new memberships for the delta
new_membership = None
prev_membership = None
if delta.event_id:
m = event_id_to_memberships.get(delta.event_id)
if m is not None:
new_membership = m.membership
if delta.prev_event_id:
m = event_id_to_memberships.get(delta.prev_event_id)
if m is not None:
prev_membership = m.membership
# mapping from event_id -> state_dict
prev_state_ids = await self._state_storage.get_state_ids_for_events(
event_ids,
await_full_state=False,
)
# Check if a user has newly joined/invited/knocked, or left.
if new_membership in joined_invited_knocked:
if prev_membership not in joined_invited_knocked:
newly_joined_or_invited_or_knocked_users.add(delta.state_key)
newly_left_users.discard(delta.state_key)
elif prev_membership in joined_invited_knocked:
newly_joined_or_invited_or_knocked_users.discard(delta.state_key)
newly_left_users.add(delta.state_key)
# Check if we've joined the room? If so we just blindly add all the users to
# the "possibly changed" users.
for state_dict in prev_state_ids.values():
member_event = state_dict.get((EventTypes.Member, user_id), None)
if not member_event or member_event != current_member_id:
for etype, state_key in current_state_ids.keys():
if etype != EventTypes.Member:
continue
possibly_changed.add(state_key)
break
# Now we actually calculate the device list entry with the information
# calculated above.
device_list_updates = await self.generate_sync_entry_for_device_list(
user_id=user_id,
since_token=from_token,
now_token=now_token,
joined_room_ids=joined_room_ids,
newly_joined_rooms=newly_joined_rooms,
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
newly_left_rooms=newly_left_rooms,
newly_left_users=newly_left_users,
)
# If there has been any change in membership, include them in the
# possibly changed list. We'll check if they are joined below,
# and we're not too worried about spuriously adding users.
for key, event_id in current_state_ids.items():
etype, state_key = key
if etype != EventTypes.Member:
continue
log_kv(
{
"changed": device_list_updates.changed,
"left": device_list_updates.left,
}
)
# check if this member has changed since any of the extremities
# at the stream_ordering, and add them to the list if so.
for state_dict in prev_state_ids.values():
prev_event_id = state_dict.get(key, None)
if not prev_event_id or prev_event_id != event_id:
if state_key != user_id:
possibly_changed.add(state_key)
break
return device_list_updates
if possibly_changed or possibly_left:
possibly_joined = possibly_changed
possibly_left = possibly_changed | possibly_left
@measure_func("_generate_sync_entry_for_device_list")
async def generate_sync_entry_for_device_list(
self,
user_id: str,
since_token: StreamToken,
now_token: StreamToken,
joined_room_ids: AbstractSet[str],
newly_joined_rooms: AbstractSet[str],
newly_joined_or_invited_or_knocked_users: AbstractSet[str],
newly_left_rooms: AbstractSet[str],
newly_left_users: AbstractSet[str],
) -> DeviceListUpdates:
"""Generate the DeviceListUpdates section of sync
# Double check if we still share rooms with the given user.
users_rooms = await self.store.get_rooms_for_users(possibly_left)
for changed_user_id, entries in users_rooms.items():
if any(rid in room_ids for rid in entries):
possibly_left.discard(changed_user_id)
else:
possibly_joined.discard(changed_user_id)
Args:
sync_result_builder
newly_joined_rooms: Set of rooms user has joined since previous sync
newly_joined_or_invited_or_knocked_users: Set of users that have joined,
been invited to a room or are knocking on a room since
previous sync.
newly_left_rooms: Set of rooms user has left since previous sync
newly_left_users: Set of users that have left a room we're in since
previous sync
"""
# Take a copy since these fields will be mutated later.
newly_joined_or_invited_or_knocked_users = set(
newly_joined_or_invited_or_knocked_users
)
newly_left_users = set(newly_left_users)
else:
possibly_joined = set()
possibly_left = set()
# We want to figure out what user IDs the client should refetch
# device keys for, and which users we aren't going to track changes
# for anymore.
#
# For the first step we check:
# a. if any users we share a room with have updated their devices,
# and
# b. we also check if we've joined any new rooms, or if a user has
# joined a room we're in.
#
# For the second step we just find any users we no longer share a
# room with by looking at all users that have left a room plus users
# that were in a room we've left.
result = {"changed": list(possibly_joined), "left": list(possibly_left)}
users_that_have_changed = set()
log_kv(result)
# Step 1a, check for changes in devices of users we share a room
# with
users_that_have_changed = await self.get_device_changes_in_shared_rooms(
user_id,
joined_room_ids,
from_token=since_token,
now_token=now_token,
)
# Step 1b, check for newly joined rooms
for room_id in newly_joined_rooms:
joined_users = await self.store.get_users_in_room(room_id)
newly_joined_or_invited_or_knocked_users.update(joined_users)
# TODO: Check that these users are actually new, i.e. either they
# weren't in the previous sync *or* they left and rejoined.
users_that_have_changed.update(newly_joined_or_invited_or_knocked_users)
user_signatures_changed = await self.store.get_users_whose_signatures_changed(
user_id, since_token.device_list_key
)
users_that_have_changed.update(user_signatures_changed)
# Now find users that we no longer track
for room_id in newly_left_rooms:
left_users = await self.store.get_users_in_room(room_id)
newly_left_users.update(left_users)
# Remove any users that we still share a room with.
left_users_rooms = await self.store.get_rooms_for_users(newly_left_users)
for user_id, entries in left_users_rooms.items():
if any(rid in joined_room_ids for rid in entries):
newly_left_users.discard(user_id)
return DeviceListUpdates(changed=users_that_have_changed, left=newly_left_users)
return result
async def on_federation_query_user_devices(self, user_id: str) -> JsonDict:
if not self.hs.is_mine(UserID.from_string(user_id)):

View File

@@ -291,20 +291,13 @@ class E2eKeysHandler:
# Only try and fetch keys for destinations that are not marked as
# down.
unfiltered_destinations = remote_queries_not_in_cache.keys()
filtered_destinations = set(
await filter_destinations_by_retry_limiter(
unfiltered_destinations,
self.clock,
self.store,
# Let's give an arbitrary grace period for those hosts that are
# only recently down
retry_due_within_ms=60 * 1000,
)
)
failures.update(
(dest, _NOT_READY_FOR_RETRY_FAILURE)
for dest in (unfiltered_destinations - filtered_destinations)
filtered_destinations = await filter_destinations_by_retry_limiter(
remote_queries_not_in_cache.keys(),
self.clock,
self.store,
# Let's give an arbitrary grace period for those hosts that are
# only recently down
retry_due_within_ms=60 * 1000,
)
await concurrently_execute(
@@ -1648,9 +1641,6 @@ def _check_device_signature(
raise SynapseError(400, "Invalid signature", Codes.INVALID_SIGNATURE)
_NOT_READY_FOR_RETRY_FAILURE = {"status": 503, "message": "Not ready for retry"}
def _exception_to_failure(e: Exception) -> JsonDict:
if isinstance(e, SynapseError):
return {"status": e.code, "errcode": e.errcode, "message": str(e)}
@@ -1659,7 +1649,7 @@ def _exception_to_failure(e: Exception) -> JsonDict:
return {"status": e.code, "message": str(e)}
if isinstance(e, NotRetryingDestination):
return _NOT_READY_FOR_RETRY_FAILURE
return {"status": 503, "message": "Not ready for retry"}
# include ConnectionRefused and other errors
#

View File

@@ -34,7 +34,7 @@ from synapse.api.errors import (
from synapse.logging.opentracing import log_kv, trace
from synapse.storage.databases.main.e2e_room_keys import RoomKey
from synapse.types import JsonDict
from synapse.util.async_helpers import ReadWriteLock
from synapse.util.async_helpers import Linearizer
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -58,7 +58,7 @@ class E2eRoomKeysHandler:
# clients belonging to a user will receive and try to upload a new session at
# roughly the same time. Also used to lock out uploads when the key is being
# changed.
self._upload_lock = ReadWriteLock()
self._upload_linearizer = Linearizer("upload_room_keys_lock")
@trace
async def get_room_keys(
@@ -89,7 +89,7 @@ class E2eRoomKeysHandler:
# we deliberately take the lock to get keys so that changing the version
# works atomically
async with self._upload_lock.read(user_id):
async with self._upload_linearizer.queue(user_id):
# make sure the backup version exists
try:
await self.store.get_e2e_room_keys_version_info(user_id, version)
@@ -132,7 +132,7 @@ class E2eRoomKeysHandler:
"""
# lock for consistency with uploading
async with self._upload_lock.write(user_id):
async with self._upload_linearizer.queue(user_id):
# make sure the backup version exists
try:
version_info = await self.store.get_e2e_room_keys_version_info(
@@ -193,7 +193,7 @@ class E2eRoomKeysHandler:
# TODO: Validate the JSON to make sure it has the right keys.
# XXX: perhaps we should use a finer grained lock here?
async with self._upload_lock.write(user_id):
async with self._upload_linearizer.queue(user_id):
# Check that the version we're trying to upload is the current version
try:
version_info = await self.store.get_e2e_room_keys_version_info(user_id)
@@ -355,7 +355,7 @@ class E2eRoomKeysHandler:
# TODO: Validate the JSON to make sure it has the right keys.
# lock everyone out until we've switched version
async with self._upload_lock.write(user_id):
async with self._upload_linearizer.queue(user_id):
new_version = await self.store.create_e2e_room_keys_version(
user_id, version_info
)
@@ -382,7 +382,7 @@ class E2eRoomKeysHandler:
}
"""
async with self._upload_lock.read(user_id):
async with self._upload_linearizer.queue(user_id):
try:
res = await self.store.get_e2e_room_keys_version_info(user_id, version)
except StoreError as e:
@@ -407,7 +407,7 @@ class E2eRoomKeysHandler:
NotFoundError: if this backup version doesn't exist
"""
async with self._upload_lock.write(user_id):
async with self._upload_linearizer.queue(user_id):
try:
await self.store.delete_e2e_room_keys_version(user_id, version)
except StoreError as e:
@@ -437,7 +437,7 @@ class E2eRoomKeysHandler:
raise SynapseError(
400, "Version in body does not match", Codes.INVALID_PARAM
)
async with self._upload_lock.write(user_id):
async with self._upload_linearizer.queue(user_id):
try:
old_info = await self.store.get_e2e_room_keys_version_info(
user_id, version
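The change above swaps the per-user `Linearizer` (which serialises every operation) for a read/write lock, so concurrent key fetches no longer queue behind one another while version changes still take an exclusive lock. As a rough sketch of the reader/writer pattern (illustrative only, using `asyncio` rather than Synapse's Twisted-based `ReadWriteLock`):
```python
import asyncio

class SimpleRWLock:
    """Illustrative reader/writer lock: many readers may hold the lock
    at once; a writer gets exclusive access."""

    def __init__(self) -> None:
        self._readers = 0
        self._readers_lock = asyncio.Lock()  # guards the reader count
        self._writer_lock = asyncio.Lock()   # held while anyone writes

    async def acquire_read(self) -> None:
        async with self._readers_lock:
            self._readers += 1
            if self._readers == 1:
                # First reader in blocks writers out.
                await self._writer_lock.acquire()

    async def release_read(self) -> None:
        async with self._readers_lock:
            self._readers -= 1
            if self._readers == 0:
                # Last reader out lets writers in again.
                self._writer_lock.release()

    async def acquire_write(self) -> None:
        await self._writer_lock.acquire()

    def release_write(self) -> None:
        self._writer_lock.release()
```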

View File

@@ -507,15 +507,13 @@ class PaginationHandler:
# Initially fetch the events from the database. With any luck, we can return
# these without blocking on backfill (handled below).
events, next_key = (
await self.store.paginate_room_events_by_topological_ordering(
room_id=room_id,
from_key=from_token.room_key,
to_key=to_room_key,
direction=pagin_config.direction,
limit=pagin_config.limit,
event_filter=event_filter,
)
events, next_key = await self.store.paginate_room_events(
room_id=room_id,
from_key=from_token.room_key,
to_key=to_room_key,
direction=pagin_config.direction,
limit=pagin_config.limit,
event_filter=event_filter,
)
if pagin_config.direction == Direction.BACKWARDS:
@@ -584,15 +582,13 @@ class PaginationHandler:
# If we did backfill something, refetch the events from the database to
# catch anything new that might have been added since we last fetched.
if did_backfill:
events, next_key = (
await self.store.paginate_room_events_by_topological_ordering(
room_id=room_id,
from_key=from_token.room_key,
to_key=to_room_key,
direction=pagin_config.direction,
limit=pagin_config.limit,
event_filter=event_filter,
)
events, next_key = await self.store.paginate_room_events(
room_id=room_id,
from_key=from_token.room_key,
to_key=to_room_key,
direction=pagin_config.direction,
limit=pagin_config.limit,
event_filter=event_filter,
)
else:
# Otherwise, we can backfill in the background for eventual

View File

@@ -74,17 +74,6 @@ class ProfileHandler:
self._third_party_rules = hs.get_module_api_callbacks().third_party_event_rules
async def get_profile(self, user_id: str, ignore_backoff: bool = True) -> JsonDict:
"""
Get a user's profile as a JSON dictionary.
Args:
user_id: The user to fetch the profile of.
ignore_backoff: True to ignore backoff when fetching over federation.
Returns:
A JSON dictionary. For local queries this will include the displayname and avatar_url
fields. For remote queries it may contain arbitrary information.
"""
target_user = UserID.from_string(user_id)
if self.hs.is_mine(target_user):
@@ -118,15 +107,6 @@ class ProfileHandler:
raise e.to_synapse_error()
async def get_displayname(self, target_user: UserID) -> Optional[str]:
"""
Fetch a user's display name from their profile.
Args:
target_user: The user to fetch the display name of.
Returns:
The user's display name or None if unset.
"""
if self.hs.is_mine(target_user):
try:
displayname = await self.store.get_profile_displayname(target_user)
@@ -223,15 +203,6 @@ class ProfileHandler:
await self._update_join_states(requester, target_user)
async def get_avatar_url(self, target_user: UserID) -> Optional[str]:
"""
Fetch a user's avatar URL from their profile.
Args:
target_user: The user to fetch the avatar URL of.
Returns:
The user's avatar URL or None if unset.
"""
if self.hs.is_mine(target_user):
try:
avatar_url = await self.store.get_profile_avatar_url(target_user)
@@ -432,12 +403,6 @@ class ProfileHandler:
async def _update_join_states(
self, requester: Requester, target_user: UserID
) -> None:
"""
Update the membership events of each room the user is joined to with the
new profile information.
Note that this stomps over any custom display name or avatar URL in member events.
"""
if not self.hs.is_mine(target_user):
return

View File

@@ -286,14 +286,8 @@ class ReceiptEventSource(EventSource[MultiWriterStreamToken, JsonMapping]):
room_ids: Iterable[str],
is_guest: bool,
explicit_room_id: Optional[str] = None,
to_key: Optional[MultiWriterStreamToken] = None,
) -> Tuple[List[JsonMapping], MultiWriterStreamToken]:
"""
Find read receipts for given rooms (> `from_token` and <= `to_token`)
"""
if to_key is None:
to_key = self.get_current_key()
to_key = self.get_current_key()
if from_key == to_key:
return [], to_key

View File

@@ -1188,8 +1188,6 @@ class RoomCreationHandler:
)
events_to_send.append((power_event, power_context))
else:
# Please update the docs for `default_power_level_content_override` when
# updating the `events` dict below
power_level_content: JsonDict = {
"users": {creator_id: 100},
"users_default": 0,
@@ -1750,7 +1748,7 @@ class RoomEventSource(EventSource[RoomStreamToken, EventBase]):
from_key=from_key,
to_key=to_key,
limit=limit or 10,
direction=Direction.FORWARDS,
order="ASC",
)
events = list(room_events)

View File

@@ -183,13 +183,8 @@ class RoomSummaryHandler:
) -> JsonDict:
"""See docstring for SpaceSummaryHandler.get_room_hierarchy."""
# If the room is available locally, quickly check that the user can access it.
local_room = await self._store.is_host_joined(
requested_room_id, self._server_name
)
if local_room and not await self._is_local_room_accessible(
requested_room_id, requester
):
# First of all, check that the room is accessible.
if not await self._is_local_room_accessible(requested_room_id, requester):
raise UnstableSpecAuthError(
403,
"User %s not in room %s, and room previews are disabled"
@@ -197,22 +192,6 @@ class RoomSummaryHandler:
errcode=Codes.NOT_JOINED,
)
if not local_room:
room_hierarchy = await self._summarize_remote_room_hierarchy(
_RoomQueueEntry(requested_room_id, ()),
False,
)
root_room_entry = room_hierarchy[0]
if not root_room_entry or not await self._is_remote_room_accessible(
requester, requested_room_id, root_room_entry.room
):
raise UnstableSpecAuthError(
403,
"User %s not in room %s, and room previews are disabled"
% (requester, requested_room_id),
errcode=Codes.NOT_JOINED,
)
# If this is continuing a previous session, pull the persisted data.
if from_token:
try:

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,699 +0,0 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import itertools
import logging
from typing import TYPE_CHECKING, AbstractSet, Dict, Mapping, Optional, Sequence, Set
from typing_extensions import assert_never
from synapse.api.constants import AccountDataTypes, EduTypes
from synapse.handlers.receipts import ReceiptEventSource
from synapse.logging.opentracing import trace
from synapse.storage.databases.main.receipts import ReceiptInRoom
from synapse.types import (
DeviceListUpdates,
JsonMapping,
MultiWriterStreamToken,
SlidingSyncStreamToken,
StrCollection,
StreamToken,
)
from synapse.types.handlers.sliding_sync import (
HaveSentRoomFlag,
MutablePerConnectionState,
OperationType,
PerConnectionState,
SlidingSyncConfig,
SlidingSyncResult,
)
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class SlidingSyncExtensionHandler:
"""Handles the extensions to sliding sync."""
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastores().main
self.event_sources = hs.get_event_sources()
self.device_handler = hs.get_device_handler()
self.push_rules_handler = hs.get_push_rules_handler()
@trace
async def get_extensions_response(
self,
sync_config: SlidingSyncConfig,
previous_connection_state: "PerConnectionState",
new_connection_state: "MutablePerConnectionState",
actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: Set[str],
actual_room_response_map: Mapping[str, SlidingSyncResult.RoomResult],
to_token: StreamToken,
from_token: Optional[SlidingSyncStreamToken],
) -> SlidingSyncResult.Extensions:
"""Handle extension requests.
Args:
sync_config: Sync configuration
previous_connection_state: Snapshot of the per-connection state from the
previous request.
new_connection_state: A mutable copy of the per-connection
state, used to record updates to the state during this request.
actual_lists: Sliding window API. A map of list key to list results in the
Sliding Sync response.
actual_room_ids: The actual room IDs in the Sliding Sync response.
actual_room_response_map: A map of room ID to room results in the
Sliding Sync response.
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
"""
if sync_config.extensions is None:
return SlidingSyncResult.Extensions()
to_device_response = None
if sync_config.extensions.to_device is not None:
to_device_response = await self.get_to_device_extension_response(
sync_config=sync_config,
to_device_request=sync_config.extensions.to_device,
to_token=to_token,
)
e2ee_response = None
if sync_config.extensions.e2ee is not None:
e2ee_response = await self.get_e2ee_extension_response(
sync_config=sync_config,
e2ee_request=sync_config.extensions.e2ee,
to_token=to_token,
from_token=from_token,
)
account_data_response = None
if sync_config.extensions.account_data is not None:
account_data_response = await self.get_account_data_extension_response(
sync_config=sync_config,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
account_data_request=sync_config.extensions.account_data,
to_token=to_token,
from_token=from_token,
)
receipts_response = None
if sync_config.extensions.receipts is not None:
receipts_response = await self.get_receipts_extension_response(
sync_config=sync_config,
previous_connection_state=previous_connection_state,
new_connection_state=new_connection_state,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
actual_room_response_map=actual_room_response_map,
receipts_request=sync_config.extensions.receipts,
to_token=to_token,
from_token=from_token,
)
typing_response = None
if sync_config.extensions.typing is not None:
typing_response = await self.get_typing_extension_response(
sync_config=sync_config,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
actual_room_response_map=actual_room_response_map,
typing_request=sync_config.extensions.typing,
to_token=to_token,
from_token=from_token,
)
return SlidingSyncResult.Extensions(
to_device=to_device_response,
e2ee=e2ee_response,
account_data=account_data_response,
receipts=receipts_response,
typing=typing_response,
)
def find_relevant_room_ids_for_extension(
self,
requested_lists: Optional[StrCollection],
requested_room_ids: Optional[StrCollection],
actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: AbstractSet[str],
) -> Set[str]:
"""
Handle the reserved `lists`/`rooms` keys for extensions. Extensions should only
return results for rooms in the Sliding Sync response. This matches up the
requested rooms/lists with the actual lists/rooms in the Sliding Sync response.
{"lists": []} // Do not process any lists.
{"lists": ["rooms", "dms"]} // Process only a subset of lists.
{"lists": ["*"]} // Process all lists defined in the Sliding Window API. (This is the default.)
{"rooms": []} // Do not process any specific rooms.
{"rooms": ["!a:b", "!c:d"]} // Process only a subset of room subscriptions.
{"rooms": ["*"]} // Process all room subscriptions defined in the Room Subscription API. (This is the default.)
Args:
requested_lists: The `lists` from the extension request.
requested_room_ids: The `rooms` from the extension request.
actual_lists: The actual lists from the Sliding Sync response.
actual_room_ids: The actual room subscriptions from the Sliding Sync request.
"""
# We only want to include account data for rooms that are already in the sliding
# sync response AND that were requested in the account data request.
relevant_room_ids: Set[str] = set()
# See what rooms from the room subscriptions we should get account data for
if requested_room_ids is not None:
for room_id in requested_room_ids:
# A wildcard means we process all rooms from the room subscriptions
if room_id == "*":
relevant_room_ids.update(actual_room_ids)
break
if room_id in actual_room_ids:
relevant_room_ids.add(room_id)
# See what rooms from the sliding window lists we should get account data for
if requested_lists is not None:
for list_key in requested_lists:
# Just some typing because we share the variable name in multiple places
actual_list: Optional[SlidingSyncResult.SlidingWindowList] = None
# A wildcard means we process rooms from all lists
if list_key == "*":
for actual_list in actual_lists.values():
# We only expect a single SYNC operation for any list
assert len(actual_list.ops) == 1
sync_op = actual_list.ops[0]
assert sync_op.op == OperationType.SYNC
relevant_room_ids.update(sync_op.room_ids)
break
actual_list = actual_lists.get(list_key)
if actual_list is not None:
# We only expect a single SYNC operation for any list
assert len(actual_list.ops) == 1
sync_op = actual_list.ops[0]
assert sync_op.op == OperationType.SYNC
relevant_room_ids.update(sync_op.room_ids)
return relevant_room_ids
@trace
async def get_to_device_extension_response(
self,
sync_config: SlidingSyncConfig,
to_device_request: SlidingSyncConfig.Extensions.ToDeviceExtension,
to_token: StreamToken,
) -> Optional[SlidingSyncResult.Extensions.ToDeviceExtension]:
"""Handle to-device extension (MSC3885)
Args:
sync_config: Sync configuration
to_device_request: The to-device extension from the request
to_token: The point in the stream to sync up to.
"""
user_id = sync_config.user.to_string()
device_id = sync_config.requester.device_id
# Skip if the extension is not enabled
if not to_device_request.enabled:
return None
# Check that this request has a valid device ID (not all requests have
# to belong to a device, in which case device_id is None)
if device_id is None:
return SlidingSyncResult.Extensions.ToDeviceExtension(
next_batch=f"{to_token.to_device_key}",
events=[],
)
since_stream_id = 0
if to_device_request.since is not None:
# We've already validated this is an int.
since_stream_id = int(to_device_request.since)
if to_token.to_device_key < since_stream_id:
# The since token is ahead of our current token, so we return an
# empty response.
logger.warning(
"Got to-device.since from the future. since token: %r is ahead of our current to_device stream position: %r",
since_stream_id,
to_token.to_device_key,
)
return SlidingSyncResult.Extensions.ToDeviceExtension(
next_batch=to_device_request.since,
events=[],
)
# Delete everything before the given since token, as we know the
# device must have received them.
deleted = await self.store.delete_messages_for_device(
user_id=user_id,
device_id=device_id,
up_to_stream_id=since_stream_id,
)
logger.debug(
"Deleted %d to-device messages up to %d for %s",
deleted,
since_stream_id,
user_id,
)
messages, stream_id = await self.store.get_messages_for_device(
user_id=user_id,
device_id=device_id,
from_stream_id=since_stream_id,
to_stream_id=to_token.to_device_key,
limit=min(to_device_request.limit, 100), # Limit to at most 100 events
)
return SlidingSyncResult.Extensions.ToDeviceExtension(
next_batch=f"{stream_id}",
events=messages,
)
@trace
async def get_e2ee_extension_response(
self,
sync_config: SlidingSyncConfig,
e2ee_request: SlidingSyncConfig.Extensions.E2eeExtension,
to_token: StreamToken,
from_token: Optional[SlidingSyncStreamToken],
) -> Optional[SlidingSyncResult.Extensions.E2eeExtension]:
"""Handle E2EE device extension (MSC3884)
Args:
sync_config: Sync configuration
e2ee_request: The e2ee extension from the request
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
"""
user_id = sync_config.user.to_string()
device_id = sync_config.requester.device_id
# Skip if the extension is not enabled
if not e2ee_request.enabled:
return None
device_list_updates: Optional[DeviceListUpdates] = None
if from_token is not None:
# TODO: This should take into account the `from_token` and `to_token`
device_list_updates = await self.device_handler.get_user_ids_changed(
user_id=user_id,
from_token=from_token.stream_token,
)
device_one_time_keys_count: Mapping[str, int] = {}
device_unused_fallback_key_types: Sequence[str] = []
if device_id:
# TODO: We should have a way to let clients differentiate between the states of:
# * no change in OTK count since the provided since token
# * the server has zero OTKs left for this device
# Spec issue: https://github.com/matrix-org/matrix-doc/issues/3298
device_one_time_keys_count = await self.store.count_e2e_one_time_keys(
user_id, device_id
)
device_unused_fallback_key_types = (
await self.store.get_e2e_unused_fallback_key_types(user_id, device_id)
)
return SlidingSyncResult.Extensions.E2eeExtension(
device_list_updates=device_list_updates,
device_one_time_keys_count=device_one_time_keys_count,
device_unused_fallback_key_types=device_unused_fallback_key_types,
)
@trace
async def get_account_data_extension_response(
self,
sync_config: SlidingSyncConfig,
actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: Set[str],
account_data_request: SlidingSyncConfig.Extensions.AccountDataExtension,
to_token: StreamToken,
from_token: Optional[SlidingSyncStreamToken],
) -> Optional[SlidingSyncResult.Extensions.AccountDataExtension]:
"""Handle Account Data extension (MSC3959)
Args:
sync_config: Sync configuration
actual_lists: Sliding window API. A map of list key to list results in the
Sliding Sync response.
actual_room_ids: The actual room IDs in the Sliding Sync response.
account_data_request: The account_data extension from the request
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
"""
user_id = sync_config.user.to_string()
# Skip if the extension is not enabled
if not account_data_request.enabled:
return None
global_account_data_map: Mapping[str, JsonMapping] = {}
if from_token is not None:
# TODO: This should take into account the `from_token` and `to_token`
global_account_data_map = (
await self.store.get_updated_global_account_data_for_user(
user_id, from_token.stream_token.account_data_key
)
)
have_push_rules_changed = await self.store.have_push_rules_changed_for_user(
user_id, from_token.stream_token.push_rules_key
)
if have_push_rules_changed:
global_account_data_map = dict(global_account_data_map)
# TODO: This should take into account the `from_token` and `to_token`
global_account_data_map[AccountDataTypes.PUSH_RULES] = (
await self.push_rules_handler.push_rules_for_user(sync_config.user)
)
else:
# TODO: This should take into account the `to_token`
all_global_account_data = await self.store.get_global_account_data_for_user(
user_id
)
global_account_data_map = dict(all_global_account_data)
# TODO: This should take into account the `to_token`
global_account_data_map[AccountDataTypes.PUSH_RULES] = (
await self.push_rules_handler.push_rules_for_user(sync_config.user)
)
# Fetch room account data
account_data_by_room_map: Mapping[str, Mapping[str, JsonMapping]] = {}
relevant_room_ids = self.find_relevant_room_ids_for_extension(
requested_lists=account_data_request.lists,
requested_room_ids=account_data_request.rooms,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
)
if len(relevant_room_ids) > 0:
if from_token is not None:
# TODO: This should take into account the `from_token` and `to_token`
account_data_by_room_map = (
await self.store.get_updated_room_account_data_for_user(
user_id, from_token.stream_token.account_data_key
)
)
else:
# TODO: This should take into account the `to_token`
account_data_by_room_map = (
await self.store.get_room_account_data_for_user(user_id)
)
# Filter down to the relevant rooms
account_data_by_room_map = {
room_id: account_data_map
for room_id, account_data_map in account_data_by_room_map.items()
if room_id in relevant_room_ids
}
return SlidingSyncResult.Extensions.AccountDataExtension(
global_account_data_map=global_account_data_map,
account_data_by_room_map=account_data_by_room_map,
)
@trace
async def get_receipts_extension_response(
self,
sync_config: SlidingSyncConfig,
previous_connection_state: "PerConnectionState",
new_connection_state: "MutablePerConnectionState",
actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: Set[str],
actual_room_response_map: Mapping[str, SlidingSyncResult.RoomResult],
receipts_request: SlidingSyncConfig.Extensions.ReceiptsExtension,
to_token: StreamToken,
from_token: Optional[SlidingSyncStreamToken],
) -> Optional[SlidingSyncResult.Extensions.ReceiptsExtension]:
"""Handle Receipts extension (MSC3960)
Args:
sync_config: Sync configuration
previous_connection_state: The per-connection state from the previous request
new_connection_state: A mutable copy of the per-connection
state, used to record updates to the state.
actual_lists: Sliding window API. A map of list key to list results in the
Sliding Sync response.
actual_room_ids: The actual room IDs in the Sliding Sync response.
actual_room_response_map: A map of room ID to room results in the
Sliding Sync response.
receipts_request: The receipts extension from the request
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
"""
# Skip if the extension is not enabled
if not receipts_request.enabled:
return None
relevant_room_ids = self.find_relevant_room_ids_for_extension(
requested_lists=receipts_request.lists,
requested_room_ids=receipts_request.rooms,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
)
room_id_to_receipt_map: Dict[str, JsonMapping] = {}
if len(relevant_room_ids) > 0:
# We need to handle the different cases depending on if we have sent
# down receipts previously or not, so we split the relevant rooms
# up into different collections based on status.
live_rooms = set()
previously_rooms: Dict[str, MultiWriterStreamToken] = {}
initial_rooms = set()
for room_id in relevant_room_ids:
if not from_token:
initial_rooms.add(room_id)
continue
# If we're sending down the room from scratch again for some
# reason, we should always resend the receipts as well
# (regardless of if we've sent them down before). This is to
# mimic the behaviour of what happens on initial sync, where you
# get a chunk of timeline with all of the corresponding receipts
# for the events in the timeline.
#
# We also resend down receipts when we "expand" the timeline,
# (see the "XXX: Odd behavior" in
# `synapse.handlers.sliding_sync`).
room_result = actual_room_response_map.get(room_id)
if room_result is not None:
if room_result.initial or room_result.unstable_expanded_timeline:
initial_rooms.add(room_id)
continue
room_status = previous_connection_state.receipts.have_sent_room(room_id)
if room_status.status == HaveSentRoomFlag.LIVE:
live_rooms.add(room_id)
elif room_status.status == HaveSentRoomFlag.PREVIOUSLY:
assert room_status.last_token is not None
previously_rooms[room_id] = room_status.last_token
elif room_status.status == HaveSentRoomFlag.NEVER:
initial_rooms.add(room_id)
else:
assert_never(room_status.status)
# The set of receipts that we fetched. Private receipts need to be
# filtered out before returning.
fetched_receipts = []
# For live rooms we just fetch all receipts in those rooms since the
# `since` token.
if live_rooms:
assert from_token is not None
receipts = await self.store.get_linearized_receipts_for_rooms(
room_ids=live_rooms,
from_key=from_token.stream_token.receipt_key,
to_key=to_token.receipt_key,
)
fetched_receipts.extend(receipts)
# For rooms we've previously sent down, but aren't up to date, we
# need to use the from token from the room status.
if previously_rooms:
for room_id, receipt_token in previously_rooms.items():
# TODO: Limit the number of receipts we're about to send down
# for the room; if it's too many we should TODO
previously_receipts = (
await self.store.get_linearized_receipts_for_room(
room_id=room_id,
from_key=receipt_token,
to_key=to_token.receipt_key,
)
)
fetched_receipts.extend(previously_receipts)
if initial_rooms:
# We also always send down receipts for the current user.
user_receipts = (
await self.store.get_linearized_receipts_for_user_in_rooms(
user_id=sync_config.user.to_string(),
room_ids=initial_rooms,
to_key=to_token.receipt_key,
)
)
# For rooms we haven't previously sent down, we could send all receipts
# from that room but we only want to include receipts for events
# in the timeline to avoid bloating and blowing up the sync response
# as the number of users in the room increases. (This behavior is part of the spec.)
initial_rooms_and_event_ids = [
(room_id, event.event_id)
for room_id in initial_rooms
if room_id in actual_room_response_map
for event in actual_room_response_map[room_id].timeline_events
]
initial_receipts = await self.store.get_linearized_receipts_for_events(
room_and_event_ids=initial_rooms_and_event_ids,
)
# Combine the receipts for a room and add them to
# `fetched_receipts`
for room_id in initial_receipts.keys() | user_receipts.keys():
receipt_content = ReceiptInRoom.merge_to_content(
list(
itertools.chain(
initial_receipts.get(room_id, []),
user_receipts.get(room_id, []),
)
)
)
fetched_receipts.append(
{
"room_id": room_id,
"type": EduTypes.RECEIPT,
"content": receipt_content,
}
)
fetched_receipts = ReceiptEventSource.filter_out_private_receipts(
fetched_receipts, sync_config.user.to_string()
)
for receipt in fetched_receipts:
# These fields should exist for every receipt
room_id = receipt["room_id"]
type = receipt["type"]
content = receipt["content"]
room_id_to_receipt_map[room_id] = {"type": type, "content": content}
# Now we update the per-connection state to track which receipts we have
# and haven't sent down.
new_connection_state.receipts.record_sent_rooms(relevant_room_ids)
if from_token:
# Now find the set of rooms that may have receipts that we're not sending
# down. We only need to check rooms that we have previously returned
# receipts for (in `previous_connection_state`) because we only care about
# updating `LIVE` rooms to `PREVIOUSLY`. The `PREVIOUSLY` rooms will just
# stay pointing at their previous position, so we don't need to waste time
# checking those. And since we default to `NEVER`, rooms that were `NEVER`
# sent before don't need to be recorded, as we'll handle them correctly when
# they come into range for the first time.
rooms_no_receipts = [
room_id
for room_id, room_status in previous_connection_state.receipts._statuses.items()
if room_status.status == HaveSentRoomFlag.LIVE
and room_id not in relevant_room_ids
]
changed_rooms = await self.store.get_rooms_with_receipts_between(
rooms_no_receipts,
from_key=from_token.stream_token.receipt_key,
to_key=to_token.receipt_key,
)
new_connection_state.receipts.record_unsent_rooms(
changed_rooms, from_token.stream_token.receipt_key
)
return SlidingSyncResult.Extensions.ReceiptsExtension(
room_id_to_receipt_map=room_id_to_receipt_map,
)
async def get_typing_extension_response(
self,
sync_config: SlidingSyncConfig,
actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: Set[str],
actual_room_response_map: Mapping[str, SlidingSyncResult.RoomResult],
typing_request: SlidingSyncConfig.Extensions.TypingExtension,
to_token: StreamToken,
from_token: Optional[SlidingSyncStreamToken],
) -> Optional[SlidingSyncResult.Extensions.TypingExtension]:
"""Handle Typing Notification extension (MSC3961)
Args:
sync_config: Sync configuration
actual_lists: Sliding window API. A map of list key to list results in the
Sliding Sync response.
actual_room_ids: The actual room IDs in the Sliding Sync response.
actual_room_response_map: A map of room ID to room results in the
Sliding Sync response.
typing_request: The typing extension from the request
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
"""
# Skip if the extension is not enabled
if not typing_request.enabled:
return None
relevant_room_ids = self.find_relevant_room_ids_for_extension(
requested_lists=typing_request.lists,
requested_room_ids=typing_request.rooms,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
)
room_id_to_typing_map: Dict[str, JsonMapping] = {}
if len(relevant_room_ids) > 0:
# Note: We don't need to take connection tracking into account for typing
# notifications because, when a room comes back into range, clients will get
# anything that is still relevant and hasn't timed out. We consider any gap
# where the room fell out of range long enough for the typing notifications
# to have timed out (it's not worth the 30 seconds of data we may have missed).
typing_source = self.event_sources.sources.typing
typing_notifications, _ = await typing_source.get_new_events(
user=sync_config.user,
from_key=(from_token.stream_token.typing_key if from_token else 0),
to_key=to_token.typing_key,
# This is a dummy value and isn't used in the function
limit=0,
room_ids=relevant_room_ids,
is_guest=False,
)
for typing_notification in typing_notifications:
# These fields should exist for every typing notification
room_id = typing_notification["room_id"]
type = typing_notification["type"]
content = typing_notification["content"]
room_id_to_typing_map[room_id] = {"type": type, "content": content}
return SlidingSyncResult.Extensions.TypingExtension(
room_id_to_typing_map=room_id_to_typing_map,
)
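To make the reserved `lists`/`rooms` wildcard semantics documented in `find_relevant_room_ids_for_extension` concrete, here is a minimal standalone sketch. The `resolve_requested_rooms` helper and the flattened `actual_lists` shape are illustrative stand-ins only, and the single-SYNC-operation assertion on each list is elided.

from typing import Mapping, Optional, Sequence, Set

def resolve_requested_rooms(
    requested_lists: Optional[Sequence[str]],
    requested_room_ids: Optional[Sequence[str]],
    actual_lists: Mapping[str, Set[str]],  # list key -> room IDs in its SYNC op
    actual_room_ids: Set[str],
) -> Set[str]:
    relevant: Set[str] = set()
    if requested_room_ids is not None:
        for room_id in requested_room_ids:
            if room_id == "*":
                # Wildcard: every room subscription in the response.
                relevant.update(actual_room_ids)
                break
            if room_id in actual_room_ids:
                relevant.add(room_id)
    if requested_lists is not None:
        for list_key in requested_lists:
            if list_key == "*":
                # Wildcard: rooms from every sliding window list.
                for rooms in actual_lists.values():
                    relevant.update(rooms)
                break
            relevant.update(actual_lists.get(list_key, set()))
    return relevant

# Example: all lists requested, plus one explicit room subscription.
assert resolve_requested_rooms(
    requested_lists=["*"],
    requested_room_ids=["!a:b"],
    actual_lists={"rooms": {"!a:b", "!c:d"}},
    actual_room_ids={"!a:b"},
) == {"!a:b", "!c:d"}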

File diff suppressed because it is too large

View File

@@ -1,128 +0,0 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import logging
from typing import TYPE_CHECKING, Optional
import attr
from synapse.logging.opentracing import trace
from synapse.storage.databases.main import DataStore
from synapse.types import SlidingSyncStreamToken
from synapse.types.handlers.sliding_sync import (
MutablePerConnectionState,
PerConnectionState,
SlidingSyncConfig,
)
if TYPE_CHECKING:
pass
logger = logging.getLogger(__name__)
@attr.s(auto_attribs=True)
class SlidingSyncConnectionStore:
"""In-memory store of per-connection state, including what rooms we have
previously sent down a sliding sync connection.
Note: This is NOT safe to run in a worker setup because connection positions will
point to different sets of rooms on different workers. e.g. for the same connection,
a connection position of 5 might have totally different states on worker A and
worker B.
One complication that we need to deal with here is handling resent requests:
if we sent down a room in a response and the client then resends the same
request (i.e. it never processed our response), we must consider the room
*not* sent.
This is handled by using an integer "token", which is returned to the client
as part of the sync token. For each connection we store a mapping from
tokens to the room states, and create a new entry when we send down new
rooms.
Note that for any given sliding sync connection we will only store a maximum
of two different tokens: the previous token from the request and a new token
sent in the response. When we receive a request with a given token, we then
clear out all other entries with a different token.
Attributes:
_connections: Mapping from `(user_id, conn_id)` to mapping of `token`
to mapping of room ID to `HaveSentRoom`.
"""
store: "DataStore"
async def get_and_clear_connection_positions(
self,
sync_config: SlidingSyncConfig,
from_token: Optional[SlidingSyncStreamToken],
) -> PerConnectionState:
"""Fetch the per-connection state for the token.
Raises:
SlidingSyncUnknownPosition if the connection_token is unknown
"""
# If this is our first request, there is no previous connection state to fetch out of the database
if from_token is None or from_token.connection_position == 0:
return PerConnectionState()
conn_id = sync_config.conn_id or ""
device_id = sync_config.requester.device_id
assert device_id is not None
return await self.store.get_and_clear_connection_positions(
sync_config.user.to_string(),
device_id,
conn_id,
from_token.connection_position,
)
@trace
async def record_new_state(
self,
sync_config: SlidingSyncConfig,
from_token: Optional[SlidingSyncStreamToken],
new_connection_state: MutablePerConnectionState,
) -> int:
"""Record updated per-connection state, returning the connection
position associated with the new state.
If there are no changes to the state this may return the same token as
the existing per-connection state.
"""
if not new_connection_state.has_updates():
if from_token is not None:
return from_token.connection_position
else:
return 0
# A from token with a zero connection position means there was no
# previously stored connection state, so we treat a zero the same as
# there being no previous position.
previous_connection_position = None
if from_token is not None and from_token.connection_position != 0:
previous_connection_position = from_token.connection_position
conn_id = sync_config.conn_id or ""
device_id = sync_config.requester.device_id
assert device_id is not None
return await self.store.persist_per_connection_state(
sync_config.user.to_string(),
device_id,
conn_id,
previous_connection_position,
new_connection_state,
)
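A simplified in-memory sketch of the two-token scheme described in the `SlidingSyncConnectionStore` docstring above: each response mints a new connection position, and a request arriving with position N proves the client saw response N, so every other stored position for that connection can be dropped. `InMemoryConnectionStore` is hypothetical and ignores persistence and multi-worker concerns.

from typing import Dict

class InMemoryConnectionStore:
    def __init__(self) -> None:
        self._next_position = 0
        # connection position -> per-connection state (opaque here)
        self._positions: Dict[int, dict] = {}

    def record_new_state(self, from_position: int, state: dict) -> int:
        self._next_position += 1
        self._positions[self._next_position] = state
        # Keep at most two entries: the position the client just confirmed
        # and the one we are about to hand back in the response.
        self._positions = {
            pos: s
            for pos, s in self._positions.items()
            if pos in (from_position, self._next_position)
        }
        return self._next_position

store = InMemoryConnectionStore()
pos1 = store.record_new_state(from_position=0, state={"rooms": "state-1"})
pos2 = store.record_new_state(from_position=pos1, state={"rooms": "state-2"})
assert set(store._positions) == {pos1, pos2}  # at most two live positions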

View File

@@ -293,9 +293,7 @@ class StatsHandler:
"history_visibility"
)
elif delta.event_type == EventTypes.RoomEncryption:
room_state["encryption"] = event_content.get(
EventContentFields.ENCRYPTION_ALGORITHM
)
room_state["encryption"] = event_content.get("algorithm")
elif delta.event_type == EventTypes.Name:
room_state["name"] = event_content.get("name")
elif delta.event_type == EventTypes.Topic:

View File

@@ -43,7 +43,6 @@ from prometheus_client import Counter
from synapse.api.constants import (
AccountDataTypes,
Direction,
EventContentFields,
EventTypes,
JoinRules,
@@ -65,7 +64,6 @@ from synapse.logging.opentracing import (
)
from synapse.storage.databases.main.event_push_actions import RoomNotifCounts
from synapse.storage.databases.main.roommember import extract_heroes_from_room_summary
from synapse.storage.databases.main.stream import PaginateFunction
from synapse.storage.roommember import MemberSummary
from synapse.types import (
DeviceListUpdates,
@@ -86,7 +84,7 @@ from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.caches.lrucache import LruCache
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
from synapse.util.metrics import Measure
from synapse.util.metrics import Measure, measure_func
from synapse.visibility import filter_events_for_client
if TYPE_CHECKING:
@@ -881,49 +879,22 @@ class SyncHandler:
since_key = since_token.room_key
while limited and len(recents) < timeline_limit and max_repeat:
# For initial `/sync`, we want to view a historical section of the
# timeline; to fetch events by `topological_ordering` (best
# representation of the room DAG as others were seeing it at the time).
# This also aligns with the order that `/messages` returns events in.
#
# For incremental `/sync`, we want to get all updates for rooms since
# the last `/sync` (regardless of whether those updates arrived late or happened
# a while ago in the past); to fetch events by `stream_ordering` (in the
# order they were received by the server).
#
# Relevant spec issue: https://github.com/matrix-org/matrix-spec/issues/1917
#
# FIXME: Using workaround for mypy,
# https://github.com/python/mypy/issues/10740#issuecomment-1997047277 and
# https://github.com/python/mypy/issues/17479
paginate_room_events_by_topological_ordering: PaginateFunction = (
self.store.paginate_room_events_by_topological_ordering
)
paginate_room_events_by_stream_ordering: PaginateFunction = (
self.store.paginate_room_events_by_stream_ordering
)
pagination_method: PaginateFunction = (
# Use `topological_ordering` for historical events
paginate_room_events_by_topological_ordering
if since_key is None
# Use `stream_ordering` for updates
else paginate_room_events_by_stream_ordering
)
events, end_key = await pagination_method(
room_id=room_id,
# The bounds are reversed so we can paginate backwards
# (from newer to older events) starting at to_bound.
# This ensures we fill the `limit` with the newest events first,
from_key=end_key,
to_key=since_key,
direction=Direction.BACKWARDS,
# We add one so we can determine if there are enough events to saturate
# the limit or not (see `limited`)
limit=load_limit + 1,
)
# We want to return the events in ascending order (the last event is the
# most recent).
events.reverse()
# If we have a since_key then we are trying to get any events
# that have happened since `since_key` up to `end_key`, so we
# can just use `get_room_events_stream_for_room`.
# Otherwise, we want to return the last N events in the room
# in topological ordering.
if since_key:
events, end_key = await self.store.get_room_events_stream_for_room(
room_id,
limit=load_limit + 1,
from_key=since_key,
to_key=end_key,
)
else:
events, end_key = await self.store.get_recent_events_for_room(
room_id, limit=load_limit + 1, end_token=end_key
)
log_kv({"loaded_recents": len(events)})
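A minimal sketch of the ordering decision explained in the comment above: an initial sync (no since key) reads a historical window in topological order, while an incremental sync reads updates in stream order. The paginator callables are toy stand-ins for the real `PaginateFunction`s.

from typing import Callable, Optional

def choose_pagination_method(
    since_key: Optional[str],
    by_topological: Callable[[], str],
    by_stream: Callable[[], str],
) -> Callable[[], str]:
    # Historical window for initial sync, late-arriving updates for incremental.
    return by_topological if since_key is None else by_stream

assert choose_pagination_method(None, lambda: "topo", lambda: "stream")() == "topo"
assert choose_pagination_method("s5", lambda: "topo", lambda: "stream")() == "stream"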
@@ -1779,15 +1750,8 @@ class SyncHandler:
)
if include_device_list_updates:
# include_device_list_updates can only be True if we have a
# since token.
assert since_token is not None
device_lists = await self._device_handler.generate_sync_entry_for_device_list(
user_id=user_id,
since_token=since_token,
now_token=sync_result_builder.now_token,
joined_room_ids=sync_result_builder.joined_room_ids,
device_lists = await self._generate_sync_entry_for_device_list(
sync_result_builder,
newly_joined_rooms=newly_joined_rooms,
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
newly_left_rooms=newly_left_rooms,
@@ -1899,14 +1863,8 @@ class SyncHandler:
newly_left_users,
) = sync_result_builder.calculate_user_changes()
# include_device_list_updates can only be True if we have a
# since token.
assert since_token is not None
device_lists = await self._device_handler.generate_sync_entry_for_device_list(
user_id=user_id,
since_token=since_token,
now_token=sync_result_builder.now_token,
joined_room_ids=sync_result_builder.joined_room_ids,
device_lists = await self._generate_sync_entry_for_device_list(
sync_result_builder,
newly_joined_rooms=newly_joined_rooms,
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
newly_left_rooms=newly_left_rooms,
@@ -2083,6 +2041,94 @@ class SyncHandler:
return sync_result_builder
@measure_func("_generate_sync_entry_for_device_list")
async def _generate_sync_entry_for_device_list(
self,
sync_result_builder: "SyncResultBuilder",
newly_joined_rooms: AbstractSet[str],
newly_joined_or_invited_or_knocked_users: AbstractSet[str],
newly_left_rooms: AbstractSet[str],
newly_left_users: AbstractSet[str],
) -> DeviceListUpdates:
"""Generate the DeviceListUpdates section of sync
Args:
sync_result_builder
newly_joined_rooms: Set of rooms user has joined since previous sync
newly_joined_or_invited_or_knocked_users: Set of users that have joined,
been invited to a room or are knocking on a room since
previous sync.
newly_left_rooms: Set of rooms user has left since previous sync
newly_left_users: Set of users that have left a room we're in since
previous sync
"""
user_id = sync_result_builder.sync_config.user.to_string()
since_token = sync_result_builder.since_token
assert since_token is not None
# Take a copy since these fields will be mutated later.
newly_joined_or_invited_or_knocked_users = set(
newly_joined_or_invited_or_knocked_users
)
newly_left_users = set(newly_left_users)
# We want to figure out what user IDs the client should refetch
# device keys for, and which users we aren't going to track changes
# for anymore.
#
# For the first step we check:
# a. if any users we share a room with have updated their devices,
# and
# b. we also check if we've joined any new rooms, or if a user has
# joined a room we're in.
#
# For the second step we just find any users we no longer share a
# room with by looking at all users that have left a room plus users
# that were in a room we've left.
users_that_have_changed = set()
joined_room_ids = sync_result_builder.joined_room_ids
# Step 1a, check for changes in devices of users we share a room
# with
users_that_have_changed = (
await self._device_handler.get_device_changes_in_shared_rooms(
user_id,
joined_room_ids,
from_token=since_token,
now_token=sync_result_builder.now_token,
)
)
# Step 1b, check for newly joined rooms
for room_id in newly_joined_rooms:
joined_users = await self.store.get_users_in_room(room_id)
newly_joined_or_invited_or_knocked_users.update(joined_users)
# TODO: Check that these users are actually new, i.e. either they
# weren't in the previous sync *or* they left and rejoined.
users_that_have_changed.update(newly_joined_or_invited_or_knocked_users)
user_signatures_changed = await self.store.get_users_whose_signatures_changed(
user_id, since_token.device_list_key
)
users_that_have_changed.update(user_signatures_changed)
# Now find users that we no longer track
for room_id in newly_left_rooms:
left_users = await self.store.get_users_in_room(room_id)
newly_left_users.update(left_users)
# Remove any users that we still share a room with.
left_users_rooms = await self.store.get_rooms_for_users(newly_left_users)
for user_id, entries in left_users_rooms.items():
if any(rid in joined_room_ids for rid in entries):
newly_left_users.discard(user_id)
return DeviceListUpdates(changed=users_that_have_changed, left=newly_left_users)
@trace
async def _generate_sync_entry_for_to_device(
self, sync_result_builder: "SyncResultBuilder"
@@ -2224,11 +2270,7 @@ class SyncHandler:
user=user,
from_key=presence_key,
is_guest=sync_config.is_guest,
include_offline=(
True
if self.hs_config.server.presence_include_offline_users_on_sync
else include_offline
),
include_offline=include_offline,
)
assert presence_key
sync_result_builder.now_token = now_token.copy_and_replace(
@@ -2595,10 +2637,9 @@ class SyncHandler:
# a "gap" in the timeline, as described by the spec for /sync.
room_to_events = await self.store.get_room_events_stream_for_rooms(
room_ids=sync_result_builder.joined_room_ids,
from_key=now_token.room_key,
to_key=since_token.room_key,
from_key=since_token.room_key,
to_key=now_token.room_key,
limit=timeline_limit + 1,
direction=Direction.BACKWARDS,
)
# We loop through all room ids, even if there are no new events, in case
@@ -2609,9 +2650,6 @@ class SyncHandler:
newly_joined = room_id in newly_joined_rooms
if room_entry:
events, start_key = room_entry
# We want to return the events in ascending order (the last event is the
# most recent).
events.reverse()
prev_batch_token = now_token.copy_and_replace(
StreamKeyType.ROOM, start_key
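The set logic in the `_generate_sync_entry_for_device_list` hunk above can be sketched standalone: "changed" is users with device updates plus everyone newly joined/invited/knocked, and "left" is users from newly left rooms minus anyone we still share a room with. The `device_list_updates` helper and its container shapes are simplified assumptions.

from typing import Dict, Set, Tuple

def device_list_updates(
    users_with_device_changes: Set[str],
    newly_joined_or_invited_or_knocked: Set[str],
    users_in_newly_left_rooms: Set[str],
    rooms_for_user: Dict[str, Set[str]],
    our_joined_rooms: Set[str],
) -> Tuple[Set[str], Set[str]]:
    changed = users_with_device_changes | newly_joined_or_invited_or_knocked
    left = {
        user
        for user in users_in_newly_left_rooms
        # Drop users we still share at least one room with.
        if not (rooms_for_user.get(user, set()) & our_joined_rooms)
    }
    return changed, left

changed, left = device_list_updates(
    users_with_device_changes={"@a:x"},
    newly_joined_or_invited_or_knocked={"@b:x"},
    users_in_newly_left_rooms={"@c:x", "@d:x"},
    rooms_for_user={"@c:x": {"!shared:x"}, "@d:x": {"!other:x"}},
    our_joined_rooms={"!shared:x"},
)
assert changed == {"@a:x", "@b:x"} and left == {"@d:x"}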

View File

@@ -565,12 +565,7 @@ class TypingNotificationEventSource(EventSource[int, JsonMapping]):
room_ids: Iterable[str],
is_guest: bool,
explicit_room_id: Optional[str] = None,
to_key: Optional[int] = None,
) -> Tuple[List[JsonMapping], int]:
"""
Find typing notifications for given rooms (> `from_token` and <= `to_token`)
"""
with Measure(self.clock, "typing.get_new_events"):
from_key = int(from_key)
handler = self.get_typing_handler()
@@ -579,9 +574,7 @@ class TypingNotificationEventSource(EventSource[int, JsonMapping]):
for room_id in room_ids:
if room_id not in handler._room_serials:
continue
if handler._room_serials[room_id] <= from_key or (
to_key is not None and handler._room_serials[room_id] > to_key
):
if handler._room_serials[room_id] <= from_key:
continue
events.append(self._make_event_for(room_id))

View File

@@ -1057,11 +1057,11 @@ class _MultipartParserProtocol(protocol.Protocol):
if not self.parser:
def on_header_field(data: bytes, start: int, end: int) -> None:
if data[start:end].lower() == b"location":
if data[start:end] == b"Location":
self.has_redirect = True
if data[start:end].lower() == b"content-disposition":
if data[start:end] == b"Content-Disposition":
self.in_disposition = True
if data[start:end].lower() == b"content-type":
if data[start:end] == b"Content-Type":
self.in_content_type = True
def on_header_value(data: bytes, start: int, end: int) -> None:
@@ -1088,6 +1088,7 @@ class _MultipartParserProtocol(protocol.Protocol):
return
# otherwise we are in the file part
else:
logger.info("Writing multipart file data to stream")
try:
self.stream.write(data[start:end])
except Exception as e:
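One side of the hunk above compares multipart header names case-insensitively via `.lower()`. A tiny standalone illustration of why that matters for mixed-case senders (the `classify_header` helper is hypothetical):

def classify_header(name: bytes) -> str:
    lowered = name.lower()
    if lowered == b"location":
        return "redirect"
    if lowered == b"content-disposition":
        return "disposition"
    if lowered == b"content-type":
        return "content-type"
    return "other"

# An exact-match comparison against b"Location" would miss both of these.
assert classify_header(b"LOCATION") == "redirect"
assert classify_header(b"content-Type") == "content-type"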

View File

@@ -90,7 +90,7 @@ from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.logging.opentracing import set_tag, start_active_span, tags
from synapse.types import JsonDict
from synapse.util import json_decoder
from synapse.util.async_helpers import AwakenableSleeper, Linearizer, timeout_deferred
from synapse.util.async_helpers import AwakenableSleeper, timeout_deferred
from synapse.util.metrics import Measure
from synapse.util.stringutils import parse_and_validate_server_name
@@ -475,8 +475,6 @@ class MatrixFederationHttpClient:
use_proxy=True,
)
self.remote_download_linearizer = Linearizer("remote_download_linearizer", 6)
def wake_destination(self, destination: str) -> None:
"""Called when the remote server may have come back online."""
@@ -1488,44 +1486,35 @@ class MatrixFederationHttpClient:
)
headers = dict(response.headers.getAllRawHeaders())
expected_size = response.length
expected_size = response.length
# if we don't get an expected length then use the max length
if expected_size == UNKNOWN_LENGTH:
expected_size = max_size
else:
if int(expected_size) > max_size:
msg = "Requested file is too large > %r bytes" % (max_size,)
logger.warning(
"{%s} [%s] %s",
request.txn_id,
request.destination,
msg,
)
raise SynapseError(HTTPStatus.BAD_GATEWAY, msg, Codes.TOO_LARGE)
read_body, _ = await download_ratelimiter.can_do_action(
requester=None,
key=ip_address,
n_actions=expected_size,
logger.debug(
f"File size unknown, assuming file is max allowable size: {max_size}"
)
if not read_body:
msg = "Requested file size exceeds ratelimits"
logger.warning(
"{%s} [%s] %s",
request.txn_id,
request.destination,
msg,
)
raise SynapseError(
HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED
)
read_body, _ = await download_ratelimiter.can_do_action(
requester=None,
key=ip_address,
n_actions=expected_size,
)
if not read_body:
msg = "Requested file size exceeds ratelimits"
logger.warning(
"{%s} [%s] %s",
request.txn_id,
request.destination,
msg,
)
raise SynapseError(HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED)
try:
async with self.remote_download_linearizer.queue(ip_address):
# add a byte of headroom to max size as function errs at >=
d = read_body_with_max_size(response, output_stream, expected_size + 1)
d.addTimeout(self.default_timeout_seconds, self.reactor)
length = await make_deferred_yieldable(d)
# add a byte of headroom to max size as function errs at >=
d = read_body_with_max_size(response, output_stream, expected_size + 1)
d.addTimeout(self.default_timeout_seconds, self.reactor)
length = await make_deferred_yieldable(d)
except BodyExceededMaxSize:
msg = "Requested file is too large > %r bytes" % (expected_size,)
logger.warning(
@@ -1571,13 +1560,6 @@ class MatrixFederationHttpClient:
request.method,
request.uri.decode("ascii"),
)
# if we didn't know the length upfront, decrement the actual size from ratelimiter
if response.length == UNKNOWN_LENGTH:
download_ratelimiter.record_action(
requester=None, key=ip_address, n_actions=length
)
return length, headers
async def federation_get_file(
@@ -1648,37 +1630,29 @@ class MatrixFederationHttpClient:
)
headers = dict(response.headers.getAllRawHeaders())
expected_size = response.length
expected_size = response.length
# if we don't get an expected length then use the max length
if expected_size == UNKNOWN_LENGTH:
expected_size = max_size
else:
if int(expected_size) > max_size:
msg = "Requested file is too large > %r bytes" % (max_size,)
logger.warning(
"{%s} [%s] %s",
request.txn_id,
request.destination,
msg,
)
raise SynapseError(HTTPStatus.BAD_GATEWAY, msg, Codes.TOO_LARGE)
read_body, _ = await download_ratelimiter.can_do_action(
requester=None,
key=ip_address,
n_actions=expected_size,
logger.debug(
f"File size unknown, assuming file is max allowable size: {max_size}"
)
if not read_body:
msg = "Requested file size exceeds ratelimits"
logger.warning(
"{%s} [%s] %s",
request.txn_id,
request.destination,
msg,
)
raise SynapseError(
HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED
)
read_body, _ = await download_ratelimiter.can_do_action(
requester=None,
key=ip_address,
n_actions=expected_size,
)
if not read_body:
msg = "Requested file size exceeds ratelimits"
logger.warning(
"{%s} [%s] %s",
request.txn_id,
request.destination,
msg,
)
raise SynapseError(HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED)
# this should be a multipart/mixed response with the boundary string in the header
try:
@@ -1698,12 +1672,11 @@ class MatrixFederationHttpClient:
raise SynapseError(HTTPStatus.BAD_GATEWAY, msg)
try:
async with self.remote_download_linearizer.queue(ip_address):
# add a byte of headroom to max size as `_MultipartParserProtocol.dataReceived` errs at >=
deferred = read_multipart_response(
response, output_stream, boundary, expected_size + 1
)
deferred.addTimeout(self.default_timeout_seconds, self.reactor)
# add a byte of headroom to max size as `_MultipartParserProtocol.dataReceived` errs at >=
deferred = read_multipart_response(
response, output_stream, boundary, expected_size + 1
)
deferred.addTimeout(self.default_timeout_seconds, self.reactor)
except BodyExceededMaxSize:
msg = "Requested file is too large > %r bytes" % (expected_size,)
logger.warning(
@@ -1756,10 +1729,8 @@ class MatrixFederationHttpClient:
request.destination,
str_url,
)
# We don't know how large the response will be upfront, so limit it to
# the `max_size` config value.
length, headers, _, _ = await self._simple_http_client.get_file(
str_url, output_stream, max_size
str_url, output_stream, expected_size
)
logger.info(
@@ -1772,13 +1743,6 @@ class MatrixFederationHttpClient:
request.method,
request.uri.decode("ascii"),
)
# if we didn't know the length upfront, decrement the actual size from ratelimiter
if response.length == UNKNOWN_LENGTH:
download_ratelimiter.record_action(
requester=None, key=ip_address, n_actions=length
)
return length, headers, multipart_response.json
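The hunks above charge a download ratelimiter with an expected size before reading the body and reconcile the count once the true length is known. A self-contained token-budget sketch of that reserve-then-reconcile pattern follows; this toy `TokenBudget` is not Synapse's `Ratelimiter` API.

class TokenBudget:
    def __init__(self, budget: int) -> None:
        self.budget = budget

    def can_do_action(self, n_actions: int) -> bool:
        if n_actions > self.budget:
            return False
        self.budget -= n_actions
        return True

    def record_action(self, n_actions: int) -> None:
        # A negative n_actions acts as a refund of an up-front reservation.
        self.budget -= n_actions

limiter = TokenBudget(budget=100)
max_size = 50
assert limiter.can_do_action(max_size)  # reserve pessimistically
actual_length = 10
limiter.record_action(actual_length - max_size)  # refund the unused 40
assert limiter.budget == 90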

View File

@@ -62,15 +62,6 @@ HOP_BY_HOP_HEADERS = {
"Upgrade",
}
if hasattr(Headers, "_canonicalNameCaps"):
# Twisted < 24.7.0rc1
_canonicalHeaderName = Headers()._canonicalNameCaps # type: ignore[attr-defined]
else:
# Twisted >= 24.7.0rc1
# But note that `_encodeName` still exists on prior versions,
# it just encodes differently
_canonicalHeaderName = Headers()._encodeName
def parse_connection_header_value(
connection_header_value: Optional[bytes],
@@ -94,10 +85,11 @@ def parse_connection_header_value(
The set of header names that should not be copied over from the remote response.
The keys are capitalized in canonical capitalization.
"""
headers = Headers()
extra_headers_to_remove: Set[str] = set()
if connection_header_value:
extra_headers_to_remove = {
_canonicalHeaderName(connection_option.strip()).decode("ascii")
headers._canonicalNameCaps(connection_option.strip()).decode("ascii")
for connection_option in connection_header_value.split(b",")
}
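The hunk above selects a private Twisted header canonicalizer by feature detection. A generic sketch of that `hasattr`-based compatibility shim, with two toy stand-in classes for the old and new `Headers` APIs:

class OldHeaders:
    def _canonicalNameCaps(self, name: bytes) -> bytes:
        return b"-".join(p.capitalize() for p in name.split(b"-"))

class NewHeaders:
    def _encodeName(self, name: bytes) -> bytes:
        return b"-".join(p.capitalize() for p in name.split(b"-"))

def pick_canonicalizer(headers_cls):
    if hasattr(headers_cls, "_canonicalNameCaps"):
        return headers_cls()._canonicalNameCaps  # older API
    return headers_cls()._encodeName  # newer API

assert pick_canonicalizer(OldHeaders)(b"content-type") == b"Content-Type"
assert pick_canonicalizer(NewHeaders)(b"content-type") == b"Content-Type"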

View File

@@ -658,7 +658,7 @@ class SynapseSite(ProxySite):
)
self.site_tag = site_tag
self.reactor: ISynapseReactor = reactor
self.reactor = reactor
assert config.http_options is not None
proxied = config.http_options.x_forwarded
@@ -683,7 +683,7 @@ class SynapseSite(ProxySite):
self.access_logger = logging.getLogger(logger_name)
self.server_version_string = server_version_string.encode("ascii")
def log(self, request: SynapseRequest) -> None: # type: ignore[override]
def log(self, request: SynapseRequest) -> None:
pass

View File

@@ -1032,13 +1032,13 @@ def tag_args(func: Callable[P, R]) -> Callable[P, R]:
def _wrapping_logic(
_func: Callable[P, R], *args: P.args, **kwargs: P.kwargs
) -> Generator[None, None, None]:
for i, arg in enumerate(args, start=0):
if argspec.args[i] in ("self", "cls"):
# Ignore `self` and `cls` values. Ideally we'd properly detect
# if we were wrapping a method, but that is really non-trivial
# and this is good enough.
continue
# We use `[1:]` to skip the `self` object reference and `start=1` to
# make the index line up with `argspec.args`.
#
# FIXME: We could update this to handle any type of function by ignoring the
# first argument only if it's named `self` or `cls`. This isn't fool-proof
# but handles the idiomatic cases.
for i, arg in enumerate(args[1:], start=1):
set_tag(SynapseTags.FUNC_ARG_PREFIX + argspec.args[i], str(arg))
set_tag(SynapseTags.FUNC_ARGS, str(args[len(argspec.args) :]))
set_tag(SynapseTags.FUNC_KWARGS, str(kwargs))
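A reduced sketch of the argument-tagging concern in the hunk above: when wrapping a method, the positional args include the `self`/`cls` reference, which should not be recorded. The `collect_tags` helper is hypothetical and stands in for the opentracing `set_tag` calls.

import inspect
from typing import Any, Dict, Tuple

def collect_tags(func, args: Tuple) -> Dict[str, Any]:
    argspec = inspect.getfullargspec(func)
    tags: Dict[str, Any] = {}
    for i, arg in enumerate(args):
        if argspec.args[i] in ("self", "cls"):
            continue  # don't tag the instance/class reference
        tags[argspec.args[i]] = arg
    return tags

class Example:
    def method(self, room_id: str) -> None: ...

e = Example()
assert collect_tags(Example.method, (e, "!room:server")) == {"room_id": "!room:server"}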

View File

@@ -28,7 +28,6 @@ from types import TracebackType
from typing import (
TYPE_CHECKING,
Awaitable,
BinaryIO,
Dict,
Generator,
List,
@@ -38,28 +37,21 @@ from typing import (
)
import attr
from zope.interface import implementer
from twisted.internet import interfaces
from twisted.internet.defer import Deferred
from twisted.internet.interfaces import IConsumer
from twisted.python.failure import Failure
from twisted.protocols.basic import FileSender
from twisted.web.server import Request
from synapse.api.errors import Codes, cs_error
from synapse.http.server import finish_request, respond_with_json
from synapse.http.site import SynapseRequest
from synapse.logging.context import (
defer_to_threadpool,
make_deferred_yieldable,
run_in_background,
)
from synapse.logging.context import make_deferred_yieldable
from synapse.util import Clock
from synapse.util.async_helpers import DeferredEvent
from synapse.util.stringutils import is_ascii
if TYPE_CHECKING:
from synapse.server import HomeServer
from synapse.storage.databases.main.media_repository import LocalMedia
logger = logging.getLogger(__name__)
@@ -130,7 +122,6 @@ def respond_404(request: SynapseRequest) -> None:
async def respond_with_file(
hs: "HomeServer",
request: SynapseRequest,
media_type: str,
file_path: str,
@@ -147,7 +138,7 @@ async def respond_with_file(
add_file_headers(request, media_type, file_size, upload_name)
with open(file_path, "rb") as f:
await ThreadedFileSender(hs).beginFileTransfer(f, request)
await make_deferred_yieldable(FileSender().beginFileTransfer(f, request))
finish_request(request)
else:
@@ -288,9 +279,7 @@ async def respond_with_multipart_responder(
clock: Clock,
request: SynapseRequest,
responder: "Optional[Responder]",
media_type: str,
media_length: Optional[int],
upload_name: Optional[str],
media_info: "LocalMedia",
) -> None:
"""
Responds to requests originating from the federation media `/download` endpoint by
@@ -314,7 +303,7 @@ async def respond_with_multipart_responder(
)
return
if media_type.lower().split(";", 1)[0] in INLINE_CONTENT_TYPES:
if media_info.media_type.lower().split(";", 1)[0] in INLINE_CONTENT_TYPES:
disposition = "inline"
else:
disposition = "attachment"
@@ -322,16 +311,16 @@ async def respond_with_multipart_responder(
def _quote(x: str) -> str:
return urllib.parse.quote(x.encode("utf-8"))
if upload_name:
if _can_encode_filename_as_token(upload_name):
if media_info.upload_name:
if _can_encode_filename_as_token(media_info.upload_name):
disposition = "%s; filename=%s" % (
disposition,
upload_name,
media_info.upload_name,
)
else:
disposition = "%s; filename*=utf-8''%s" % (
disposition,
_quote(upload_name),
_quote(media_info.upload_name),
)
from synapse.media.media_storage import MultipartFileConsumer
@@ -341,14 +330,14 @@ async def respond_with_multipart_responder(
multipart_consumer = MultipartFileConsumer(
clock,
request,
media_type,
media_info.media_type,
{},
disposition,
media_length,
media_info.media_length,
)
logger.debug("Responding to media request with responder %s", responder)
if media_length is not None:
if media_info.media_length is not None:
content_length = multipart_consumer.content_length()
assert content_length is not None
request.setHeader(b"Content-Length", b"%d" % (content_length,))
@@ -612,151 +601,3 @@ def _parseparam(s: bytes) -> Generator[bytes, None, None]:
f = s[:end]
yield f.strip()
s = s[end:]
@implementer(interfaces.IPushProducer)
class ThreadedFileSender:
"""
A producer that sends the contents of a file to a consumer, reading from the
file on a thread.
This works by having a loop in a threadpool repeatedly reading from the
file, until the consumer pauses the producer. There is then a loop in the
main thread that waits until the consumer resumes the producer and then
starts reading in the threadpool again.
This is done to ensure that we're never waiting in the threadpool, as
otherwise it's easy to starve it of threads.
"""
# How much data to read in one go.
CHUNK_SIZE = 2**14
# How long we wait for the consumer to be ready again before aborting the
# read.
TIMEOUT_SECONDS = 90.0
def __init__(self, hs: "HomeServer") -> None:
self.reactor = hs.get_reactor()
self.thread_pool = hs.get_media_sender_thread_pool()
self.file: Optional[BinaryIO] = None
self.deferred: "Deferred[None]" = Deferred()
self.consumer: Optional[interfaces.IConsumer] = None
# Signals if the thread should keep reading/sending data. Set means
# continue, clear means pause.
self.wakeup_event = DeferredEvent(self.reactor)
# Signals if the thread should terminate, e.g. because the consumer has
# gone away.
self.stop_writing = False
def beginFileTransfer(
self, file: BinaryIO, consumer: interfaces.IConsumer
) -> "Deferred[None]":
"""
Begin transferring a file
"""
self.file = file
self.consumer = consumer
self.consumer.registerProducer(self, True)
# We set the wakeup signal as we should start producing immediately.
self.wakeup_event.set()
run_in_background(self.start_read_loop)
return make_deferred_yieldable(self.deferred)
def resumeProducing(self) -> None:
"""interfaces.IPushProducer"""
self.wakeup_event.set()
def pauseProducing(self) -> None:
"""interfaces.IPushProducer"""
self.wakeup_event.clear()
def stopProducing(self) -> None:
"""interfaces.IPushProducer"""
# Unregister the consumer so we don't try and interact with it again.
if self.consumer:
self.consumer.unregisterProducer()
self.consumer = None
# Terminate the loop.
self.stop_writing = True
self.wakeup_event.set()
if not self.deferred.called:
self.deferred.errback(Exception("Consumer asked us to stop producing"))
async def start_read_loop(self) -> None:
"""This is the loop that drives reading/writing"""
try:
while not self.stop_writing:
# Start the loop in the threadpool to read data.
more_data = await defer_to_threadpool(
self.reactor, self.thread_pool, self._on_thread_read_loop
)
if not more_data:
# Reached EOF, we can just return.
return
if not self.wakeup_event.is_set():
ret = await self.wakeup_event.wait(self.TIMEOUT_SECONDS)
if not ret:
raise Exception("Timed out waiting to resume")
except Exception:
self._error(Failure())
finally:
self._finish()
def _on_thread_read_loop(self) -> bool:
"""This is the loop that happens on a thread.
Returns:
Whether there is more data to send.
"""
while not self.stop_writing and self.wakeup_event.is_set():
# The file should always have been set before we get here.
assert self.file is not None
chunk = self.file.read(self.CHUNK_SIZE)
if not chunk:
return False
self.reactor.callFromThread(self._write, chunk)
return True
def _write(self, chunk: bytes) -> None:
"""Called from the thread to write a chunk of data"""
if self.consumer:
self.consumer.write(chunk)
def _error(self, failure: Failure) -> None:
"""Called when there was a fatal error"""
if self.consumer:
self.consumer.unregisterProducer()
self.consumer = None
if not self.deferred.called:
self.deferred.errback(failure)
def _finish(self) -> None:
"""Called when we have finished writing (either on success or
failure)."""
if self.file:
self.file.close()
self.file = None
if self.consumer:
self.consumer.unregisterProducer()
self.consumer = None
if not self.deferred.called:
self.deferred.callback(None)
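A reduced, synchronous sketch of the control flow in `ThreadedFileSender` above: a worker reads chunks while a wakeup flag is set, and the consumer can pause or resume it by clearing or setting that flag. `threading.Event` stands in for the reactor-aware `DeferredEvent` used by the real class, and the reactor/threadpool handoff is elided.

import io
import threading

wakeup = threading.Event()
wakeup.set()  # producing starts immediately, as in beginFileTransfer
chunks = []

def read_loop(f: io.BytesIO, chunk_size: int = 4) -> None:
    # Keep reading while the consumer hasn't paused us.
    while wakeup.is_set():
        chunk = f.read(chunk_size)
        if not chunk:
            return  # EOF: transfer complete
        chunks.append(chunk)

read_loop(io.BytesIO(b"hello world"))
assert b"".join(chunks) == b"hello world"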

View File

@@ -430,7 +430,6 @@ class MediaRepository:
media_id: str,
name: Optional[str],
max_timeout_ms: int,
allow_authenticated: bool = True,
federation: bool = False,
) -> None:
"""Responds to requests for local media, if exists, or returns 404.
@@ -443,7 +442,6 @@ class MediaRepository:
the filename in the Content-Disposition header of the response.
max_timeout_ms: the maximum number of milliseconds to wait for the
media to be uploaded.
allow_authenticated: whether media marked as authenticated may be served to this request
federation: whether the local media being fetched is for a federation request
Returns:
@@ -453,10 +451,6 @@ class MediaRepository:
if not media_info:
return
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
if media_info.authenticated:
raise NotFoundError()
self.mark_recently_accessed(None, media_id)
media_type = media_info.media_type
@@ -471,7 +465,7 @@ class MediaRepository:
responder = await self.media_storage.fetch_media(file_info)
if federation:
await respond_with_multipart_responder(
self.clock, request, responder, media_type, media_length, upload_name
self.clock, request, responder, media_info
)
else:
await respond_with_responder(
@@ -487,7 +481,6 @@ class MediaRepository:
max_timeout_ms: int,
ip_address: str,
use_federation_endpoint: bool,
allow_authenticated: bool = True,
) -> None:
"""Respond to requests for remote media.
@@ -502,8 +495,6 @@ class MediaRepository:
ip_address: the IP address of the requester
use_federation_endpoint: whether to request the remote media over the new
federation `/download` endpoint
allow_authenticated: whether media marked as authenticated may be served to this
request
Returns:
Resolves once a response has successfully been written to request
@@ -535,7 +526,6 @@ class MediaRepository:
self.download_ratelimiter,
ip_address,
use_federation_endpoint,
allow_authenticated,
)
# We deliberately stream the file outside the lock
@@ -558,7 +548,6 @@ class MediaRepository:
max_timeout_ms: int,
ip_address: str,
use_federation: bool,
allow_authenticated: bool,
) -> RemoteMedia:
"""Gets the media info associated with the remote file, downloading
if necessary.
@@ -571,8 +560,6 @@ class MediaRepository:
ip_address: IP address of the requester
use_federation: if a download is necessary, whether to request the remote file
over the federation `/download` endpoint
allow_authenticated: whether media marked as authenticated may be served to this
request
Returns:
The media info of the file
@@ -594,7 +581,6 @@ class MediaRepository:
self.download_ratelimiter,
ip_address,
use_federation,
allow_authenticated,
)
# Ensure we actually use the responder so that it releases resources
@@ -612,7 +598,6 @@ class MediaRepository:
download_ratelimiter: Ratelimiter,
ip_address: str,
use_federation_endpoint: bool,
allow_authenticated: bool,
) -> Tuple[Optional[Responder], RemoteMedia]:
"""Looks for media in local cache, if not there then attempt to
download from remote server.
@@ -634,11 +619,6 @@ class MediaRepository:
"""
media_info = await self.store.get_cached_remote_media(server_name, media_id)
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
# if it isn't cached then don't fetch it; if it's authenticated then don't serve it
if not media_info or media_info.authenticated:
raise NotFoundError()
# file_id is the ID we use to track the file locally. If we've already
# seen the file then reuse the existing ID, otherwise generate a new
# one.
@@ -812,11 +792,6 @@ class MediaRepository:
logger.info("Stored remote media in file %r", fname)
if self.hs.config.media.enable_authenticated_media:
authenticated = True
else:
authenticated = False
return RemoteMedia(
media_origin=server_name,
media_id=media_id,
@@ -827,7 +802,6 @@ class MediaRepository:
filesystem_id=file_id,
last_access_ts=time_now_ms,
quarantined_by=None,
authenticated=authenticated,
)
async def _federation_download_remote_file(
@@ -941,11 +915,6 @@ class MediaRepository:
logger.debug("Stored remote media in file %r", fname)
if self.hs.config.media.enable_authenticated_media:
authenticated = True
else:
authenticated = False
return RemoteMedia(
media_origin=server_name,
media_id=media_id,
@@ -956,7 +925,6 @@ class MediaRepository:
filesystem_id=file_id,
last_access_ts=time_now_ms,
quarantined_by=None,
authenticated=authenticated,
)
def _get_thumbnail_requirements(
@@ -1008,7 +976,7 @@ class MediaRepository:
t_method: str,
t_type: str,
url_cache: bool,
) -> Optional[Tuple[str, FileInfo]]:
) -> Optional[str]:
input_path = await self.media_storage.ensure_media_is_in_local_cache(
FileInfo(None, media_id, url_cache=url_cache)
)
@@ -1062,15 +1030,10 @@ class MediaRepository:
t_len = os.path.getsize(output_path)
await self.store.store_local_thumbnail(
media_id,
t_width,
t_height,
t_type,
t_method,
t_len,
media_id, t_width, t_height, t_type, t_method, t_len
)
return output_path, file_info
return output_path
# Could not generate thumbnail.
return None

View File

@@ -49,11 +49,15 @@ from zope.interface import implementer
from twisted.internet import interfaces
from twisted.internet.defer import Deferred
from twisted.internet.interfaces import IConsumer
from twisted.protocols.basic import FileSender
from synapse.api.errors import NotFoundError
from synapse.logging.context import defer_to_thread, run_in_background
from synapse.logging.context import (
defer_to_thread,
make_deferred_yieldable,
run_in_background,
)
from synapse.logging.opentracing import start_active_span, trace, trace_with_opname
from synapse.media._base import ThreadedFileSender
from synapse.util import Clock
from synapse.util.file_consumer import BackgroundFileConsumer
@@ -209,7 +213,7 @@ class MediaStorage:
local_path = os.path.join(self.local_media_directory, path)
if os.path.exists(local_path):
logger.debug("responding with local file %s", local_path)
return FileResponder(self.hs, open(local_path, "rb"))
return FileResponder(open(local_path, "rb"))
logger.debug("local file %s did not exist", local_path)
for provider in self.storage_providers:
@@ -332,12 +336,13 @@ class FileResponder(Responder):
is closed when finished streaming.
"""
def __init__(self, hs: "HomeServer", open_file: BinaryIO):
self.hs = hs
def __init__(self, open_file: IO):
self.open_file = open_file
def write_to_consumer(self, consumer: IConsumer) -> Deferred:
return ThreadedFileSender(self.hs).beginFileTransfer(self.open_file, consumer)
return make_deferred_yieldable(
FileSender().beginFileTransfer(self.open_file, consumer)
)
def __exit__(
self,
@@ -544,7 +549,7 @@ class MultipartFileConsumer:
Calculate the content length of the multipart response
in bytes.
"""
if self.length is None:
if not self.length:
return None
# calculate length of json field and content-type, disposition headers
json_field = json.dumps(self.json_field)
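
The `is None` check in the hunk above is not cosmetic: a zero-length body is falsy, so `if not self.length` would report a legitimate empty payload as having unknown length. A standalone illustration of the difference:

from typing import Optional

def content_length(length: Optional[int]) -> Optional[str]:
    # Distinguish "unknown" (None) from "empty" (0); truthiness testing
    # would conflate the two.
    if length is None:
        return None
    return str(length)

assert content_length(None) is None
assert content_length(0) == "0"   # `if not length:` would wrongly return None here
assert content_length(42) == "42"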

View File

@@ -145,7 +145,6 @@ class FileStorageProviderBackend(StorageProvider):
def __init__(self, hs: "HomeServer", config: str):
self.hs = hs
self.reactor = hs.get_reactor()
self.cache_directory = hs.config.media.media_store_path
self.base_directory = config
@@ -166,7 +165,7 @@ class FileStorageProviderBackend(StorageProvider):
shutil_copyfile: Callable[[str, str], str] = shutil.copyfile
with start_active_span("shutil_copyfile"):
await defer_to_thread(
self.reactor,
self.hs.get_reactor(),
shutil_copyfile,
primary_fname,
backup_fname,
@@ -178,7 +177,7 @@ class FileStorageProviderBackend(StorageProvider):
backup_fname = os.path.join(self.base_directory, path)
if os.path.isfile(backup_fname):
return FileResponder(self.hs, open(backup_fname, "rb"))
return FileResponder(open(backup_fname, "rb"))
return None
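
Both variants of the storage provider offload the blocking `shutil.copyfile` to a thread via `defer_to_thread`; the diff only changes where the reactor comes from. A minimal Twisted sketch of the same idea using plain `deferToThread` (Synapse's `defer_to_thread` additionally preserves the logging context, which this omits):

import shutil

from twisted.internet.threads import deferToThread

def copy_off_reactor(src: str, dst: str):
    # shutil.copyfile blocks on disk I/O; run it on Twisted's thread pool
    # so the reactor thread stays responsive. Returns a Deferred that
    # fires once the copy completes.
    return deferToThread(shutil.copyfile, src, dst)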

View File

@@ -26,7 +26,7 @@ from typing import TYPE_CHECKING, List, Optional, Tuple, Type
from PIL import Image
from synapse.api.errors import Codes, NotFoundError, SynapseError, cs_error
from synapse.api.errors import Codes, SynapseError, cs_error
from synapse.config.repository import THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP
from synapse.http.server import respond_with_json
from synapse.http.site import SynapseRequest
@@ -259,7 +259,6 @@ class ThumbnailProvider:
media_storage: MediaStorage,
):
self.hs = hs
self.reactor = hs.get_reactor()
self.media_repo = media_repo
self.media_storage = media_storage
self.store = hs.get_datastores().main
@@ -275,7 +274,6 @@ class ThumbnailProvider:
m_type: str,
max_timeout_ms: int,
for_federation: bool,
allow_authenticated: bool = True,
) -> None:
media_info = await self.media_repo.get_local_media_info(
request, media_id, max_timeout_ms
@@ -283,12 +281,6 @@ class ThumbnailProvider:
if not media_info:
return
# if the media the thumbnail is generated from is authenticated, don't serve the
# thumbnail over an unauthenticated endpoint
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
if media_info.authenticated:
raise NotFoundError()
thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
await self._select_and_respond_with_thumbnail(
request,
@@ -315,20 +307,14 @@ class ThumbnailProvider:
desired_type: str,
max_timeout_ms: int,
for_federation: bool,
allow_authenticated: bool = True,
) -> None:
media_info = await self.media_repo.get_local_media_info(
request, media_id, max_timeout_ms
)
if not media_info:
return
# if the media the thumbnail is generated from is authenticated, don't serve the
# thumbnail over an unauthenticated endpoint
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
if media_info.authenticated:
raise NotFoundError()
thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
for info in thumbnail_infos:
t_w = info.width == desired_width
@@ -348,12 +334,7 @@ class ThumbnailProvider:
if responder:
if for_federation:
await respond_with_multipart_responder(
self.hs.get_clock(),
request,
responder,
info.type,
info.length,
None,
self.hs.get_clock(), request, responder, media_info
)
return
else:
@@ -365,7 +346,7 @@ class ThumbnailProvider:
logger.debug("We don't have a thumbnail of that size. Generating")
# Okay, so we generate one.
thumbnail_result = await self.media_repo.generate_local_exact_thumbnail(
file_path = await self.media_repo.generate_local_exact_thumbnail(
media_id,
desired_width,
desired_height,
@@ -374,21 +355,16 @@ class ThumbnailProvider:
url_cache=bool(media_info.url_cache),
)
if thumbnail_result:
file_path, file_info = thumbnail_result
assert file_info.thumbnail is not None
if file_path:
if for_federation:
await respond_with_multipart_responder(
self.hs.get_clock(),
request,
FileResponder(self.hs, open(file_path, "rb")),
file_info.thumbnail.type,
file_info.thumbnail.length,
None,
FileResponder(open(file_path, "rb")),
media_info,
)
else:
await respond_with_file(self.hs, request, desired_type, file_path)
await respond_with_file(request, desired_type, file_path)
else:
logger.warning("Failed to generate thumbnail")
raise SynapseError(400, "Failed to generate thumbnail.")
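
The flow above is: serve a pre-generated thumbnail if one matches, otherwise generate the exact size on demand, and only return a 400 if generation fails. A simplified sketch of the generation step with Pillow (Synapse's thumbnailer also handles cropping and type conversion; this shows only aspect-preserving scaling):

from PIL import Image

def generate_exact_thumbnail(src_path: str, dst_path: str, width: int, height: int) -> str:
    # Image.thumbnail() resizes in place, preserving aspect ratio so the
    # result fits within (width, height); the output format is inferred
    # from the destination file extension.
    with Image.open(src_path) as image:
        image.thumbnail((width, height))
        image.save(dst_path)
    return dst_path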
@@ -405,27 +381,14 @@ class ThumbnailProvider:
max_timeout_ms: int,
ip_address: str,
use_federation: bool,
allow_authenticated: bool = True,
) -> None:
media_info = await self.media_repo.get_remote_media_info(
server_name,
media_id,
max_timeout_ms,
ip_address,
use_federation,
allow_authenticated,
server_name, media_id, max_timeout_ms, ip_address, use_federation
)
if not media_info:
respond_404(request)
return
# if the media the thumbnail is generated from is authenticated, don't serve the
# thumbnail over an unauthenticated endpoint
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
if media_info.authenticated:
respond_404(request)
return
thumbnail_infos = await self.store.get_remote_media_thumbnails(
server_name, media_id
)
@@ -466,7 +429,7 @@ class ThumbnailProvider:
)
if file_path:
await respond_with_file(self.hs, request, desired_type, file_path)
await respond_with_file(request, desired_type, file_path)
else:
logger.warning("Failed to generate thumbnail")
raise SynapseError(400, "Failed to generate thumbnail.")
@@ -483,28 +446,16 @@ class ThumbnailProvider:
max_timeout_ms: int,
ip_address: str,
use_federation: bool,
allow_authenticated: bool = True,
) -> None:
# TODO: Don't download the whole remote file
# We should proxy the thumbnail from the remote server instead of
# downloading the remote file and generating our own thumbnails.
media_info = await self.media_repo.get_remote_media_info(
server_name,
media_id,
max_timeout_ms,
ip_address,
use_federation,
allow_authenticated,
server_name, media_id, max_timeout_ms, ip_address, use_federation
)
if not media_info:
return
# if the media the thumbnail is generated from is authenticated, don't serve the
# thumbnail over an unauthenticated endpoint
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
if media_info.authenticated:
raise NotFoundError()
thumbnail_infos = await self.store.get_remote_media_thumbnails(
server_name, media_id
)
@@ -534,8 +485,8 @@ class ThumbnailProvider:
file_id: str,
url_cache: bool,
for_federation: bool,
media_info: Optional[LocalMedia] = None,
server_name: Optional[str] = None,
media_info: Optional[LocalMedia] = None,
) -> None:
"""
Respond to a request with an appropriate thumbnail from the previously generated thumbnails.
@@ -590,12 +541,7 @@ class ThumbnailProvider:
if for_federation:
assert media_info is not None
await respond_with_multipart_responder(
self.hs.get_clock(),
request,
responder,
file_info.thumbnail.type,
file_info.thumbnail.length,
None,
self.hs.get_clock(), request, responder, media_info
)
return
else:
@@ -649,12 +595,7 @@ class ThumbnailProvider:
if for_federation:
assert media_info is not None
await respond_with_multipart_responder(
self.hs.get_clock(),
request,
responder,
file_info.thumbnail.type,
file_info.thumbnail.length,
None,
self.hs.get_clock(), request, responder, media_info
)
else:
await respond_with_responder(

View File

@@ -773,7 +773,6 @@ class Notifier:
stream_token = await self.event_sources.bound_future_token(stream_token)
start = self.clock.time_msec()
logged = False
while True:
current_token = self.event_sources.get_current_token()
if stream_token.is_before_or_eq(current_token):
@@ -784,13 +783,11 @@ class Notifier:
if now - start > 10_000:
return False
if not logged:
logger.info(
"Waiting for current token to reach %s; currently at %s",
stream_token,
current_token,
)
logged = True
logger.info(
"Waiting for current token to reach %s; currently at %s",
stream_token,
current_token,
)
# TODO: be better
await self.clock.sleep(0.5)
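
One side of the hunk adds a `logged` flag so the "waiting for current token" line is emitted once per wait instead of on every 500ms poll for up to ten seconds. A standalone sketch of that throttle pattern (asyncio standing in for Synapse's clock):

import asyncio
import logging

logger = logging.getLogger(__name__)

async def wait_for(predicate, timeout_ms: int = 10_000, poll_s: float = 0.5) -> bool:
    # Poll until predicate() holds, logging the "still waiting" message
    # once per wait rather than once per iteration.
    loop = asyncio.get_running_loop()
    start = loop.time()
    logged = False
    while True:
        if predicate():
            return True
        if (loop.time() - start) * 1000 > timeout_ms:
            return False
        if not logged:
            logger.info("Waiting for condition to become true")
            logged = True
        await asyncio.sleep(poll_s)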

View File

@@ -18,8 +18,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
import logging
from typing import TYPE_CHECKING, Callable, Dict, Iterable, List, Optional, Tuple
from typing import TYPE_CHECKING, Callable
from synapse.http.server import HttpServer, JsonResource
from synapse.rest import admin
@@ -68,64 +67,11 @@ from synapse.rest.client import (
voip,
)
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from synapse.server import HomeServer
RegisterServletsFunc = Callable[["HomeServer", HttpServer], None]
CLIENT_SERVLET_FUNCTIONS: Tuple[RegisterServletsFunc, ...] = (
versions.register_servlets,
initial_sync.register_servlets,
room.register_deprecated_servlets,
events.register_servlets,
room.register_servlets,
login.register_servlets,
profile.register_servlets,
presence.register_servlets,
directory.register_servlets,
voip.register_servlets,
pusher.register_servlets,
push_rule.register_servlets,
logout.register_servlets,
sync.register_servlets,
filter.register_servlets,
account.register_servlets,
register.register_servlets,
auth.register_servlets,
receipts.register_servlets,
read_marker.register_servlets,
room_keys.register_servlets,
keys.register_servlets,
tokenrefresh.register_servlets,
tags.register_servlets,
account_data.register_servlets,
reporting.register_servlets,
openid.register_servlets,
notifications.register_servlets,
devices.register_servlets,
thirdparty.register_servlets,
sendtodevice.register_servlets,
user_directory.register_servlets,
room_upgrade_rest_servlet.register_servlets,
capabilities.register_servlets,
account_validity.register_servlets,
relations.register_servlets,
password_policy.register_servlets,
knock.register_servlets,
appservice_ping.register_servlets,
admin.register_servlets_for_client_rest_resource,
mutual_rooms.register_servlets,
login_token_request.register_servlets,
rendezvous.register_servlets,
auth_issuer.register_servlets,
)
SERVLET_GROUPS: Dict[str, Iterable[RegisterServletsFunc]] = {
"client": CLIENT_SERVLET_FUNCTIONS,
}
class ClientRestResource(JsonResource):
"""Matrix Client API REST resource.
@@ -137,56 +83,80 @@ class ClientRestResource(JsonResource):
* etc
"""
def __init__(self, hs: "HomeServer", servlet_groups: Optional[List[str]] = None):
def __init__(self, hs: "HomeServer"):
JsonResource.__init__(self, hs, canonical_json=False)
if hs.config.media.can_load_media_repo:
# This import is here to prevent a circular import failure
from synapse.rest.client import media
SERVLET_GROUPS["media"] = (media.register_servlets,)
self.register_servlets(self, hs, servlet_groups)
self.register_servlets(self, hs)
@staticmethod
def register_servlets(
client_resource: HttpServer,
hs: "HomeServer",
servlet_groups: Optional[Iterable[str]] = None,
) -> None:
def register_servlets(client_resource: HttpServer, hs: "HomeServer") -> None:
# Some servlets are only registered on the main process (and not worker
# processes).
is_main_process = hs.config.worker.worker_app is None
if not servlet_groups:
servlet_groups = SERVLET_GROUPS.keys()
versions.register_servlets(hs, client_resource)
for servlet_group in servlet_groups:
# Fail on unknown servlet groups.
if servlet_group not in SERVLET_GROUPS:
if servlet_group == "media":
logger.warn(
"media.can_load_media_repo needs to be configured for the media servlet to be available"
)
raise RuntimeError(
f"Attempting to register unknown client servlet: '{servlet_group}'"
)
# Deprecated in r0
initial_sync.register_servlets(hs, client_resource)
room.register_deprecated_servlets(hs, client_resource)
for servletfunc in SERVLET_GROUPS[servlet_group]:
if not is_main_process and servletfunc in [
pusher.register_servlets,
logout.register_servlets,
auth.register_servlets,
tokenrefresh.register_servlets,
reporting.register_servlets,
openid.register_servlets,
thirdparty.register_servlets,
room_upgrade_rest_servlet.register_servlets,
account_validity.register_servlets,
admin.register_servlets_for_client_rest_resource,
mutual_rooms.register_servlets,
login_token_request.register_servlets,
rendezvous.register_servlets,
auth_issuer.register_servlets,
]:
continue
# Partially deprecated in r0
events.register_servlets(hs, client_resource)
servletfunc(hs, client_resource)
room.register_servlets(hs, client_resource)
login.register_servlets(hs, client_resource)
profile.register_servlets(hs, client_resource)
presence.register_servlets(hs, client_resource)
directory.register_servlets(hs, client_resource)
voip.register_servlets(hs, client_resource)
if is_main_process:
pusher.register_servlets(hs, client_resource)
push_rule.register_servlets(hs, client_resource)
if is_main_process:
logout.register_servlets(hs, client_resource)
sync.register_servlets(hs, client_resource)
filter.register_servlets(hs, client_resource)
account.register_servlets(hs, client_resource)
register.register_servlets(hs, client_resource)
if is_main_process:
auth.register_servlets(hs, client_resource)
receipts.register_servlets(hs, client_resource)
read_marker.register_servlets(hs, client_resource)
room_keys.register_servlets(hs, client_resource)
keys.register_servlets(hs, client_resource)
if is_main_process:
tokenrefresh.register_servlets(hs, client_resource)
tags.register_servlets(hs, client_resource)
account_data.register_servlets(hs, client_resource)
if is_main_process:
reporting.register_servlets(hs, client_resource)
openid.register_servlets(hs, client_resource)
notifications.register_servlets(hs, client_resource)
devices.register_servlets(hs, client_resource)
if is_main_process:
thirdparty.register_servlets(hs, client_resource)
sendtodevice.register_servlets(hs, client_resource)
user_directory.register_servlets(hs, client_resource)
if is_main_process:
room_upgrade_rest_servlet.register_servlets(hs, client_resource)
capabilities.register_servlets(hs, client_resource)
if is_main_process:
account_validity.register_servlets(hs, client_resource)
relations.register_servlets(hs, client_resource)
password_policy.register_servlets(hs, client_resource)
knock.register_servlets(hs, client_resource)
appservice_ping.register_servlets(hs, client_resource)
if hs.config.media.can_load_media_repo:
from synapse.rest.client import media
media.register_servlets(hs, client_resource)
# moving to /_synapse/admin
if is_main_process:
admin.register_servlets_for_client_rest_resource(hs, client_resource)
# unstable
if is_main_process:
mutual_rooms.register_servlets(hs, client_resource)
login_token_request.register_servlets(hs, client_resource)
rendezvous.register_servlets(hs, client_resource)
auth_issuer.register_servlets(hs, client_resource)
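
The two sides of this file register the same servlets in different styles: a module-level table (`CLIENT_SERVLET_FUNCTIONS` plus `SERVLET_GROUPS`) iterated with a skip-list for worker processes, versus explicit calls guarded by `is_main_process` checks. A minimal standalone sketch of the table-driven style (hypothetical names and signatures, not Synapse's actual types):

from typing import Callable, Dict, Iterable, Optional, Tuple

# (hs, http_server) -> None, matching the RegisterServletsFunc shape above.
RegisterFunc = Callable[[object, object], None]

def register_groups(
    hs: object,
    resource: object,
    is_main_process: bool,
    groups: Dict[str, Tuple[RegisterFunc, ...]],
    main_process_only: frozenset,
    requested: Optional[Iterable[str]] = None,
) -> None:
    # Table-driven registration: iterate the requested groups, fail on
    # unknown names, and skip main-process-only servlets on workers.
    for name in requested if requested else groups.keys():
        if name not in groups:
            raise RuntimeError(f"Attempting to register unknown client servlet: {name!r}")
        for func in groups[name]:
            if not is_main_process and func in main_process_only:
                continue
            func(hs, resource)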

View File

@@ -20,14 +20,14 @@
#
import logging
from typing import TYPE_CHECKING, cast
from typing import TYPE_CHECKING
from twisted.web.server import Request
from synapse.api.constants import LoginType
from synapse.api.errors import LoginError, SynapseError
from synapse.api.urls import CLIENT_API_PREFIX
from synapse.http.server import HttpServer, respond_with_html, respond_with_redirect
from synapse.http.server import HttpServer, respond_with_html
from synapse.http.servlet import RestServlet, parse_string
from synapse.http.site import SynapseRequest
@@ -66,23 +66,6 @@ class AuthRestServlet(RestServlet):
if not session:
raise SynapseError(400, "No session supplied")
if (
self.hs.config.experimental.msc3861.enabled
and stagetype == "org.matrix.cross_signing_reset"
):
# If MSC3861 is enabled, we can assume self._auth is an instance of MSC3861DelegatedAuth
# We import lazily here because of the authlib requirement
from synapse.api.auth.msc3861_delegated import MSC3861DelegatedAuth
auth = cast(MSC3861DelegatedAuth, self.auth)
url = await auth.account_management_url()
if url is not None:
url = f"{url}?action=org.matrix.cross_signing_reset"
else:
url = await auth.issuer()
respond_with_redirect(request, str.encode(url))
if stagetype == LoginType.RECAPTCHA:
html = self.recaptcha_template.render(
session=session,

View File

@@ -13,7 +13,7 @@
# limitations under the License.
import logging
import typing
from typing import Tuple, cast
from typing import Tuple
from synapse.api.errors import Codes, SynapseError
from synapse.http.server import HttpServer
@@ -43,16 +43,10 @@ class AuthIssuerServlet(RestServlet):
def __init__(self, hs: "HomeServer"):
super().__init__()
self._config = hs.config
self._auth = hs.get_auth()
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
if self._config.experimental.msc3861.enabled:
# If MSC3861 is enabled, we can assume self._auth is an instance of MSC3861DelegatedAuth
# We import lazily here because of the authlib requirement
from synapse.api.auth.msc3861_delegated import MSC3861DelegatedAuth
auth = cast(MSC3861DelegatedAuth, self._auth)
return 200, {"issuer": await auth.issuer()}
return 200, {"issuer": self._config.experimental.msc3861.issuer}
else:
# Wouldn't expect this to be reached: the servlet shouldn't have been
# registered. Still, fail gracefully if we are registered for some reason.
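
Both variants serve the same discovery document; they differ only in whether the issuer is read from the static MSC3861 config or asked of the delegated auth object. A hypothetical standalone sketch of the response logic (the 404 body is an assumption, not copied from Synapse):

from typing import Optional, Tuple

def auth_issuer_response(msc3861_enabled: bool, issuer: Optional[str]) -> Tuple[int, dict]:
    # With MSC3861 enabled, return the configured issuer; otherwise the
    # servlet should not have been registered, so fail gracefully.
    if msc3861_enabled and issuer is not None:
        return 200, {"issuer": issuer}
    return 404, {"errcode": "M_UNRECOGNIZED", "error": "Not found"}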

View File

@@ -23,13 +23,10 @@
import logging
import re
from collections import Counter
from typing import TYPE_CHECKING, Any, Dict, Optional, Tuple, cast
from http import HTTPStatus
from typing import TYPE_CHECKING, Any, Dict, Optional, Tuple
from synapse.api.errors import (
InteractiveAuthIncompleteError,
InvalidAPICallError,
SynapseError,
)
from synapse.api.errors import Codes, InvalidAPICallError, SynapseError
from synapse.http.server import HttpServer
from synapse.http.servlet import (
RestServlet,
@@ -259,15 +256,9 @@ class KeyChangesServlet(RestServlet):
user_id = requester.user.to_string()
device_list_updates = await self.device_handler.get_user_ids_changed(
user_id, from_token
)
results = await self.device_handler.get_user_ids_changed(user_id, from_token)
response: JsonDict = {}
response["changed"] = list(device_list_updates.changed)
response["left"] = list(device_list_updates.left)
return 200, response
return 200, results
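
Both sides return the same wire format for /keys/changes; the diff only changes whether the handler's result is unpacked into explicit lists first. The response shape, as a standalone sketch:

from typing import Collection, Tuple

def key_changes_response(changed: Collection[str], left: Collection[str]) -> Tuple[int, dict]:
    # "changed": users whose device lists changed since the given token;
    # "left": users with whom we no longer share any rooms.
    return 200, {"changed": list(changed), "left": list(left)}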
class OneTimeKeyServlet(RestServlet):
@@ -406,36 +397,17 @@ class SigningKeyUploadServlet(RestServlet):
# explicitly mark the master key as replaceable.
if self.hs.config.experimental.msc3861.enabled:
if not master_key_updatable_without_uia:
# If MSC3861 is enabled, we can assume self.auth is an instance of MSC3861DelegatedAuth
# We import lazily here because of the authlib requirement
from synapse.api.auth.msc3861_delegated import MSC3861DelegatedAuth
auth = cast(MSC3861DelegatedAuth, self.auth)
uri = await auth.account_management_url()
if uri is not None:
url = f"{uri}?action=org.matrix.cross_signing_reset"
config = self.hs.config.experimental.msc3861
if config.account_management_url is not None:
url = f"{config.account_management_url}?action=org.matrix.cross_signing_reset"
else:
url = await auth.issuer()
url = config.issuer
# We use a dummy session ID as this isn't really a UIA flow, but we
# reuse the same API shape for better client compatibility.
raise InteractiveAuthIncompleteError(
"dummy",
{
"session": "dummy",
"flows": [
{"stages": ["org.matrix.cross_signing_reset"]},
],
"params": {
"org.matrix.cross_signing_reset": {
"url": url,
},
},
"msg": "To reset your end-to-end encryption cross-signing "
f"identity, you first need to approve it at {url} and "
"then try again.",
},
raise SynapseError(
HTTPStatus.NOT_IMPLEMENTED,
"To reset your end-to-end encryption cross-signing identity, "
f"you first need to approve it at {url} and then try again.",
Codes.UNRECOGNIZED,
)
else:
# Without MSC3861, we require UIA.

Some files were not shown because too many files have changed in this diff.