Compare commits


9 Commits

Author SHA1 Message Date
Erik Johnston
fb2fc478a0 Update synapse/storage/databases/main/events_worker.py
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-07-05 09:46:07 +01:00
Erik Johnston
5f1612eb06 Have _filter_results_by_stream handle null instance_name 2024-07-05 09:43:48 +01:00
Erik Johnston
d15d8ab882 Have _filter_results handle null instance_name 2024-07-05 09:36:32 +01:00
Erik Johnston
24444a4762 Lint 2024-07-04 17:25:24 +01:00
Erik Johnston
5a47fede6e Remove unused function 2024-07-04 17:21:41 +01:00
Erik Johnston
b1773077a7 Also fixup internal metadata 2024-07-04 17:21:41 +01:00
Erik Johnston
b2880b6d60 Update changelog.d/17398.bugfix
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-07-04 17:05:50 +01:00
Erik Johnston
c742b747d7 Newsfile 2024-07-04 15:42:42 +01:00
Erik Johnston
59314d5e41 Fix bug in sliding sync when using old DB.
We don't necessarily have `instance_name` for old events (before we
supported multiple event persisters). We treat those as if the
`instance_name` was "master".
2024-07-04 15:41:30 +01:00
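For illustration, a minimal sketch of the fallback this commit describes (a hypothetical helper, not Synapse's exact code; the real `_filter_results` logic takes token positions as well): a missing `instance_name` on an old event is treated as the "master" writer before any comparison.

```python
from typing import Optional

def normalize_instance_name(instance_name: Optional[str]) -> str:
    # Events persisted before multiple event persisters were supported
    # have no instance_name recorded; treat them as written by "master".
    return instance_name if instance_name is not None else "master"

# Old (null) and new rows then compare against stream tokens the same way:
assert normalize_instance_name(None) == "master"
assert normalize_instance_name("event_persister-1") == "event_persister-1"
```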
162 changed files with 3669 additions and 17518 deletions


@@ -30,7 +30,7 @@ jobs:
run: docker buildx inspect
- name: Install Cosign
uses: sigstore/cosign-installer@v3.6.0
uses: sigstore/cosign-installer@v3.5.0
- name: Checkout repository
uses: actions/checkout@v4


@@ -73,7 +73,7 @@ jobs:
- 'pyproject.toml'
- 'poetry.lock'
- '.github/workflows/tests.yml'
linting_readme:
- 'README.rst'
@@ -139,7 +139,7 @@ jobs:
- name: Semantic checks (ruff)
# --quiet suppresses the update check.
run: poetry run ruff check --quiet .
run: poetry run ruff --quiet .
lint-mypy:
runs-on: ubuntu-latest
@@ -305,7 +305,7 @@ jobs:
- lint-readme
runs-on: ubuntu-latest
steps:
- uses: matrix-org/done-action@v3
- uses: matrix-org/done-action@v2
with:
needs: ${{ toJSON(needs) }}
@@ -737,7 +737,7 @@ jobs:
- linting-done
runs-on: ubuntu-latest
steps:
- uses: matrix-org/done-action@v3
- uses: matrix-org/done-action@v2
with:
needs: ${{ toJSON(needs) }}


@@ -1,242 +1,3 @@
# Synapse 1.113.0 (2024-08-13)
No significant changes since 1.113.0rc1.
# Synapse 1.113.0rc1 (2024-08-06)
### Features
- Track which rooms have been sent to clients in the experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17447](https://github.com/element-hq/synapse/issues/17447))
- Add Account Data extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17477](https://github.com/element-hq/synapse/issues/17477))
- Add receipts extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17489](https://github.com/element-hq/synapse/issues/17489))
- Add typing notification extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17505](https://github.com/element-hq/synapse/issues/17505))
### Bugfixes
- Update experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint to handle invite/knock rooms when filtering. ([\#17450](https://github.com/element-hq/synapse/issues/17450))
- Fix a bug introduced in v1.110.0 which caused `/keys/query` to return incomplete results, leading to high network activity and CPU usage on Matrix clients. ([\#17499](https://github.com/element-hq/synapse/issues/17499))
### Improved Documentation
- Update the [`allowed_local_3pids`](https://element-hq.github.io/synapse/v1.112/usage/configuration/config_documentation.html#allowed_local_3pids) config option's msisdn address to a working example. ([\#17476](https://github.com/element-hq/synapse/issues/17476))
### Internal Changes
- Change sliding sync to use its own token format in preparation for storing per-connection state. ([\#17452](https://github.com/element-hq/synapse/issues/17452))
- Ensure we don't send down negative `bump_stamp` in experimental sliding sync endpoint. ([\#17478](https://github.com/element-hq/synapse/issues/17478))
- Do not send empty room entries down the experimental sliding sync endpoint. ([\#17479](https://github.com/element-hq/synapse/issues/17479))
- Refactor Sliding Sync tests to better utilize the `SlidingSyncBase`. ([\#17481](https://github.com/element-hq/synapse/issues/17481), [\#17482](https://github.com/element-hq/synapse/issues/17482))
- Add some opentracing tags and logging to the experimental sliding sync implementation. ([\#17501](https://github.com/element-hq/synapse/issues/17501))
- Split and move Sliding Sync tests so we have some more sane test file sizes. ([\#17504](https://github.com/element-hq/synapse/issues/17504))
- Update the `limited` field description in the Sliding Sync response to accurately describe what it actually represents. ([\#17507](https://github.com/element-hq/synapse/issues/17507))
- Easier to understand `timeline` assertions in Sliding Sync tests. ([\#17511](https://github.com/element-hq/synapse/issues/17511))
- Reset the sliding sync connection if we don't recognize the per-connection state position. ([\#17529](https://github.com/element-hq/synapse/issues/17529))
### Updates to locked dependencies
* Bump bcrypt from 4.1.3 to 4.2.0. ([\#17495](https://github.com/element-hq/synapse/issues/17495))
* Bump black from 24.4.2 to 24.8.0. ([\#17522](https://github.com/element-hq/synapse/issues/17522))
* Bump phonenumbers from 8.13.39 to 8.13.42. ([\#17521](https://github.com/element-hq/synapse/issues/17521))
* Bump ruff from 0.5.4 to 0.5.5. ([\#17494](https://github.com/element-hq/synapse/issues/17494))
* Bump serde_json from 1.0.120 to 1.0.121. ([\#17493](https://github.com/element-hq/synapse/issues/17493))
* Bump serde_json from 1.0.121 to 1.0.122. ([\#17525](https://github.com/element-hq/synapse/issues/17525))
* Bump towncrier from 23.11.0 to 24.7.1. ([\#17523](https://github.com/element-hq/synapse/issues/17523))
* Bump types-pyopenssl from 24.1.0.20240425 to 24.1.0.20240722. ([\#17496](https://github.com/element-hq/synapse/issues/17496))
* Bump types-setuptools from 70.1.0.20240627 to 71.1.0.20240726. ([\#17497](https://github.com/element-hq/synapse/issues/17497))
# Synapse 1.112.0 (2024-07-30)
This security release is to update our locked dependency on Twisted to 24.7.0rc1, which includes a security fix for [CVE-2024-41671 / GHSA-c8m8-j448-xjx7: Disordered HTTP pipeline response in twisted.web, again](https://github.com/twisted/twisted/security/advisories/GHSA-c8m8-j448-xjx7).
Note that this security fix is also available as **Synapse 1.111.1**, which does not include the rest of the changes in Synapse 1.112.0.
This issue means that, if multiple HTTP requests are pipelined in the same TCP connection, Synapse can send responses to the wrong HTTP request.
If a reverse proxy was configured to use HTTP pipelining, this could result in responses being sent to the wrong user, severely harming confidentiality.
With that said, despite being a high severity issue, **we consider it unlikely that Synapse installations will be affected**.
The use of HTTP pipelining in this fashion would cause worse performance for clients (request-response latencies would be increased as users' responses would be artificially blocked behind other users' slow requests). Further, Nginx and Haproxy, two common reverse proxies, do not appear to support configuring their upstreams to use HTTP pipelining and thus would not be affected. For both of these reasons, we consider it unlikely that a Synapse deployment would be set up in such a configuration.
Despite that, we cannot rule out that some installations may exist with this unusual setup and so we are releasing this security update today.
**pip users:** Note that by default, upgrading Synapse using pip will not automatically upgrade Twisted. **Please manually install the new version of Twisted** using `pip install Twisted==24.7.0rc1`. Note also that even the `--upgrade-strategy=eager` flag to `pip install -U matrix-synapse` will not upgrade Twisted to a patched version because it is only a release candidate at this time.
### Internal Changes
- Upgrade locked dependency on Twisted to 24.7.0rc1. ([\#17502](https://github.com/element-hq/synapse/issues/17502))
# Synapse 1.112.0rc1 (2024-07-23)
Please note that this release candidate does not include the security dependency update
included in version 1.111.1 as this version was released before 1.111.1.
The same security fix can be found in the full release of 1.112.0.
### Features
- Add to-device extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17416](https://github.com/element-hq/synapse/issues/17416))
- Populate `name`/`avatar` fields in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17418](https://github.com/element-hq/synapse/issues/17418))
- Populate `heroes` and room summary fields (`joined_count`, `invited_count`) in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17419](https://github.com/element-hq/synapse/issues/17419))
- Populate `is_dm` room field in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17429](https://github.com/element-hq/synapse/issues/17429))
- Add room subscriptions to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17432](https://github.com/element-hq/synapse/issues/17432))
- Prepare for authenticated media freeze. ([\#17433](https://github.com/element-hq/synapse/issues/17433))
- Add E2EE extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17454](https://github.com/element-hq/synapse/issues/17454))
### Bugfixes
- Add configurable option to always include offline users in presence sync results. Contributed by @Michael-Hollister. ([\#17231](https://github.com/element-hq/synapse/issues/17231))
- Fix bug in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint when using room type filters and the user has one or more remote invites. ([\#17434](https://github.com/element-hq/synapse/issues/17434))
- Order `heroes` by `stream_ordering` as the Matrix specification states (applies to `/sync`). ([\#17435](https://github.com/element-hq/synapse/issues/17435))
- Fix rare bug where `/sync` would break for a user when using workers with multiple stream writers. ([\#17438](https://github.com/element-hq/synapse/issues/17438))
### Improved Documentation
- Update the readme image to have a white background, so that it is readable in dark mode. ([\#17387](https://github.com/element-hq/synapse/issues/17387))
- Add Red Hat Enterprise Linux and Rocky Linux 8 and 9 installation instructions. ([\#17423](https://github.com/element-hq/synapse/issues/17423))
- Improve documentation for the [`default_power_level_content_override`](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#default_power_level_content_override) config option. ([\#17451](https://github.com/element-hq/synapse/issues/17451))
### Internal Changes
- Make sure we always use the right logic for enabling the media repo. ([\#17424](https://github.com/element-hq/synapse/issues/17424))
- Fix argument documentation for method `RateLimiter.record_action`. ([\#17426](https://github.com/element-hq/synapse/issues/17426))
- Reduce volume of 'Waiting for current token' logs, which were introduced in v1.109.0. ([\#17428](https://github.com/element-hq/synapse/issues/17428))
- Limit concurrent remote downloads to 6 per IP address, and decrement remote downloads without a content-length from the ratelimiter after the download is complete. ([\#17439](https://github.com/element-hq/synapse/issues/17439))
- Remove unnecessary call to resume producing in fake channel. ([\#17449](https://github.com/element-hq/synapse/issues/17449))
- Update experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint to bump room when it is created. ([\#17453](https://github.com/element-hq/synapse/issues/17453))
- Speed up generating sliding sync responses. ([\#17458](https://github.com/element-hq/synapse/issues/17458))
- Add cache to `get_rooms_for_local_user_where_membership_is` to speed up sliding sync. ([\#17460](https://github.com/element-hq/synapse/issues/17460))
- Speed up fetching room keys from backup. ([\#17461](https://github.com/element-hq/synapse/issues/17461))
- Speed up sorting of the room list in sliding sync. ([\#17468](https://github.com/element-hq/synapse/issues/17468))
- Implement handling of `$ME` as a state key in sliding sync. ([\#17469](https://github.com/element-hq/synapse/issues/17469))
### Updates to locked dependencies
* Bump bytes from 1.6.0 to 1.6.1. ([\#17441](https://github.com/element-hq/synapse/issues/17441))
* Bump hiredis from 2.3.2 to 3.0.0. ([\#17464](https://github.com/element-hq/synapse/issues/17464))
* Bump jsonschema from 4.22.0 to 4.23.0. ([\#17444](https://github.com/element-hq/synapse/issues/17444))
* Bump matrix-org/done-action from 2 to 3. ([\#17440](https://github.com/element-hq/synapse/issues/17440))
* Bump mypy from 1.9.0 to 1.10.1. ([\#17445](https://github.com/element-hq/synapse/issues/17445))
* Bump pyopenssl from 24.1.0 to 24.2.1. ([\#17465](https://github.com/element-hq/synapse/issues/17465))
* Bump ruff from 0.5.0 to 0.5.4. ([\#17466](https://github.com/element-hq/synapse/issues/17466))
* Bump sentry-sdk from 2.6.0 to 2.8.0. ([\#17456](https://github.com/element-hq/synapse/issues/17456))
* Bump sentry-sdk from 2.8.0 to 2.10.0. ([\#17467](https://github.com/element-hq/synapse/issues/17467))
* Bump setuptools from 67.6.0 to 70.0.0. ([\#17448](https://github.com/element-hq/synapse/issues/17448))
* Bump twine from 5.1.0 to 5.1.1. ([\#17443](https://github.com/element-hq/synapse/issues/17443))
* Bump types-jsonschema from 4.22.0.20240610 to 4.23.0.20240712. ([\#17446](https://github.com/element-hq/synapse/issues/17446))
* Bump ulid from 1.1.2 to 1.1.3. ([\#17442](https://github.com/element-hq/synapse/issues/17442))
* Bump zipp from 3.15.0 to 3.19.1. ([\#17427](https://github.com/element-hq/synapse/issues/17427))
# Synapse 1.111.1 (2024-07-30)
This security release is to update our locked dependency on Twisted to 24.7.0rc1, which includes a security fix for [CVE-2024-41671 / GHSA-c8m8-j448-xjx7: Disordered HTTP pipeline response in twisted.web, again](https://github.com/twisted/twisted/security/advisories/GHSA-c8m8-j448-xjx7).
This issue means that, if multiple HTTP requests are pipelined in the same TCP connection, Synapse can send responses to the wrong HTTP request.
If a reverse proxy was configured to use HTTP pipelining, this could result in responses being sent to the wrong user, severely harming confidentiality.
With that said, despite being a high severity issue, **we consider it unlikely that Synapse installations will be affected**.
The use of HTTP pipelining in this fashion would cause worse performance for clients (request-response latencies would be increased as users' responses would be artificially blocked behind other users' slow requests). Further, Nginx and Haproxy, two common reverse proxies, do not appear to support configuring their upstreams to use HTTP pipelining and thus would not be affected. For both of these reasons, we consider it unlikely that a Synapse deployment would be set up in such a configuration.
Despite that, we cannot rule out that some installations may exist with this unusual setup and so we are releasing this security update today.
**pip users:** Note that by default, upgrading Synapse using pip will not automatically upgrade Twisted. **Please manually install the new version of Twisted** using `pip install Twisted==24.7.0rc1`. Note also that even the `--upgrade-strategy=eager` flag to `pip install -U matrix-synapse` will not upgrade Twisted to a patched version because it is only a release candidate at this time.
### Internal Changes
- Upgrade locked dependency on Twisted to 24.7.0rc1. ([\#17502](https://github.com/element-hq/synapse/issues/17502))
# Synapse 1.111.0 (2024-07-16)
No significant changes since 1.111.0rc2.
# Synapse 1.111.0rc2 (2024-07-10)
### Bugfixes
- Fix bug where using `synapse.app.media_repository` worker configuration would break the new media endpoints. ([\#17420](https://github.com/element-hq/synapse/issues/17420))
### Improved Documentation
- Document the new federation media worker endpoints in the [upgrade notes](https://element-hq.github.io/synapse/v1.111/upgrade.html) and [worker docs](https://element-hq.github.io/synapse/v1.111/workers.html). ([\#17421](https://github.com/element-hq/synapse/issues/17421))
### Internal Changes
- Route authenticated federation media requests to media repository workers in Complement tests. ([\#17422](https://github.com/element-hq/synapse/issues/17422))
# Synapse 1.111.0rc1 (2024-07-09)
### Features
- Add `rooms` data to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17320](https://github.com/element-hq/synapse/issues/17320))
- Add `room_types`/`not_room_types` filtering to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17337](https://github.com/element-hq/synapse/issues/17337))
- Return "required state" in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17342](https://github.com/element-hq/synapse/issues/17342))
- Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/3916-authentication-for-media.md) by adding [`_matrix/client/v1/media/download`](https://spec.matrix.org/v1.11/client-server-api/#get_matrixclientv1mediadownloadservernamemediaid) endpoint. ([\#17365](https://github.com/element-hq/synapse/issues/17365))
- Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/rav/authentication-for-media/proposals/3916-authentication-for-media.md)
by adding [`_matrix/client/v1/media/thumbnail`](https://spec.matrix.org/v1.11/client-server-api/#get_matrixclientv1mediathumbnailservernamemediaid), [`_matrix/federation/v1/media/thumbnail`](https://spec.matrix.org/v1.11/server-server-api/#get_matrixfederationv1mediathumbnailmediaid) endpoints and stabilizing the
remaining [`_matrix/client/v1/media`](https://spec.matrix.org/v1.11/client-server-api/#get_matrixclientv1mediaconfig) endpoints. ([\#17388](https://github.com/element-hq/synapse/issues/17388))
- Add `rooms.bump_stamp` for easier client-side sorting in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17395](https://github.com/element-hq/synapse/issues/17395))
- Forget all of a user's rooms upon deactivation, preventing local room purges from being blocked on deactivated users. ([\#17400](https://github.com/element-hq/synapse/issues/17400))
- Declare support for [Matrix 1.11](https://matrix.org/blog/2024/06/20/matrix-v1.11-release/). ([\#17403](https://github.com/element-hq/synapse/issues/17403))
- [MSC3861](https://github.com/matrix-org/matrix-spec-proposals/pull/3861): allow overriding the introspection endpoint. ([\#17406](https://github.com/element-hq/synapse/issues/17406))
### Bugfixes
- Fix rare race which caused no new to-device messages to be received from remote server. ([\#17362](https://github.com/element-hq/synapse/issues/17362))
- Fix bug in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint when using an old database. ([\#17398](https://github.com/element-hq/synapse/issues/17398))
### Improved Documentation
- Clarify that `url_preview_url_blacklist` is a usability feature. ([\#17356](https://github.com/element-hq/synapse/issues/17356))
- Fix broken links in README. ([\#17379](https://github.com/element-hq/synapse/issues/17379))
- Clarify that changelog content *and file extension* need to match in order for entries to merge. ([\#17399](https://github.com/element-hq/synapse/issues/17399))
### Internal Changes
- Make the release script create a release branch for Complement as well. ([\#17318](https://github.com/element-hq/synapse/issues/17318))
- Fix uploading packages to PyPi. ([\#17363](https://github.com/element-hq/synapse/issues/17363))
- Add CI check for the README. ([\#17367](https://github.com/element-hq/synapse/issues/17367))
- Fix linting errors from new `ruff` version. ([\#17381](https://github.com/element-hq/synapse/issues/17381), [\#17411](https://github.com/element-hq/synapse/issues/17411))
- Fix building debian packages on non-clean checkouts. ([\#17390](https://github.com/element-hq/synapse/issues/17390))
- Finish up work to allow per-user feature flags. ([\#17392](https://github.com/element-hq/synapse/issues/17392), [\#17410](https://github.com/element-hq/synapse/issues/17410))
- Allow enabling sliding sync per-user. ([\#17393](https://github.com/element-hq/synapse/issues/17393))
### Updates to locked dependencies
* Bump certifi from 2023.7.22 to 2024.7.4. ([\#17404](https://github.com/element-hq/synapse/issues/17404))
* Bump cryptography from 42.0.7 to 42.0.8. ([\#17382](https://github.com/element-hq/synapse/issues/17382))
* Bump ijson from 3.2.3 to 3.3.0. ([\#17413](https://github.com/element-hq/synapse/issues/17413))
* Bump log from 0.4.21 to 0.4.22. ([\#17384](https://github.com/element-hq/synapse/issues/17384))
* Bump mypy-zope from 1.0.4 to 1.0.5. ([\#17414](https://github.com/element-hq/synapse/issues/17414))
* Bump pillow from 10.3.0 to 10.4.0. ([\#17412](https://github.com/element-hq/synapse/issues/17412))
* Bump pydantic from 2.7.1 to 2.8.2. ([\#17415](https://github.com/element-hq/synapse/issues/17415))
* Bump ruff from 0.3.7 to 0.5.0. ([\#17381](https://github.com/element-hq/synapse/issues/17381))
* Bump serde from 1.0.203 to 1.0.204. ([\#17409](https://github.com/element-hq/synapse/issues/17409))
* Bump serde_json from 1.0.117 to 1.0.120. ([\#17385](https://github.com/element-hq/synapse/issues/17385), [\#17408](https://github.com/element-hq/synapse/issues/17408))
* Bump types-setuptools from 69.5.0.20240423 to 70.1.0.20240627. ([\#17380](https://github.com/element-hq/synapse/issues/17380))
# Synapse 1.110.0 (2024-07-03)
No significant changes since 1.110.0rc3.
# Synapse 1.110.0rc3 (2024-07-02)
### Bugfixes
@@ -280,7 +41,7 @@ No significant changes since 1.110.0rc3.
This is useful for scripts that bootstrap user accounts with initial passwords. ([\#17304](https://github.com/element-hq/synapse/issues/17304))
- Add support for via query parameter from [MSC4156](https://github.com/matrix-org/matrix-spec-proposals/pull/4156). ([\#17322](https://github.com/element-hq/synapse/issues/17322))
- Add `is_invite` filtering to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17335](https://github.com/element-hq/synapse/issues/17335))
- Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/3916-authentication-for-media.md) by adding a federation /download endpoint. ([\#17350](https://github.com/element-hq/synapse/issues/17350))
- Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/rav/authentication-for-media/proposals/3916-authentication-for-media.md) by adding a federation /download endpoint. ([\#17350](https://github.com/element-hq/synapse/issues/17350))
### Bugfixes

Cargo.lock (generated): 25 lines changed

@@ -67,9 +67,9 @@ checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c"
[[package]]
name = "bytes"
version = "1.7.1"
version = "1.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8318a53db07bb3f8dca91a600466bdb3f2eaadeedfdbcf02e1accbad9271ba50"
checksum = "514de17de45fdb8dc022b1a7975556c53c86f9f0aa5f534b98977b171857c2c9"
[[package]]
name = "cfg-if"
@@ -444,9 +444,9 @@ dependencies = [
[[package]]
name = "regex"
version = "1.10.6"
version = "1.10.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4219d74c6b67a3654a9fbebc4b419e22126d13d2f3c4a07ee0cb61ff79a79619"
checksum = "b91213439dad192326a0d7c6ee3955910425f441d7038e0d6933b0aec5c4517f"
dependencies = [
"aho-corasick",
"memchr",
@@ -485,18 +485,18 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
[[package]]
name = "serde"
version = "1.0.206"
version = "1.0.203"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5b3e4cd94123dd520a128bcd11e34d9e9e423e7e3e50425cb1b4b1e3549d0284"
checksum = "7253ab4de971e72fb7be983802300c30b5a7f0c2e56fab8abfc6a214307c0094"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.206"
version = "1.0.203"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fabfb6138d2383ea8208cf98ccf69cdfb1aff4088460681d84189aa259762f97"
checksum = "500cbc0ebeb6f46627f50f3f5811ccf6bf00643be300b4c3eabc0ef55dc5b5ba"
dependencies = [
"proc-macro2",
"quote",
@@ -505,12 +505,11 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.124"
version = "1.0.119"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "66ad62847a56b3dba58cc891acd13884b9c61138d330c0d7b6181713d4fce38d"
checksum = "e8eddb61f0697cc3989c5d64b452f5488e2b8a60fd7d5076a3045076ffef8cb0"
dependencies = [
"itoa",
"memchr",
"ryu",
"serde",
]
@@ -598,9 +597,9 @@ checksum = "42ff0bf0c66b8238c6f3b578df37d0b7848e55df8577b3f74f92a69acceeb825"
[[package]]
name = "ulid"
version = "1.1.3"
version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "04f903f293d11f31c0c29e4148f6dc0d033a7f80cebc0282bea147611667d289"
checksum = "34778c17965aa2a08913b57e1f34db9b4a63f5de31768b55bf20d2795f921259"
dependencies = [
"getrandom",
"rand",


@@ -1,4 +1,4 @@
.. image:: ./docs/element_logo_white_bg.svg
.. image:: https://github.com/element-hq/product/assets/87339233/7abf477a-5277-47f3-be44-ea44917d8ed7
:height: 60px
**Element Synapse - Matrix homeserver implementation**
@@ -179,10 +179,10 @@ desired ``localpart`` in the 'User name' box.
-----------------------
Enterprise quality support for Synapse including SLAs is available as part of an
`Element Server Suite (ESS) <https://element.io/pricing>`_ subscription.
`Element Server Suite (ESS) <https://element.io/pricing>` subscription.
If you are an existing ESS subscriber then you can raise a `support request <https://ems.element.io/support>`_
and access the `knowledge base <https://ems-docs.element.io>`_.
If you are an existing ESS subscriber then you can raise a `support request <https://ems.element.io/support>`
and access the `knowledge base <https://ems-docs.element.io>`.
🤝 Community support
--------------------


@@ -1 +1 @@
Add more tracing to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.
Add `rooms` data to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.


@@ -0,0 +1 @@
Add `room_types`/`not_room_types` filtering to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

changelog.d/17356.doc (new file): 1 line

@@ -0,0 +1 @@
Clarify `url_preview_url_blacklist` is a usability feature.

changelog.d/17362.bugfix (new file): 1 line

@@ -0,0 +1 @@
Fix rare race which causes no new to-device messages to be received from remote server.

changelog.d/17363.misc (new file): 1 line

@@ -0,0 +1 @@
Fix uploading packages to PyPi.


@@ -0,0 +1 @@
Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/rav/authentication-for-media/proposals/3916-authentication-for-media.md) by adding _matrix/client/v1/media/download endpoint.

changelog.d/17367.misc (new file): 1 line

@@ -0,0 +1 @@
Add CI check for the README.

changelog.d/17390.misc (new file): 1 line

@@ -0,0 +1 @@
Fix building debian packages on non-clean checkouts.

changelog.d/17398.bugfix (new file): 1 line

@@ -0,0 +1 @@
Fix bug in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint when using an old database.


@@ -1 +0,0 @@
Start handlers for new media endpoints when media resource configured.


@@ -1 +0,0 @@
Fix timeline ordering (using `stream_ordering` instead of topological ordering) in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.


@@ -1,3 +0,0 @@
Clarify default behaviour of the
[`auto_accept_invites.worker_to_run_on`](https://element-hq.github.io/synapse/develop/usage/configuration/config_documentation.html#auto-accept-invites)
option.


@@ -1 +0,0 @@
Fixup comment in sliding sync implementation.


@@ -1 +0,0 @@
Fix experimental sliding sync implementation to remember any updates in rooms that were not sent down immediately.


@@ -1 +0,0 @@
Replace override of deprecated method `HTTPAdapter.get_connection` with `get_connection_with_tls_context`.


@@ -1 +0,0 @@
Fix performance of device lists in `/key/changes` and sliding sync.


@@ -1 +0,0 @@
Better exclude partially stated rooms if we must await full state in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.


@@ -1 +0,0 @@
Bump setuptools from 67.6.0 to 72.1.0.


@@ -1 +0,0 @@
Handle lower-case http headers in `_Mulitpart_Parser_Protocol`.


@@ -1 +0,0 @@
Add a utility function for generating random event IDs.


@@ -1 +0,0 @@
Speed up responding to media requests.


@@ -1 +0,0 @@
Improve docstrings for profile methods.


@@ -1 +0,0 @@
Speed up responding to media requests.


@@ -1 +0,0 @@
Reduce log spam of multipart files.


@@ -1 +0,0 @@
Speed up responding to media requests.


@@ -1 +0,0 @@
Speed up responding to media requests.


@@ -1 +0,0 @@
Speed up responding to media requests.


@@ -1 +0,0 @@
Add a flag to `/versions`, `org.matrix.simplified_msc3575`, to indicate whether experimental sliding sync support has been enabled.

debian/changelog (vendored): 54 lines changed

@@ -1,57 +1,3 @@
matrix-synapse-py3 (1.113.0) stable; urgency=medium
* New Synapse release 1.113.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 13 Aug 2024 14:36:56 +0100
matrix-synapse-py3 (1.113.0~rc1) stable; urgency=medium
* New Synapse release 1.113.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 06 Aug 2024 12:23:23 +0100
matrix-synapse-py3 (1.112.0) stable; urgency=medium
* New Synapse release 1.112.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 30 Jul 2024 17:15:48 +0100
matrix-synapse-py3 (1.112.0~rc1) stable; urgency=medium
* New Synapse release 1.112.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 23 Jul 2024 08:58:55 -0600
matrix-synapse-py3 (1.111.1) stable; urgency=medium
* New Synapse release 1.111.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 30 Jul 2024 16:13:52 +0100
matrix-synapse-py3 (1.111.0) stable; urgency=medium
* New Synapse release 1.111.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 16 Jul 2024 12:42:46 +0200
matrix-synapse-py3 (1.111.0~rc2) stable; urgency=medium
* New synapse release 1.111.0rc2.
-- Synapse Packaging team <packages@matrix.org> Wed, 10 Jul 2024 08:46:54 +0000
matrix-synapse-py3 (1.111.0~rc1) stable; urgency=medium
* New synapse release 1.111.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 09 Jul 2024 09:49:25 +0000
matrix-synapse-py3 (1.110.0) stable; urgency=medium
* New Synapse release 1.110.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 03 Jul 2024 09:08:59 -0600
matrix-synapse-py3 (1.110.0~rc3) stable; urgency=medium
* New Synapse release 1.110.0rc3.

debian/templates (vendored): 2 lines changed

@@ -5,7 +5,7 @@ _Description: Name of the server:
servers via federation. This is normally the public hostname of the
server running synapse, but can be different if you set up delegation.
Please refer to the delegation documentation in this case:
https://element-hq.github.io/synapse/latest/delegate.html.
https://github.com/element-hq/synapse/blob/master/docs/delegate.md.
Template: matrix-synapse/report-stats
Type: boolean


@@ -27,7 +27,7 @@ ARG PYTHON_VERSION=3.11
###
# We hardcode the use of Debian bookworm here because this could change upstream
# and other Dockerfiles used for testing are expecting bookworm.
FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm AS requirements
FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm as requirements
# RUN --mount is specific to buildkit and is documented at
# https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#build-mounts-run---mount.
@@ -87,7 +87,7 @@ RUN if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
###
### Stage 1: builder
###
FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm AS builder
FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm as builder
# install the OS build deps
RUN \


@@ -24,7 +24,7 @@ ARG distro=""
# https://launchpad.net/~jyrki-pulliainen/+archive/ubuntu/dh-virtualenv, but
# it's not obviously easier to use that than to build our own.)
FROM docker.io/library/${distro} AS builder
FROM docker.io/library/${distro} as builder
RUN apt-get update -qq -o Acquire::Languages=none
RUN env DEBIAN_FRONTEND=noninteractive apt-get install \


@@ -126,7 +126,6 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
"^/_synapse/admin/v1/media/.*$",
"^/_synapse/admin/v1/quarantine_media/.*$",
"^/_matrix/client/v1/media/.*$",
"^/_matrix/federation/v1/media/.*$",
],
# The first configured media worker will run the media background jobs
"shared_extra_conf": {


@@ -1,17 +1,21 @@
# Experimental Features API
This API allows a server administrator to enable or disable some experimental features on a per-user
basis. The currently supported features are:
- [MSC3881](https://github.com/matrix-org/matrix-spec-proposals/pull/3881): enable remotely toggling push notifications
for another client
- [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575): enable experimental sliding sync support
basis. The currently supported features are:
- [MSC3026](https://github.com/matrix-org/matrix-spec-proposals/pull/3026): busy
presence state enabled
- [MSC3881](https://github.com/matrix-org/matrix-spec-proposals/pull/3881): enable remotely toggling push notifications
for another client
- [MSC3967](https://github.com/matrix-org/matrix-spec-proposals/pull/3967): do not require
UIA when first uploading cross-signing keys.
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api/).
## Enabling/Disabling Features
This API allows a server administrator to enable experimental features for a given user. The request must
This API allows a server administrator to enable experimental features for a given user. The request must
provide a body containing the user id and listing the features to enable/disable in the following format:
```json
{
@@ -31,7 +35,7 @@ PUT /_synapse/admin/v1/experimental_features/<user_id>
```
## Listing Enabled Features
To list which features are enabled/disabled for a given user, send a request to the following API:
```
@@ -48,4 +52,4 @@ user like so:
"msc3967": false
}
}
```
```
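As a concrete illustration of the admin API documented above, here is a minimal sketch. The homeserver URL, admin token, and user ID are placeholders, and the request body is assumed to be a `features` map as in the full document (the JSON example is truncated in this diff).

```python
import requests

BASE_URL = "https://synapse.example.com"  # placeholder homeserver
ADMIN_TOKEN = "YOUR_ADMIN_ACCESS_TOKEN"   # placeholder admin access_token
USER_ID = "@alice:example.com"            # placeholder user

# PUT the features to enable/disable for this user.
resp = requests.put(
    f"{BASE_URL}/_synapse/admin/v1/experimental_features/{USER_ID}",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    json={"features": {"msc3881": True}},
)
resp.raise_for_status()

# GET the currently enabled/disabled features for the same user.
print(requests.get(
    f"{BASE_URL}/_synapse/admin/v1/experimental_features/{USER_ID}",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
).json())
```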


@@ -449,9 +449,9 @@ For example, a fix in PR #1234 would have its changelog entry in
> The security levels of Florbs are now validated when received
> via the `/federation/florb` endpoint. Contributed by Jane Matrix.
If there are multiple pull requests involved in a single bugfix/feature/etc, then the
content for each `changelog.d` file and file extension should be the same. Towncrier
will merge the matching files together into a single changelog entry when we come to
If there are multiple pull requests involved in a single bugfix/feature/etc,
then the content for each `changelog.d` file should be the same. Towncrier will
merge the matching files together into a single changelog entry when we come to
release.
### How do I know what to call the changelog file before I create the PR?


@@ -21,10 +21,8 @@ incrementing integer, but backfilled events start with `stream_ordering=-1` and
---
- Incremental `/sync?since=xxx` returns things in the order they arrive at the server
(`stream_ordering`).
- Initial `/sync`, `/messages` (and `/backfill` in the federation API) return them in
the order determined by the event graph `(topological_ordering, stream_ordering)`.
- `/sync` returns things in the order they arrive at the server (`stream_ordering`).
- `/messages` (and `/backfill` in the federation API) return them in the order determined by the event graph `(topological_ordering, stream_ordering)`.
The general idea is that, if you're following a room in real-time (i.e.
`/sync`), you probably want to see the messages as they arrive at your server,
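To make the two orderings concrete, a toy sketch (hypothetical event records, not Synapse's actual types) of the two sort keys described above:

```python
events = [
    {"id": "$a", "topological_ordering": 5, "stream_ordering": 10},
    {"id": "$b", "topological_ordering": 4, "stream_ordering": 12},  # backfilled late
    {"id": "$c", "topological_ordering": 5, "stream_ordering": 11},
]

# Incremental /sync: the order events arrived at this server.
sync_order = sorted(events, key=lambda e: e["stream_ordering"])

# /messages and /backfill: event-graph order, stream_ordering as tie-breaker.
graph_order = sorted(
    events, key=lambda e: (e["topological_ordering"], e["stream_ordering"])
)

print([e["id"] for e in sync_order])   # ['$a', '$c', '$b']
print([e["id"] for e in graph_order])  # ['$b', '$a', '$c']
```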


@@ -1,94 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="41.440346mm"
height="10.383124mm"
viewBox="0 0 41.440346 10.383125"
version="1.1"
id="svg1"
xml:space="preserve"
sodipodi:docname="element_logo_white_bg.svg"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"><sodipodi:namedview
id="namedview1"
pagecolor="#ffffff"
bordercolor="#000000"
borderopacity="0.25"
inkscape:showpageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
inkscape:deskcolor="#d1d1d1"
inkscape:document-units="mm"
showgrid="false"
inkscape:export-bgcolor="#ffffffff" /><defs
id="defs1" /><g
id="layer1"
transform="translate(-84.803844,-143.2075)"
inkscape:export-filename="element_logo_white_bg.svg"
inkscape:export-xdpi="96"
inkscape:export-ydpi="96"><g
style="fill:none"
id="g1"
transform="matrix(0.26458333,0,0,0.26458333,85.841658,144.26667)"><rect
style="display:inline;fill:#ffffff;fill-opacity:1;stroke:#ffffff;stroke-width:1.31041;stroke-dasharray:none;stroke-opacity:1"
id="rect20"
width="155.31451"
height="37.932892"
x="-3.2672384"
y="-3.3479743"
rx="3.3718522"
ry="3.7915266"
transform="translate(-2.1259843e-6)"
inkscape:label="rect20"
inkscape:export-filename="rect20.svg"
inkscape:export-xdpi="96"
inkscape:export-ydpi="96" /><path
fill-rule="evenodd"
clip-rule="evenodd"
d="M 16,32 C 24.8366,32 32,24.8366 32,16 32,7.16344 24.8366,0 16,0 7.16344,0 0,7.16344 0,16 0,24.8366 7.16344,32 16,32 Z"
fill="#0dbd8b"
id="path1" /><path
fill-rule="evenodd"
clip-rule="evenodd"
d="m 13.0756,7.455 c 0,-0.64584 0.5247,-1.1694 1.1719,-1.1694 4.3864,0 7.9423,3.54853 7.9423,7.9259 0,0.6458 -0.5246,1.1694 -1.1718,1.1694 -0.6472,0 -1.1719,-0.5236 -1.1719,-1.1694 0,-3.0857 -2.5066,-5.58711 -5.5986,-5.58711 -0.6472,0 -1.1719,-0.52355 -1.1719,-1.16939 z"
fill="#ffffff"
id="path2" /><path
fill-rule="evenodd"
clip-rule="evenodd"
d="m 24.5424,13.042 c 0.6472,0 1.1719,0.5235 1.1719,1.1694 0,4.3773 -3.5559,7.9258 -7.9424,7.9258 -0.6472,0 -1.1718,-0.5235 -1.1718,-1.1693 0,-0.6459 0.5246,-1.1694 1.1718,-1.1694 3.0921,0 5.5987,-2.5015 5.5987,-5.5871 0,-0.6459 0.5247,-1.1694 1.1718,-1.1694 z"
fill="#ffffff"
id="path3" /><path
fill-rule="evenodd"
clip-rule="evenodd"
d="m 18.9446,24.5446 c 0,0.6459 -0.5247,1.1694 -1.1718,1.1694 -4.3865,0 -7.94239,-3.5485 -7.94239,-7.9258 0,-0.6459 0.52469,-1.1694 1.17179,-1.1694 0.6472,0 1.1719,0.5235 1.1719,1.1694 0,3.0856 2.5066,5.587 5.5987,5.587 0.6471,0 1.1718,0.5236 1.1718,1.1694 z"
fill="#ffffff"
id="path4" /><path
fill-rule="evenodd"
clip-rule="evenodd"
d="m 7.45823,18.9576 c -0.64718,0 -1.17183,-0.5235 -1.17183,-1.1694 0,-4.3773 3.55591,-7.92581 7.9423,-7.92581 0.6472,0 1.1719,0.52351 1.1719,1.16941 0,0.6458 -0.5247,1.1694 -1.1719,1.1694 -3.092,0 -5.59864,2.5014 -5.59864,5.587 0,0.6459 -0.52465,1.1694 -1.17183,1.1694 z"
fill="#ffffff"
id="path5" /><path
d="M 56.2856,18.1428 H 44.9998 c 0.1334,1.181 0.5619,2.1238 1.2858,2.8286 0.7238,0.6857 1.6761,1.0286 2.8571,1.0286 0.7809,0 1.4857,-0.1905 2.1143,-0.5715 0.6286,-0.3809 1.0762,-0.8952 1.3428,-1.5428 h 3.4286 c -0.4571,1.5047 -1.3143,2.7238 -2.5714,3.6571 -1.2381,0.9143 -2.7048,1.3715 -4.4,1.3715 -2.2095,0 -4,-0.7334 -5.3714,-2.2 -1.3524,-1.4667 -2.0286,-3.3239 -2.0286,-5.5715 0,-2.1905 0.6857,-4.0285 2.0571,-5.5143 1.3715,-1.4857 3.1429,-2.22853 5.3143,-2.22853 2.1714,0 3.9238,0.73333 5.2572,2.20003 1.3523,1.4476 2.0285,3.2762 2.0285,5.4857 z m -7.2572,-5.9714 c -1.0667,0 -1.9524,0.3143 -2.6571,0.9429 -0.7048,0.6285 -1.1429,1.4666 -1.3143,2.5142 h 7.8857 c -0.1524,-1.0476 -0.5714,-1.8857 -1.2571,-2.5142 -0.6858,-0.6286 -1.5715,-0.9429 -2.6572,-0.9429 z"
fill="#000000"
id="path6" /><path
d="M 58.6539,20.1428 V 3.14282 h 3.4 V 20.2 c 0,0.7619 0.419,1.1428 1.2571,1.1428 l 0.6,-0.0285 v 3.2285 c -0.3238,0.0572 -0.6667,0.0857 -1.0286,0.0857 -1.4666,0 -2.5428,-0.3714 -3.2285,-1.1142 -0.6667,-0.7429 -1,-1.8667 -1,-3.3715 z"
fill="#000000"
id="path7" /><path
d="M 79.7454,18.1428 H 68.4597 c 0.1333,1.181 0.5619,2.1238 1.2857,2.8286 0.7238,0.6857 1.6762,1.0286 2.8571,1.0286 0.781,0 1.4857,-0.1905 2.1143,-0.5715 0.6286,-0.3809 1.0762,-0.8952 1.3429,-1.5428 h 3.4285 c -0.4571,1.5047 -1.3143,2.7238 -2.5714,3.6571 -1.2381,0.9143 -2.7048,1.3715 -4.4,1.3715 -2.2095,0 -4,-0.7334 -5.3714,-2.2 -1.3524,-1.4667 -2.0286,-3.3239 -2.0286,-5.5715 0,-2.1905 0.6857,-4.0285 2.0571,-5.5143 1.3715,-1.4857 3.1429,-2.22853 5.3143,-2.22853 2.1715,0 3.9238,0.73333 5.2572,2.20003 1.3524,1.4476 2.0285,3.2762 2.0285,5.4857 z m -7.2572,-5.9714 c -1.0666,0 -1.9524,0.3143 -2.6571,0.9429 -0.7048,0.6285 -1.1429,1.4666 -1.3143,2.5142 h 7.8857 c -0.1524,-1.0476 -0.5714,-1.8857 -1.2571,-2.5142 -0.6857,-0.6286 -1.5715,-0.9429 -2.6572,-0.9429 z"
fill="#000000"
id="path8" /><path
d="m 95.0851,16.0571 v 8.5143 h -3.4 v -8.8857 c 0,-2.2476 -0.9333,-3.3714 -2.8,-3.3714 -1.0095,0 -1.819,0.3238 -2.4286,0.9714 -0.5904,0.6476 -0.8857,1.5333 -0.8857,2.6571 v 8.6286 h -3.4 V 9.74282 h 3.1429 v 1.97148 c 0.3619,-0.6667 0.9143,-1.2191 1.6571,-1.6572 0.7429,-0.43809 1.6667,-0.65713 2.7714,-0.65713 2.0572,0 3.5429,0.78093 4.4572,2.34283 1.2571,-1.5619 2.9333,-2.34283 5.0286,-2.34283 1.733,0 3.067,0.54285 4,1.62853 0.933,1.0667 1.4,2.4762 1.4,4.2286 v 9.3143 h -3.4 v -8.8857 c 0,-2.2476 -0.933,-3.3714 -2.8,-3.3714 -1.0286,0 -1.8477,0.3333 -2.4572,1 -0.5905,0.6476 -0.8857,1.5619 -0.8857,2.7428 z"
fill="#000000"
id="path9" /><path
d="m 121.537,18.1428 h -11.286 c 0.133,1.181 0.562,2.1238 1.286,2.8286 0.723,0.6857 1.676,1.0286 2.857,1.0286 0.781,0 1.486,-0.1905 2.114,-0.5715 0.629,-0.3809 1.076,-0.8952 1.343,-1.5428 h 3.429 c -0.458,1.5047 -1.315,2.7238 -2.572,3.6571 -1.238,0.9143 -2.705,1.3715 -4.4,1.3715 -2.209,0 -4,-0.7334 -5.371,-2.2 -1.353,-1.4667 -2.029,-3.3239 -2.029,-5.5715 0,-2.1905 0.686,-4.0285 2.057,-5.5143 1.372,-1.4857 3.143,-2.22853 5.315,-2.22853 2.171,0 3.923,0.73333 5.257,2.20003 1.352,1.4476 2.028,3.2762 2.028,5.4857 z m -7.257,-5.9714 c -1.067,0 -1.953,0.3143 -2.658,0.9429 -0.704,0.6285 -1.142,1.4666 -1.314,2.5142 h 7.886 c -0.153,-1.0476 -0.572,-1.8857 -1.257,-2.5142 -0.686,-0.6286 -1.572,-0.9429 -2.657,-0.9429 z"
fill="#000000"
id="path10" /><path
d="m 127.105,9.74282 v 1.97148 c 0.343,-0.6477 0.905,-1.1905 1.686,-1.6286 0.8,-0.45716 1.762,-0.68573 2.885,-0.68573 1.753,0 3.105,0.53333 4.058,1.60003 0.971,1.0666 1.457,2.4857 1.457,4.2571 v 9.3143 h -3.4 v -8.8857 c 0,-1.0476 -0.248,-1.8667 -0.743,-2.4572 -0.476,-0.6095 -1.21,-0.9142 -2.2,-0.9142 -1.086,0 -1.943,0.3238 -2.572,0.9714 -0.609,0.6476 -0.914,1.5428 -0.914,2.6857 v 8.6 h -3.4 V 9.74282 Z"
fill="#000000"
id="path11" /><path
d="m 147.12,21.5428 v 2.9429 c -0.419,0.1143 -1.009,0.1714 -1.771,0.1714 -2.895,0 -4.343,-1.4571 -4.343,-4.3714 v -7.8286 h -2.257 V 9.74282 h 2.257 V 5.88568 h 3.4 v 3.85714 h 2.772 v 2.71428 h -2.772 v 7.4857 c 0,1.1619 0.552,1.7429 1.657,1.7429 z"
fill="#000000"
id="path12" /></g></g></svg>

(deleted image: docs/element_logo_white_bg.svg, 7.5 KiB)


@@ -67,7 +67,7 @@ in Synapse can be deactivated.
**NOTE**: This has an impact on security and is for testing purposes only!
To deactivate the certificate validation, the following setting must be added to
your [homeserver.yaml](../usage/configuration/homeserver_sample_config.md).
your [homserver.yaml](../usage/configuration/homeserver_sample_config.md).
```yaml
use_insecure_ssl_client_just_for_testing_do_not_use: true


@@ -309,62 +309,7 @@ sudo dnf install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
libwebp-devel libxml2-devel libxslt-devel libpq-devel \
python3-virtualenv libffi-devel openssl-devel python3-devel \
libicu-devel
sudo dnf group install "Development Tools"
```
##### Red Hat Enterprise Linux / Rocky Linux
*Note: The term "RHEL" below refers to both Red Hat Enterprise Linux and Rocky Linux. The distributions are 1:1 binary compatible.*
It's recommended to use the latest Python versions.
RHEL 8 in particular ships with Python 3.6 by default, which is EOL and therefore no longer supported by Synapse. RHEL 9 ships with Python 3.9, which is still supported by the Python core team as of this writing. However, newer Python versions provide significant performance improvements, and they're available in the official distributions' repositories. Therefore it's recommended to use them.
Python 3.11 and 3.12 are available for both RHEL 8 and 9.
These commands should be run as the root user.
RHEL 8
```bash
# Enable PowerTools repository
dnf config-manager --set-enabled powertools
```
RHEL 9
```bash
# Enable CodeReady Linux Builder repository
crb enable
```
Install a new version of Python. You only need one of these:
```bash
# Python 3.11
dnf install python3.11 python3.11-devel
```
```bash
# Python 3.12
dnf install python3.12 python3.12-devel
```
Finally, install the common prerequisites:
```bash
dnf install libicu libicu-devel libpq5 libpq5-devel lz4 pkgconf
dnf group install "Development Tools"
```
###### Using venv module instead of virtualenv command
It's recommended to use the Python venv module directly rather than the virtualenv command.
* On RHEL 9, virtualenv is only available on [EPEL](https://docs.fedoraproject.org/en-US/epel/).
* On RHEL 8, virtualenv is based on Python 3.6. It does not support creating 3.11/3.12 virtual environments.
Here's an example of creating a Python 3.12 virtual environment and installing Synapse from PyPI.
```bash
mkdir -p ~/synapse
# To use Python 3.11, simply use the command "python3.11" instead.
python3.12 -m venv ~/synapse/env
source ~/synapse/env/bin/activate
pip install --upgrade pip
pip install --upgrade setuptools
pip install matrix-synapse
sudo dnf groupinstall "Development Tools"
```
##### macOS


@@ -119,14 +119,13 @@ stacking them up. You can monitor the currently running background updates with
# Upgrading to v1.111.0
## New worker endpoints for authenticated client and federation media
## New worker endpoints for authenticated client media
[Media repository workers](./workers.md#synapseappmedia_repository) handling
Media APIs can now handle the following endpoint patterns:
Media APIs can now handle the following endpoint pattern:
```
^/_matrix/client/v1/media/.*$
^/_matrix/federation/v1/media/.*$
```
Please update your reverse proxy configuration.


@@ -246,7 +246,6 @@ Example configuration:
```yaml
presence:
enabled: false
include_offline_users_on_sync: false
```
`enabled` can also be set to a special value of "untracked" which ignores updates
@@ -255,10 +254,6 @@ received via clients and federation, while still accepting updates from the
*The "untracked" option was added in Synapse 1.96.0.*
When clients perform an initial or `full_state` sync, presence results for offline users are
not included by default. Setting `include_offline_users_on_sync` to `true` will always include
offline users in the results. Defaults to false.
---
### `require_auth_for_profile_requests`
@@ -1868,18 +1863,6 @@ federation_rr_transactions_per_room_per_second: 40
## Media Store
Config options related to Synapse's media store.
---
### `enable_authenticated_media`
When set to true, all subsequent media uploads will be marked as authenticated, and will not be available over legacy
unauthenticated media endpoints (`/_matrix/media/(r0|v3|v1)/download` and `/_matrix/media/(r0|v3|v1)/thumbnail`) - requests for authenticated media over these endpoints will result in a 404. All media, including authenticated media, will be available over the authenticated media endpoints `_matrix/client/v1/media/download` and `_matrix/client/v1/media/thumbnail`. Media uploaded prior to setting this option to true will still be available over the legacy endpoints. Note if the setting is switched to false
after enabling, media marked as authenticated will be available over legacy endpoints. Defaults to false, but
this will change to true in a future Synapse release.
Example configuration:
```yaml
enable_authenticated_media: true
```
---
### `enable_media_repo`
@@ -2386,7 +2369,7 @@ enable_registration_without_verification: true
---
### `registrations_require_3pid`
If this is set, users must provide all of the specified types of [3PID](https://spec.matrix.org/latest/appendices/#3pid-types) when registering an account.
If this is set, users must provide all of the specified types of 3PID when registering an account.
Note that [`enable_registration`](#enable_registration) must also be set to allow account registration.
@@ -2411,9 +2394,6 @@ disable_msisdn_registration: true
Mandate that users are only allowed to associate certain formats of
3PIDs with accounts on this server, as specified by the `medium` and `pattern` sub-options.
`pattern` is a [Perl-like regular expression](https://docs.python.org/3/library/re.html#module-re).
More information about 3PIDs, allowed `medium` types and their `address` syntax can be found [in the Matrix spec](https://spec.matrix.org/latest/appendices/#3pid-types).
Example configuration:
```yaml
@@ -2423,7 +2403,7 @@ allowed_local_3pids:
- medium: email
pattern: '^[^@]+@vector\.im$'
- medium: msisdn
pattern: '^44\d{10}$'
pattern: '\+44'
```
---
### `enable_3pid_lookup`
@@ -4154,38 +4134,6 @@ default_power_level_content_override:
trusted_private_chat: null
public_chat: null
```
The default power levels for each preset are:
```yaml
"m.room.name": 50
"m.room.power_levels": 100
"m.room.history_visibility": 100
"m.room.canonical_alias": 50
"m.room.avatar": 50
"m.room.tombstone": 100
"m.room.server_acl": 100
"m.room.encryption": 100
```
So a complete example where the default power-levels for a preset are maintained
but the power level for a new key is set is:
```yaml
default_power_level_content_override:
private_chat:
events:
"com.example.foo": 0
"m.room.name": 50
"m.room.power_levels": 100
"m.room.history_visibility": 100
"m.room.canonical_alias": 50
"m.room.avatar": 50
"m.room.tombstone": 100
"m.room.server_acl": 100
"m.room.encryption": 100
trusted_private_chat: null
public_chat: null
```
---
### `forget_rooms_on_leave`
@@ -4685,9 +4633,7 @@ This setting has the following sub-options:
* `only_for_direct_messages`: Whether invites should be automatically accepted for all room types, or only
for direct messages. Defaults to false.
* `only_from_local_users`: Whether to only automatically accept invites from users on this homeserver. Defaults to false.
* `worker_to_run_on`: Which worker to run this module on. This must match
the "worker_name". If not set or `null`, invites will be accepted on the
main process.
* `worker_to_run_on`: Which worker to run this module on. This must match the "worker_name".
NOTE: Care should be taken not to enable this setting if the `synapse_auto_accept_invite` module is enabled and installed.
The two modules will compete to perform the same task and may result in undesired behaviour. For example, multiple join


@@ -740,7 +740,6 @@ Handles the media repository. It can handle all endpoints starting with:
/_matrix/media/
/_matrix/client/v1/media/
/_matrix/federation/v1/media/
... and the following regular expressions matching media-specific administration APIs:

poetry.lock (generated): 1402 lines changed; diff suppressed because it is too large


@@ -43,7 +43,6 @@ target-version = ['py38', 'py39', 'py310', 'py311']
[tool.ruff]
line-length = 88
[tool.ruff.lint]
# See https://beta.ruff.rs/docs/rules/#error-e
# for error codes. The ones we ignore are:
# E501: Line too long (black enforces this for us)
@@ -97,7 +96,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.113.0"
version = "1.110.0rc3"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"
@@ -201,8 +200,8 @@ netaddr = ">=0.7.18"
# add a lower bound to the Jinja2 dependency.
Jinja2 = ">=3.0"
bleach = ">=1.4.3"
# We use `assert_never`, which was added in `typing-extensions` 4.1.
typing-extensions = ">=4.1"
# We use `Self`, which was added in `typing-extensions` 4.0.
typing-extensions = ">=4.0"
# We enforce that we have a `cryptography` version that bundles an `openssl`
# with the latest security patches.
cryptography = ">=3.4.7"
@@ -322,7 +321,7 @@ all = [
# This helps prevents merge conflicts when running a batch of dependabot updates.
isort = ">=5.10.1"
black = ">=22.7.0"
ruff = "0.5.5"
ruff = "0.3.7"
# Type checking only works with the pydantic.v1 compat module from pydantic v2
pydantic = "^2"


@@ -43,7 +43,7 @@ import argparse
import base64
import json
import sys
from typing import Any, Dict, Mapping, Optional, Tuple, Union
from typing import Any, Dict, Optional, Tuple
from urllib import parse as urlparse
import requests
@@ -75,7 +75,7 @@ def encode_canonical_json(value: object) -> bytes:
value,
# Encode code-points outside of ASCII as UTF-8 rather than \u escapes
ensure_ascii=False,
# Remove unnecessary white space.
# Remove unecessary white space.
separators=(",", ":"),
# Sort the keys of dictionaries.
sort_keys=True,
@@ -298,23 +298,12 @@ class MatrixConnectionAdapter(HTTPAdapter):
return super().send(request, *args, **kwargs)
def get_connection_with_tls_context(
self,
request: PreparedRequest,
verify: Optional[Union[bool, str]],
proxies: Optional[Mapping[str, str]] = None,
cert: Optional[Union[Tuple[str, str], str]] = None,
def get_connection(
self, url: str, proxies: Optional[Dict[str, str]] = None
) -> HTTPConnectionPool:
# overrides the get_connection_with_tls_context() method in the base class
parsed = urlparse.urlsplit(request.url)
# Extract the server name from the request URL, and ensure it's a str.
hostname = parsed.netloc
if isinstance(hostname, bytes):
hostname = hostname.decode("utf-8")
assert isinstance(hostname, str)
(host, port, ssl_server_name) = self._lookup(hostname)
# overrides the get_connection() method in the base class
parsed = urlparse.urlsplit(url)
(host, port, ssl_server_name) = self._lookup(parsed.netloc)
print(
f"Connecting to {host}:{port} with SNI {ssl_server_name}", file=sys.stderr
)


@@ -112,7 +112,7 @@ python3 -m black "${files[@]}"
# Catch any common programming mistakes in Python code.
# --quiet suppresses the update check.
ruff check --quiet --fix "${files[@]}"
ruff --quiet --fix "${files[@]}"
# Catch any common programming mistakes in Rust code.
#


@@ -70,7 +70,6 @@ def cli() -> None:
pip install -e .[dev]
- A checkout of the sytest repository at ../sytest
- A checkout of the complement repository at ../complement
Then to use:
@@ -113,12 +112,10 @@ def _prepare() -> None:
# Make sure we're in a git repo.
synapse_repo = get_repo_and_check_clean_checkout()
sytest_repo = get_repo_and_check_clean_checkout("../sytest", "sytest")
complement_repo = get_repo_and_check_clean_checkout("../complement", "complement")
click.secho("Updating Synapse and Sytest git repos...")
synapse_repo.remote().fetch()
sytest_repo.remote().fetch()
complement_repo.remote().fetch()
# Get the current version and AST from root Synapse module.
current_version = get_package_version()
@@ -211,15 +208,7 @@ def _prepare() -> None:
"Which branch should the release be based on?", default=default
)
for repo_name, repo in {
"synapse": synapse_repo,
"sytest": sytest_repo,
"complement": complement_repo,
}.items():
# Special case for Complement: `develop` maps to `main`
if repo_name == "complement" and branch_name == "develop":
branch_name = "main"
for repo_name, repo in {"synapse": synapse_repo, "sytest": sytest_repo}.items():
base_branch = find_ref(repo, branch_name)
if not base_branch:
print(f"Could not find base branch {branch_name} for {repo_name}!")
@@ -242,12 +231,6 @@ def _prepare() -> None:
if click.confirm("Push new SyTest branch?", default=True):
sytest_repo.git.push("-u", sytest_repo.remote().name, release_branch_name)
# Same for Complement
if click.confirm("Push new Complement branch?", default=True):
complement_repo.git.push(
"-u", complement_repo.remote().name, release_branch_name
)
# Switch to the release branch and ensure it's up to date.
synapse_repo.git.checkout(release_branch_name)
update_branch(synapse_repo)
@@ -647,9 +630,6 @@ def _merge_back() -> None:
else:
# Full release
sytest_repo = get_repo_and_check_clean_checkout("../sytest", "sytest")
complement_repo = get_repo_and_check_clean_checkout(
"../complement", "complement"
)
if click.confirm(f"Merge {branch_name} → master?", default=True):
_merge_into(synapse_repo, branch_name, "master")
@@ -663,9 +643,6 @@ def _merge_back() -> None:
if click.confirm("On SyTest, merge master → develop?", default=True):
_merge_into(sytest_repo, "master", "develop")
if click.confirm(f"On Complement, merge {branch_name} → main?", default=True):
_merge_into(complement_repo, branch_name, "main")
@cli.command()
def announce() -> None:

View File

@@ -44,7 +44,7 @@ logger = logging.getLogger("generate_workers_map")
class MockHomeserver(HomeServer):
DATASTORE_CLASS = DataStore
DATASTORE_CLASS = DataStore # type: ignore
def __init__(self, config: HomeServerConfig, worker_app: Optional[str]) -> None:
super().__init__(config.server.server_name, config=config)

View File

@@ -119,19 +119,18 @@ BOOLEAN_COLUMNS = {
"e2e_room_keys": ["is_verified"],
"event_edges": ["is_state"],
"events": ["processed", "outlier", "contains_url"],
"local_media_repository": ["safe_from_quarantine", "authenticated"],
"per_user_experimental_features": ["enabled"],
"local_media_repository": ["safe_from_quarantine"],
"presence_list": ["accepted"],
"presence_stream": ["currently_active"],
"public_room_list_stream": ["visibility"],
"pushers": ["enabled"],
"redactions": ["have_censored"],
"remote_media_cache": ["authenticated"],
"room_stats_state": ["is_federatable"],
"rooms": ["is_public", "has_auth_chain_index"],
"users": ["shadow_banned", "approved", "locked", "suspended"],
"un_partial_stated_event_stream": ["rejection_status_changed"],
"users_who_share_rooms": ["share_private"],
"per_user_experimental_features": ["enabled"],
}
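
As an aside on how a mapping like this is consumed: when porting from SQLite (where booleans are stored as 0/1 integers) to Postgres, each listed column must be cast to a real boolean. A minimal sketch of that conversion, with a hypothetical helper name and simplified row shape (not the port script's actual code):

from typing import Any, Dict, List

BOOLEAN_COLUMNS: Dict[str, List[str]] = {
    "events": ["processed", "outlier", "contains_url"],
}

def convert_row(table: str, headers: List[str], row: List[Any]) -> List[Any]:
    # Cast 0/1 integers to bool for the columns listed above; NULLs and
    # every other column pass through unchanged.
    bool_cols = set(BOOLEAN_COLUMNS.get(table, []))
    return [
        bool(value) if header in bool_cols and value is not None else value
        for header, value in zip(headers, row)
    ]

assert convert_row("events", ["processed", "event_id"], [1, "$abc"]) == [True, "$abc"]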

View File

@@ -41,7 +41,7 @@ logger = logging.getLogger("update_database")
class MockHomeserver(HomeServer):
DATASTORE_CLASS = DataStore
DATASTORE_CLASS = DataStore # type: ignore [assignment]
def __init__(self, config: HomeServerConfig):
super().__init__(

View File

@@ -18,7 +18,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
from typing import TYPE_CHECKING, Optional, Tuple
from typing import Optional, Tuple
from typing_extensions import Protocol
@@ -28,9 +28,6 @@ from synapse.appservice import ApplicationService
from synapse.http.site import SynapseRequest
from synapse.types import Requester
if TYPE_CHECKING:
from synapse.rest.admin.experimental_features import ExperimentalFeature
# guests always get this device id.
GUEST_DEVICE_ID = "guest_device"
@@ -90,19 +87,6 @@ class Auth(Protocol):
AuthError if access is denied for the user in the access token
"""
async def get_user_by_req_experimental_feature(
self,
request: SynapseRequest,
feature: "ExperimentalFeature",
allow_guest: bool = False,
allow_expired: bool = False,
allow_locked: bool = False,
) -> Requester:
"""Like `get_user_by_req`, except also checks if the user has access to
the experimental feature. If they don't, returns a 404 unrecognized
request.
"""
async def validate_appservice_can_control_user_id(
self, app_service: ApplicationService, user_id: str
) -> None:

View File

@@ -28,7 +28,6 @@ from synapse.api.errors import (
Codes,
InvalidClientTokenError,
MissingClientTokenError,
UnrecognizedRequestError,
)
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import active_span, force_tracing, start_active_span
@@ -39,10 +38,8 @@ from . import GUEST_DEVICE_ID
from .base import BaseAuth
if TYPE_CHECKING:
from synapse.rest.admin.experimental_features import ExperimentalFeature
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@@ -109,32 +106,6 @@ class InternalAuth(BaseAuth):
parent_span.set_tag("appservice_id", requester.app_service.id)
return requester
async def get_user_by_req_experimental_feature(
self,
request: SynapseRequest,
feature: "ExperimentalFeature",
allow_guest: bool = False,
allow_expired: bool = False,
allow_locked: bool = False,
) -> Requester:
try:
requester = await self.get_user_by_req(
request,
allow_guest=allow_guest,
allow_expired=allow_expired,
allow_locked=allow_locked,
)
if await self.store.is_feature_enabled(requester.user.to_string(), feature):
return requester
raise UnrecognizedRequestError(code=404)
except (AuthError, InvalidClientTokenError):
if feature.is_globally_enabled(self.hs.config):
# If it's globally enabled then return the auth error
raise
raise UnrecognizedRequestError(code=404)
@cancellable
async def _wrapped_get_user_by_req(
self,

View File

@@ -40,7 +40,6 @@ from synapse.api.errors import (
OAuthInsufficientScopeError,
StoreError,
SynapseError,
UnrecognizedRequestError,
)
from synapse.http.site import SynapseRequest
from synapse.logging.context import make_deferred_yieldable
@@ -49,7 +48,6 @@ from synapse.util import json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
if TYPE_CHECKING:
from synapse.rest.admin.experimental_features import ExperimentalFeature
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@@ -145,18 +143,6 @@ class MSC3861DelegatedAuth(BaseAuth):
# metadata.validate_introspection_endpoint()
return metadata
async def _introspection_endpoint(self) -> str:
"""
Returns the introspection endpoint of the issuer.
It uses the config option if set, otherwise it will use OIDC discovery to get it.
"""
if self._config.introspection_endpoint is not None:
return self._config.introspection_endpoint
metadata = await self._load_metadata()
return metadata.get("introspection_endpoint")
async def _introspect_token(self, token: str) -> IntrospectionToken:
"""
Send a token to the introspection endpoint and return the introspection response
@@ -173,7 +159,8 @@ class MSC3861DelegatedAuth(BaseAuth):
Returns:
The introspection response
"""
introspection_endpoint = await self._introspection_endpoint()
metadata = await self._issuer_metadata.get()
introspection_endpoint = metadata.get("introspection_endpoint")
raw_headers: Dict[str, str] = {
"Content-Type": "application/x-www-form-urlencoded",
"User-Agent": str(self._http_client.user_agent, "utf-8"),
@@ -258,32 +245,6 @@ class MSC3861DelegatedAuth(BaseAuth):
return requester
async def get_user_by_req_experimental_feature(
self,
request: SynapseRequest,
feature: "ExperimentalFeature",
allow_guest: bool = False,
allow_expired: bool = False,
allow_locked: bool = False,
) -> Requester:
try:
requester = await self.get_user_by_req(
request,
allow_guest=allow_guest,
allow_expired=allow_expired,
allow_locked=allow_locked,
)
if await self.store.is_feature_enabled(requester.user.to_string(), feature):
return requester
raise UnrecognizedRequestError(code=404)
except (AuthError, InvalidClientTokenError):
if feature.is_globally_enabled(self.hs.config):
# If it's globally enabled then return the auth error
raise
raise UnrecognizedRequestError(code=404)
async def get_user_by_access_token(
self,
token: str,

View File

@@ -50,7 +50,7 @@ class Membership:
KNOCK: Final = "knock"
LEAVE: Final = "leave"
BAN: Final = "ban"
LIST: Final = frozenset((INVITE, JOIN, KNOCK, LEAVE, BAN))
LIST: Final = {INVITE, JOIN, KNOCK, LEAVE, BAN}
class PresenceState:
@@ -128,13 +128,9 @@ class EventTypes:
SpaceParent: Final = "m.space.parent"
Reaction: Final = "m.reaction"
Sticker: Final = "m.sticker"
LiveLocationShareStart: Final = "m.beacon_info"
CallInvite: Final = "m.call.invite"
PollStart: Final = "m.poll.start"
class ToDeviceEventTypes:
RoomKeyRequest: Final = "m.room_key_request"
@@ -225,11 +221,6 @@ class EventContentFields:
# This is deprecated in MSC2175.
ROOM_CREATOR: Final = "creator"
# The version of the room for `m.room.create` events.
ROOM_VERSION: Final = "room_version"
ROOM_NAME: Final = "name"
# Used in m.room.guest_access events.
GUEST_ACCESS: Final = "guest_access"
@@ -242,9 +233,6 @@ class EventContentFields:
# an unspecced field added to to-device messages to identify them uniquely-ish
TO_DEVICE_MSGID: Final = "org.matrix.msgid"
# `m.room.encryption` algorithm field
ENCRYPTION_ALGORITHM: Final = "algorithm"
class EventUnsignedContentFields:
"""Fields found inside the 'unsigned' data on events"""

View File

@@ -128,10 +128,6 @@ class Codes(str, Enum):
# MSC2677
DUPLICATE_ANNOTATION = "M_DUPLICATE_ANNOTATION"
# MSC3575 we are telling the client they need to expire their sliding sync
# connection.
UNKNOWN_POS = "M_UNKNOWN_POS"
class CodeMessageException(RuntimeError):
"""An exception with integer code, a message string attributes and optional headers.
@@ -851,17 +847,3 @@ class PartialStateConflictError(SynapseError):
msg=PartialStateConflictError.message(),
errcode=Codes.UNKNOWN,
)
class SlidingSyncUnknownPosition(SynapseError):
"""An error that Synapse can return to signal to the client to expire their
sliding sync connection (i.e. send a new request without a `?since=`
param).
"""
def __init__(self) -> None:
super().__init__(
HTTPStatus.BAD_REQUEST,
msg="Unknown position",
errcode=Codes.UNKNOWN_POS,
)
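
The docstring above spells out the contract: on M_UNKNOWN_POS the client should throw away its stored position and reconnect from scratch. A hedged sketch of the client side (endpoint shape and names are illustrative assumptions, not a real client):

from typing import Optional
import requests

def sliding_sync(base_url: str, token: str, since: Optional[str]) -> dict:
    params = {} if since is None else {"since": since}
    resp = requests.post(
        f"{base_url}/sync",
        params=params,
        headers={"Authorization": f"Bearer {token}"},
    )
    body = resp.json()
    if resp.status_code == 400 and body.get("errcode") == "M_UNKNOWN_POS":
        # Server expired our sliding sync connection: retry without `since`.
        return sliding_sync(base_url, token, since=None)
    return body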

View File

@@ -236,8 +236,9 @@ class Ratelimiter:
requester: The requester that is doing the action, if any.
key: An arbitrary key used to classify an action. Defaults to the
requester's user ID.
n_actions: The number of times the user performed the action. May be negative
to "refund" the rate limit.
n_actions: The number of times the user wants to do this action. If the user
cannot do all of the actions, the user's action count is not incremented
at all.
_time_now_s: The current time. Optional, defaults to the current time according
to self.clock. Only used by tests.
"""

View File

@@ -110,7 +110,7 @@ class AdminCmdStore(
class AdminCmdServer(HomeServer):
DATASTORE_CLASS = AdminCmdStore
DATASTORE_CLASS = AdminCmdStore # type: ignore
async def export_data_command(hs: HomeServer, args: argparse.Namespace) -> None:

View File

@@ -74,9 +74,6 @@ from synapse.storage.databases.main.event_push_actions import (
EventPushActionsWorkerStore,
)
from synapse.storage.databases.main.events_worker import EventsWorkerStore
from synapse.storage.databases.main.experimental_features import (
ExperimentalFeaturesStore,
)
from synapse.storage.databases.main.filtering import FilteringWorkerStore
from synapse.storage.databases.main.keys import KeyStore
from synapse.storage.databases.main.lock import LockStore
@@ -158,7 +155,6 @@ class GenericWorkerStore(
LockStore,
SessionStore,
TaskSchedulerWorkerStore,
ExperimentalFeaturesStore,
):
# Properties that multiple storage classes define. Tell mypy what the
# expected type is.
@@ -167,7 +163,7 @@ class GenericWorkerStore(
class GenericWorkerServer(HomeServer):
DATASTORE_CLASS = GenericWorkerStore
DATASTORE_CLASS = GenericWorkerStore # type: ignore
def _listen_http(self, listener_config: ListenerConfig) -> None:
assert listener_config.http_options is not None
@@ -206,21 +202,6 @@ class GenericWorkerServer(HomeServer):
"/_synapse/admin": admin_resource,
}
)
if "federation" not in res.names:
# Only load the federation media resource separately if federation
# resource is not specified since federation resource includes media
# resource.
resources[FEDERATION_PREFIX] = TransportLayerServer(
self, servlet_groups=["media"]
)
if "client" not in res.names:
# Only load the client media resource separately if client
# resource is not specified since client resource includes media
# resource.
resources[CLIENT_API_PREFIX] = ClientRestResource(
self, servlet_groups=["media"]
)
else:
logger.warning(
"A 'media' listener is configured but the media"

View File

@@ -81,7 +81,7 @@ def gz_wrap(r: Resource) -> Resource:
class SynapseHomeServer(HomeServer):
DATASTORE_CLASS = DataStore
DATASTORE_CLASS = DataStore # type: ignore
def _listener_http(
self,
@@ -101,12 +101,6 @@ class SynapseHomeServer(HomeServer):
# Skip loading openid resource if federation is defined
# since federation resource will include openid
continue
if name == "media" and (
"federation" in res.names or "client" in res.names
):
# Skip loading media resource if federation or client are defined
# since federation & client resources will include media
continue
if name == "health":
# Skip loading, health resource is always included
continue
@@ -223,7 +217,7 @@ class SynapseHomeServer(HomeServer):
)
if name in ["media", "federation", "client"]:
if self.config.media.can_load_media_repo:
if self.config.server.enable_media_repo:
media_repo = self.get_media_repository_resource()
resources.update(
{
@@ -237,14 +231,6 @@ class SynapseHomeServer(HomeServer):
"'media' resource conflicts with enable_media_repo=False"
)
if name == "media":
resources[FEDERATION_PREFIX] = TransportLayerServer(
self, servlet_groups=["media"]
)
resources[CLIENT_API_PREFIX] = ClientRestResource(
self, servlet_groups=["media"]
)
if name in ["keys", "federation"]:
resources[SERVER_KEY_PREFIX] = KeyResource(self)

View File

@@ -140,12 +140,6 @@ class MSC3861:
("experimental", "msc3861", "client_auth_method"),
)
introspection_endpoint: Optional[str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(str)),
)
"""The URL of the introspection endpoint used to validate access tokens."""
account_management_url: Optional[str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(str)),
@@ -443,6 +437,10 @@ class ExperimentalConfig(Config):
"msc3823_account_suspension", False
)
self.msc3916_authenticated_media_enabled = experimental.get(
"msc3916_authenticated_media_enabled", False
)
# MSC4151: Report room API (Client-Server API)
self.msc4151_enabled: bool = experimental.get("msc4151_enabled", False)

View File

@@ -126,7 +126,7 @@ class ContentRepositoryConfig(Config):
# Only enable the media repo if either the media repo is enabled or the
# current worker app is the media repo.
if (
config.get("enable_media_repo", True) is False
self.root.server.enable_media_repo is False
and config.get("worker_app") != "synapse.app.media_repository"
):
self.can_load_media_repo = False
@@ -272,10 +272,6 @@ class ContentRepositoryConfig(Config):
remote_media_lifetime
)
self.enable_authenticated_media = config.get(
"enable_authenticated_media", False
)
def generate_config_section(self, data_dir_path: str, **kwargs: Any) -> str:
assert data_dir_path is not None
media_store = os.path.join(data_dir_path, "media_store")

View File

@@ -384,11 +384,6 @@ class ServerConfig(Config):
# Whether to internally track presence, requires that presence is enabled,
self.track_presence = self.presence_enabled and presence_enabled != "untracked"
# Determines if presence results for offline users are included on initial/full sync
self.presence_include_offline_users_on_sync = presence_config.get(
"include_offline_users_on_sync", False
)
# Custom presence router module
# This is the legacy way of configuring it (the config should now be put in the modules section)
self.presence_router_module_class = None
@@ -400,6 +395,12 @@ class ServerConfig(Config):
self.presence_router_config,
) = load_module(presence_router_config, ("presence", "presence_router"))
# whether to enable the media repository endpoints. This should be set
# to false if the media repository is running as a separate endpoint;
# doing so ensures that we will not run cache cleanup jobs on the
# master, potentially causing inconsistency.
self.enable_media_repo = config.get("enable_media_repo", True)
# Whether to require authentication to retrieve profile data (avatars,
# display names) of other users through the client API.
self.require_auth_for_profile_requests = config.get(

View File

@@ -554,22 +554,3 @@ def relation_from_event(event: EventBase) -> Optional[_EventRelation]:
aggregation_key = None
return _EventRelation(parent_id, rel_type, aggregation_key)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class StrippedStateEvent:
"""
A stripped down state event. Usually used for remote invite/knocks so the user can
make an informed decision on whether they want to join.
Attributes:
type: Event `type`
state_key: Event `state_key`
sender: Event `sender`
content: Event `content`
"""
type: str
state_key: str
sender: str
content: Dict[str, Any]

View File

@@ -49,7 +49,7 @@ from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersion
from synapse.types import JsonDict, Requester
from . import EventBase, StrippedStateEvent, make_event_from_dict
from . import EventBase, make_event_from_dict
if TYPE_CHECKING:
from synapse.handlers.relations import BundledAggregations
@@ -854,30 +854,3 @@ def strip_event(event: EventBase) -> JsonDict:
"content": event.content,
"sender": event.sender,
}
def parse_stripped_state_event(raw_stripped_event: Any) -> Optional[StrippedStateEvent]:
"""
Given a raw value from an event's `unsigned` field, attempt to parse it into a
`StrippedStateEvent`.
"""
if isinstance(raw_stripped_event, dict):
# All of these fields are required
type = raw_stripped_event.get("type")
state_key = raw_stripped_event.get("state_key")
sender = raw_stripped_event.get("sender")
content = raw_stripped_event.get("content")
if (
isinstance(type, str)
and isinstance(state_key, str)
and isinstance(sender, str)
and isinstance(content, dict)
):
return StrippedStateEvent(
type=type,
state_key=state_key,
sender=sender,
content=content,
)
return None
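
A quick usage illustration of the function defined above: all four fields must be present with the expected types, and anything else yields None.

raw = {
    "type": "m.room.join_rules",
    "state_key": "",
    "sender": "@alice:example.org",
    "content": {"join_rule": "invite"},
}
assert parse_stripped_state_event(raw) is not None
assert parse_stripped_state_event({"type": "m.room.join_rules"}) is None  # missing fields
assert parse_stripped_state_event(["not", "a", "dict"]) is None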

View File

@@ -338,11 +338,12 @@ class PerDestinationQueue:
# not caught up yet
return
pending_pdus = []
while True:
self._new_data_to_send = False
async with _TransactionQueueManager(self) as (
pending_pdus, # noqa: F811
pending_pdus,
pending_edus,
):
if not pending_pdus and not pending_edus:

View File

@@ -33,7 +33,6 @@ from synapse.federation.transport.server.federation import (
FEDERATION_SERVLET_CLASSES,
FederationAccountStatusServlet,
FederationMediaDownloadServlet,
FederationMediaThumbnailServlet,
FederationUnstableClientKeysClaimServlet,
)
from synapse.http.server import HttpServer, JsonResource
@@ -271,10 +270,6 @@ SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
"federation": FEDERATION_SERVLET_CLASSES,
"room_list": (PublicRoomList,),
"openid": (OpenIdUserInfo,),
"media": (
FederationMediaDownloadServlet,
FederationMediaThumbnailServlet,
),
}
@@ -321,11 +316,8 @@ def register_servlets(
):
continue
if (
servletclass == FederationMediaDownloadServlet
or servletclass == FederationMediaThumbnailServlet
):
if not hs.config.media.can_load_media_repo:
if servletclass == FederationMediaDownloadServlet:
if not hs.config.server.enable_media_repo:
continue
servletclass(

View File

@@ -363,8 +363,6 @@ class BaseFederationServlet:
if (
func.__self__.__class__.__name__ # type: ignore
== "FederationMediaDownloadServlet"
or func.__self__.__class__.__name__ # type: ignore
== "FederationMediaThumbnailServlet"
):
response = await func(
origin, content, request, *args, **kwargs
@@ -377,8 +375,6 @@ class BaseFederationServlet:
if (
func.__self__.__class__.__name__ # type: ignore
== "FederationMediaDownloadServlet"
or func.__self__.__class__.__name__ # type: ignore
== "FederationMediaThumbnailServlet"
):
response = await func(
origin, content, request, *args, **kwargs

View File

@@ -46,13 +46,11 @@ from synapse.http.servlet import (
parse_boolean_from_args,
parse_integer,
parse_integer_from_args,
parse_string,
parse_string_from_args,
parse_strings_from_args,
)
from synapse.http.site import SynapseRequest
from synapse.media._base import DEFAULT_MAX_TIMEOUT_MS, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS
from synapse.media.thumbnailer import ThumbnailProvider
from synapse.types import JsonDict
from synapse.util import SYNAPSE_VERSION
from synapse.util.ratelimitutils import FederationRateLimiter
@@ -828,59 +826,6 @@ class FederationMediaDownloadServlet(BaseFederationServerServlet):
)
class FederationMediaThumbnailServlet(BaseFederationServerServlet):
"""
Implementation of new federation media `/thumbnail` endpoint outlined in MSC3916. Returns
a multipart/mixed response consisting of a JSON object and the requested media
item. This endpoint only returns local media.
"""
PATH = "/media/thumbnail/(?P<media_id>[^/]*)"
RATELIMIT = True
def __init__(
self,
hs: "HomeServer",
ratelimiter: FederationRateLimiter,
authenticator: Authenticator,
server_name: str,
):
super().__init__(hs, authenticator, ratelimiter, server_name)
self.media_repo = self.hs.get_media_repository()
self.dynamic_thumbnails = hs.config.media.dynamic_thumbnails
self.thumbnail_provider = ThumbnailProvider(
hs, self.media_repo, self.media_repo.media_storage
)
async def on_GET(
self,
origin: Optional[str],
content: Literal[None],
request: SynapseRequest,
media_id: str,
) -> None:
width = parse_integer(request, "width", required=True)
height = parse_integer(request, "height", required=True)
method = parse_string(request, "method", "scale")
# TODO Parse the Accept header to get a prioritised list of thumbnail types.
m_type = "image/png"
max_timeout_ms = parse_integer(
request, "timeout_ms", default=DEFAULT_MAX_TIMEOUT_MS
)
max_timeout_ms = min(max_timeout_ms, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS)
if self.dynamic_thumbnails:
await self.thumbnail_provider.select_or_generate_local_thumbnail(
request, media_id, width, height, method, m_type, max_timeout_ms, True
)
else:
await self.thumbnail_provider.respond_local_thumbnail(
request, media_id, width, height, method, m_type, max_timeout_ms, True
)
self.media_repo.mark_recently_accessed(None, media_id)
FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
FederationSendServlet,
FederationEventServlet,
@@ -912,4 +857,5 @@ FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
FederationV1SendKnockServlet,
FederationMakeKnockServlet,
FederationAccountStatusServlet,
FederationMediaDownloadServlet,
)

View File

@@ -197,14 +197,8 @@ class AdminHandler:
# events that we have and then filtering; this isn't the most
# efficient method perhaps but it does guarantee we get everything.
while True:
events, _ = (
await self._store.paginate_room_events_by_topological_ordering(
room_id=room_id,
from_key=from_key,
to_key=to_key,
limit=100,
direction=Direction.FORWARDS,
)
events, _ = await self._store.paginate_room_events(
room_id, from_key, to_key, limit=100, direction=Direction.FORWARDS
)
if not events:
break

View File

@@ -283,10 +283,6 @@ class DeactivateAccountHandler:
ratelimit=False,
require_consent=False,
)
# Mark the room forgotten too, because they won't be able to do this
# for us. This may lead to the room being purged eventually.
await self._room_member_handler.forget(user, room_id)
except Exception:
logger.exception(
"Failed to part user %r from room %r: ignoring and continuing",

View File

@@ -20,20 +20,10 @@
#
#
import logging
from typing import (
TYPE_CHECKING,
AbstractSet,
Dict,
Iterable,
List,
Mapping,
Optional,
Set,
Tuple,
)
from typing import TYPE_CHECKING, Dict, Iterable, List, Mapping, Optional, Set, Tuple
from synapse.api import errors
from synapse.api.constants import EduTypes, EventTypes, Membership
from synapse.api.constants import EduTypes, EventTypes
from synapse.api.errors import (
Codes,
FederationDeniedError,
@@ -48,9 +38,7 @@ from synapse.metrics.background_process_metrics import (
wrap_as_background_process,
)
from synapse.storage.databases.main.client_ips import DeviceLastConnectionInfo
from synapse.storage.databases.main.state_deltas import StateDelta
from synapse.types import (
DeviceListUpdates,
JsonDict,
JsonMapping,
ScheduledTask,
@@ -226,214 +214,138 @@ class DeviceWorkerHandler:
@cancellable
async def get_user_ids_changed(
self, user_id: str, from_token: StreamToken
) -> DeviceListUpdates:
) -> JsonDict:
"""Get list of users that have had the devices updated, or have newly
joined a room, that `user_id` may be interested in.
"""
set_tag("user_id", user_id)
set_tag("from_token", str(from_token))
now_room_key = self.store.get_room_max_token()
now_token = self._event_sources.get_current_token()
room_ids = await self.store.get_rooms_for_user(user_id)
# We need to work out all the different membership changes for the user
# and users they share a room with, to pass to
# `generate_sync_entry_for_device_list`. See its docstring for details
# on the data required.
joined_room_ids = await self.store.get_rooms_for_user(user_id)
# Get the set of rooms that the user has joined/left
membership_changes = (
await self.store.get_current_state_delta_membership_changes_for_user(
user_id, from_key=from_token.room_key, to_key=now_token.room_key
)
changed = await self.get_device_changes_in_shared_rooms(
user_id, room_ids, from_token
)
# Check for newly joined or left rooms. We need to make sure that we add
# to newly joined in the case membership goes from join -> leave -> join
# again.
newly_joined_rooms: Set[str] = set()
newly_left_rooms: Set[str] = set()
for change in membership_changes:
# We check for changes in "joinedness", i.e. if the membership has
# changed to or from JOIN.
if change.membership == Membership.JOIN:
if change.prev_membership != Membership.JOIN:
newly_joined_rooms.add(change.room_id)
newly_left_rooms.discard(change.room_id)
elif change.prev_membership == Membership.JOIN:
newly_joined_rooms.discard(change.room_id)
newly_left_rooms.add(change.room_id)
# Then work out if any users have since joined
rooms_changed = self.store.get_rooms_that_changed(room_ids, from_token.room_key)
# We now work out if any other users have since joined or left the rooms
# the user is currently in. First we filter out rooms that we know
# haven't changed recently.
rooms_changed = self.store.get_rooms_that_changed(
joined_room_ids, from_token.room_key
member_events = await self.store.get_membership_changes_for_user(
user_id, from_token.room_key, now_room_key
)
rooms_changed.update(event.room_id for event in member_events)
# List of membership changes per room
room_to_deltas: Dict[str, List[StateDelta]] = {}
# The set of event IDs of membership events (so we can fetch their
# associated membership).
memberships_to_fetch: Set[str] = set()
stream_ordering = from_token.room_key.stream
possibly_changed = set(changed)
possibly_left = set()
for room_id in rooms_changed:
# TODO: Only pull out membership events?
state_changes = await self.store.get_current_state_deltas_for_room(
room_id, from_token=from_token.room_key, to_token=now_token.room_key
# Check if the forward extremities have changed. If not then we know
# the current state won't have changed, and so we can skip this room.
try:
if not await self.store.have_room_forward_extremities_changed_since(
room_id, stream_ordering
):
continue
except errors.StoreError:
pass
current_state_ids = await self._state_storage.get_current_state_ids(
room_id, await_full_state=False
)
for delta in state_changes:
if delta.event_type != EventTypes.Member:
# The user may have left the room
# TODO: Check if they actually did or if we were just invited.
if room_id not in room_ids:
for etype, state_key in current_state_ids.keys():
if etype != EventTypes.Member:
continue
possibly_left.add(state_key)
continue
# Fetch the current state at the time.
try:
event_ids = await self.store.get_forward_extremities_for_room_at_stream_ordering(
room_id, stream_ordering=stream_ordering
)
except errors.StoreError:
# we have purged the stream_ordering index since the stream
# ordering: treat it the same as a new room
event_ids = []
# special-case for an empty prev state: include all members
# in the changed list
if not event_ids:
log_kv(
{"event": "encountered empty previous state", "room_id": room_id}
)
for etype, state_key in current_state_ids.keys():
if etype != EventTypes.Member:
continue
possibly_changed.add(state_key)
continue
current_member_id = current_state_ids.get((EventTypes.Member, user_id))
if not current_member_id:
continue
# mapping from event_id -> state_dict
prev_state_ids = await self._state_storage.get_state_ids_for_events(
event_ids,
await_full_state=False,
)
# Check if we've joined the room? If so we just blindly add all the users to
# the "possibly changed" users.
for state_dict in prev_state_ids.values():
member_event = state_dict.get((EventTypes.Member, user_id), None)
if not member_event or member_event != current_member_id:
for etype, state_key in current_state_ids.keys():
if etype != EventTypes.Member:
continue
possibly_changed.add(state_key)
break
# If there has been any change in membership, include them in the
# possibly changed list. We'll check if they are joined below,
# and we're not toooo worried about spuriously adding users.
for key, event_id in current_state_ids.items():
etype, state_key = key
if etype != EventTypes.Member:
continue
room_to_deltas.setdefault(room_id, []).append(delta)
if delta.event_id:
memberships_to_fetch.add(delta.event_id)
if delta.prev_event_id:
memberships_to_fetch.add(delta.prev_event_id)
# check if this member has changed since any of the extremities
# at the stream_ordering, and add them to the list if so.
for state_dict in prev_state_ids.values():
prev_event_id = state_dict.get(key, None)
if not prev_event_id or prev_event_id != event_id:
if state_key != user_id:
possibly_changed.add(state_key)
break
# Fetch all the memberships for the membership events
event_id_to_memberships = await self.store.get_membership_from_event_ids(
memberships_to_fetch
)
if possibly_changed or possibly_left:
possibly_joined = possibly_changed
possibly_left = possibly_changed | possibly_left
joined_invited_knocked = (
Membership.JOIN,
Membership.INVITE,
Membership.KNOCK,
)
# Double check if we still share rooms with the given user.
users_rooms = await self.store.get_rooms_for_users(possibly_left)
for changed_user_id, entries in users_rooms.items():
if any(rid in room_ids for rid in entries):
possibly_left.discard(changed_user_id)
else:
possibly_joined.discard(changed_user_id)
# We now want to find any user that have newly joined/invited/knocked,
# or newly left, similarly to above.
newly_joined_or_invited_or_knocked_users: Set[str] = set()
newly_left_users: Set[str] = set()
for _, deltas in room_to_deltas.items():
for delta in deltas:
# Get the prev/new memberships for the delta
new_membership = None
prev_membership = None
if delta.event_id:
m = event_id_to_memberships.get(delta.event_id)
if m is not None:
new_membership = m.membership
if delta.prev_event_id:
m = event_id_to_memberships.get(delta.prev_event_id)
if m is not None:
prev_membership = m.membership
else:
possibly_joined = set()
possibly_left = set()
# Check if a user has newly joined/invited/knocked, or left.
if new_membership in joined_invited_knocked:
if prev_membership not in joined_invited_knocked:
newly_joined_or_invited_or_knocked_users.add(delta.state_key)
newly_left_users.discard(delta.state_key)
elif prev_membership in joined_invited_knocked:
newly_joined_or_invited_or_knocked_users.discard(delta.state_key)
newly_left_users.add(delta.state_key)
result = {"changed": list(possibly_joined), "left": list(possibly_left)}
# Now we actually calculate the device list entry with the information
# calculated above.
device_list_updates = await self.generate_sync_entry_for_device_list(
user_id=user_id,
since_token=from_token,
now_token=now_token,
joined_room_ids=joined_room_ids,
newly_joined_rooms=newly_joined_rooms,
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
newly_left_rooms=newly_left_rooms,
newly_left_users=newly_left_users,
)
log_kv(result)
log_kv(
{
"changed": device_list_updates.changed,
"left": device_list_updates.left,
}
)
return device_list_updates
@measure_func("_generate_sync_entry_for_device_list")
async def generate_sync_entry_for_device_list(
self,
user_id: str,
since_token: StreamToken,
now_token: StreamToken,
joined_room_ids: AbstractSet[str],
newly_joined_rooms: AbstractSet[str],
newly_joined_or_invited_or_knocked_users: AbstractSet[str],
newly_left_rooms: AbstractSet[str],
newly_left_users: AbstractSet[str],
) -> DeviceListUpdates:
"""Generate the DeviceListUpdates section of sync
Args:
sync_result_builder
newly_joined_rooms: Set of rooms user has joined since previous sync
newly_joined_or_invited_or_knocked_users: Set of users that have joined,
been invited to a room or are knocking on a room since
previous sync.
newly_left_rooms: Set of rooms user has left since previous sync
newly_left_users: Set of users that have left a room we're in since
previous sync
"""
# Take a copy since these fields will be mutated later.
newly_joined_or_invited_or_knocked_users = set(
newly_joined_or_invited_or_knocked_users
)
newly_left_users = set(newly_left_users)
# We want to figure out what user IDs the client should refetch
# device keys for, and which users we aren't going to track changes
# for anymore.
#
# For the first step we check:
# a. if any users we share a room with have updated their devices,
# and
# b. we also check if we've joined any new rooms, or if a user has
# joined a room we're in.
#
# For the second step we just find any users we no longer share a
# room with by looking at all users that have left a room plus users
# that were in a room we've left.
users_that_have_changed = set()
# Step 1a, check for changes in devices of users we share a room
# with
users_that_have_changed = await self.get_device_changes_in_shared_rooms(
user_id,
joined_room_ids,
from_token=since_token,
now_token=now_token,
)
# Step 1b, check for newly joined rooms
for room_id in newly_joined_rooms:
joined_users = await self.store.get_users_in_room(room_id)
newly_joined_or_invited_or_knocked_users.update(joined_users)
# TODO: Check that these users are actually new, i.e. either they
# weren't in the previous sync *or* they left and rejoined.
users_that_have_changed.update(newly_joined_or_invited_or_knocked_users)
user_signatures_changed = await self.store.get_users_whose_signatures_changed(
user_id, since_token.device_list_key
)
users_that_have_changed.update(user_signatures_changed)
# Now find users that we no longer track
for room_id in newly_left_rooms:
left_users = await self.store.get_users_in_room(room_id)
newly_left_users.update(left_users)
# Remove any users that we still share a room with.
left_users_rooms = await self.store.get_rooms_for_users(newly_left_users)
for user_id, entries in left_users_rooms.items():
if any(rid in joined_room_ids for rid in entries):
newly_left_users.discard(user_id)
return DeviceListUpdates(changed=users_that_have_changed, left=newly_left_users)
return result
async def on_federation_query_user_devices(self, user_id: str) -> JsonDict:
if not self.hs.is_mine(UserID.from_string(user_id)):
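
The core of the rewritten branch above is the "joinedness" fold: membership deltas are applied in order, so a join -> leave -> join sequence inside the window still nets out as newly joined. A self-contained sketch of just that bookkeeping (simplified types; the real code works over StateDelta rows):

from dataclasses import dataclass
from typing import List, Optional, Set, Tuple

@dataclass
class Change:
    room_id: str
    membership: str
    prev_membership: Optional[str]

def fold_joinedness(changes: List[Change]) -> Tuple[Set[str], Set[str]]:
    newly_joined: Set[str] = set()
    newly_left: Set[str] = set()
    for c in changes:
        # Only transitions to or from JOIN change "joinedness".
        if c.membership == "join":
            if c.prev_membership != "join":
                newly_joined.add(c.room_id)
                newly_left.discard(c.room_id)
        elif c.prev_membership == "join":
            newly_joined.discard(c.room_id)
            newly_left.add(c.room_id)
    return newly_joined, newly_left

joined, left = fold_joinedness([
    Change("!r", "join", None),
    Change("!r", "leave", "join"),
    Change("!r", "join", "leave"),
])
assert joined == {"!r"} and left == set()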

View File

@@ -291,20 +291,13 @@ class E2eKeysHandler:
# Only try and fetch keys for destinations that are not marked as
# down.
unfiltered_destinations = remote_queries_not_in_cache.keys()
filtered_destinations = set(
await filter_destinations_by_retry_limiter(
unfiltered_destinations,
self.clock,
self.store,
# Let's give an arbitrary grace period for those hosts that are
# only recently down
retry_due_within_ms=60 * 1000,
)
)
failures.update(
(dest, _NOT_READY_FOR_RETRY_FAILURE)
for dest in (unfiltered_destinations - filtered_destinations)
filtered_destinations = await filter_destinations_by_retry_limiter(
remote_queries_not_in_cache.keys(),
self.clock,
self.store,
# Let's give an arbitrary grace period for those hosts that are
# only recently down
retry_due_within_ms=60 * 1000,
)
await concurrently_execute(
@@ -1648,9 +1641,6 @@ def _check_device_signature(
raise SynapseError(400, "Invalid signature", Codes.INVALID_SIGNATURE)
_NOT_READY_FOR_RETRY_FAILURE = {"status": 503, "message": "Not ready for retry"}
def _exception_to_failure(e: Exception) -> JsonDict:
if isinstance(e, SynapseError):
return {"status": e.code, "errcode": e.errcode, "message": str(e)}
@@ -1659,7 +1649,7 @@ def _exception_to_failure(e: Exception) -> JsonDict:
return {"status": e.code, "message": str(e)}
if isinstance(e, NotRetryingDestination):
return _NOT_READY_FOR_RETRY_FAILURE
return {"status": 503, "message": "Not ready for retry"}
# include ConnectionRefused and other errors
#
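
One side of this hunk stops querying hosts the retry limiter considers down and instead records a synthetic 503 failure for them up front. Isolated as a pure function (simplified shapes, not the handler's actual signature):

from typing import Dict, Set, Tuple

NOT_READY_FOR_RETRY = {"status": 503, "message": "Not ready for retry"}

def split_by_retry_readiness(
    all_destinations: Set[str], ready: Set[str]
) -> Tuple[Set[str], Dict[str, dict]]:
    # Destinations filtered out by the retry limiter are failed immediately
    # rather than queried.
    failures = {dest: NOT_READY_FOR_RETRY for dest in all_destinations - ready}
    return all_destinations & ready, failures

to_query, failures = split_by_retry_readiness({"a.example", "b.example"}, {"a.example"})
assert to_query == {"a.example"}
assert failures == {"b.example": NOT_READY_FOR_RETRY}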

View File

@@ -34,7 +34,7 @@ from synapse.api.errors import (
from synapse.logging.opentracing import log_kv, trace
from synapse.storage.databases.main.e2e_room_keys import RoomKey
from synapse.types import JsonDict
from synapse.util.async_helpers import ReadWriteLock
from synapse.util.async_helpers import Linearizer
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -58,7 +58,7 @@ class E2eRoomKeysHandler:
# clients belonging to a user will receive and try to upload a new session at
# roughly the same time. Also used to lock out uploads when the key is being
# changed.
self._upload_lock = ReadWriteLock()
self._upload_linearizer = Linearizer("upload_room_keys_lock")
@trace
async def get_room_keys(
@@ -89,7 +89,7 @@ class E2eRoomKeysHandler:
# we deliberately take the lock to get keys so that changing the version
# works atomically
async with self._upload_lock.read(user_id):
async with self._upload_linearizer.queue(user_id):
# make sure the backup version exists
try:
await self.store.get_e2e_room_keys_version_info(user_id, version)
@@ -132,7 +132,7 @@ class E2eRoomKeysHandler:
"""
# lock for consistency with uploading
async with self._upload_lock.write(user_id):
async with self._upload_linearizer.queue(user_id):
# make sure the backup version exists
try:
version_info = await self.store.get_e2e_room_keys_version_info(
@@ -193,7 +193,7 @@ class E2eRoomKeysHandler:
# TODO: Validate the JSON to make sure it has the right keys.
# XXX: perhaps we should use a finer grained lock here?
async with self._upload_lock.write(user_id):
async with self._upload_linearizer.queue(user_id):
# Check that the version we're trying to upload is the current version
try:
version_info = await self.store.get_e2e_room_keys_version_info(user_id)
@@ -355,7 +355,7 @@ class E2eRoomKeysHandler:
# TODO: Validate the JSON to make sure it has the right keys.
# lock everyone out until we've switched version
async with self._upload_lock.write(user_id):
async with self._upload_linearizer.queue(user_id):
new_version = await self.store.create_e2e_room_keys_version(
user_id, version_info
)
@@ -382,7 +382,7 @@ class E2eRoomKeysHandler:
}
"""
async with self._upload_lock.read(user_id):
async with self._upload_linearizer.queue(user_id):
try:
res = await self.store.get_e2e_room_keys_version_info(user_id, version)
except StoreError as e:
@@ -407,7 +407,7 @@ class E2eRoomKeysHandler:
NotFoundError: if this backup version doesn't exist
"""
async with self._upload_lock.write(user_id):
async with self._upload_linearizer.queue(user_id):
try:
await self.store.delete_e2e_room_keys_version(user_id, version)
except StoreError as e:
@@ -437,7 +437,7 @@ class E2eRoomKeysHandler:
raise SynapseError(
400, "Version in body does not match", Codes.INVALID_PARAM
)
async with self._upload_lock.write(user_id):
async with self._upload_linearizer.queue(user_id):
try:
old_info = await self.store.get_e2e_room_keys_version_info(
user_id, version
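
This file swaps the per-user guard between a linearizer (one exclusive queue per key: every operation runs alone) and a read/write lock (reads overlap; only writes exclude everyone). A toy readers-preference read/write lock to make the distinction concrete (a sketch, not Synapse's ReadWriteLock):

import asyncio

class ToyReadWriteLock:
    """First reader takes the exclusive lock, last reader releases it, so
    reads overlap with each other but never with a write."""

    def __init__(self) -> None:
        self._readers = 0
        self._counter_lock = asyncio.Lock()  # guards the reader count
        self._exclusive = asyncio.Lock()     # held by a writer, or while reads are in flight

    async def acquire_read(self) -> None:
        async with self._counter_lock:
            self._readers += 1
            if self._readers == 1:
                await self._exclusive.acquire()

    async def release_read(self) -> None:
        async with self._counter_lock:
            self._readers -= 1
            if self._readers == 0:
                self._exclusive.release()

    async def acquire_write(self) -> None:
        await self._exclusive.acquire()

    def release_write(self) -> None:
        self._exclusive.release()

Under a linearizer, get_room_keys and get_version_info would queue behind uploads; with the read/write form only the mutating paths (upload, delete, update) take the write side.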

View File

@@ -507,15 +507,13 @@ class PaginationHandler:
# Initially fetch the events from the database. With any luck, we can return
# these without blocking on backfill (handled below).
events, next_key = (
await self.store.paginate_room_events_by_topological_ordering(
room_id=room_id,
from_key=from_token.room_key,
to_key=to_room_key,
direction=pagin_config.direction,
limit=pagin_config.limit,
event_filter=event_filter,
)
events, next_key = await self.store.paginate_room_events(
room_id=room_id,
from_key=from_token.room_key,
to_key=to_room_key,
direction=pagin_config.direction,
limit=pagin_config.limit,
event_filter=event_filter,
)
if pagin_config.direction == Direction.BACKWARDS:
@@ -584,15 +582,13 @@ class PaginationHandler:
# If we did backfill something, refetch the events from the database to
# catch anything new that might have been added since we last fetched.
if did_backfill:
events, next_key = (
await self.store.paginate_room_events_by_topological_ordering(
room_id=room_id,
from_key=from_token.room_key,
to_key=to_room_key,
direction=pagin_config.direction,
limit=pagin_config.limit,
event_filter=event_filter,
)
events, next_key = await self.store.paginate_room_events(
room_id=room_id,
from_key=from_token.room_key,
to_key=to_room_key,
direction=pagin_config.direction,
limit=pagin_config.limit,
event_filter=event_filter,
)
else:
# Otherwise, we can backfill in the background for eventual

View File

@@ -74,17 +74,6 @@ class ProfileHandler:
self._third_party_rules = hs.get_module_api_callbacks().third_party_event_rules
async def get_profile(self, user_id: str, ignore_backoff: bool = True) -> JsonDict:
"""
Get a user's profile as a JSON dictionary.
Args:
user_id: The user to fetch the profile of.
ignore_backoff: True to ignore backoff when fetching over federation.
Returns:
A JSON dictionary. For local queries this will include the displayname and avatar_url
fields. For remote queries it may contain arbitrary information.
"""
target_user = UserID.from_string(user_id)
if self.hs.is_mine(target_user):
@@ -118,15 +107,6 @@ class ProfileHandler:
raise e.to_synapse_error()
async def get_displayname(self, target_user: UserID) -> Optional[str]:
"""
Fetch a user's display name from their profile.
Args:
target_user: The user to fetch the display name of.
Returns:
The user's display name or None if unset.
"""
if self.hs.is_mine(target_user):
try:
displayname = await self.store.get_profile_displayname(target_user)
@@ -223,15 +203,6 @@ class ProfileHandler:
await self._update_join_states(requester, target_user)
async def get_avatar_url(self, target_user: UserID) -> Optional[str]:
"""
Fetch a user's avatar URL from their profile.
Args:
target_user: The user to fetch the avatar URL of.
Returns:
The user's avatar URL or None if unset.
"""
if self.hs.is_mine(target_user):
try:
avatar_url = await self.store.get_profile_avatar_url(target_user)
@@ -432,12 +403,6 @@ class ProfileHandler:
async def _update_join_states(
self, requester: Requester, target_user: UserID
) -> None:
"""
Update the membership events of each room the user is joined to with the
new profile information.
Note that this stomps over any custom display name or avatar URL in member events.
"""
if not self.hs.is_mine(target_user):
return

View File

@@ -286,14 +286,8 @@ class ReceiptEventSource(EventSource[MultiWriterStreamToken, JsonMapping]):
room_ids: Iterable[str],
is_guest: bool,
explicit_room_id: Optional[str] = None,
to_key: Optional[MultiWriterStreamToken] = None,
) -> Tuple[List[JsonMapping], MultiWriterStreamToken]:
"""
Find read receipts for given rooms (> `from_token` and <= `to_token`)
"""
if to_key is None:
to_key = self.get_current_key()
to_key = self.get_current_key()
if from_key == to_key:
return [], to_key

View File

@@ -1188,8 +1188,6 @@ class RoomCreationHandler:
)
events_to_send.append((power_event, power_context))
else:
# Please update the docs for `default_power_level_content_override` when
# updating the `events` dict below
power_level_content: JsonDict = {
"users": {creator_id: 100},
"users_default": 0,
@@ -1750,7 +1748,7 @@ class RoomEventSource(EventSource[RoomStreamToken, EventBase]):
from_key=from_key,
to_key=to_key,
limit=limit or 10,
direction=Direction.FORWARDS,
order="ASC",
)
events = list(room_events)

File diff suppressed because it is too large

View File

@@ -293,9 +293,7 @@ class StatsHandler:
"history_visibility"
)
elif delta.event_type == EventTypes.RoomEncryption:
room_state["encryption"] = event_content.get(
EventContentFields.ENCRYPTION_ALGORITHM
)
room_state["encryption"] = event_content.get("algorithm")
elif delta.event_type == EventTypes.Name:
room_state["name"] = event_content.get("name")
elif delta.event_type == EventTypes.Topic:

View File

@@ -43,7 +43,6 @@ from prometheus_client import Counter
from synapse.api.constants import (
AccountDataTypes,
Direction,
EventContentFields,
EventTypes,
JoinRules,
@@ -65,7 +64,6 @@ from synapse.logging.opentracing import (
)
from synapse.storage.databases.main.event_push_actions import RoomNotifCounts
from synapse.storage.databases.main.roommember import extract_heroes_from_room_summary
from synapse.storage.databases.main.stream import PaginateFunction
from synapse.storage.roommember import MemberSummary
from synapse.types import (
DeviceListUpdates,
@@ -86,7 +84,7 @@ from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.caches.lrucache import LruCache
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
from synapse.util.metrics import Measure
from synapse.util.metrics import Measure, measure_func
from synapse.visibility import filter_events_for_client
if TYPE_CHECKING:
@@ -881,49 +879,22 @@ class SyncHandler:
since_key = since_token.room_key
while limited and len(recents) < timeline_limit and max_repeat:
# For initial `/sync`, we want to view a historical section of the
# timeline; to fetch events by `topological_ordering` (best
# representation of the room DAG as others were seeing it at the time).
# This also aligns with the order that `/messages` returns events in.
#
# For incremental `/sync`, we want to get all updates for rooms since
# the last `/sync` (regardless if those updates arrived late or happened
# a while ago in the past); to fetch events by `stream_ordering` (in the
# order they were received by the server).
#
# Relevant spec issue: https://github.com/matrix-org/matrix-spec/issues/1917
#
# FIXME: Using workaround for mypy,
# https://github.com/python/mypy/issues/10740#issuecomment-1997047277 and
# https://github.com/python/mypy/issues/17479
paginate_room_events_by_topological_ordering: PaginateFunction = (
self.store.paginate_room_events_by_topological_ordering
)
paginate_room_events_by_stream_ordering: PaginateFunction = (
self.store.paginate_room_events_by_stream_ordering
)
pagination_method: PaginateFunction = (
# Use `topological_ordering` for historical events
paginate_room_events_by_topological_ordering
if since_key is None
# Use `stream_ordering` for updates
else paginate_room_events_by_stream_ordering
)
events, end_key = await pagination_method(
room_id=room_id,
# The bounds are reversed so we can paginate backwards
# (from newer to older events) starting at to_bound.
# This ensures we fill the `limit` with the newest events first,
from_key=end_key,
to_key=since_key,
direction=Direction.BACKWARDS,
# We add one so we can determine if there are enough events to saturate
# the limit or not (see `limited`)
limit=load_limit + 1,
)
# We want to return the events in ascending order (the last event is the
# most recent).
events.reverse()
# If we have a since_key then we are trying to get any events
# that have happened since `since_key` up to `end_key`, so we
# can just use `get_room_events_stream_for_room`.
# Otherwise, we want to return the last N events in the room
# in topological ordering.
if since_key:
events, end_key = await self.store.get_room_events_stream_for_room(
room_id,
limit=load_limit + 1,
from_key=since_key,
to_key=end_key,
)
else:
events, end_key = await self.store.get_recent_events_for_room(
room_id, limit=load_limit + 1, end_token=end_key
)
log_kv({"loaded_recents": len(events)})
@@ -1381,7 +1352,7 @@ class SyncHandler:
await_full_state = True
lazy_load_members = False
state_at_timeline_end = await self._state_storage_controller.get_state_ids_at(
state_at_timeline_end = await self._state_storage_controller.get_state_at(
room_id,
stream_position=end_token,
state_filter=state_filter,
@@ -1509,13 +1480,11 @@ class SyncHandler:
else:
# We can get here if the user has ignored the senders of all
# the recent events.
state_at_timeline_start = (
await self._state_storage_controller.get_state_ids_at(
room_id,
stream_position=end_token,
state_filter=state_filter,
await_full_state=await_full_state,
)
state_at_timeline_start = await self._state_storage_controller.get_state_at(
room_id,
stream_position=end_token,
state_filter=state_filter,
await_full_state=await_full_state,
)
if batch.limited:
@@ -1533,14 +1502,14 @@ class SyncHandler:
# about them).
state_filter = StateFilter.all()
state_at_previous_sync = await self._state_storage_controller.get_state_ids_at(
state_at_previous_sync = await self._state_storage_controller.get_state_at(
room_id,
stream_position=since_token,
state_filter=state_filter,
await_full_state=await_full_state,
)
state_at_timeline_end = await self._state_storage_controller.get_state_ids_at(
state_at_timeline_end = await self._state_storage_controller.get_state_at(
room_id,
stream_position=end_token,
state_filter=state_filter,
@@ -1779,15 +1748,8 @@ class SyncHandler:
)
if include_device_list_updates:
# include_device_list_updates can only be True if we have a
# since token.
assert since_token is not None
device_lists = await self._device_handler.generate_sync_entry_for_device_list(
user_id=user_id,
since_token=since_token,
now_token=sync_result_builder.now_token,
joined_room_ids=sync_result_builder.joined_room_ids,
device_lists = await self._generate_sync_entry_for_device_list(
sync_result_builder,
newly_joined_rooms=newly_joined_rooms,
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
newly_left_rooms=newly_left_rooms,
@@ -1899,14 +1861,8 @@ class SyncHandler:
newly_left_users,
) = sync_result_builder.calculate_user_changes()
# include_device_list_updates can only be True if we have a
# since token.
assert since_token is not None
device_lists = await self._device_handler.generate_sync_entry_for_device_list(
user_id=user_id,
since_token=since_token,
now_token=sync_result_builder.now_token,
joined_room_ids=sync_result_builder.joined_room_ids,
device_lists = await self._generate_sync_entry_for_device_list(
sync_result_builder,
newly_joined_rooms=newly_joined_rooms,
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
newly_left_rooms=newly_left_rooms,
@@ -2083,6 +2039,94 @@ class SyncHandler:
return sync_result_builder
@measure_func("_generate_sync_entry_for_device_list")
async def _generate_sync_entry_for_device_list(
self,
sync_result_builder: "SyncResultBuilder",
newly_joined_rooms: AbstractSet[str],
newly_joined_or_invited_or_knocked_users: AbstractSet[str],
newly_left_rooms: AbstractSet[str],
newly_left_users: AbstractSet[str],
) -> DeviceListUpdates:
"""Generate the DeviceListUpdates section of sync
Args:
sync_result_builder
newly_joined_rooms: Set of rooms user has joined since previous sync
newly_joined_or_invited_or_knocked_users: Set of users that have joined,
been invited to a room or are knocking on a room since
previous sync.
newly_left_rooms: Set of rooms user has left since previous sync
newly_left_users: Set of users that have left a room we're in since
previous sync
"""
user_id = sync_result_builder.sync_config.user.to_string()
since_token = sync_result_builder.since_token
assert since_token is not None
# Take a copy since these fields will be mutated later.
newly_joined_or_invited_or_knocked_users = set(
newly_joined_or_invited_or_knocked_users
)
newly_left_users = set(newly_left_users)
# We want to figure out what user IDs the client should refetch
# device keys for, and which users we aren't going to track changes
# for anymore.
#
# For the first step we check:
# a. if any users we share a room with have updated their devices,
# and
# b. we also check if we've joined any new rooms, or if a user has
# joined a room we're in.
#
# For the second step we just find any users we no longer share a
# room with by looking at all users that have left a room plus users
# that were in a room we've left.
users_that_have_changed = set()
joined_room_ids = sync_result_builder.joined_room_ids
# Step 1a, check for changes in devices of users we share a room
# with
users_that_have_changed = (
await self._device_handler.get_device_changes_in_shared_rooms(
user_id,
joined_room_ids,
from_token=since_token,
now_token=sync_result_builder.now_token,
)
)
# Step 1b, check for newly joined rooms
for room_id in newly_joined_rooms:
joined_users = await self.store.get_users_in_room(room_id)
newly_joined_or_invited_or_knocked_users.update(joined_users)
# TODO: Check that these users are actually new, i.e. either they
# weren't in the previous sync *or* they left and rejoined.
users_that_have_changed.update(newly_joined_or_invited_or_knocked_users)
user_signatures_changed = await self.store.get_users_whose_signatures_changed(
user_id, since_token.device_list_key
)
users_that_have_changed.update(user_signatures_changed)
# Now find users that we no longer track
for room_id in newly_left_rooms:
left_users = await self.store.get_users_in_room(room_id)
newly_left_users.update(left_users)
# Remove any users that we still share a room with.
left_users_rooms = await self.store.get_rooms_for_users(newly_left_users)
for user_id, entries in left_users_rooms.items():
if any(rid in joined_room_ids for rid in entries):
newly_left_users.discard(user_id)
return DeviceListUpdates(changed=users_that_have_changed, left=newly_left_users)
@trace
async def _generate_sync_entry_for_to_device(
self, sync_result_builder: "SyncResultBuilder"
@@ -2224,11 +2268,7 @@ class SyncHandler:
user=user,
from_key=presence_key,
is_guest=sync_config.is_guest,
include_offline=(
True
if self.hs_config.server.presence_include_offline_users_on_sync
else include_offline
),
include_offline=include_offline,
)
assert presence_key
sync_result_builder.now_token = now_token.copy_and_replace(
@@ -2468,7 +2508,7 @@ class SyncHandler:
continue
if room_id in sync_result_builder.joined_room_ids or has_join:
old_state_ids = await self._state_storage_controller.get_state_ids_at(
old_state_ids = await self._state_storage_controller.get_state_at(
room_id,
since_token,
state_filter=StateFilter.from_types([(EventTypes.Member, user_id)]),
@@ -2499,7 +2539,7 @@ class SyncHandler:
else:
if not old_state_ids:
old_state_ids = (
await self._state_storage_controller.get_state_ids_at(
await self._state_storage_controller.get_state_at(
room_id,
since_token,
state_filter=StateFilter.from_types(
@@ -2595,10 +2635,9 @@ class SyncHandler:
# a "gap" in the timeline, as described by the spec for /sync.
room_to_events = await self.store.get_room_events_stream_for_rooms(
room_ids=sync_result_builder.joined_room_ids,
from_key=now_token.room_key,
to_key=since_token.room_key,
from_key=since_token.room_key,
to_key=now_token.room_key,
limit=timeline_limit + 1,
direction=Direction.BACKWARDS,
)
# We loop through all room ids, even if there are no new events, in case
@@ -2609,9 +2648,6 @@ class SyncHandler:
newly_joined = room_id in newly_joined_rooms
if room_entry:
events, start_key = room_entry
# We want to return the events in ascending order (the last event is the
# most recent).
events.reverse()
prev_batch_token = now_token.copy_and_replace(
StreamKeyType.ROOM, start_key
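
The pagination hunk near the top of this file reduces to one decision, as its comments explain: initial sync (no since token) reads the room in topological order, the same shape /messages returns, while incremental sync replays events in stream (arrival) order so late-arriving events aren't missed. Stripped to a self-contained helper (names borrowed from the diff, signatures simplified):

from typing import Callable, List, Optional

PaginateFunction = Callable[..., List[str]]

def choose_pagination_method(
    since_key: Optional[str],
    by_topological_ordering: PaginateFunction,
    by_stream_ordering: PaginateFunction,
) -> PaginateFunction:
    # No since token: historical view of the room DAG.
    # With a since token: everything received since last sync, arrival order.
    return by_topological_ordering if since_key is None else by_stream_ordering

topo = lambda **kw: ["topo"]
stream = lambda **kw: ["stream"]
assert choose_pagination_method(None, topo, stream) is topo
assert choose_pagination_method("s42", topo, stream) is stream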

View File

@@ -565,12 +565,7 @@ class TypingNotificationEventSource(EventSource[int, JsonMapping]):
room_ids: Iterable[str],
is_guest: bool,
explicit_room_id: Optional[str] = None,
to_key: Optional[int] = None,
) -> Tuple[List[JsonMapping], int]:
"""
Find typing notifications for given rooms (> `from_token` and <= `to_token`)
"""
with Measure(self.clock, "typing.get_new_events"):
from_key = int(from_key)
handler = self.get_typing_handler()
@@ -579,9 +574,7 @@ class TypingNotificationEventSource(EventSource[int, JsonMapping]):
for room_id in room_ids:
if room_id not in handler._room_serials:
continue
if handler._room_serials[room_id] <= from_key or (
to_key is not None and handler._room_serials[room_id] > to_key
):
if handler._room_serials[room_id] <= from_key:
continue
events.append(self._make_event_for(room_id))
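
One side of this hunk bounds the result set from above as well as below, so the source only reports rooms whose typing serial falls in the half-open window (from_key, to_key]. The filter in isolation (room serials modelled as a plain dict):

from typing import Dict, List, Optional

def rooms_with_new_typing(
    room_serials: Dict[str, int],
    room_ids: List[str],
    from_key: int,
    to_key: Optional[int] = None,
) -> List[str]:
    result = []
    for room_id in room_ids:
        serial = room_serials.get(room_id)
        if serial is None or serial <= from_key:
            continue  # nothing new since the client's token
        if to_key is not None and serial > to_key:
            continue  # change lies beyond the requested window
        result.append(room_id)
    return result

serials = {"!a:hs": 5, "!b:hs": 9}
assert rooms_with_new_typing(serials, ["!a:hs", "!b:hs"], from_key=4, to_key=6) == ["!a:hs"]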

View File

@@ -1057,11 +1057,11 @@ class _MultipartParserProtocol(protocol.Protocol):
if not self.parser:
def on_header_field(data: bytes, start: int, end: int) -> None:
if data[start:end].lower() == b"location":
if data[start:end] == b"Location":
self.has_redirect = True
if data[start:end].lower() == b"content-disposition":
if data[start:end] == b"Content-Disposition":
self.in_disposition = True
if data[start:end].lower() == b"content-type":
if data[start:end] == b"Content-Type":
self.in_content_type = True
def on_header_value(data: bytes, start: int, end: int) -> None:
@@ -1088,6 +1088,7 @@ class _MultipartParserProtocol(protocol.Protocol):
return
# otherwise we are in the file part
else:
logger.info("Writing multipart file data to stream")
try:
self.stream.write(data[start:end])
except Exception as e:
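
The header checks in this hunk differ only in case handling. HTTP field names are case-insensitive, so comparing the raw byte slice against b"Location" misses lowercase variants, while lowercasing the slice first matches any casing. A tiny self-contained illustration (header_is is a hypothetical helper, not part of Synapse):

def header_is(data: bytes, start: int, end: int, name: bytes) -> bool:
    # `name` must be supplied lowercase, e.g. b"content-disposition"
    return data[start:end].lower() == name

assert header_is(b"LOCATION: /elsewhere", 0, 8, b"location")  # any casing matches
assert b"LOCATION"[0:8] != b"Location"  # an exact-bytes compare does not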


@@ -90,7 +90,7 @@ from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.logging.opentracing import set_tag, start_active_span, tags
from synapse.types import JsonDict
from synapse.util import json_decoder
from synapse.util.async_helpers import AwakenableSleeper, Linearizer, timeout_deferred
from synapse.util.async_helpers import AwakenableSleeper, timeout_deferred
from synapse.util.metrics import Measure
from synapse.util.stringutils import parse_and_validate_server_name
@@ -475,8 +475,6 @@ class MatrixFederationHttpClient:
use_proxy=True,
)
self.remote_download_linearizer = Linearizer("remote_download_linearizer", 6)
def wake_destination(self, destination: str) -> None:
"""Called when the remote server may have come back online."""
@@ -1488,44 +1486,35 @@ class MatrixFederationHttpClient:
)
headers = dict(response.headers.getAllRawHeaders())
expected_size = response.length
expected_size = response.length
# if we don't get an expected length then use the max length
if expected_size == UNKNOWN_LENGTH:
expected_size = max_size
else:
if int(expected_size) > max_size:
msg = "Requested file is too large > %r bytes" % (max_size,)
logger.warning(
"{%s} [%s] %s",
request.txn_id,
request.destination,
msg,
)
raise SynapseError(HTTPStatus.BAD_GATEWAY, msg, Codes.TOO_LARGE)
read_body, _ = await download_ratelimiter.can_do_action(
requester=None,
key=ip_address,
n_actions=expected_size,
logger.debug(
f"File size unknown, assuming file is max allowable size: {max_size}"
)
if not read_body:
msg = "Requested file size exceeds ratelimits"
logger.warning(
"{%s} [%s] %s",
request.txn_id,
request.destination,
msg,
)
raise SynapseError(
HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED
)
read_body, _ = await download_ratelimiter.can_do_action(
requester=None,
key=ip_address,
n_actions=expected_size,
)
if not read_body:
msg = "Requested file size exceeds ratelimits"
logger.warning(
"{%s} [%s] %s",
request.txn_id,
request.destination,
msg,
)
raise SynapseError(HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED)
try:
async with self.remote_download_linearizer.queue(ip_address):
# add a byte of headroom to max size as function errs at >=
d = read_body_with_max_size(response, output_stream, expected_size + 1)
d.addTimeout(self.default_timeout_seconds, self.reactor)
length = await make_deferred_yieldable(d)
# add a byte of headroom to max size as function errs at >=
d = read_body_with_max_size(response, output_stream, expected_size + 1)
d.addTimeout(self.default_timeout_seconds, self.reactor)
length = await make_deferred_yieldable(d)
except BodyExceededMaxSize:
msg = "Requested file is too large > %r bytes" % (expected_size,)
logger.warning(
@@ -1571,13 +1560,6 @@ class MatrixFederationHttpClient:
request.method,
request.uri.decode("ascii"),
)
# if we didn't know the length upfront, decrement the actual size from ratelimiter
if response.length == UNKNOWN_LENGTH:
download_ratelimiter.record_action(
requester=None, key=ip_address, n_actions=length
)
return length, headers
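
The larger side of this hunk sizes the ratelimiter charge before the body is read: an unknown Content-Length is budgeted as max_size, a known length above max_size is rejected with 502, and anything else is charged as-is; once the download completes, a length that was unknown up front is recorded back against the limiter as the actual size. A condensed sketch of the up-front decision, assuming Synapse's SynapseError/Codes are importable and a limiter with the can_do_action signature shown in the diff:

from http import HTTPStatus

from synapse.api.errors import Codes, SynapseError

UNKNOWN_LENGTH = object()  # stand-in for twisted.web.iweb.UNKNOWN_LENGTH

async def precheck_download(limiter, ip_address, response_length, max_size: int) -> int:
    expected = response_length
    if expected == UNKNOWN_LENGTH:
        expected = max_size  # no Content-Length header: budget the worst case
    elif int(expected) > max_size:
        raise SynapseError(HTTPStatus.BAD_GATEWAY, "too large", Codes.TOO_LARGE)
    ok, _ = await limiter.can_do_action(
        requester=None, key=ip_address, n_actions=expected
    )
    if not ok:
        raise SynapseError(
            HTTPStatus.TOO_MANY_REQUESTS, "ratelimited", Codes.LIMIT_EXCEEDED
        )
    return expected  # the size the caller should enforce while reading
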
async def federation_get_file(
@@ -1648,37 +1630,29 @@ class MatrixFederationHttpClient:
)
headers = dict(response.headers.getAllRawHeaders())
expected_size = response.length
expected_size = response.length
# if we don't get an expected length then use the max length
if expected_size == UNKNOWN_LENGTH:
expected_size = max_size
else:
if int(expected_size) > max_size:
msg = "Requested file is too large > %r bytes" % (max_size,)
logger.warning(
"{%s} [%s] %s",
request.txn_id,
request.destination,
msg,
)
raise SynapseError(HTTPStatus.BAD_GATEWAY, msg, Codes.TOO_LARGE)
read_body, _ = await download_ratelimiter.can_do_action(
requester=None,
key=ip_address,
n_actions=expected_size,
logger.debug(
f"File size unknown, assuming file is max allowable size: {max_size}"
)
if not read_body:
msg = "Requested file size exceeds ratelimits"
logger.warning(
"{%s} [%s] %s",
request.txn_id,
request.destination,
msg,
)
raise SynapseError(
HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED
)
read_body, _ = await download_ratelimiter.can_do_action(
requester=None,
key=ip_address,
n_actions=expected_size,
)
if not read_body:
msg = "Requested file size exceeds ratelimits"
logger.warning(
"{%s} [%s] %s",
request.txn_id,
request.destination,
msg,
)
raise SynapseError(HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED)
# this should be a multipart/mixed response with the boundary string in the header
try:
@@ -1698,12 +1672,11 @@ class MatrixFederationHttpClient:
raise SynapseError(HTTPStatus.BAD_GATEWAY, msg)
try:
async with self.remote_download_linearizer.queue(ip_address):
# add a byte of headroom to max size as `_MultipartParserProtocol.dataReceived` errs at >=
deferred = read_multipart_response(
response, output_stream, boundary, expected_size + 1
)
deferred.addTimeout(self.default_timeout_seconds, self.reactor)
# add a byte of headroom to max size as `_MultipartParserProtocol.dataReceived` errs at >=
deferred = read_multipart_response(
response, output_stream, boundary, expected_size + 1
)
deferred.addTimeout(self.default_timeout_seconds, self.reactor)
except BodyExceededMaxSize:
msg = "Requested file is too large > %r bytes" % (expected_size,)
logger.warning(
@@ -1770,13 +1743,6 @@ class MatrixFederationHttpClient:
request.method,
request.uri.decode("ascii"),
)
# if we didn't know the length upfront, decrement the actual size from ratelimiter
if response.length == UNKNOWN_LENGTH:
download_ratelimiter.record_action(
requester=None, key=ip_address, n_actions=length
)
return length, headers, multipart_response.json


@@ -62,15 +62,6 @@ HOP_BY_HOP_HEADERS = {
"Upgrade",
}
if hasattr(Headers, "_canonicalNameCaps"):
# Twisted < 24.7.0rc1
_canonicalHeaderName = Headers()._canonicalNameCaps # type: ignore[attr-defined]
else:
# Twisted >= 24.7.0rc1
# But note that `_encodeName` still exists on prior versions,
# it just encodes differently
_canonicalHeaderName = Headers()._encodeName
def parse_connection_header_value(
connection_header_value: Optional[bytes],
@@ -94,10 +85,11 @@ def parse_connection_header_value(
The set of header names that should not be copied over from the remote response.
The keys are capitalized in canonical capitalization.
"""
headers = Headers()
extra_headers_to_remove: Set[str] = set()
if connection_header_value:
extra_headers_to_remove = {
_canonicalHeaderName(connection_option.strip()).decode("ascii")
headers._canonicalNameCaps(connection_option.strip()).decode("ascii")
for connection_option in connection_header_value.split(b",")
}
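
The block above is a hasattr-based compatibility shim: Twisted 24.7.0rc1 renamed the private helper that canonicalises header names, so the code probes for the old attribute and falls back to the new one. The same pattern reduced to a runnable toy, with stand-in classes instead of Twisted:

class OldRelease:
    def _canonicalNameCaps(self, name: str) -> str:
        return name.title()

class NewRelease:
    def _encodeName(self, name: str) -> str:
        return name.title()

lib = NewRelease()
canonicalise = (
    lib._canonicalNameCaps if hasattr(lib, "_canonicalNameCaps") else lib._encodeName
)
assert canonicalise("content-type") == "Content-Type"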


@@ -658,7 +658,7 @@ class SynapseSite(ProxySite):
)
self.site_tag = site_tag
self.reactor: ISynapseReactor = reactor
self.reactor = reactor
assert config.http_options is not None
proxied = config.http_options.x_forwarded
@@ -683,7 +683,7 @@ class SynapseSite(ProxySite):
self.access_logger = logging.getLogger(logger_name)
self.server_version_string = server_version_string.encode("ascii")
def log(self, request: SynapseRequest) -> None: # type: ignore[override]
def log(self, request: SynapseRequest) -> None:
pass


@@ -28,7 +28,6 @@ from types import TracebackType
from typing import (
TYPE_CHECKING,
Awaitable,
BinaryIO,
Dict,
Generator,
List,
@@ -38,28 +37,19 @@ from typing import (
)
import attr
from zope.interface import implementer
from twisted.internet import interfaces
from twisted.internet.defer import Deferred
from twisted.internet.interfaces import IConsumer
from twisted.python.failure import Failure
from twisted.protocols.basic import FileSender
from twisted.web.server import Request
from synapse.api.errors import Codes, cs_error
from synapse.http.server import finish_request, respond_with_json
from synapse.http.site import SynapseRequest
from synapse.logging.context import (
defer_to_threadpool,
make_deferred_yieldable,
run_in_background,
)
from synapse.logging.context import make_deferred_yieldable
from synapse.util import Clock
from synapse.util.async_helpers import DeferredEvent
from synapse.util.stringutils import is_ascii
if TYPE_CHECKING:
from synapse.server import HomeServer
from synapse.storage.databases.main.media_repository import LocalMedia
@@ -132,7 +122,6 @@ def respond_404(request: SynapseRequest) -> None:
async def respond_with_file(
hs: "HomeServer",
request: SynapseRequest,
media_type: str,
file_path: str,
@@ -149,7 +138,7 @@ async def respond_with_file(
add_file_headers(request, media_type, file_size, upload_name)
with open(file_path, "rb") as f:
await ThreadedFileSender(hs).beginFileTransfer(f, request)
await make_deferred_yieldable(FileSender().beginFileTransfer(f, request))
finish_request(request)
else:
@@ -612,151 +601,3 @@ def _parseparam(s: bytes) -> Generator[bytes, None, None]:
f = s[:end]
yield f.strip()
s = s[end:]
@implementer(interfaces.IPushProducer)
class ThreadedFileSender:
"""
A producer that sends the contents of a file to a consumer, reading from the
file on a thread.
This works by having a loop in a threadpool repeatedly reading from the
file, until the consumer pauses the producer. There is then a loop in the
main thread that waits until the consumer resumes the producer and then
starts reading in the threadpool again.
This is done to ensure that we're never waiting in the threadpool, as
otherwise it's easy to starve it of threads.
"""
# How much data to read in one go.
CHUNK_SIZE = 2**14
# How long we wait for the consumer to be ready again before aborting the
# read.
TIMEOUT_SECONDS = 90.0
def __init__(self, hs: "HomeServer") -> None:
self.reactor = hs.get_reactor()
self.thread_pool = hs.get_media_sender_thread_pool()
self.file: Optional[BinaryIO] = None
self.deferred: "Deferred[None]" = Deferred()
self.consumer: Optional[interfaces.IConsumer] = None
# Signals if the thread should keep reading/sending data. Set means
# continue, clear means pause.
self.wakeup_event = DeferredEvent(self.reactor)
# Signals if the thread should terminate, e.g. because the consumer has
# gone away.
self.stop_writing = False
def beginFileTransfer(
self, file: BinaryIO, consumer: interfaces.IConsumer
) -> "Deferred[None]":
"""
Begin transferring a file
"""
self.file = file
self.consumer = consumer
self.consumer.registerProducer(self, True)
# We set the wakeup signal as we should start producing immediately.
self.wakeup_event.set()
run_in_background(self.start_read_loop)
return make_deferred_yieldable(self.deferred)
def resumeProducing(self) -> None:
"""interfaces.IPushProducer"""
self.wakeup_event.set()
def pauseProducing(self) -> None:
"""interfaces.IPushProducer"""
self.wakeup_event.clear()
def stopProducing(self) -> None:
"""interfaces.IPushProducer"""
# Unregister the consumer so we don't try and interact with it again.
if self.consumer:
self.consumer.unregisterProducer()
self.consumer = None
# Terminate the loop.
self.stop_writing = True
self.wakeup_event.set()
if not self.deferred.called:
self.deferred.errback(Exception("Consumer asked us to stop producing"))
async def start_read_loop(self) -> None:
"""This is the loop that drives reading/writing"""
try:
while not self.stop_writing:
# Start the loop in the threadpool to read data.
more_data = await defer_to_threadpool(
self.reactor, self.thread_pool, self._on_thread_read_loop
)
if not more_data:
# Reached EOF, we can just return.
return
if not self.wakeup_event.is_set():
ret = await self.wakeup_event.wait(self.TIMEOUT_SECONDS)
if not ret:
raise Exception("Timed out waiting to resume")
except Exception:
self._error(Failure())
finally:
self._finish()
def _on_thread_read_loop(self) -> bool:
"""This is the loop that happens on a thread.
Returns:
Whether there is more data to send.
"""
while not self.stop_writing and self.wakeup_event.is_set():
# The file should always have been set before we get here.
assert self.file is not None
chunk = self.file.read(self.CHUNK_SIZE)
if not chunk:
return False
self.reactor.callFromThread(self._write, chunk)
return True
def _write(self, chunk: bytes) -> None:
"""Called from the thread to write a chunk of data"""
if self.consumer:
self.consumer.write(chunk)
def _error(self, failure: Failure) -> None:
"""Called when there was a fatal error"""
if self.consumer:
self.consumer.unregisterProducer()
self.consumer = None
if not self.deferred.called:
self.deferred.errback(failure)
def _finish(self) -> None:
"""Called when we have finished writing (either on success or
failure)."""
if self.file:
self.file.close()
self.file = None
if self.consumer:
self.consumer.unregisterProducer()
self.consumer = None
if not self.deferred.called:
self.deferred.callback(None)
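
The whole class rests on one primitive: an awaitable event that the consumer's resumeProducing/pauseProducing calls set and clear, so a threadpool thread is only occupied while the consumer is actually accepting data. A toy asyncio rendition of that handshake (Synapse's DeferredEvent is Twisted-based; this only mirrors the control flow):

import asyncio
from typing import BinaryIO, Callable

async def read_loop(
    file: BinaryIO,
    write: Callable[[bytes], None],
    wakeup: asyncio.Event,
    chunk_size: int = 2**14,
    timeout: float = 90.0,
) -> None:
    while True:
        # the blocking read happens off the event loop, mirroring the
        # threadpool read in _on_thread_read_loop above
        chunk = await asyncio.to_thread(file.read, chunk_size)
        if not chunk:
            return  # EOF: transfer complete
        write(chunk)
        if not wakeup.is_set():
            # paused by the consumer: wait without holding a worker thread,
            # and abort if it never resumes within the timeout
            await asyncio.wait_for(wakeup.wait(), timeout=timeout)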


@@ -430,7 +430,6 @@ class MediaRepository:
media_id: str,
name: Optional[str],
max_timeout_ms: int,
allow_authenticated: bool = True,
federation: bool = False,
) -> None:
"""Responds to requests for local media, if exists, or returns 404.
@@ -443,7 +442,6 @@ class MediaRepository:
the filename in the Content-Disposition header of the response.
max_timeout_ms: the maximum number of milliseconds to wait for the
media to be uploaded.
allow_authenticated: whether media marked as authenticated may be served to this request
federation: whether the local media being fetched is for a federation request
Returns:
@@ -453,10 +451,6 @@ class MediaRepository:
if not media_info:
return
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
if media_info.authenticated:
raise NotFoundError()
self.mark_recently_accessed(None, media_id)
media_type = media_info.media_type
@@ -487,7 +481,6 @@ class MediaRepository:
max_timeout_ms: int,
ip_address: str,
use_federation_endpoint: bool,
allow_authenticated: bool = True,
) -> None:
"""Respond to requests for remote media.
@@ -502,8 +495,6 @@ class MediaRepository:
ip_address: the IP address of the requester
use_federation_endpoint: whether to request the remote media over the new
federation `/download` endpoint
allow_authenticated: whether media marked as authenticated may be served to this
request
Returns:
Resolves once a response has successfully been written to request
@@ -535,7 +526,6 @@ class MediaRepository:
self.download_ratelimiter,
ip_address,
use_federation_endpoint,
allow_authenticated,
)
# We deliberately stream the file outside the lock
@@ -552,13 +542,7 @@ class MediaRepository:
respond_404(request)
async def get_remote_media_info(
self,
server_name: str,
media_id: str,
max_timeout_ms: int,
ip_address: str,
use_federation: bool,
allow_authenticated: bool,
self, server_name: str, media_id: str, max_timeout_ms: int, ip_address: str
) -> RemoteMedia:
"""Gets the media info associated with the remote file, downloading
if necessary.
@@ -569,10 +553,6 @@ class MediaRepository:
max_timeout_ms: the maximum number of milliseconds to wait for the
media to be uploaded.
ip_address: IP address of the requester
use_federation: if a download is necessary, whether to request the remote file
over the federation `/download` endpoint
allow_authenticated: whether media marked as authenticated may be served to this
request
Returns:
The media info of the file
@@ -593,8 +573,7 @@ class MediaRepository:
max_timeout_ms,
self.download_ratelimiter,
ip_address,
use_federation,
allow_authenticated,
False,
)
# Ensure we actually use the responder so that it releases resources
@@ -612,7 +591,6 @@ class MediaRepository:
download_ratelimiter: Ratelimiter,
ip_address: str,
use_federation_endpoint: bool,
allow_authenticated: bool,
) -> Tuple[Optional[Responder], RemoteMedia]:
"""Looks for media in local cache, if not there then attempt to
download from remote server.
@@ -634,11 +612,6 @@ class MediaRepository:
"""
media_info = await self.store.get_cached_remote_media(server_name, media_id)
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
# if it isn't cached then don't fetch it or if it's authenticated then don't serve it
if not media_info or media_info.authenticated:
raise NotFoundError()
# file_id is the ID we use to track the file locally. If we've already
# seen the file then reuse the existing ID, otherwise generate a new
# one.
@@ -812,11 +785,6 @@ class MediaRepository:
logger.info("Stored remote media in file %r", fname)
if self.hs.config.media.enable_authenticated_media:
authenticated = True
else:
authenticated = False
return RemoteMedia(
media_origin=server_name,
media_id=media_id,
@@ -827,7 +795,6 @@ class MediaRepository:
filesystem_id=file_id,
last_access_ts=time_now_ms,
quarantined_by=None,
authenticated=authenticated,
)
async def _federation_download_remote_file(
@@ -941,11 +908,6 @@ class MediaRepository:
logger.debug("Stored remote media in file %r", fname)
if self.hs.config.media.enable_authenticated_media:
authenticated = True
else:
authenticated = False
return RemoteMedia(
media_origin=server_name,
media_id=media_id,
@@ -956,7 +918,6 @@ class MediaRepository:
filesystem_id=file_id,
last_access_ts=time_now_ms,
quarantined_by=None,
authenticated=authenticated,
)
def _get_thumbnail_requirements(
@@ -1062,12 +1023,7 @@ class MediaRepository:
t_len = os.path.getsize(output_path)
await self.store.store_local_thumbnail(
media_id,
t_width,
t_height,
t_type,
t_method,
t_len,
media_id, t_width, t_height, t_type, t_method, t_len
)
return output_path
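
The allow_authenticated parameter threaded through this file reduces to one rule: when enable_authenticated_media is on, media flagged as authenticated must never be served through an endpoint that has not opted in. As a standalone predicate (illustrative only):

def may_serve_media(
    enable_authenticated_media: bool,
    media_is_authenticated: bool,
    request_allows_authenticated: bool,
) -> bool:
    # authenticated media is only released where the endpoint opted in
    if enable_authenticated_media and not request_allows_authenticated:
        return not media_is_authenticated
    return True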


@@ -49,11 +49,15 @@ from zope.interface import implementer
from twisted.internet import interfaces
from twisted.internet.defer import Deferred
from twisted.internet.interfaces import IConsumer
from twisted.protocols.basic import FileSender
from synapse.api.errors import NotFoundError
from synapse.logging.context import defer_to_thread, run_in_background
from synapse.logging.context import (
defer_to_thread,
make_deferred_yieldable,
run_in_background,
)
from synapse.logging.opentracing import start_active_span, trace, trace_with_opname
from synapse.media._base import ThreadedFileSender
from synapse.util import Clock
from synapse.util.file_consumer import BackgroundFileConsumer
@@ -209,7 +213,7 @@ class MediaStorage:
local_path = os.path.join(self.local_media_directory, path)
if os.path.exists(local_path):
logger.debug("responding with local file %s", local_path)
return FileResponder(self.hs, open(local_path, "rb"))
return FileResponder(open(local_path, "rb"))
logger.debug("local file %s did not exist", local_path)
for provider in self.storage_providers:
@@ -332,12 +336,13 @@ class FileResponder(Responder):
is closed when finished streaming.
"""
def __init__(self, hs: "HomeServer", open_file: BinaryIO):
self.hs = hs
def __init__(self, open_file: IO):
self.open_file = open_file
def write_to_consumer(self, consumer: IConsumer) -> Deferred:
return ThreadedFileSender(self.hs).beginFileTransfer(self.open_file, consumer)
return make_deferred_yieldable(
FileSender().beginFileTransfer(self.open_file, consumer)
)
def __exit__(
self,


@@ -145,7 +145,6 @@ class FileStorageProviderBackend(StorageProvider):
def __init__(self, hs: "HomeServer", config: str):
self.hs = hs
self.reactor = hs.get_reactor()
self.cache_directory = hs.config.media.media_store_path
self.base_directory = config
@@ -166,7 +165,7 @@ class FileStorageProviderBackend(StorageProvider):
shutil_copyfile: Callable[[str, str], str] = shutil.copyfile
with start_active_span("shutil_copyfile"):
await defer_to_thread(
self.reactor,
self.hs.get_reactor(),
shutil_copyfile,
primary_fname,
backup_fname,
@@ -178,7 +177,7 @@ class FileStorageProviderBackend(StorageProvider):
backup_fname = os.path.join(self.base_directory, path)
if os.path.isfile(backup_fname):
return FileResponder(self.hs, open(backup_fname, "rb"))
return FileResponder(open(backup_fname, "rb"))
return None
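
store_file above pushes shutil.copyfile onto the reactor threadpool so a large copy cannot block the event loop. The plain-Twisted equivalent of that pattern, without Synapse's defer_to_thread wrapper:

import shutil

from twisted.internet.threads import deferToThread

def copy_off_reactor(primary_fname: str, backup_fname: str):
    # the blocking copy runs on the reactor's threadpool; a Deferred fires
    # with the destination path once it completes
    return deferToThread(shutil.copyfile, primary_fname, backup_fname)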


@@ -26,7 +26,7 @@ from typing import TYPE_CHECKING, List, Optional, Tuple, Type
from PIL import Image
from synapse.api.errors import Codes, NotFoundError, SynapseError, cs_error
from synapse.api.errors import Codes, SynapseError, cs_error
from synapse.config.repository import THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP
from synapse.http.server import respond_with_json
from synapse.http.site import SynapseRequest
@@ -36,11 +36,9 @@ from synapse.media._base import (
ThumbnailInfo,
respond_404,
respond_with_file,
respond_with_multipart_responder,
respond_with_responder,
)
from synapse.media.media_storage import FileResponder, MediaStorage
from synapse.storage.databases.main.media_repository import LocalMedia
from synapse.media.media_storage import MediaStorage
if TYPE_CHECKING:
from synapse.media.media_repository import MediaRepository
@@ -259,7 +257,6 @@ class ThumbnailProvider:
media_storage: MediaStorage,
):
self.hs = hs
self.reactor = hs.get_reactor()
self.media_repo = media_repo
self.media_storage = media_storage
self.store = hs.get_datastores().main
@@ -274,8 +271,6 @@ class ThumbnailProvider:
method: str,
m_type: str,
max_timeout_ms: int,
for_federation: bool,
allow_authenticated: bool = True,
) -> None:
media_info = await self.media_repo.get_local_media_info(
request, media_id, max_timeout_ms
@@ -283,12 +278,6 @@ class ThumbnailProvider:
if not media_info:
return
# if the media the thumbnail is generated from is authenticated, don't serve the
# thumbnail over an unauthenticated endpoint
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
if media_info.authenticated:
raise NotFoundError()
thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
await self._select_and_respond_with_thumbnail(
request,
@@ -301,8 +290,6 @@ class ThumbnailProvider:
media_id,
url_cache=bool(media_info.url_cache),
server_name=None,
for_federation=for_federation,
media_info=media_info,
)
async def select_or_generate_local_thumbnail(
@@ -314,21 +301,14 @@ class ThumbnailProvider:
desired_method: str,
desired_type: str,
max_timeout_ms: int,
for_federation: bool,
allow_authenticated: bool = True,
) -> None:
media_info = await self.media_repo.get_local_media_info(
request, media_id, max_timeout_ms
)
if not media_info:
return
# if the media the thumbnail is generated from is authenticated, don't serve the
# thumbnail over an unauthenticated endpoint
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
if media_info.authenticated:
raise NotFoundError()
thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
for info in thumbnail_infos:
t_w = info.width == desired_width
@@ -346,16 +326,10 @@ class ThumbnailProvider:
responder = await self.media_storage.fetch_media(file_info)
if responder:
if for_federation:
await respond_with_multipart_responder(
self.hs.get_clock(), request, responder, media_info
)
return
else:
await respond_with_responder(
request, responder, info.type, info.length
)
return
await respond_with_responder(
request, responder, info.type, info.length
)
return
logger.debug("We don't have a thumbnail of that size. Generating")
@@ -370,15 +344,7 @@ class ThumbnailProvider:
)
if file_path:
if for_federation:
await respond_with_multipart_responder(
self.hs.get_clock(),
request,
FileResponder(self.hs, open(file_path, "rb")),
media_info,
)
else:
await respond_with_file(self.hs, request, desired_type, file_path)
await respond_with_file(request, desired_type, file_path)
else:
logger.warning("Failed to generate thumbnail")
raise SynapseError(400, "Failed to generate thumbnail.")
@@ -394,28 +360,14 @@ class ThumbnailProvider:
desired_type: str,
max_timeout_ms: int,
ip_address: str,
use_federation: bool,
allow_authenticated: bool = True,
) -> None:
media_info = await self.media_repo.get_remote_media_info(
server_name,
media_id,
max_timeout_ms,
ip_address,
use_federation,
allow_authenticated,
server_name, media_id, max_timeout_ms, ip_address
)
if not media_info:
respond_404(request)
return
# if the media the thumbnail is generated from is authenticated, don't serve the
# thumbnail over an unauthenticated endpoint
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
if media_info.authenticated:
respond_404(request)
return
thumbnail_infos = await self.store.get_remote_media_thumbnails(
server_name, media_id
)
@@ -456,7 +408,7 @@ class ThumbnailProvider:
)
if file_path:
await respond_with_file(self.hs, request, desired_type, file_path)
await respond_with_file(request, desired_type, file_path)
else:
logger.warning("Failed to generate thumbnail")
raise SynapseError(400, "Failed to generate thumbnail.")
@@ -472,29 +424,16 @@ class ThumbnailProvider:
m_type: str,
max_timeout_ms: int,
ip_address: str,
use_federation: bool,
allow_authenticated: bool = True,
) -> None:
# TODO: Don't download the whole remote file
# We should proxy the thumbnail from the remote server instead of
# downloading the remote file and generating our own thumbnails.
media_info = await self.media_repo.get_remote_media_info(
server_name,
media_id,
max_timeout_ms,
ip_address,
use_federation,
allow_authenticated,
server_name, media_id, max_timeout_ms, ip_address
)
if not media_info:
return
# if the media the thumbnail is generated from is authenticated, don't serve the
# thumbnail over an unauthenticated endpoint
if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
if media_info.authenticated:
raise NotFoundError()
thumbnail_infos = await self.store.get_remote_media_thumbnails(
server_name, media_id
)
@@ -509,7 +448,6 @@ class ThumbnailProvider:
media_info.filesystem_id,
url_cache=False,
server_name=server_name,
for_federation=False,
)
async def _select_and_respond_with_thumbnail(
@@ -523,8 +461,6 @@ class ThumbnailProvider:
media_id: str,
file_id: str,
url_cache: bool,
for_federation: bool,
media_info: Optional[LocalMedia] = None,
server_name: Optional[str] = None,
) -> None:
"""
@@ -540,8 +476,6 @@ class ThumbnailProvider:
file_id: The ID of the media that a thumbnail is being requested for.
url_cache: True if this is from a URL cache.
server_name: The server name, if this is a remote thumbnail.
for_federation: whether the request is from the federation /thumbnail request
media_info: metadata about the media being requested.
"""
logger.debug(
"_select_and_respond_with_thumbnail: media_id=%s desired=%sx%s (%s) thumbnail_infos=%s",
@@ -577,20 +511,13 @@ class ThumbnailProvider:
responder = await self.media_storage.fetch_media(file_info)
if responder:
if for_federation:
assert media_info is not None
await respond_with_multipart_responder(
self.hs.get_clock(), request, responder, media_info
)
return
else:
await respond_with_responder(
request,
responder,
file_info.thumbnail.type,
file_info.thumbnail.length,
)
return
await respond_with_responder(
request,
responder,
file_info.thumbnail.type,
file_info.thumbnail.length,
)
return
# If we can't find the thumbnail we regenerate it. This can happen
# if e.g. we've deleted the thumbnails but still have the original
@@ -631,18 +558,12 @@ class ThumbnailProvider:
)
responder = await self.media_storage.fetch_media(file_info)
if for_federation:
assert media_info is not None
await respond_with_multipart_responder(
self.hs.get_clock(), request, responder, media_info
)
else:
await respond_with_responder(
request,
responder,
file_info.thumbnail.type,
file_info.thumbnail.length,
)
await respond_with_responder(
request,
responder,
file_info.thumbnail.type,
file_info.thumbnail.length,
)
else:
# This might be because:
# 1. We can't create thumbnails for the given media (corrupted or


@@ -773,7 +773,6 @@ class Notifier:
stream_token = await self.event_sources.bound_future_token(stream_token)
start = self.clock.time_msec()
logged = False
while True:
current_token = self.event_sources.get_current_token()
if stream_token.is_before_or_eq(current_token):
@@ -784,13 +783,11 @@ class Notifier:
if now - start > 10_000:
return False
if not logged:
logger.info(
"Waiting for current token to reach %s; currently at %s",
stream_token,
current_token,
)
logged = True
logger.info(
"Waiting for current token to reach %s; currently at %s",
stream_token,
current_token,
)
# TODO: be better
await self.clock.sleep(0.5)
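
The loop above is a bounded poll: bail out after 10 seconds, sleep 500ms between checks, and (on one side of the hunk) log only on the first iteration instead of every half-second. Its skeleton, assuming the Clock and token APIs shown in the diff:

import logging

logger = logging.getLogger(__name__)

async def wait_for_token(get_current, target, clock, timeout_ms: int = 10_000) -> bool:
    start = clock.time_msec()
    logged = False
    while not target.is_before_or_eq(get_current()):
        if clock.time_msec() - start > timeout_ms:
            return False  # the stream never caught up in time
        if not logged:
            logger.info("Waiting for current token to reach %s", target)
            logged = True  # log once, not once per poll
        await clock.sleep(0.5)
    return True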


@@ -18,8 +18,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
import logging
from typing import TYPE_CHECKING, Callable, Dict, Iterable, List, Optional, Tuple
from typing import TYPE_CHECKING, Callable
from synapse.http.server import HttpServer, JsonResource
from synapse.rest import admin
@@ -68,64 +67,11 @@ from synapse.rest.client import (
voip,
)
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from synapse.server import HomeServer
RegisterServletsFunc = Callable[["HomeServer", HttpServer], None]
CLIENT_SERVLET_FUNCTIONS: Tuple[RegisterServletsFunc, ...] = (
versions.register_servlets,
initial_sync.register_servlets,
room.register_deprecated_servlets,
events.register_servlets,
room.register_servlets,
login.register_servlets,
profile.register_servlets,
presence.register_servlets,
directory.register_servlets,
voip.register_servlets,
pusher.register_servlets,
push_rule.register_servlets,
logout.register_servlets,
sync.register_servlets,
filter.register_servlets,
account.register_servlets,
register.register_servlets,
auth.register_servlets,
receipts.register_servlets,
read_marker.register_servlets,
room_keys.register_servlets,
keys.register_servlets,
tokenrefresh.register_servlets,
tags.register_servlets,
account_data.register_servlets,
reporting.register_servlets,
openid.register_servlets,
notifications.register_servlets,
devices.register_servlets,
thirdparty.register_servlets,
sendtodevice.register_servlets,
user_directory.register_servlets,
room_upgrade_rest_servlet.register_servlets,
capabilities.register_servlets,
account_validity.register_servlets,
relations.register_servlets,
password_policy.register_servlets,
knock.register_servlets,
appservice_ping.register_servlets,
admin.register_servlets_for_client_rest_resource,
mutual_rooms.register_servlets,
login_token_request.register_servlets,
rendezvous.register_servlets,
auth_issuer.register_servlets,
)
SERVLET_GROUPS: Dict[str, Iterable[RegisterServletsFunc]] = {
"client": CLIENT_SERVLET_FUNCTIONS,
}
class ClientRestResource(JsonResource):
"""Matrix Client API REST resource.
@@ -137,56 +83,80 @@ class ClientRestResource(JsonResource):
* etc
"""
def __init__(self, hs: "HomeServer", servlet_groups: Optional[List[str]] = None):
def __init__(self, hs: "HomeServer"):
JsonResource.__init__(self, hs, canonical_json=False)
if hs.config.media.can_load_media_repo:
# This import is here to prevent a circular import failure
from synapse.rest.client import media
SERVLET_GROUPS["media"] = (media.register_servlets,)
self.register_servlets(self, hs, servlet_groups)
self.register_servlets(self, hs)
@staticmethod
def register_servlets(
client_resource: HttpServer,
hs: "HomeServer",
servlet_groups: Optional[Iterable[str]] = None,
) -> None:
def register_servlets(client_resource: HttpServer, hs: "HomeServer") -> None:
# Some servlets are only registered on the main process (and not worker
# processes).
is_main_process = hs.config.worker.worker_app is None
if not servlet_groups:
servlet_groups = SERVLET_GROUPS.keys()
versions.register_servlets(hs, client_resource)
for servlet_group in servlet_groups:
# Fail on unknown servlet groups.
if servlet_group not in SERVLET_GROUPS:
if servlet_group == "media":
logger.warn(
"media.can_load_media_repo needs to be configured for the media servlet to be available"
)
raise RuntimeError(
f"Attempting to register unknown client servlet: '{servlet_group}'"
)
# Deprecated in r0
initial_sync.register_servlets(hs, client_resource)
room.register_deprecated_servlets(hs, client_resource)
for servletfunc in SERVLET_GROUPS[servlet_group]:
if not is_main_process and servletfunc in [
pusher.register_servlets,
logout.register_servlets,
auth.register_servlets,
tokenrefresh.register_servlets,
reporting.register_servlets,
openid.register_servlets,
thirdparty.register_servlets,
room_upgrade_rest_servlet.register_servlets,
account_validity.register_servlets,
admin.register_servlets_for_client_rest_resource,
mutual_rooms.register_servlets,
login_token_request.register_servlets,
rendezvous.register_servlets,
auth_issuer.register_servlets,
]:
continue
# Partially deprecated in r0
events.register_servlets(hs, client_resource)
servletfunc(hs, client_resource)
room.register_servlets(hs, client_resource)
login.register_servlets(hs, client_resource)
profile.register_servlets(hs, client_resource)
presence.register_servlets(hs, client_resource)
directory.register_servlets(hs, client_resource)
voip.register_servlets(hs, client_resource)
if is_main_process:
pusher.register_servlets(hs, client_resource)
push_rule.register_servlets(hs, client_resource)
if is_main_process:
logout.register_servlets(hs, client_resource)
sync.register_servlets(hs, client_resource)
filter.register_servlets(hs, client_resource)
account.register_servlets(hs, client_resource)
register.register_servlets(hs, client_resource)
if is_main_process:
auth.register_servlets(hs, client_resource)
receipts.register_servlets(hs, client_resource)
read_marker.register_servlets(hs, client_resource)
room_keys.register_servlets(hs, client_resource)
keys.register_servlets(hs, client_resource)
if is_main_process:
tokenrefresh.register_servlets(hs, client_resource)
tags.register_servlets(hs, client_resource)
account_data.register_servlets(hs, client_resource)
if is_main_process:
reporting.register_servlets(hs, client_resource)
openid.register_servlets(hs, client_resource)
notifications.register_servlets(hs, client_resource)
devices.register_servlets(hs, client_resource)
if is_main_process:
thirdparty.register_servlets(hs, client_resource)
sendtodevice.register_servlets(hs, client_resource)
user_directory.register_servlets(hs, client_resource)
if is_main_process:
room_upgrade_rest_servlet.register_servlets(hs, client_resource)
capabilities.register_servlets(hs, client_resource)
if is_main_process:
account_validity.register_servlets(hs, client_resource)
relations.register_servlets(hs, client_resource)
password_policy.register_servlets(hs, client_resource)
knock.register_servlets(hs, client_resource)
appservice_ping.register_servlets(hs, client_resource)
if hs.config.server.enable_media_repo:
from synapse.rest.client import media
media.register_servlets(hs, client_resource)
# moving to /_synapse/admin
if is_main_process:
admin.register_servlets_for_client_rest_resource(hs, client_resource)
# unstable
if is_main_process:
mutual_rooms.register_servlets(hs, client_resource)
login_token_request.register_servlets(hs, client_resource)
rendezvous.register_servlets(hs, client_resource)
auth_issuer.register_servlets(hs, client_resource)
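
Both shapes of this file answer the same question: every servlet registers against the resource, but a subset (pushers, logout, auth, admin, and so on) must exist only on the main process, since workers cannot serve them. One side writes that as inline is_main_process guards, the other as a declarative tuple filtered at registration time, roughly:

from typing import Callable, Iterable, Set

def register_all(
    resource,
    functions: Iterable[Callable],
    main_process_only: Set[Callable],
    is_main_process: bool,
) -> None:
    # workers skip servlets that may only live on the main process
    for fn in functions:
        if not is_main_process and fn in main_process_only:
            continue
        fn(resource)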


@@ -31,9 +31,7 @@ from synapse.rest.admin import admin_patterns, assert_requester_is_admin
from synapse.types import JsonDict, UserID
if TYPE_CHECKING:
from typing_extensions import assert_never
from synapse.server import HomeServer, HomeServerConfig
from synapse.server import HomeServer
class ExperimentalFeature(str, Enum):
@@ -41,16 +39,8 @@ class ExperimentalFeature(str, Enum):
Currently supported per-user features
"""
MSC3026 = "msc3026"
MSC3881 = "msc3881"
MSC3575 = "msc3575"
def is_globally_enabled(self, config: "HomeServerConfig") -> bool:
if self is ExperimentalFeature.MSC3881:
return config.experimental.msc3881_enabled
if self is ExperimentalFeature.MSC3575:
return config.experimental.msc3575_enabled
assert_never(self)
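
is_globally_enabled leans on assert_never for exhaustiveness: if a member is ever added to the enum without a matching branch, mypy flags the call site (and it raises at runtime). The pattern in miniature:

from enum import Enum

from typing_extensions import assert_never

class Feature(Enum):
    A = "a"
    B = "b"

def enabled(f: Feature) -> bool:
    if f is Feature.A:
        return True
    if f is Feature.B:
        return False
    assert_never(f)  # mypy errors here if any Feature member is unhandled
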
class ExperimentalFeaturesRestServlet(RestServlet):


@@ -256,15 +256,9 @@ class KeyChangesServlet(RestServlet):
user_id = requester.user.to_string()
device_list_updates = await self.device_handler.get_user_ids_changed(
user_id, from_token
)
results = await self.device_handler.get_user_ids_changed(user_id, from_token)
response: JsonDict = {}
response["changed"] = list(device_list_updates.changed)
response["left"] = list(device_list_updates.left)
return 200, response
return 200, results
class OneTimeKeyServlet(RestServlet):


@@ -47,7 +47,7 @@ from synapse.util.stringutils import parse_and_validate_server_name
logger = logging.getLogger(__name__)
class PreviewURLServlet(RestServlet):
class UnstablePreviewURLServlet(RestServlet):
"""
Same as `GET /_matrix/media/r0/preview_url`, this endpoint provides a generic preview API
for URLs which outputs Open Graph (https://ogp.me/) responses (with some Matrix
@@ -65,7 +65,9 @@ class PreviewURLServlet(RestServlet):
* Matrix cannot be used to distribute the metadata between homeservers.
"""
PATTERNS = [re.compile(r"^/_matrix/client/v1/media/preview_url$")]
PATTERNS = [
re.compile(r"^/_matrix/client/unstable/org.matrix.msc3916/media/preview_url$")
]
def __init__(
self,
@@ -93,8 +95,10 @@ class PreviewURLServlet(RestServlet):
respond_with_json_bytes(request, 200, og, send_cors=True)
class MediaConfigResource(RestServlet):
PATTERNS = [re.compile(r"^/_matrix/client/v1/media/config$")]
class UnstableMediaConfigResource(RestServlet):
PATTERNS = [
re.compile(r"^/_matrix/client/unstable/org.matrix.msc3916/media/config$")
]
def __init__(self, hs: "HomeServer"):
super().__init__()
@@ -108,10 +112,10 @@ class MediaConfigResource(RestServlet):
respond_with_json(request, 200, self.limits_dict, send_cors=True)
class ThumbnailResource(RestServlet):
class UnstableThumbnailResource(RestServlet):
PATTERNS = [
re.compile(
"/_matrix/client/v1/media/thumbnail/(?P<server_name>[^/]*)/(?P<media_id>[^/]*)$"
"/_matrix/client/unstable/org.matrix.msc3916/media/thumbnail/(?P<server_name>[^/]*)/(?P<media_id>[^/]*)$"
)
]
@@ -155,25 +159,11 @@ class ThumbnailResource(RestServlet):
if self._is_mine_server_name(server_name):
if self.dynamic_thumbnails:
await self.thumbnailer.select_or_generate_local_thumbnail(
request,
media_id,
width,
height,
method,
m_type,
max_timeout_ms,
False,
request, media_id, width, height, method, m_type, max_timeout_ms
)
else:
await self.thumbnailer.respond_local_thumbnail(
request,
media_id,
width,
height,
method,
m_type,
max_timeout_ms,
False,
request, media_id, width, height, method, m_type, max_timeout_ms
)
self.media_repo.mark_recently_accessed(None, media_id)
else:
@@ -201,7 +191,6 @@ class ThumbnailResource(RestServlet):
m_type,
max_timeout_ms,
ip_address,
True,
)
self.media_repo.mark_recently_accessed(server_name, media_id)
@@ -271,9 +260,11 @@ class DownloadResource(RestServlet):
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
media_repo = hs.get_media_repository()
if hs.config.media.url_preview_enabled:
PreviewURLServlet(hs, media_repo, media_repo.media_storage).register(
UnstablePreviewURLServlet(hs, media_repo, media_repo.media_storage).register(
http_server
)
MediaConfigResource(hs).register(http_server)
ThumbnailResource(hs, media_repo, media_repo.media_storage).register(http_server)
UnstableMediaConfigResource(hs).register(http_server)
UnstableThumbnailResource(hs, media_repo, media_repo.media_storage).register(
http_server
)
DownloadResource(hs, media_repo).register(http_server)

Some files were not shown because too many files have changed in this diff.