Compare commits


2 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Andrew Morgan | 3125e23a8e | newsfile | 2025-09-11 16:35:00 +01:00 |
| anoa's Codex Agent | ed3218e164 | Update method of locating package files. `pkg_resources` has been deprecated and will be removed from setuptools on 2025-11-09 or later. | 2025-09-11 16:33:04 +01:00 |
368 changed files with 993 additions and 4043 deletions

View File

@@ -120,7 +120,7 @@ jobs:
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Install Cosign
uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
uses: sigstore/cosign-installer@d58896d6a1865668819e1d91763c7751a165e159 # v3.9.2
- name: Calculate docker image tag
uses: docker/metadata-action@c1e51972afc2121e065aed6d45c65596fe445f3f # v5.8.0

View File

@@ -1,126 +1,3 @@
# Synapse 1.139.2 (2025-10-07)
## Bugfixes
- Fix a bug introduced in 1.139.1 where a client could receive an Internal Server Error if they set `device_keys: null` in the request to [`POST /_matrix/client/v3/keys/upload`](https://spec.matrix.org/v1.16/client-server-api/#post_matrixclientv3keysupload). ([\#19023](https://github.com/element-hq/synapse/issues/19023))
# Synapse 1.139.1 (2025-10-07)
## Security Fixes
- Fix [CVE-2025-61672](https://www.cve.org/CVERecord?id=CVE-2025-61672) / [GHSA-fh66-fcv5-jjfr](https://github.com/element-hq/synapse/security/advisories/GHSA-fh66-fcv5-jjfr). Lack of validation for device keys in Synapse before 1.139.1 allows an attacker registered on the victim homeserver to degrade federation functionality, unpredictably breaking outbound federation to other homeservers. ([\#17097](https://github.com/element-hq/synapse/issues/17097))
## Deprecations and Removals
- Drop support for unstable field names from the long-accepted [MSC2732](https://github.com/matrix-org/matrix-spec-proposals/pull/2732) (Olm fallback keys) proposal. This change allows unit tests to pass following the security patch above. ([\#18996](https://github.com/element-hq/synapse/issues/18996))
# Synapse 1.139.0 (2025-09-30)
### `/register` requests from old application service implementations may break when using MAS
If you are using Matrix Authentication Service (MAS), as of this release any
Application Services that do not set `inhibit_login=true` when calling `POST
/_matrix/client/v3/register` will receive the error
`IO.ELEMENT.MSC4190.M_APPSERVICE_LOGIN_UNSUPPORTED` in response. Please see [the
upgrade
notes](https://element-hq.github.io/synapse/develop/upgrade.html#register-requests-from-old-application-service-implementations-may-break-when-using-mas)
for more information.
No significant changes since 1.139.0rc3.
# Synapse 1.139.0rc3 (2025-09-25)
## Bugfixes
- Fix a bug introduced in 1.139.0rc1 where `run_coroutine_in_background(...)` incorrectly handled logcontexts, resulting in partially broken logging. ([\#18964](https://github.com/element-hq/synapse/issues/18964))
# Synapse 1.139.0rc2 (2025-09-23)
## Internal Changes
- Drop support for Ubuntu 24.10 Oracular Oriole, and add support for Ubuntu 25.04 Plucky Puffin. ([\#18962](https://github.com/element-hq/synapse/issues/18962))
# Synapse 1.139.0rc1 (2025-09-23)
## Features
- Add experimental support for [MSC4308: Thread Subscriptions extension to Sliding Sync](https://github.com/matrix-org/matrix-spec-proposals/pull/4308) when [MSC4306: Thread Subscriptions](https://github.com/matrix-org/matrix-spec-proposals/pull/4306) and [MSC4186: Simplified Sliding Sync](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) are enabled. ([\#18695](https://github.com/element-hq/synapse/issues/18695))
- Update push rules for experimental [MSC4306: Thread Subscriptions](https://github.com/matrix-org/matrix-doc/issues/4306) to follow a newer draft. ([\#18846](https://github.com/element-hq/synapse/issues/18846))
- Add `get_media_upload_limits_for_user` and `on_media_upload_limit_exceeded` module API callbacks to the media repository. ([\#18848](https://github.com/element-hq/synapse/issues/18848))
- Support [MSC4169](https://github.com/matrix-org/matrix-spec-proposals/pull/4169) for backwards-compatible redaction sending using the `/send` endpoint. Contributed by @SpiritCroc @ Beeper. ([\#18898](https://github.com/element-hq/synapse/issues/18898))
- Add an in-memory cache to `_get_e2e_cross_signing_signatures_for_devices` to reduce DB load. ([\#18899](https://github.com/element-hq/synapse/issues/18899))
- Update [MSC4190](https://github.com/matrix-org/matrix-spec-proposals/pull/4190) support to return correct errors and allow appservices to reset cross-signing keys without user-interactive authentication. Contributed by @tulir @ Beeper. ([\#18946](https://github.com/element-hq/synapse/issues/18946))
## Bugfixes
- Ensure all PDUs sent via `/send` pass canonical JSON checks. ([\#18641](https://github.com/element-hq/synapse/issues/18641))
- Fix bug where we did not send invite revocations over federation. ([\#18823](https://github.com/element-hq/synapse/issues/18823))
- Fix prefixed support for [MSC4133](https://github.com/matrix-org/matrix-spec-proposals/pull/4133). ([\#18875](https://github.com/element-hq/synapse/issues/18875))
- Fix open redirect in legacy SSO flow with the `idp` query parameter. ([\#18909](https://github.com/element-hq/synapse/issues/18909))
- Fix a performance regression related to the experimental Delayed Events ([MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140)) feature. ([\#18926](https://github.com/element-hq/synapse/issues/18926))
## Updates to the Docker image
- Suppress "Applying schema" log noise bulk when `SYNAPSE_LOG_TESTING` is set. ([\#18878](https://github.com/element-hq/synapse/issues/18878))
## Improved Documentation
- Clarify Python dependency constraints in our deprecation policy. ([\#18856](https://github.com/element-hq/synapse/issues/18856))
- Clarify necessary `jwt_config` parameter in OIDC documentation for authentik. Contributed by @maxkratz. ([\#18931](https://github.com/element-hq/synapse/issues/18931))
## Deprecations and Removals
- Remove obsolete and experimental `/sync/e2ee` endpoint. ([\#18583](https://github.com/element-hq/synapse/issues/18583))
## Internal Changes
- Fix `LaterGauge` metrics to collect from all servers. ([\#18791](https://github.com/element-hq/synapse/issues/18791))
- Configure Synapse to run [MSC4306: Thread Subscriptions](https://github.com/matrix-org/matrix-spec-proposals/pull/4306) Complement tests. ([\#18819](https://github.com/element-hq/synapse/issues/18819))
- Remove `sentinel` logcontext usage where we log in `setup`, `start` and `exit`. ([\#18870](https://github.com/element-hq/synapse/issues/18870))
- Use the `Enum`'s value for the dictionary key when responding to an admin request for experimental features. ([\#18874](https://github.com/element-hq/synapse/issues/18874))
- Start background tasks after we fork the process (daemonize). ([\#18886](https://github.com/element-hq/synapse/issues/18886))
- Better explain how we manage the logcontext in `run_in_background(...)` and `run_as_background_process(...)`. ([\#18900](https://github.com/element-hq/synapse/issues/18900), [\#18906](https://github.com/element-hq/synapse/issues/18906))
- Remove `sentinel` logcontext usage in `Clock` utilities like `looping_call` and `call_later`. ([\#18907](https://github.com/element-hq/synapse/issues/18907))
- Replace usages of the deprecated `pkg_resources` interface in preparation for setuptools dropping it soon. ([\#18910](https://github.com/element-hq/synapse/issues/18910))
- Split loading config from homeserver `setup`. ([\#18933](https://github.com/element-hq/synapse/issues/18933))
- Fix `run_in_background` not being awaited properly in some tests causing `LoggingContext` problems. ([\#18937](https://github.com/element-hq/synapse/issues/18937))
- Fix `run_as_background_process` not being awaited properly causing `LoggingContext` problems in experimental [MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140): Delayed events implementation. ([\#18938](https://github.com/element-hq/synapse/issues/18938))
- Introduce `Clock.call_when_running(...)` to wrap startup code in a logcontext, ensuring we can identify which server generated the logs. ([\#18944](https://github.com/element-hq/synapse/issues/18944))
- Introduce `Clock.add_system_event_trigger(...)` to wrap system event callback code in a logcontext, ensuring we can identify which server generated the logs. ([\#18945](https://github.com/element-hq/synapse/issues/18945))
### Updates to locked dependencies
* Bump actions/setup-go from 5.5.0 to 6.0.0. ([\#18891](https://github.com/element-hq/synapse/issues/18891))
* Bump actions/setup-python from 5.6.0 to 6.0.0. ([\#18890](https://github.com/element-hq/synapse/issues/18890))
* Bump authlib from 1.6.1 to 1.6.3. ([\#18921](https://github.com/element-hq/synapse/issues/18921))
* Bump jsonschema from 4.25.0 to 4.25.1. ([\#18897](https://github.com/element-hq/synapse/issues/18897))
* Bump log from 0.4.27 to 0.4.28. ([\#18892](https://github.com/element-hq/synapse/issues/18892))
* Bump phonenumbers from 9.0.12 to 9.0.13. ([\#18893](https://github.com/element-hq/synapse/issues/18893))
* Bump pydantic from 2.11.7 to 2.11.9. ([\#18922](https://github.com/element-hq/synapse/issues/18922))
* Bump serde from 1.0.219 to 1.0.223. ([\#18920](https://github.com/element-hq/synapse/issues/18920))
* Bump serde_json from 1.0.143 to 1.0.145. ([\#18919](https://github.com/element-hq/synapse/issues/18919))
* Bump sigstore/cosign-installer from 3.9.2 to 3.10.0. ([\#18917](https://github.com/element-hq/synapse/issues/18917))
* Bump towncrier from 24.8.0 to 25.8.0. ([\#18894](https://github.com/element-hq/synapse/issues/18894))
* Bump types-psycopg2 from 2.9.21.20250809 to 2.9.21.20250915. ([\#18918](https://github.com/element-hq/synapse/issues/18918))
* Bump types-requests from 2.32.4.20250611 to 2.32.4.20250809. ([\#18895](https://github.com/element-hq/synapse/issues/18895))
* Bump types-setuptools from 80.9.0.20250809 to 80.9.0.20250822. ([\#18924](https://github.com/element-hq/synapse/issues/18924))
# Synapse 1.138.0 (2025-09-09)
No significant changes since 1.138.0rc1.

23
Cargo.lock generated
View File

@@ -1250,28 +1250,18 @@ dependencies = [
[[package]]
name = "serde"
version = "1.0.224"
version = "1.0.219"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6aaeb1e94f53b16384af593c71e20b095e958dab1d26939c1b70645c5cfbcc0b"
dependencies = [
"serde_core",
"serde_derive",
]
[[package]]
name = "serde_core"
version = "1.0.224"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32f39390fa6346e24defbcdd3d9544ba8a19985d0af74df8501fbfe9a64341ab"
checksum = "5f0e2c6ed6606019b4e29e69dbaba95b11854410e5347d525002456dbbb786b6"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.224"
version = "1.0.219"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "87ff78ab5e8561c9a675bfc1785cb07ae721f0ee53329a595cefd8c04c2ac4e0"
checksum = "5b0276cf7f2c73365f7157c8123c21cd9a50fbbd844757af28ca1f5925fc2a00"
dependencies = [
"proc-macro2",
"quote",
@@ -1280,15 +1270,14 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.145"
version = "1.0.143"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "402a6f66d8c709116cf22f558eab210f5a50187f702eb4d7e5ef38d9a7f1c79c"
checksum = "d401abef1d108fbd9cbaebc3e46611f4b1021f714a0597a71f41ee463f5f4a5a"
dependencies = [
"itoa",
"memchr",
"ryu",
"serde",
"serde_core",
]
[[package]]

View File

@@ -0,0 +1 @@
Remove obsolete and experimental `/sync/e2ee` endpoint.

1
changelog.d/18791.misc Normal file
View File

@@ -0,0 +1 @@
Fix `LaterGauge` metrics to collect from all servers.

1
changelog.d/18819.misc Normal file
View File

@@ -0,0 +1 @@
Configure Synapse to run MSC4306: Thread Subscriptions Complement tests.

1
changelog.d/18823.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix bug where we did not send invite revocations over federation.

View File

@@ -0,0 +1 @@
Update push rules for experimental [MSC4306: Thread Subscriptions](https://github.com/matrix-org/matrix-doc/issues/4306) to follow a newer draft.

1
changelog.d/18874.misc Normal file
View File

@@ -0,0 +1 @@
Use the `Enum`'s value for the dictionary key when responding to an admin request for experimental features.

1
changelog.d/18875.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix prefixed support for MSC4133.

1
changelog.d/18878.docker Normal file
View File

@@ -0,0 +1 @@
Suppress "Applying schema" log noise bulk when `SYNAPSE_LOG_TESTING` is set.

1
changelog.d/18886.misc Normal file
View File

@@ -0,0 +1 @@
Start background tasks after we fork the process (daemonize).

1
changelog.d/18900.misc Normal file
View File

@@ -0,0 +1 @@
Better explain how we manage the logcontext in `run_in_background(...)` and `run_as_background_process(...)`.

1
changelog.d/18910.misc Normal file
View File

@@ -0,0 +1 @@
Replace usages of the deprecated `pkg_resources` interface in preparation for setuptools dropping it soon.
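
As a rough sketch of what this migration looks like (the package and resource names below are placeholders, not Synapse's actual call sites), a deprecated `pkg_resources` lookup can usually be replaced with the standard-library `importlib.resources` API:

```python
from importlib import resources

# Deprecated style, scheduled for removal from setuptools:
#   import pkg_resources
#   path = pkg_resources.resource_filename("mypackage", "res/schema.sql")

# Standard-library replacement: resolve the packaged resource to a real
# filesystem path for the duration of the `with` block.
with resources.as_file(resources.files("mypackage") / "res/schema.sql") as path:
    schema_sql = path.read_text()
```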

36
debian/changelog vendored
View File

@@ -1,39 +1,3 @@
matrix-synapse-py3 (1.139.2) stable; urgency=medium
* New Synapse release 1.139.2.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Oct 2025 16:29:47 +0100
matrix-synapse-py3 (1.139.1) stable; urgency=medium
* New Synapse release 1.139.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Oct 2025 11:46:51 +0100
matrix-synapse-py3 (1.139.0) stable; urgency=medium
* New Synapse release 1.139.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 30 Sep 2025 11:58:55 +0100
matrix-synapse-py3 (1.139.0~rc3) stable; urgency=medium
* New Synapse release 1.139.0rc3.
-- Synapse Packaging team <packages@matrix.org> Thu, 25 Sep 2025 12:13:23 +0100
matrix-synapse-py3 (1.139.0~rc2) stable; urgency=medium
* New Synapse release 1.139.0rc2.
-- Synapse Packaging team <packages@matrix.org> Tue, 23 Sep 2025 15:31:42 +0100
matrix-synapse-py3 (1.139.0~rc1) stable; urgency=medium
* New Synapse release 1.139.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 23 Sep 2025 13:24:50 +0100
matrix-synapse-py3 (1.138.0) stable; urgency=medium
* New Synapse release 1.138.0.

View File

@@ -1,11 +1,13 @@
# Deprecation Policy
Deprecation Policy for Platform Dependencies
============================================
Synapse has a number of **platform dependencies** (Python, Rust, PostgreSQL, and SQLite)
and **application dependencies** (Python and Rust packages). This document outlines the
policy towards which versions we support, and when we drop support for versions in the
future.
Synapse has a number of platform dependencies, including Python, Rust,
PostgreSQL and SQLite. This document outlines the policy towards which versions
we support, and when we drop support for versions in the future.
## Platform Dependencies
Policy
------
Synapse follows the upstream support life cycles for Python and PostgreSQL,
i.e. when a version reaches End of Life Synapse will withdraw support for that
@@ -24,8 +26,8 @@ The oldest supported version of SQLite is the version
[provided](https://packages.debian.org/bullseye/libsqlite3-0) by
[Debian oldstable](https://wiki.debian.org/DebianOldStable).
### Context
Context
-------
It is important for system admins to have a clear understanding of the platform
requirements of Synapse and its deprecation policies so that they can
@@ -48,42 +50,4 @@ the ecosystem.
On a similar note, SQLite does not generally have a concept of "supported
release"; bugfixes are published for the latest minor release only. We chose to
track Debian's oldstable as this is relatively conservative, predictably updated
and is consistent with the `.deb` packages released by Matrix.org.
## Application dependencies
For application-level Python dependencies, we often specify loose version constraints
(ex. `>=X.Y.Z`) to be forwards compatible with any new versions. Upper bounds (`<A.B.C`)
are only added when necessary to prevent known incompatibilities.
When selecting a minimum version, while we are mindful of the impact on downstream
package maintainers, our primary focus is on the maintainability and progress of Synapse
itself.
For developers, a Python dependency version can be considered a "no-brainer" upgrade once it is
available in both the latest [Debian Stable](https://packages.debian.org/stable/) and
[Ubuntu LTS](https://launchpad.net/ubuntu) repositories. No need to burden yourself with
extra scrutiny or consideration at this point.
We aggressively update Rust dependencies. Since these are statically linked and managed
entirely by `cargo` during build, they pose no ongoing maintenance burden on others.
This allows us to freely upgrade to leverage the latest ecosystem advancements, assuming
they don't have their own system-level dependencies.
### Context
Because Python dependencies can easily be managed in a virtual environment, we are less
concerned about the criteria for selecting minimum versions. The only thing of concern
is making sure we're not making it unnecessarily difficult for downstream package
maintainers. Generally, this just means avoiding the bleeding edge for a few months.
The situation for Rust dependencies is fundamentally different. For packagers, the
concerns around Python dependency versions do not apply. The `cargo` tool handles
downloading and building all libraries to satisfy dependencies, and these libraries are
statically linked into the final binary. This means that from a packager's perspective,
the Rust dependency versions are an internal build detail, not a runtime dependency to
be managed on the target system. Consequently, we have even greater flexibility to
upgrade Rust dependencies as needed for the project. Some distros (e.g. Fedora) do
package Rust libraries, but this appears to be the outlier rather than the norm.
and is consistent with the `.deb` packages released by Matrix.org.

View File

@@ -64,68 +64,3 @@ If multiple modules implement this callback, they will be considered in order. I
returns `True`, Synapse falls through to the next one. The value of the first callback that
returns `False` will be used. If this happens, Synapse will not call any of the subsequent
implementations of this callback.
### `get_media_upload_limits_for_user`
_First introduced in Synapse v1.139.0_
```python
async def get_media_upload_limits_for_user(user_id: str, size: int) -> Optional[List[synapse.module_api.MediaUploadLimit]]
```
**<span style="color:red">
Caution: This callback is currently experimental. The method signature or behaviour
may change without notice.
</span>**
Called when processing a request to store content in the media repository. This can be used to dynamically override
the [media upload limits configuration](../usage/configuration/config_documentation.html#media_upload_limits).
The arguments passed to this callback are:
* `user_id`: The Matrix user ID of the user (e.g. `@alice:example.com`) making the request.
If the callback returns a list then it will be used as the limits instead of those in the configuration (if any).
If an empty list is returned then no limits are applied (**warning:** users will be able
to upload as much data as they desire).
If multiple modules implement this callback, they will be considered in order. If a
callback returns `None`, Synapse falls through to the next one. The value of the first
callback that does not return `None` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback.
If there are no registered modules, or if all modules return `None`, then
the default
[media upload limits configuration](../usage/configuration/config_documentation.html#media_upload_limits)
will be used.
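
Below is a minimal sketch of a module that registers this callback. The registration helper name (`register_media_repository_callbacks`) and the exempted user ID are assumptions for illustration and should be checked against the Module API documentation; they are not details stated in this diff.

```python
from typing import List, Optional

from synapse.module_api import MediaUploadLimit, ModuleApi


class UploadLimitModule:
    def __init__(self, config: dict, api: ModuleApi):
        self._api = api
        # Assumed registration helper; see the Module API docs for the
        # canonical way to register media repository callbacks.
        api.register_media_repository_callbacks(
            get_media_upload_limits_for_user=self.get_media_upload_limits_for_user,
        )

    async def get_media_upload_limits_for_user(
        self, user_id: str, size: int
    ) -> Optional[List[MediaUploadLimit]]:
        # Exempt a hypothetical service account from limits entirely
        # (an empty list means "no limits apply").
        if user_id == "@backup-bot:example.com":
            return []
        # Otherwise defer to the next module, or the configured limits.
        return None
```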
### `on_media_upload_limit_exceeded`
_First introduced in Synapse v1.139.0_
```python
async def on_media_upload_limit_exceeded(user_id: str, limit: synapse.module_api.MediaUploadLimit, sent_bytes: int, attempted_bytes: int) -> None
```
**<span style="color:red">
Caution: This callback is currently experimental. The method signature or behaviour
may change without notice.
</span>**
Called when a user attempts to upload media that would exceed a
[configured media upload limit](../usage/configuration/config_documentation.html#media_upload_limits).
This callback will only be called on workers which handle
[POST /_matrix/media/v3/upload](https://spec.matrix.org/v1.15/client-server-api/#post_matrixmediav3upload)
requests.
This could be used to inform the user that they have reached a media upload limit through
some external method.
The arguments passed to this callback are:
* `user_id`: The Matrix user ID of the user (e.g. `@alice:example.com`) making the request.
* `limit`: The `synapse.module_api.MediaUploadLimit` representing the limit that was reached.
* `sent_bytes`: The number of bytes already sent during the period of the limit.
* `attempted_bytes`: The number of bytes that the user attempted to send.
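
Continuing the sketch above (with the same caveats), `on_media_upload_limit_exceeded` could simply record the event so an operator can follow up; logging stands in here for whatever external notification mechanism a real module would use:

```python
import logging

from synapse.module_api import MediaUploadLimit

logger = logging.getLogger(__name__)


async def on_media_upload_limit_exceeded(
    user_id: str,
    limit: MediaUploadLimit,
    sent_bytes: int,
    attempted_bytes: int,
) -> None:
    # A real module might email the user or post to an admin room instead.
    logger.warning(
        "%s exceeded an upload limit of %d bytes per %d ms (sent %d, attempted %d)",
        user_id,
        limit.max_bytes,
        limit.time_period_ms,
        sent_bytes,
        attempted_bytes,
    )
```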

View File

@@ -186,7 +186,6 @@ oidc_providers:
4. Note the slug of your application, Client ID and Client Secret.
Note: RSA keys must be used for signing for Authentik, ECC keys do not work.
Note: The provider must have a signing key set and must not use an encryption key.
Synapse config:
```yaml
@@ -205,12 +204,6 @@ oidc_providers:
config:
localpart_template: "{{ user.preferred_username }}"
display_name_template: "{{ user.preferred_username|capitalize }}" # TO BE FILLED: If your users have names in Authentik and you want those in Synapse, this should be replaced with user.name|capitalize.
[...]
jwt_config:
enabled: true
secret: "your client secret" # TO BE FILLED (same as `client_secret` above)
algorithm: "RS256"
# (...other fields)
```
### Dex

View File

@@ -117,29 +117,6 @@ each upgrade are complete before moving on to the next upgrade, to avoid
stacking them up. You can monitor the currently running background updates with
[the Admin API](usage/administration/admin_api/background_updates.html#status).
# Upgrading to v1.139.0
## Drop support for Ubuntu 24.10 Oracular Oriole, and add support for Ubuntu 25.04 Plucky Puffin
Ubuntu 24.10 Oracular Oriole [has been end-of-life since 10 Jul
2025](https://endoflife.date/ubuntu). This release drops support for Ubuntu
24.10, and in its place adds support for Ubuntu 25.04 Plucky Puffin.
## `/register` requests from old application service implementations may break when using MAS
Application Services that do not set `inhibit_login=true` when calling `POST
/_matrix/client/v3/register` will receive the error
`IO.ELEMENT.MSC4190.M_APPSERVICE_LOGIN_UNSUPPORTED` in response. This is a
result of [MSC4190: Device management for application
services](https://github.com/matrix-org/matrix-spec-proposals/pull/4190) which
adds new endpoints for application services to create encryption-ready devices
by means other than `/login` or `/register` without `inhibit_login=true`.
If an application service you use starts to fail with the mentioned error,
ensure it is up to date. If it is, then kindly let the author know that they
need to update their implementation to call `/register` with
`inhibit_login=true`.
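
For illustration, a registration request that satisfies this requirement might look like the sketch below. The homeserver URL, `as_token`, and username are placeholders; the request shape follows the Client-Server API for application-service registration rather than anything specific to this change.

```python
import requests

resp = requests.post(
    "https://synapse.example.com/_matrix/client/v3/register",
    headers={"Authorization": "Bearer <as_token>"},
    json={
        "type": "m.login.application_service",
        "username": "bridgebot",
        # Setting this avoids IO.ELEMENT.MSC4190.M_APPSERVICE_LOGIN_UNSUPPORTED
        # when the homeserver delegates authentication to MAS.
        "inhibit_login": True,
    },
)
resp.raise_for_status()
print(resp.json())  # contains the user_id, but no access_token or device_id
```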
# Upgrading to v1.136.0
## Deprecate `run_as_background_process` exported as part of the module API interface in favor of `ModuleApi.run_as_background_process`

View File

@@ -2168,12 +2168,9 @@ max_upload_size: 60M
### `media_upload_limits`
*(array)* A list of media upload limits defining how much data a given user can upload in a given time period.
These limits are applied in addition to the `max_upload_size` limit above (which applies to individual uploads).
An empty list means no limits are applied.
These settings can be overridden using the `get_media_upload_limits_for_user` module API [callback](../../modules/media_repository_callbacks.md#get_media_upload_limits_for_user).
Defaults to `[]`.
Example configuration:

24
poetry.lock generated
View File

@@ -34,15 +34,15 @@ tests-mypy = ["mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" a
[[package]]
name = "authlib"
version = "1.6.3"
version = "1.6.1"
description = "The ultimate Python library in building OAuth and OpenID Connect servers and clients."
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"all\" or extra == \"jwt\" or extra == \"oidc\""
files = [
{file = "authlib-1.6.3-py2.py3-none-any.whl", hash = "sha256:7ea0f082edd95a03b7b72edac65ec7f8f68d703017d7e37573aee4fc603f2a48"},
{file = "authlib-1.6.3.tar.gz", hash = "sha256:9f7a982cc395de719e4c2215c5707e7ea690ecf84f1ab126f28c053f4219e610"},
{file = "authlib-1.6.1-py2.py3-none-any.whl", hash = "sha256:e9d2031c34c6309373ab845afc24168fe9e93dc52d252631f52642f21f5ed06e"},
{file = "authlib-1.6.1.tar.gz", hash = "sha256:4dffdbb1460ba6ec8c17981a4c67af7d8af131231b5a36a88a1e8c80c111cdfd"},
]
[package.dependencies]
@@ -1774,14 +1774,14 @@ files = [
[[package]]
name = "pydantic"
version = "2.11.9"
version = "2.11.7"
description = "Data validation using Python type hints"
optional = false
python-versions = ">=3.9"
groups = ["main", "dev"]
files = [
{file = "pydantic-2.11.9-py3-none-any.whl", hash = "sha256:c42dd626f5cfc1c6950ce6205ea58c93efa406da65f479dcb4029d5934857da2"},
{file = "pydantic-2.11.9.tar.gz", hash = "sha256:6b8ffda597a14812a7975c90b82a8a2e777d9257aba3453f973acd3c032a18e2"},
{file = "pydantic-2.11.7-py3-none-any.whl", hash = "sha256:dde5df002701f6de26248661f6835bbe296a47bf73990135c7d07ce741b9623b"},
{file = "pydantic-2.11.7.tar.gz", hash = "sha256:d989c3c6cb79469287b1569f7447a17848c998458d49ebe294e975b9baf0f0db"},
]
[package.dependencies]
@@ -2971,14 +2971,14 @@ files = [
[[package]]
name = "types-psycopg2"
version = "2.9.21.20250915"
version = "2.9.21.20250809"
description = "Typing stubs for psycopg2"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "types_psycopg2-2.9.21.20250915-py3-none-any.whl", hash = "sha256:eefe5ccdc693fc086146e84c9ba437bb278efe1ef330b299a0cb71169dc6c55f"},
{file = "types_psycopg2-2.9.21.20250915.tar.gz", hash = "sha256:bfeb8f54c32490e7b5edc46215ab4163693192bc90407b4a023822de9239f5c8"},
{file = "types_psycopg2-2.9.21.20250809-py3-none-any.whl", hash = "sha256:59b7b0ed56dcae9efae62b8373497274fc1a0484bdc5135cdacbe5a8f44e1d7b"},
{file = "types_psycopg2-2.9.21.20250809.tar.gz", hash = "sha256:b7c2cbdcf7c0bd16240f59ba694347329b0463e43398de69784ea4dee45f3c6d"},
]
[[package]]
@@ -3026,14 +3026,14 @@ urllib3 = ">=2"
[[package]]
name = "types-setuptools"
version = "80.9.0.20250822"
version = "80.9.0.20250809"
description = "Typing stubs for setuptools"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "types_setuptools-80.9.0.20250822-py3-none-any.whl", hash = "sha256:53bf881cb9d7e46ed12c76ef76c0aaf28cfe6211d3fab12e0b83620b1a8642c3"},
{file = "types_setuptools-80.9.0.20250822.tar.gz", hash = "sha256:070ea7716968ec67a84c7f7768d9952ff24d28b65b6594797a464f1b3066f965"},
{file = "types_setuptools-80.9.0.20250809-py3-none-any.whl", hash = "sha256:7c6539b4c7ac7b4ab4db2be66d8a58fb1e28affa3ee3834be48acafd94f5976a"},
{file = "types_setuptools-80.9.0.20250809.tar.gz", hash = "sha256:e986ba37ffde364073d76189e1d79d9928fb6f5278c7d07589cde353d0218864"},
]
[[package]]

View File

@@ -101,7 +101,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.139.2"
version = "1.138.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"

View File

@@ -1,5 +1,5 @@
$schema: https://element-hq.github.io/synapse/latest/schema/v1/meta.schema.json
$id: https://element-hq.github.io/synapse/schema/synapse/v1.139/synapse-config.schema.json
$id: https://element-hq.github.io/synapse/schema/synapse/v1.138/synapse-config.schema.json
type: object
properties:
modules:
@@ -2415,15 +2415,8 @@ properties:
A list of media upload limits defining how much data a given user can
upload in a given time period.
These limits are applied in addition to the `max_upload_size` limit above
(which applies to individual uploads).
An empty list means no limits are applied.
These settings can be overridden using the `get_media_upload_limits_for_user`
module API [callback](../../modules/media_repository_callbacks.md#get_media_upload_limits_for_user).
default: []
items:
time_period:

View File

@@ -32,7 +32,7 @@ DISTS = (
"debian:sid", # (rolling distro, no EOL)
"ubuntu:jammy", # 22.04 LTS (EOL 2027-04) (our EOL forced by Python 3.10 is 2026-10-04)
"ubuntu:noble", # 24.04 LTS (EOL 2029-06)
"ubuntu:plucky", # 25.04 (EOL 2026-01)
"ubuntu:oracular", # 24.10 (EOL 2025-07)
"debian:trixie", # (EOL not specified yet)
)

View File

@@ -68,18 +68,6 @@ PROMETHEUS_METRIC_MISSING_FROM_LIST_TO_CHECK = ErrorCode(
category="per-homeserver-tenant-metrics",
)
PREFER_SYNAPSE_CLOCK_CALL_WHEN_RUNNING = ErrorCode(
"prefer-synapse-clock-call-when-running",
"`synapse.util.Clock.call_when_running` should be used instead of `reactor.callWhenRunning`",
category="synapse-reactor-clock",
)
PREFER_SYNAPSE_CLOCK_ADD_SYSTEM_EVENT_TRIGGER = ErrorCode(
"prefer-synapse-clock-add-system-event-trigger",
"`synapse.util.Clock.add_system_event_trigger` should be used instead of `reactor.addSystemEventTrigger`",
category="synapse-reactor-clock",
)
class Sentinel(enum.Enum):
# defining a sentinel in this way allows mypy to correctly handle the
@@ -241,77 +229,9 @@ class SynapsePlugin(Plugin):
):
return check_is_cacheable_wrapper
if fullname in (
"twisted.internet.interfaces.IReactorCore.callWhenRunning",
"synapse.types.ISynapseThreadlessReactor.callWhenRunning",
"synapse.types.ISynapseReactor.callWhenRunning",
):
return check_call_when_running
if fullname in (
"twisted.internet.interfaces.IReactorCore.addSystemEventTrigger",
"synapse.types.ISynapseThreadlessReactor.addSystemEventTrigger",
"synapse.types.ISynapseReactor.addSystemEventTrigger",
):
return check_add_system_event_trigger
return None
def check_call_when_running(ctx: MethodSigContext) -> CallableType:
"""
Ensure that the `reactor.callWhenRunning` callsites aren't used.
`synapse.util.Clock.call_when_running` should always be used instead of
`reactor.callWhenRunning`.
Since `reactor.callWhenRunning` is a reactor callback, the callback will start out
with the sentinel logcontext. `synapse.util.Clock` starts a default logcontext as we
want to know which server the logs came from.
Args:
ctx: The `FunctionSigContext` from mypy.
"""
signature: CallableType = ctx.default_signature
ctx.api.fail(
(
"Expected all `reactor.callWhenRunning` calls to use `synapse.util.Clock.call_when_running` instead. "
"This is so all Synapse code runs with a logcontext as we want to know which server the logs came from."
),
ctx.context,
code=PREFER_SYNAPSE_CLOCK_CALL_WHEN_RUNNING,
)
return signature
def check_add_system_event_trigger(ctx: MethodSigContext) -> CallableType:
"""
Ensure that the `reactor.addSystemEventTrigger` callsites aren't used.
`synapse.util.Clock.add_system_event_trigger` should always be used instead of
`reactor.addSystemEventTrigger`.
Since `reactor.addSystemEventTrigger` is a reactor callback, the callback will start out
with the sentinel logcontext. `synapse.util.Clock` starts a default logcontext as we
want to know which server the logs came from.
Args:
ctx: The `FunctionSigContext` from mypy.
"""
signature: CallableType = ctx.default_signature
ctx.api.fail(
(
"Expected all `reactor.addSystemEventTrigger` calls to use `synapse.util.Clock.add_system_event_trigger` instead. "
"This is so all Synapse code runs with a logcontext as we want to know which server the logs came from."
),
ctx.context,
code=PREFER_SYNAPSE_CLOCK_ADD_SYSTEM_EVENT_TRIGGER,
)
return signature
def analyze_prometheus_metric_classes(ctx: ClassDefContext) -> None:
"""
Cross-check the list of Prometheus metric classes against the

View File

@@ -30,7 +30,7 @@ from signedjson.sign import sign_json
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.util.json import json_encoder
from synapse.util import json_encoder
def main() -> None:

View File

@@ -54,11 +54,11 @@ from twisted.internet import defer, reactor as reactor_
from synapse.config.database import DatabaseConnectionConfig
from synapse.config.homeserver import HomeServerConfig
from synapse.logging.context import (
LoggingContext,
make_deferred_yieldable,
run_in_background,
)
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.notifier import ReplicationNotifier
from synapse.storage.database import DatabasePool, LoggingTransaction, make_conn
from synapse.storage.databases.main import FilteringWorkerStore
from synapse.storage.databases.main.account_data import AccountDataWorkerStore
@@ -98,7 +98,8 @@ from synapse.storage.databases.state.bg_updates import StateBackgroundUpdateStor
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
from synapse.types import ISynapseReactor
from synapse.util import SYNAPSE_VERSION
from synapse.util import SYNAPSE_VERSION, Clock
from synapse.util.stringutils import random_string
# Cast safety: Twisted does some naughty magic which replaces the
# twisted.internet.reactor module with a Reactor instance at runtime.
@@ -317,16 +318,31 @@ class Store(
)
class MockHomeserver(HomeServer):
DATASTORE_CLASS = DataStore
class MockHomeserver:
def __init__(self, config: HomeServerConfig):
super().__init__(
hostname=config.server.server_name,
config=config,
reactor=reactor,
version_string=f"Synapse/{SYNAPSE_VERSION}",
)
self.clock = Clock(reactor)
self.config = config
self.hostname = config.server.server_name
self.version_string = SYNAPSE_VERSION
self.instance_id = random_string(5)
def get_clock(self) -> Clock:
return self.clock
def get_reactor(self) -> ISynapseReactor:
return reactor
def get_instance_id(self) -> str:
return self.instance_id
def get_instance_name(self) -> str:
return "master"
def should_send_federation(self) -> bool:
return False
def get_replication_notifier(self) -> ReplicationNotifier:
return ReplicationNotifier()
class Porter:
@@ -335,12 +351,12 @@ class Porter:
sqlite_config: Dict[str, Any],
progress: "Progress",
batch_size: int,
hs: HomeServer,
hs_config: HomeServerConfig,
):
self.sqlite_config = sqlite_config
self.progress = progress
self.batch_size = batch_size
self.hs = hs
self.hs_config = hs_config
async def setup_table(self, table: str) -> Tuple[str, int, int, int, int]:
if table in APPEND_ONLY_TABLES:
@@ -660,7 +676,8 @@ class Porter:
engine = create_engine(db_config.config)
server_name = self.hs.hostname
hs = MockHomeserver(self.hs_config)
server_name = hs.hostname
with make_conn(
db_config=db_config,
@@ -671,16 +688,16 @@ class Porter:
engine.check_database(
db_conn, allow_outdated_version=allow_outdated_version
)
prepare_database(db_conn, engine, config=self.hs.config)
prepare_database(db_conn, engine, config=self.hs_config)
# Type safety: ignore that we're using Mock homeservers here.
store = Store(
DatabasePool(
self.hs,
hs, # type: ignore[arg-type]
db_config,
engine,
),
db_conn,
self.hs,
hs, # type: ignore[arg-type]
)
db_conn.commit()
@@ -778,7 +795,7 @@ class Porter:
return
self.postgres_store = self.build_db_store(
self.hs.config.database.get_single_database()
self.hs_config.database.get_single_database()
)
await self.remove_ignored_background_updates_from_database()
@@ -1567,8 +1584,6 @@ def main() -> None:
config = HomeServerConfig()
config.parse_config_dict(hs_config, "", "")
hs = MockHomeserver(config)
def start(stdscr: Optional["curses.window"] = None) -> None:
progress: Progress
if stdscr:
@@ -1580,14 +1595,15 @@ def main() -> None:
sqlite_config=sqlite_config,
progress=progress,
batch_size=args.batch_size,
hs=hs,
hs_config=config,
)
@defer.inlineCallbacks
def run() -> Generator["defer.Deferred[Any]", Any, None]:
yield defer.ensureDeferred(porter.run())
with LoggingContext("synapse_port_db_run"):
yield defer.ensureDeferred(porter.run())
hs.get_clock().call_when_running(run)
reactor.callWhenRunning(run)
reactor.run()

View File

@@ -74,7 +74,7 @@ def run_background_updates(hs: HomeServer) -> None:
)
)
hs.get_clock().call_when_running(run)
reactor.callWhenRunning(run)
reactor.run()

View File

@@ -43,9 +43,9 @@ from synapse.logging.opentracing import (
from synapse.metrics import SERVER_NAME_LABEL
from synapse.synapse_rust.http_client import HttpClient
from synapse.types import JsonDict, Requester, UserID, create_requester
from synapse.util import json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
from synapse.util.json import json_decoder
from . import introspection_response_timer

View File

@@ -48,9 +48,9 @@ from synapse.logging.opentracing import (
from synapse.metrics import SERVER_NAME_LABEL
from synapse.synapse_rust.http_client import HttpClient
from synapse.types import Requester, UserID, create_requester
from synapse.util import json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
from synapse.util.json import json_decoder
from . import introspection_response_timer

View File

@@ -30,7 +30,7 @@ from typing import Any, Dict, List, Optional, Union
from twisted.web import http
from synapse.util.json import json_decoder
from synapse.util import json_decoder
if typing.TYPE_CHECKING:
from synapse.config.homeserver import HomeServerConfig
@@ -140,9 +140,6 @@ class Codes(str, Enum):
# Part of MSC4155
INVITE_BLOCKED = "ORG.MATRIX.MSC4155.M_INVITE_BLOCKED"
# Part of MSC4190
APPSERVICE_LOGIN_UNSUPPORTED = "IO.ELEMENT.MSC4190.M_APPSERVICE_LOGIN_UNSUPPORTED"
# Part of MSC4306: Thread Subscriptions
MSC4306_CONFLICTING_UNSUBSCRIPTION = (
"IO.ELEMENT.MSC4306.M_CONFLICTING_UNSUBSCRIPTION"

View File

@@ -26,7 +26,7 @@ from synapse.api.errors import LimitExceededError
from synapse.config.ratelimiting import RatelimitSettings
from synapse.storage.databases.main import DataStore
from synapse.types import Requester
from synapse.util.clock import Clock
from synapse.util import Clock
if TYPE_CHECKING:
# To avoid circular imports:

View File

@@ -22,7 +22,6 @@
"""Contains the URL paths to prefix various aspects of the server with."""
import hmac
import urllib.parse
from hashlib import sha256
from typing import Optional
from urllib.parse import urlencode, urljoin
@@ -97,21 +96,11 @@ class LoginSSORedirectURIBuilder:
serialized_query_parameters = urlencode({"redirectUrl": client_redirect_url})
if idp_id:
# Since this is a user-controlled string, make it safe to include in a URL path.
url_encoded_idp_id = urllib.parse.quote(
idp_id,
# Since this defaults to `safe="/"`, we have to override it. We're
# working with an individual URL path parameter so there shouldn't be
# any slashes in it which could change the request path.
safe="",
encoding="utf8",
)
resultant_url = urljoin(
# We have to add a trailing slash to the base URL to ensure that the
# last path segment is not stripped away when joining with another path.
f"{base_url}/",
f"{url_encoded_idp_id}?{serialized_query_parameters}",
f"{idp_id}?{serialized_query_parameters}",
)
else:
resultant_url = f"{base_url}?{serialized_query_parameters}"

View File

@@ -72,7 +72,7 @@ from synapse.events.auto_accept_invites import InviteAutoAccepter
from synapse.events.presence_router import load_legacy_presence_router
from synapse.handlers.auth import load_legacy_password_auth_providers
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, PreserveLoggingContext
from synapse.logging.context import PreserveLoggingContext
from synapse.logging.opentracing import init_tracer
from synapse.metrics import install_gc_manager, register_threadpool
from synapse.metrics.background_process_metrics import run_as_background_process
@@ -183,23 +183,25 @@ def start_reactor(
if gc_thresholds:
gc.set_threshold(*gc_thresholds)
install_gc_manager()
run_command()
# Reset the logging context when we start the reactor (whenever we yield control
# to the reactor, the `sentinel` logging context needs to be set so we don't
# leak the current logging context and erroneously apply it to the next task the
# reactor event loop picks up)
with PreserveLoggingContext():
run_command()
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
#
# We also need to drop the logcontext before forking if we're daemonizing,
# otherwise the cputime metrics get confused about the per-thread resource usage
# appearing to go backwards.
with PreserveLoggingContext():
if daemonize:
assert pid_file is not None
if daemonize:
assert pid_file is not None
if print_pidfile:
print(pid_file)
if print_pidfile:
print(pid_file)
daemonize_process(pid_file, logger)
run()
daemonize_process(pid_file, logger)
run()
def quit_with_error(error_string: str) -> NoReturn:
@@ -241,7 +243,7 @@ def redirect_stdio_to_logs() -> None:
def register_start(
hs: "HomeServer", cb: Callable[P, Awaitable], *args: P.args, **kwargs: P.kwargs
cb: Callable[P, Awaitable], *args: P.args, **kwargs: P.kwargs
) -> None:
"""Register a callback with the reactor, to be called once it is running
@@ -278,8 +280,7 @@ def register_start(
# on as normal.
os._exit(1)
clock = hs.get_clock()
clock.call_when_running(lambda: defer.ensureDeferred(wrapper()))
reactor.callWhenRunning(lambda: defer.ensureDeferred(wrapper()))
def listen_metrics(bind_addresses: StrCollection, port: int) -> None:
@@ -518,9 +519,7 @@ async def start(hs: "HomeServer") -> None:
# numbers of DNS requests don't starve out other users of the threadpool.
resolver_threadpool = ThreadPool(name="gai_resolver")
resolver_threadpool.start()
hs.get_clock().add_system_event_trigger(
"during", "shutdown", resolver_threadpool.stop
)
reactor.addSystemEventTrigger("during", "shutdown", resolver_threadpool.stop)
reactor.installNameResolver(
GAIResolver(reactor, getThreadPool=lambda: resolver_threadpool)
)
@@ -602,12 +601,10 @@ async def start(hs: "HomeServer") -> None:
hs.get_datastores().main.db_pool.start_profiling()
hs.get_pusherpool().start()
def log_shutdown() -> None:
with LoggingContext("log_shutdown"):
logger.info("Shutting down...")
# Log when we start the shut down process.
hs.get_clock().add_system_event_trigger("before", "shutdown", log_shutdown)
hs.get_reactor().addSystemEventTrigger(
"before", "shutdown", logger.info, "Shutting down..."
)
setup_sentry(hs)
setup_sdnotify(hs)
@@ -722,7 +719,7 @@ def setup_sdnotify(hs: "HomeServer") -> None:
# we're not using systemd.
sdnotify(b"READY=1\nMAINPID=%i" % (os.getpid(),))
hs.get_clock().add_system_event_trigger(
hs.get_reactor().addSystemEventTrigger(
"before", "shutdown", sdnotify, b"STOPPING=1"
)

View File

@@ -24,7 +24,7 @@ import logging
import os
import sys
import tempfile
from typing import List, Mapping, Optional, Sequence, Tuple
from typing import List, Mapping, Optional, Sequence
from twisted.internet import defer, task
@@ -256,7 +256,7 @@ class FileExfiltrationWriter(ExfiltrationWriter):
return self.base_directory
def load_config(argv_options: List[str]) -> Tuple[HomeServerConfig, argparse.Namespace]:
def start(config_options: List[str]) -> None:
parser = argparse.ArgumentParser(description="Synapse Admin Command")
HomeServerConfig.add_arguments_to_parser(parser)
@@ -282,15 +282,11 @@ def load_config(argv_options: List[str]) -> Tuple[HomeServerConfig, argparse.Nam
export_data_parser.set_defaults(func=export_data_command)
try:
config, args = HomeServerConfig.load_config_with_parser(parser, argv_options)
config, args = HomeServerConfig.load_config_with_parser(parser, config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
return config, args
def start(config: HomeServerConfig, args: argparse.Namespace) -> None:
if config.worker.worker_app is not None:
assert config.worker.worker_app == "synapse.app.admin_cmd"
@@ -329,7 +325,7 @@ def start(config: HomeServerConfig, args: argparse.Namespace) -> None:
# command.
async def run() -> None:
with LoggingContext(name="command"):
with LoggingContext("command"):
await _base.start(ss)
await args.func(ss, args)
@@ -341,6 +337,5 @@ def start(config: HomeServerConfig, args: argparse.Namespace) -> None:
if __name__ == "__main__":
homeserver_config, args = load_config(sys.argv[1:])
with LoggingContext(name="main"):
start(homeserver_config, args)
with LoggingContext("main"):
start(sys.argv[1:])

View File

@@ -21,14 +21,13 @@
import sys
from synapse.app.generic_worker import load_config, start
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
def main() -> None:
homeserver_config = load_config(sys.argv[1:])
with LoggingContext(name="main"):
start(homeserver_config)
with LoggingContext("main"):
start(sys.argv[1:])
if __name__ == "__main__":

View File

@@ -21,14 +21,13 @@
import sys
from synapse.app.generic_worker import load_config, start
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
def main() -> None:
homeserver_config = load_config(sys.argv[1:])
with LoggingContext(name="main"):
start(homeserver_config)
with LoggingContext("main"):
start(sys.argv[1:])
if __name__ == "__main__":

View File

@@ -20,14 +20,13 @@
import sys
from synapse.app.generic_worker import load_config, start
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
def main() -> None:
homeserver_config = load_config(sys.argv[1:])
with LoggingContext(name="main"):
start(homeserver_config)
with LoggingContext("main"):
start(sys.argv[1:])
if __name__ == "__main__":

View File

@@ -21,14 +21,13 @@
import sys
from synapse.app.generic_worker import load_config, start
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
def main() -> None:
homeserver_config = load_config(sys.argv[1:])
with LoggingContext(name="main"):
start(homeserver_config)
with LoggingContext("main"):
start(sys.argv[1:])
if __name__ == "__main__":

View File

@@ -21,14 +21,13 @@
import sys
from synapse.app.generic_worker import load_config, start
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
def main() -> None:
homeserver_config = load_config(sys.argv[1:])
with LoggingContext(name="main"):
start(homeserver_config)
with LoggingContext("main"):
start(sys.argv[1:])
if __name__ == "__main__":

View File

@@ -21,14 +21,13 @@
import sys
from synapse.app.generic_worker import load_config, start
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
def main() -> None:
homeserver_config = load_config(sys.argv[1:])
with LoggingContext(name="main"):
start(homeserver_config)
with LoggingContext("main"):
start(sys.argv[1:])
if __name__ == "__main__":

View File

@@ -310,26 +310,13 @@ class GenericWorkerServer(HomeServer):
self.get_replication_command_handler().start_replication(self)
def load_config(argv_options: List[str]) -> HomeServerConfig:
"""
Parse the commandline and config files (does not generate config)
Args:
argv_options: The options passed to Synapse. Usually `sys.argv[1:]`.
Returns:
Config object.
"""
def start(config_options: List[str]) -> None:
try:
config = HomeServerConfig.load_config("Synapse worker", argv_options)
config = HomeServerConfig.load_config("Synapse worker", config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
return config
def start(config: HomeServerConfig) -> None:
# For backwards compatibility let any of the old app names.
assert config.worker.worker_app in (
"synapse.app.appservice",
@@ -368,10 +355,7 @@ def start(config: HomeServerConfig) -> None:
except Exception as e:
handle_startup_exception(e)
async def start() -> None:
await _base.start(hs)
register_start(hs, start)
register_start(_base.start, hs)
# redirect stdio to the logs, if configured.
if not hs.config.logging.no_redirect_stdio:
@@ -381,9 +365,8 @@ def start(config: HomeServerConfig) -> None:
def main() -> None:
homeserver_config = load_config(sys.argv[1:])
with LoggingContext(name="main"):
start(homeserver_config)
with LoggingContext("main"):
start(sys.argv[1:])
if __name__ == "__main__":

View File

@@ -308,21 +308,17 @@ class SynapseHomeServer(HomeServer):
logger.warning("Unrecognized listener type: %s", listener.type)
def load_or_generate_config(argv_options: List[str]) -> HomeServerConfig:
def setup(config_options: List[str]) -> SynapseHomeServer:
"""
Parse the commandline and config files
Supports generation of config files, so is used for the main homeserver app.
Args:
argv_options: The options passed to Synapse. Usually `sys.argv[1:]`.
config_options_options: The options passed to Synapse. Usually `sys.argv[1:]`.
Returns:
A homeserver instance.
"""
try:
config = HomeServerConfig.load_or_generate_config(
"Synapse Homeserver", argv_options
"Synapse Homeserver", config_options
)
except ConfigError as e:
sys.stderr.write("\n")
@@ -336,20 +332,6 @@ def load_or_generate_config(argv_options: List[str]) -> HomeServerConfig:
# generating config files and shouldn't try to continue.
sys.exit(0)
return config
def setup(config: HomeServerConfig) -> SynapseHomeServer:
"""
Create and setup a Synapse homeserver instance given a configuration.
Args:
config: The configuration for the homeserver.
Returns:
A homeserver instance.
"""
if config.worker.worker_app:
raise ConfigError(
"You have specified `worker_app` in the config but are attempting to start a non-worker "
@@ -405,7 +387,7 @@ def setup(config: HomeServerConfig) -> SynapseHomeServer:
hs.get_datastores().main.db_pool.updates.start_doing_background_updates()
register_start(hs, start)
register_start(start)
return hs
@@ -423,12 +405,10 @@ def run(hs: HomeServer) -> None:
def main() -> None:
homeserver_config = load_or_generate_config(sys.argv[1:])
with LoggingContext("main"):
# check base requirements
check_requirements()
hs = setup(homeserver_config)
hs = setup(sys.argv[1:])
# redirect stdio to the logs, if configured.
if not hs.config.logging.no_redirect_stdio:

View File

@@ -21,14 +21,13 @@
import sys
from synapse.app.generic_worker import load_config, start
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
def main() -> None:
homeserver_config = load_config(sys.argv[1:])
with LoggingContext(name="main"):
start(homeserver_config)
with LoggingContext("main"):
start(sys.argv[1:])
if __name__ == "__main__":

View File

@@ -21,14 +21,13 @@
import sys
from synapse.app.generic_worker import load_config, start
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
def main() -> None:
homeserver_config = load_config(sys.argv[1:])
with LoggingContext(name="main"):
start(homeserver_config)
with LoggingContext("main"):
start(sys.argv[1:])
if __name__ == "__main__":

View File

@@ -21,14 +21,13 @@
import sys
from synapse.app.generic_worker import load_config, start
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
def main() -> None:
homeserver_config = load_config(sys.argv[1:])
with LoggingContext(name="main"):
start(homeserver_config)
with LoggingContext("main"):
start(sys.argv[1:])
if __name__ == "__main__":

View File

@@ -21,14 +21,13 @@
import sys
from synapse.app.generic_worker import load_config, start
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
def main() -> None:
homeserver_config = load_config(sys.argv[1:])
with LoggingContext(name="main"):
start(homeserver_config)
with LoggingContext("main"):
start(sys.argv[1:])
if __name__ == "__main__":

View File

@@ -84,7 +84,7 @@ from synapse.logging.context import run_in_background
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.databases.main import DataStore
from synapse.types import DeviceListUpdates, JsonMapping
from synapse.util.clock import Clock
from synapse.util import Clock
if TYPE_CHECKING:
from synapse.server import HomeServer

View File

@@ -646,16 +646,12 @@ class RootConfig:
@classmethod
def load_or_generate_config(
cls: Type[TRootConfig], description: str, argv_options: List[str]
cls: Type[TRootConfig], description: str, argv: List[str]
) -> Optional[TRootConfig]:
"""Parse the commandline and config files
Supports generation of config files, so is used for the main homeserver app.
Args:
description: TODO
argv_options: The options passed to Synapse. Usually `sys.argv[1:]`.
Returns:
Config object, or None if --generate-config or --generate-keys was set
"""
@@ -751,7 +747,7 @@ class RootConfig:
)
cls.invoke_all_static("add_arguments", parser)
config_args = parser.parse_args(argv_options)
config_args = parser.parse_args(argv)
config_files = find_config_files(search_paths=config_args.config_path)

View File

@@ -556,9 +556,6 @@ class ExperimentalConfig(Config):
# MSC4133: Custom profile fields
self.msc4133_enabled: bool = experimental.get("msc4133_enabled", False)
# MSC4169: Backwards-compatible redaction sending using `/send`
self.msc4169_enabled: bool = experimental.get("msc4169_enabled", False)
# MSC4210: Remove legacy mentions
self.msc4210_enabled: bool = experimental.get("msc4210_enabled", False)
@@ -593,5 +590,5 @@ class ExperimentalConfig(Config):
self.msc4293_enabled: bool = experimental.get("msc4293_enabled", False)
# MSC4306: Thread Subscriptions
# (and MSC4308: Thread Subscriptions extension to Sliding Sync)
# (and MSC4308: sliding sync extension for thread subscriptions)
self.msc4306_enabled: bool = experimental.get("msc4306_enabled", False)

View File

@@ -120,19 +120,11 @@ def parse_thumbnail_requirements(
@attr.s(auto_attribs=True, slots=True, frozen=True)
class MediaUploadLimit:
"""
Represents a limit on the amount of data a user can upload in a given time
period.
These can be configured through the `media_upload_limits` [config option](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#media_upload_limits)
or via the `get_media_upload_limits_for_user` module API [callback](https://element-hq.github.io/synapse/latest/modules/media_repository_callbacks.html#get_media_upload_limits_for_user).
"""
"""A limit on the amount of data a user can upload in a given time
period."""
max_bytes: int
"""The maximum number of bytes that can be uploaded in the given time period."""
time_period_ms: int
"""The time period in milliseconds."""
class ContentRepositoryConfig(Config):

View File

@@ -38,7 +38,7 @@ from synapse.storage.databases.main import DataStore
from synapse.synapse_rust.events import EventInternalMetadata
from synapse.types import EventID, JsonDict, StrCollection
from synapse.types.state import StateFilter
from synapse.util.clock import Clock
from synapse.util import Clock
from synapse.util.stringutils import random_string
if TYPE_CHECKING:

View File

@@ -178,7 +178,7 @@ from synapse.types import (
StrCollection,
get_domain_from_id,
)
from synapse.util.clock import Clock
from synapse.util import Clock
from synapse.util.metrics import Measure
from synapse.util.retryutils import filter_destinations_by_retry_limiter

View File

@@ -26,7 +26,7 @@ from synapse.api.constants import EduTypes
from synapse.api.errors import HttpResponseException
from synapse.events import EventBase
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction, serialize_and_filter_pdus
from synapse.federation.units import Edu, Transaction
from synapse.logging.opentracing import (
extract_text_map,
set_tag,
@@ -36,7 +36,7 @@ from synapse.logging.opentracing import (
)
from synapse.metrics import SERVER_NAME_LABEL
from synapse.types import JsonDict
from synapse.util.json import json_decoder
from synapse.util import json_decoder
from synapse.util.metrics import measure_func
if TYPE_CHECKING:
@@ -119,7 +119,7 @@ class TransactionManager:
transaction_id=txn_id,
origin=self.server_name,
destination=destination,
pdus=serialize_and_filter_pdus(pdus),
pdus=[p.get_pdu_json() for p in pdus],
edus=[edu.get_dict() for edu in edus],
)

View File

@@ -135,7 +135,7 @@ class PublicRoomList(BaseFederationServlet):
if not self.allow_access:
raise FederationDeniedError(origin)
limit: Optional[int] = parse_integer_from_args(query, "limit", 0)
limit = parse_integer_from_args(query, "limit", 0)
since_token = parse_string_from_args(query, "since", None)
include_all_networks = parse_boolean_from_args(
query, "include_all_networks", default=False

View File

@@ -62,7 +62,7 @@ class DeactivateAccountHandler:
# Start the user parter loop so it can resume parting users from rooms where
# it left off (if it has work left to do).
if hs.config.worker.worker_app is None:
hs.get_clock().call_when_running(self._start_user_parting)
hs.get_reactor().callWhenRunning(self._start_user_parting)
else:
self._notify_account_deactivated_client = (
ReplicationNotifyAccountDeactivatedServlet.make_client(hs)

View File

@@ -18,15 +18,12 @@ from typing import TYPE_CHECKING, List, Optional, Set, Tuple
from twisted.internet.interfaces import IDelayedCall
from synapse.api.constants import EventTypes
from synapse.api.errors import ShadowBanError, SynapseError
from synapse.api.errors import ShadowBanError
from synapse.api.ratelimiting import Ratelimiter
from synapse.config.workers import MAIN_PROCESS_INSTANCE_NAME
from synapse.logging.context import make_deferred_yieldable
from synapse.logging.opentracing import set_tag
from synapse.metrics import SERVER_NAME_LABEL, event_processing_positions
from synapse.metrics.background_process_metrics import (
run_as_background_process,
)
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.delayed_events import (
ReplicationAddedDelayedEventRestServlet,
)
@@ -48,7 +45,6 @@ from synapse.types import (
)
from synapse.util.events import generate_fake_event_id
from synapse.util.metrics import Measure
from synapse.util.sentinel import Sentinel
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -150,37 +146,10 @@ class DelayedEventsHandler:
)
async def _unsafe_process_new_event(self) -> None:
# We purposefully fetch the current max room stream ordering before
# doing anything else, as it could increment during processing of state
# deltas. We want to avoid updating `delayed_events_stream_pos` past
# the stream ordering of the state deltas we've processed. Otherwise
# we'll leave gaps in our processing.
room_max_stream_ordering = self._store.get_room_max_stream_ordering()
# Check that there are actually any delayed events to process. If not, bail early.
delayed_events_count = await self._store.get_count_of_delayed_events()
if delayed_events_count == 0:
# There are no delayed events to process. Update the
# `delayed_events_stream_pos` to the latest `events` stream pos and
# exit early.
self._event_pos = room_max_stream_ordering
logger.debug(
"No delayed events to process. Updating `delayed_events_stream_pos` to max stream ordering (%s)",
room_max_stream_ordering,
)
await self._store.update_delayed_events_stream_pos(room_max_stream_ordering)
event_processing_positions.labels(
name="delayed_events", **{SERVER_NAME_LABEL: self.server_name}
).set(room_max_stream_ordering)
return
# If self._event_pos is None then it means we haven't fetched it from the DB yet
if self._event_pos is None:
self._event_pos = await self._store.get_delayed_events_stream_pos()
room_max_stream_ordering = self._store.get_room_max_stream_ordering()
if self._event_pos > room_max_stream_ordering:
# apparently, we've processed more events than exist in the database!
# this can happen if events are removed with history purge or similar.
@@ -198,7 +167,7 @@ class DelayedEventsHandler:
self._clock, name="delayed_events_delta", server_name=self.server_name
):
room_max_stream_ordering = self._store.get_room_max_stream_ordering()
if self._event_pos >= room_max_stream_ordering:
if self._event_pos == room_max_stream_ordering:
return
logger.debug(
@@ -233,81 +202,23 @@ class DelayedEventsHandler:
Process current state deltas to cancel other users' pending delayed events
that target the same state.
"""
# Get the senders of each delta's state event (as sender information is
# not currently stored in the `current_state_deltas` table).
event_id_and_sender_dict = await self._store.get_senders_for_event_ids(
[delta.event_id for delta in deltas if delta.event_id is not None]
)
# Note: No need to batch as `get_current_state_deltas` will only ever
# return 100 rows at a time.
for delta in deltas:
logger.debug(
"Handling: %r %r, %s", delta.event_type, delta.state_key, delta.event_id
)
# `delta.event_id` and `delta.sender` can be `None` in a few valid
# cases (see the docstring of
# `get_current_state_delta_membership_changes_for_user` for details).
if delta.event_id is None:
# TODO: Differentiate between this being caused by a state reset
# which removed a user from a room, or the homeserver
# purposefully having left the room. We can do so by checking
# whether there are any local memberships still left in the
# room. If so, then this is the result of a state reset.
#
# If it is a state reset, we should avoid cancelling new,
# delayed state events due to old state resurfacing. So we
# should skip and log a warning in this case.
#
# If the homeserver has left the room, then we should cancel all
# delayed state events intended for this room, as there is no
# need to try and send a delayed event into a room we've left.
logger.warning(
"Skipping state delta (%r, %r) without corresponding event ID. "
"This can happen if the homeserver has left the room (in which "
"case this can be ignored), or if there has been a state reset "
"which has caused the sender to be kicked out of the room",
logger.debug(
"Not handling delta for deleted state: %r %r",
delta.event_type,
delta.state_key,
)
continue
sender_str = event_id_and_sender_dict.get(
delta.event_id, Sentinel.UNSET_SENTINEL
logger.debug(
"Handling: %r %r, %s", delta.event_type, delta.state_key, delta.event_id
)
if sender_str is None:
# An event exists, but the `sender` field was "null" and Synapse
# incorrectly accepted the event. This is not expected.
logger.error(
"Skipping state delta with event ID '%s' as 'sender' was None. "
"This is unexpected - please report it as a bug!",
delta.event_id,
)
continue
if sender_str is Sentinel.UNSET_SENTINEL:
# We have an event ID, but the event was not found in the
# datastore. This can happen if a room, or its history, is
# purged. State deltas related to the room are left behind, but
# the event no longer exists.
#
# As we cannot get the sender of this event, we can't calculate
# whether to cancel delayed events related to this one. So we skip.
logger.debug(
"Skipping state delta with event ID '%s' - the room, or its history, may have been purged",
delta.event_id,
)
continue
try:
sender = UserID.from_string(sender_str)
except SynapseError as e:
logger.error(
"Skipping state delta with Matrix User ID '%s' that failed to parse: %s",
sender_str,
e,
)
event = await self._store.get_event(delta.event_id, allow_none=True)
if not event:
continue
sender = UserID.from_string(event.sender)
next_send_ts = await self._store.cancel_delayed_state_events(
room_id=delta.room_id,
@@ -417,7 +328,7 @@ class DelayedEventsHandler:
requester,
(requester.user.to_string(), requester.device_id),
)
await make_deferred_yieldable(self._initialized_from_db)
await self._initialized_from_db
next_send_ts = await self._store.cancel_delayed_event(
delay_id=delay_id,
@@ -443,7 +354,7 @@ class DelayedEventsHandler:
requester,
(requester.user.to_string(), requester.device_id),
)
await make_deferred_yieldable(self._initialized_from_db)
await self._initialized_from_db
next_send_ts = await self._store.restart_delayed_event(
delay_id=delay_id,
@@ -469,7 +380,7 @@ class DelayedEventsHandler:
# Use standard request limiter for sending delayed events on-demand,
# as an on-demand send is similar to sending a regular event.
await self._request_ratelimiter.ratelimit(requester)
await make_deferred_yieldable(self._initialized_from_db)
await self._initialized_from_db
event, next_send_ts = await self._store.process_target_delayed_event(
delay_id=delay_id,

View File

@@ -1002,7 +1002,7 @@ class DeviceWriterHandler(DeviceHandler):
# rolling-restarting Synapse.
if self._is_main_device_list_writer:
# On start up check if there are any updates pending.
hs.get_clock().call_when_running(self._handle_new_device_update_async)
hs.get_reactor().callWhenRunning(self._handle_new_device_update_async)
self.device_list_updater = DeviceListUpdater(hs, self)
hs.get_federation_registry().register_edu_handler(
EduTypes.DEVICE_LIST_UPDATE,

View File

@@ -34,7 +34,7 @@ from synapse.logging.opentracing import (
set_tag,
)
from synapse.types import JsonDict, Requester, StreamKeyType, UserID, get_domain_from_id
from synapse.util.json import json_encoder
from synapse.util import json_encoder
from synapse.util.stringutils import random_string
if TYPE_CHECKING:

View File

@@ -44,9 +44,9 @@ from synapse.types import (
get_domain_from_id,
get_verify_key_from_cross_signing_key,
)
from synapse.util import json_decoder
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.cancellation import cancellable
from synapse.util.json import json_decoder
from synapse.util.retryutils import (
NotRetryingDestination,
filter_destinations_by_retry_limiter,
@@ -57,6 +57,7 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
ONE_TIME_KEY_UPLOAD = "one_time_key_upload_lock"
@@ -847,22 +848,14 @@ class E2eKeysHandler:
"""
time_now = self.clock.time_msec()
# TODO: Validate the JSON to make sure it has the right keys.
device_keys = keys.get("device_keys", None)
if device_keys:
log_kv(
{
"message": "Updating device_keys for user.",
"user_id": user_id,
"device_id": device_id,
}
)
await self.upload_device_keys_for_user(
user_id=user_id,
device_id=device_id,
keys={"device_keys": device_keys},
)
else:
log_kv({"message": "Did not update device_keys", "reason": "not a dict"})
one_time_keys = keys.get("one_time_keys", None)
if one_time_keys:
@@ -880,9 +873,10 @@ class E2eKeysHandler:
log_kv(
{"message": "Did not update one_time_keys", "reason": "no keys given"}
)
fallback_keys = keys.get("fallback_keys")
if fallback_keys:
fallback_keys = keys.get("fallback_keys") or keys.get(
"org.matrix.msc2732.fallback_keys"
)
if fallback_keys and isinstance(fallback_keys, dict):
log_kv(
{
"message": "Updating fallback_keys for device.",
@@ -891,6 +885,8 @@ class E2eKeysHandler:
}
)
await self.store.set_e2e_fallback_keys(user_id, device_id, fallback_keys)
elif fallback_keys:
log_kv({"message": "Did not update fallback_keys", "reason": "not a dict"})
else:
log_kv(
{"message": "Did not update fallback_keys", "reason": "no keys given"}

View File

@@ -39,8 +39,8 @@ from synapse.http import RequestTimedOutError
from synapse.http.client import SimpleHttpClient
from synapse.http.site import SynapseRequest
from synapse.types import JsonDict, Requester
from synapse.util import json_decoder
from synapse.util.hash import sha256_and_url_safe_base64
from synapse.util.json import json_decoder
from synapse.util.stringutils import (
assert_valid_client_secret,
random_string,

View File

@@ -81,10 +81,9 @@ from synapse.types import (
create_requester,
)
from synapse.types.state import StateFilter
from synapse.util import log_failure, unwrapFirstError
from synapse.util import json_decoder, json_encoder, log_failure, unwrapFirstError
from synapse.util.async_helpers import Linearizer, gather_results
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.json import json_decoder, json_encoder
from synapse.util.metrics import measure_func
from synapse.visibility import get_effective_room_visibility_from_state
@@ -1014,37 +1013,14 @@ class EventCreationHandler:
await self.clock.sleep(random.randint(1, 10))
raise ShadowBanError()
room_version = None
if (
event_dict["type"] == EventTypes.Redaction
and "redacts" in event_dict["content"]
and self.hs.config.experimental.msc4169_enabled
):
if ratelimit:
room_id = event_dict["room_id"]
try:
room_version = await self.store.get_room_version(room_id)
except NotFoundError:
# The room doesn't exist.
raise AuthError(403, f"User {requester.user} not in room {room_id}")
if not room_version.updated_redaction_rules:
# Legacy room versions need the "redacts" field outside of the event's
# content. However clients may still send it within the content, so move
# the field if necessary for compatibility.
redacts = event_dict.get("redacts") or event_dict["content"].pop(
"redacts", None
)
if redacts is not None and "redacts" not in event_dict:
event_dict["redacts"] = redacts
if ratelimit:
if room_version is None:
room_id = event_dict["room_id"]
try:
room_version = await self.store.get_room_version(room_id)
except NotFoundError:
raise AuthError(403, f"User {requester.user} not in room {room_id}")
if room_version.updated_redaction_rules:
redacts = event_dict["content"].get("redacts")
else:
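For illustration, the two placements of `redacts` that this compatibility shim moves between; all IDs are placeholders, and only the rule that rooms with `updated_redaction_rules` carry the field inside `content` comes from the code above.

```python
# Hypothetical redaction event dicts showing both placements of `redacts`.
legacy_redaction = {
    "type": "m.room.redaction",
    "room_id": "!room:example.org",
    "redacts": "$event_to_redact",  # top level for older room versions
    "content": {"reason": "spam"},
}

updated_rules_redaction = {
    "type": "m.room.redaction",
    "room_id": "!room:example.org",
    "content": {
        "redacts": "$event_to_redact",  # inside content when updated_redaction_rules applies
        "reason": "spam",
    },
}
```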

View File

@@ -67,9 +67,8 @@ from synapse.http.site import SynapseRequest
from synapse.logging.context import make_deferred_yieldable
from synapse.module_api import ModuleApi
from synapse.types import JsonDict, UserID, map_username_to_mxid_localpart
from synapse.util import Clock, json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
from synapse.util.clock import Clock
from synapse.util.json import json_decoder
from synapse.util.macaroons import MacaroonGenerator, OidcSessionData
from synapse.util.templates import _localpart_from_email_filter

View File

@@ -541,7 +541,7 @@ class WorkerPresenceHandler(BasePresenceHandler):
self.send_stop_syncing, UPDATE_SYNCING_USERS_MS
)
hs.get_clock().add_system_event_trigger(
hs.get_reactor().addSystemEventTrigger(
"before",
"shutdown",
run_as_background_process,
@@ -842,7 +842,7 @@ class PresenceHandler(BasePresenceHandler):
# have not yet been persisted
self.unpersisted_users_changes: Set[str] = set()
hs.get_clock().add_system_event_trigger(
hs.get_reactor().addSystemEventTrigger(
"before",
"shutdown",
run_as_background_process,
@@ -1548,7 +1548,7 @@ class PresenceHandler(BasePresenceHandler):
self.clock, name="presence_delta", server_name=self.server_name
):
room_max_stream_ordering = self.store.get_room_max_stream_ordering()
if self._event_pos >= room_max_stream_ordering:
if self._event_pos == room_max_stream_ordering:
return
logger.debug(

View File

@@ -211,7 +211,7 @@ class SlidingSyncHandler:
Args:
sync_config: Sync configuration
to_token: The latest point in the stream to sync up to.
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from. Token of the end of the
previous batch. May be `None` if this is the initial sync request.
"""

View File

@@ -27,7 +27,7 @@ from typing import (
cast,
)
from typing_extensions import TypeAlias, assert_never
from typing_extensions import assert_never
from synapse.api.constants import AccountDataTypes, EduTypes
from synapse.handlers.receipts import ReceiptEventSource
@@ -40,7 +40,6 @@ from synapse.types import (
SlidingSyncStreamToken,
StrCollection,
StreamToken,
ThreadSubscriptionsToken,
)
from synapse.types.handlers.sliding_sync import (
HaveSentRoomFlag,
@@ -55,13 +54,6 @@ from synapse.util.async_helpers import (
gather_optional_coroutines,
)
_ThreadSubscription: TypeAlias = (
SlidingSyncResult.Extensions.ThreadSubscriptionsExtension.ThreadSubscription
)
_ThreadUnsubscription: TypeAlias = (
SlidingSyncResult.Extensions.ThreadSubscriptionsExtension.ThreadUnsubscription
)
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -76,7 +68,6 @@ class SlidingSyncExtensionHandler:
self.event_sources = hs.get_event_sources()
self.device_handler = hs.get_device_handler()
self.push_rules_handler = hs.get_push_rules_handler()
self._enable_thread_subscriptions = hs.config.experimental.msc4306_enabled
@trace
async def get_extensions_response(
@@ -102,7 +93,7 @@ class SlidingSyncExtensionHandler:
actual_room_ids: The actual room IDs in the Sliding Sync response.
actual_room_response_map: A map of room ID to room results in the
Sliding Sync response.
to_token: The latest point in the stream to sync up to.
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
"""
@@ -165,32 +156,18 @@ class SlidingSyncExtensionHandler:
from_token=from_token,
)
thread_subs_coro = None
if (
sync_config.extensions.thread_subscriptions is not None
and self._enable_thread_subscriptions
):
thread_subs_coro = self.get_thread_subscriptions_extension_response(
sync_config=sync_config,
thread_subscriptions_request=sync_config.extensions.thread_subscriptions,
to_token=to_token,
from_token=from_token,
)
(
to_device_response,
e2ee_response,
account_data_response,
receipts_response,
typing_response,
thread_subs_response,
) = await gather_optional_coroutines(
to_device_coro,
e2ee_coro,
account_data_coro,
receipts_coro,
typing_coro,
thread_subs_coro,
)
return SlidingSyncResult.Extensions(
@@ -199,7 +176,6 @@ class SlidingSyncExtensionHandler:
account_data=account_data_response,
receipts=receipts_response,
typing=typing_response,
thread_subscriptions=thread_subs_response,
)
def find_relevant_room_ids_for_extension(
@@ -901,72 +877,3 @@ class SlidingSyncExtensionHandler:
return SlidingSyncResult.Extensions.TypingExtension(
room_id_to_typing_map=room_id_to_typing_map,
)
async def get_thread_subscriptions_extension_response(
self,
sync_config: SlidingSyncConfig,
thread_subscriptions_request: SlidingSyncConfig.Extensions.ThreadSubscriptionsExtension,
to_token: StreamToken,
from_token: Optional[SlidingSyncStreamToken],
) -> Optional[SlidingSyncResult.Extensions.ThreadSubscriptionsExtension]:
"""Handle Thread Subscriptions extension (MSC4308)
Args:
sync_config: Sync configuration
thread_subscriptions_request: The thread_subscriptions extension from the request
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
Returns:
the response (None if empty or thread subscriptions are disabled)
"""
if not thread_subscriptions_request.enabled:
return None
limit = thread_subscriptions_request.limit
if from_token:
from_stream_id = from_token.stream_token.thread_subscriptions_key
else:
from_stream_id = StreamToken.START.thread_subscriptions_key
to_stream_id = to_token.thread_subscriptions_key
updates = await self.store.get_latest_updated_thread_subscriptions_for_user(
user_id=sync_config.user.to_string(),
from_id=from_stream_id,
to_id=to_stream_id,
limit=limit,
)
if len(updates) == 0:
return None
subscribed_threads: Dict[str, Dict[str, _ThreadSubscription]] = {}
unsubscribed_threads: Dict[str, Dict[str, _ThreadUnsubscription]] = {}
for stream_id, room_id, thread_root_id, subscribed, automatic in updates:
if subscribed:
subscribed_threads.setdefault(room_id, {})[thread_root_id] = (
_ThreadSubscription(
automatic=automatic,
bump_stamp=stream_id,
)
)
else:
unsubscribed_threads.setdefault(room_id, {})[thread_root_id] = (
_ThreadUnsubscription(bump_stamp=stream_id)
)
prev_batch = None
if len(updates) == limit:
# Tell the client about a potential gap where there may be more
# thread subscriptions for it to backpaginate.
# We subtract one because the 'later in the stream' bound is inclusive,
# and we already saw the element at index 0.
prev_batch = ThreadSubscriptionsToken(updates[0][0] - 1)
return SlidingSyncResult.Extensions.ThreadSubscriptionsExtension(
subscribed=subscribed_threads,
unsubscribed=unsubscribed_threads,
prev_batch=prev_batch,
)
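A sketch of the extension block a client might send to enable this; the `enabled` and `limit` fields mirror `thread_subscriptions_request` above, while the unstable request key is assumed to match the response key used by the serialiser later in this diff.

```python
# Hypothetical sliding sync request fragment enabling the MSC4308 extension.
extensions_request = {
    "io.element.msc4308.thread_subscriptions": {
        "enabled": True,
        # Maximum number of subscription changes to return; when exactly this
        # many come back, the response carries `prev_batch` so the client can
        # backpaginate the gap.
        "limit": 100,
    }
}
```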

View File

@@ -13,6 +13,7 @@
#
import enum
import logging
from itertools import chain
from typing import (
@@ -74,7 +75,6 @@ from synapse.types.handlers.sliding_sync import (
)
from synapse.types.state import StateFilter
from synapse.util import MutableOverlayMapping
from synapse.util.sentinel import Sentinel
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -83,6 +83,12 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
class Sentinel(enum.Enum):
# defining a sentinel in this way allows mypy to correctly handle the
# type of a dictionary lookup and subsequent type narrowing.
UNSET_SENTINEL = object()
# Helper definition for the types that we might return. We do this to avoid
# copying data between types (which can be expensive for many rooms).
RoomsForUserType = Union[RoomsForUserStateReset, RoomsForUser, RoomsForUserSlidingSync]
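A small self-contained sketch of the pattern that comment describes: using an enum member as the `dict.get` default lets mypy narrow the result with `is` checks, which a bare `object()` sentinel does not allow.

```python
from enum import Enum
from typing import Dict, Optional


class _Sentinel(Enum):
    # A single enum member acts as a typed sentinel value.
    UNSET = object()


def describe_sender(senders: Dict[str, Optional[str]], event_id: str) -> str:
    # The lookup result is Union[Optional[str], _Sentinel].
    sender = senders.get(event_id, _Sentinel.UNSET)
    if sender is _Sentinel.UNSET:
        return "event not found"
    # mypy has narrowed `sender` to Optional[str] here...
    if sender is None:
        return "event had no sender"
    # ...and to str here.
    return sender
```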

View File

@@ -9,7 +9,7 @@ from synapse.storage.databases.main.thread_subscriptions import (
AutomaticSubscriptionConflicted,
ThreadSubscription,
)
from synapse.types import EventOrderings, StreamKeyType, UserID
from synapse.types import EventOrderings, UserID
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -22,7 +22,6 @@ class ThreadSubscriptionsHandler:
self.store = hs.get_datastores().main
self.event_handler = hs.get_event_handler()
self.auth = hs.get_auth()
self._notifier = hs.get_notifier()
async def get_thread_subscription_settings(
self,
@@ -133,15 +132,6 @@ class ThreadSubscriptionsHandler:
errcode=Codes.MSC4306_CONFLICTING_UNSUBSCRIPTION,
)
if outcome is not None:
# wake up user streams (e.g. sliding sync) on the same worker
self._notifier.on_new_event(
StreamKeyType.THREAD_SUBSCRIPTIONS,
# outcome is a stream_id
outcome,
users=[user_id.to_string()],
)
return outcome
async def unsubscribe_user_from_thread(
@@ -172,19 +162,8 @@ class ThreadSubscriptionsHandler:
logger.info("rejecting thread subscriptions change (thread not accessible)")
raise NotFoundError("No such thread root")
outcome = await self.store.unsubscribe_user_from_thread(
return await self.store.unsubscribe_user_from_thread(
user_id.to_string(),
event.room_id,
thread_root_event_id,
)
if outcome is not None:
# wake up user streams (e.g. sliding sync) on the same worker
self._notifier.on_new_event(
StreamKeyType.THREAD_SUBSCRIPTIONS,
# outcome is a stream_id
outcome,
users=[user_id.to_string()],
)
return outcome

View File

@@ -27,7 +27,7 @@ from twisted.web.client import PartialDownloadError
from synapse.api.constants import LoginType
from synapse.api.errors import Codes, LoginError, SynapseError
from synapse.util.json import json_decoder
from synapse.util import json_decoder
if TYPE_CHECKING:
from synapse.server import HomeServer

View File

@@ -87,8 +87,8 @@ from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.logging.opentracing import set_tag, start_active_span, tags
from synapse.metrics import SERVER_NAME_LABEL
from synapse.types import ISynapseReactor, StrSequence
from synapse.util import json_decoder
from synapse.util.async_helpers import timeout_deferred
from synapse.util.json import json_decoder
if TYPE_CHECKING:
from synapse.server import HomeServer

View File

@@ -49,7 +49,7 @@ from synapse.http.federation.well_known_resolver import WellKnownResolver
from synapse.http.proxyagent import ProxyAgent
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.types import ISynapseReactor
from synapse.util.clock import Clock
from synapse.util import Clock
logger = logging.getLogger(__name__)

View File

@@ -27,6 +27,7 @@ from typing import Callable, Dict, Optional, Tuple
import attr
from twisted.internet import defer
from twisted.internet.interfaces import IReactorTime
from twisted.web.client import RedirectAgent
from twisted.web.http import stringToDatetime
from twisted.web.http_headers import Headers
@@ -34,10 +35,8 @@ from twisted.web.iweb import IAgent, IResponse
from synapse.http.client import BodyExceededMaxSize, read_body_with_max_size
from synapse.logging.context import make_deferred_yieldable
from synapse.types import ISynapseThreadlessReactor
from synapse.util import Clock, json_decoder
from synapse.util.caches.ttlcache import TTLCache
from synapse.util.clock import Clock
from synapse.util.json import json_decoder
from synapse.util.metrics import Measure
# period to cache .well-known results for by default
@@ -89,7 +88,7 @@ class WellKnownResolver:
def __init__(
self,
server_name: str,
reactor: ISynapseThreadlessReactor,
reactor: IReactorTime,
agent: IAgent,
user_agent: bytes,
well_known_cache: Optional[TTLCache[bytes, Optional[bytes]]] = None,

View File

@@ -89,8 +89,8 @@ from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.logging.opentracing import set_tag, start_active_span, tags
from synapse.metrics import SERVER_NAME_LABEL
from synapse.types import JsonDict
from synapse.util import json_decoder
from synapse.util.async_helpers import AwakenableSleeper, Linearizer, timeout_deferred
from synapse.util.json import json_decoder
from synapse.util.metrics import Measure
from synapse.util.stringutils import parse_and_validate_server_name

View File

@@ -52,11 +52,10 @@ from zope.interface import implementer
from twisted.internet import defer, interfaces, reactor
from twisted.internet.defer import CancelledError
from twisted.internet.interfaces import IReactorTime
from twisted.python import failure
from twisted.web import resource
from synapse.types import ISynapseThreadlessReactor
try:
from twisted.web.pages import notFound
except ImportError:
@@ -78,11 +77,10 @@ from synapse.api.errors import (
from synapse.config.homeserver import HomeServerConfig
from synapse.logging.context import defer_to_thread, preserve_fn, run_in_background
from synapse.logging.opentracing import active_span, start_active_span, trace_servlet
from synapse.util import Clock, json_encoder
from synapse.util.caches import intern_dict
from synapse.util.cancellation import is_function_cancellable
from synapse.util.clock import Clock
from synapse.util.iterutils import chunk_seq
from synapse.util.json import json_encoder
if TYPE_CHECKING:
import opentracing
@@ -412,7 +410,7 @@ class DirectServeJsonResource(_AsyncResource):
clock: Optional[Clock] = None,
):
if clock is None:
clock = Clock(cast(ISynapseThreadlessReactor, reactor))
clock = Clock(cast(IReactorTime, reactor))
super().__init__(clock, extract_context)
self.canonical_json = canonical_json
@@ -591,7 +589,7 @@ class DirectServeHtmlResource(_AsyncResource):
clock: Optional[Clock] = None,
):
if clock is None:
clock = Clock(cast(ISynapseThreadlessReactor, reactor))
clock = Clock(cast(IReactorTime, reactor))
super().__init__(clock, extract_context)

View File

@@ -51,7 +51,7 @@ from synapse.api.errors import Codes, SynapseError
from synapse.http import redact_uri
from synapse.http.server import HttpServer
from synapse.types import JsonDict, RoomAlias, RoomID, StrCollection
from synapse.util.json import json_decoder
from synapse.util import json_decoder
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -130,16 +130,6 @@ def parse_integer(
return parse_integer_from_args(args, name, default, required, negative)
@overload
def parse_integer_from_args(
args: Mapping[bytes, Sequence[bytes]],
name: str,
default: int,
required: Literal[False] = False,
negative: bool = False,
) -> int: ...
@overload
def parse_integer_from_args(
args: Mapping[bytes, Sequence[bytes]],

View File

@@ -802,15 +802,12 @@ def run_in_background(
deferred returned by the function completes.
To explain how the log contexts work here:
- When `run_in_background` is called, the calling logcontext is stored
("original"), we kick off the background task in the current context, and we
restore that original context before returning.
- For a completed deferred, that's the end of the story.
- For an incomplete deferred, when the background task finishes, we don't want to
leak our context into the reactor which would erroneously get attached to the
next operation picked up by the event loop. We add a callback to the deferred
which will clear the logging context after it finishes and yields control back to
the reactor.
- When this function is called, the current context is stored ("original"), we kick
off the background task, and we restore that original context before returning
- When the background task finishes, we don't want to leak our context into the
reactor which would erroneously get attached to the next operation picked up by
the event loop. We add a callback to the deferred which will clear the logging
context after it finishes and yields control back to the reactor.
Useful for wrapping functions that return a deferred or coroutine, which you don't
yield or await on (for instance because you want to pass it to
@@ -831,7 +828,6 @@ def run_in_background(
"""
calling_context = current_context()
try:
# (kick off the task in the current context)
res = f(*args, **kwargs)
except Exception:
# the assumption here is that the caller doesn't want to be disturbed
@@ -859,36 +855,22 @@ def run_in_background(
# The deferred has already completed
if d.called and not d.paused:
# If the function messes with logcontexts, we can assume it follows the Synapse
# logcontext rules (Rules for functions returning awaitables: "If the awaitable
# is already complete, the function returns with the same logcontext it started
# with."). If it function doesn't touch logcontexts at all, we can also assume
# the logcontext is unchanged.
#
# Either way, the function should have maintained the calling logcontext, so we
# can avoid messing with it further. Additionally, if the deferred has already
# completed, then it would be a mistake to then add a deferred callback (below)
# to reset the logcontext to the sentinel logcontext as that would run
# immediately (remember our goal is to maintain the calling logcontext when we
# return).
# The function should have maintained the logcontext, so we can
# optimise out the messing about
return d
# Since the function we called may follow the Synapse logcontext rules (Rules for
# functions returning awaitables: "If the awaitable is incomplete, the function
# clears the logcontext before returning"), the function may have reset the
# logcontext before returning, so we need to restore the calling logcontext now
# before we return ourselves.
# The function may have reset the context before returning, so we need to restore it
# now.
#
# Our goal is to have the caller logcontext unchanged after firing off the
# background task and returning.
set_current_context(calling_context)
# If the function we called is playing nice and following the Synapse logcontext
# rules, it will restore the original calling logcontext when the deferred completes;
# but there is nothing waiting for it, so it will get leaked into the reactor (which
# would then get picked up by the next thing the reactor does). We therefore need to
# reset the logcontext here (set the `sentinel` logcontext) before yielding control
# back to the reactor.
# The original logcontext will be restored when the deferred completes, but
# there is nothing waiting for it, so it will get leaked into the reactor (which
# would then get picked up by the next thing the reactor does). We therefore
# need to reset the logcontext here (set the `sentinel` logcontext) before
# yielding control back to the reactor.
#
# (If this feels asymmetric, consider it this way: we are
# effectively forking a new thread of execution. We are
@@ -910,9 +892,10 @@ def run_coroutine_in_background(
Useful for wrapping coroutines that you don't yield or await on (for
instance because you want to pass it to deferred.gatherResults()).
This is a special case of `run_in_background` where we can accept a coroutine
directly rather than a function. We can do this because coroutines do not continue
running once they have yielded.
This is a special case of `run_in_background` where we can accept a
coroutine directly rather than a function. We can do this because coroutines
do not start running until they are awaited, and so calling an async function
without awaiting it cannot change the log contexts.
This is an ergonomic helper so we can do this:
```python
@@ -923,7 +906,33 @@ def run_coroutine_in_background(
run_in_background(lambda: func1(arg1))
```
"""
return run_in_background(lambda: coroutine)
calling_context = current_context()
# Wrap the coroutine in a deferred, which will have the side effect of executing the
# coroutine in the background.
d = defer.ensureDeferred(coroutine)
# The function may have reset the context before returning, so we need to restore it
# now.
#
# Our goal is to have the caller logcontext unchanged after firing off the
# background task and returning.
set_current_context(calling_context)
# The original logcontext will be restored when the deferred completes, but
# there is nothing waiting for it, so it will get leaked into the reactor (which
# would then get picked up by the next thing the reactor does). We therefore
# need to reset the logcontext here (set the `sentinel` logcontext) before
# yielding control back to the reactor.
#
# (If this feels asymmetric, consider it this way: we are
# effectively forking a new thread of execution. We are
# probably currently within a ``with LoggingContext()`` block,
# which is supposed to have a single entry and exit point. But
# by spawning off another deferred, we are effectively
# adding a new exit point.)
d.addBoth(_set_context_cb, SENTINEL_CONTEXT)
return d
T = TypeVar("T")
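A usage sketch for the helper above, assuming two hypothetical async functions: both coroutines are kicked off without being awaited individually, and the calling logcontext is preserved across the gather.

```python
from twisted.internet import defer

from synapse.logging.context import (
    make_deferred_yieldable,
    run_coroutine_in_background,
)


async def func1(arg: str) -> str:  # hypothetical
    return f"one:{arg}"


async def func2(arg: str) -> str:  # hypothetical
    return f"two:{arg}"


async def do_both(arg1: str, arg2: str) -> None:
    # Kick off both coroutines in the background; the helper attaches the
    # sentinel-context callback to each deferred for us.
    d1 = run_coroutine_in_background(func1(arg1))
    d2 = run_coroutine_in_background(func2(arg2))

    # Wait for both, restoring our logcontext when the gather completes.
    await make_deferred_yieldable(
        defer.gatherResults([d1, d2], consumeErrors=True)
    )
```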

View File

@@ -60,18 +60,8 @@ class PeriodicallyFlushingMemoryHandler(MemoryHandler):
else:
reactor_to_use = reactor
# Call our hook when the reactor starts up
#
# type-ignore: Ideally, we'd use `Clock.call_when_running(...)`, but
# `PeriodicallyFlushingMemoryHandler` is instantiated via Python logging
# configuration, so it's not straightforward to pass in the homeserver's clock
# (and we don't want to burden other people's logging config with the details).
#
# The important reason why we want to use `Clock.call_when_running` is so that
# the callback runs with a logcontext as we want to know which server the logs
# came from. But since we don't log anything in the callback, it's safe to
# ignore the lint here.
reactor_to_use.callWhenRunning(on_reactor_running) # type: ignore[prefer-synapse-clock-call-when-running]
# call our hook when the reactor starts up
reactor_to_use.callWhenRunning(on_reactor_running)
def shouldFlush(self, record: LogRecord) -> bool:
"""

View File

@@ -204,7 +204,7 @@ from twisted.web.http import Request
from twisted.web.http_headers import Headers
from synapse.config import ConfigError
from synapse.util.json import json_decoder, json_encoder
from synapse.util import json_decoder, json_encoder
if TYPE_CHECKING:
from synapse.http.site import SynapseRequest

View File

@@ -54,8 +54,8 @@ from synapse.logging.context import (
make_deferred_yieldable,
run_in_background,
)
from synapse.util import Clock
from synapse.util.async_helpers import DeferredEvent
from synapse.util.clock import Clock
from synapse.util.stringutils import is_ascii
if TYPE_CHECKING:

View File

@@ -179,13 +179,11 @@ class MediaRepository:
# We get the media upload limits and sort them in descending order of
# time period, so that we can apply some optimizations.
self.default_media_upload_limits = hs.config.media.media_upload_limits
self.default_media_upload_limits.sort(
self.media_upload_limits = hs.config.media.media_upload_limits
self.media_upload_limits.sort(
key=lambda limit: limit.time_period_ms, reverse=True
)
self.media_repository_callbacks = hs.get_module_api_callbacks().media_repository
def _start_update_recently_accessed(self) -> Deferred:
return run_as_background_process(
"update_recently_accessed_media",
@@ -342,27 +340,16 @@ class MediaRepository:
# Check that the user has not exceeded any of the media upload limits.
# Use limits from module API if provided
media_upload_limits = (
await self.media_repository_callbacks.get_media_upload_limits_for_user(
auth_user.to_string()
)
)
# Otherwise use the default limits from config
if media_upload_limits is None:
# Note: the media upload limits are sorted so larger time periods are
# first.
media_upload_limits = self.default_media_upload_limits
# This is the total size of media uploaded by the user in the last
# `time_period_ms` milliseconds, or None if we haven't checked yet.
uploaded_media_size: Optional[int] = None
for limit in media_upload_limits:
# Note: the media upload limits are sorted so larger time periods are
# first.
for limit in self.media_upload_limits:
# We only need to check the amount of media uploaded by the user in
# this latest (smaller) time period if the amount of media uploaded
# in a previous (larger) time period is below the limit.
# in a previous (larger) time period is above the limit.
#
# This optimization means that in the common case where the user
# hasn't uploaded much media, we only need to query the database
@@ -376,12 +363,6 @@ class MediaRepository:
)
if uploaded_media_size + content_length > limit.max_bytes:
await self.media_repository_callbacks.on_media_upload_limit_exceeded(
user_id=auth_user.to_string(),
limit=limit,
sent_bytes=uploaded_media_size,
attempted_bytes=content_length,
)
raise SynapseError(
400, "Media upload limit exceeded", Codes.RESOURCE_LIMIT_EXCEEDED
)
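A self-contained sketch (plain Python, no Synapse imports) of the check described by these comments: limits are sorted by descending time period, and the database only needs to be re-queried for a smaller period when the larger-period total could actually exceed that smaller limit.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass(frozen=True)
class UploadLimit:
    max_bytes: int
    time_period_ms: int


def upload_allowed(
    limits: List[UploadLimit],
    content_length: int,
    get_uploaded_size: Callable[[int], int],
) -> bool:
    """Return True if an upload of `content_length` bytes is within every limit.

    `get_uploaded_size(period_ms)` stands in for the database query that sums
    the bytes uploaded during the last `period_ms` milliseconds.
    """
    # Largest time periods first, mirroring the sort in MediaRepository.
    limits = sorted(limits, key=lambda limit: limit.time_period_ms, reverse=True)

    uploaded: Optional[int] = None
    for limit in limits:
        # Only query for this (smaller) period if the total for the previous
        # (larger) period could exceed this limit; uploads in a smaller window
        # are a subset of those in a larger one, so otherwise the check passes.
        if uploaded is None or uploaded + content_length > limit.max_bytes:
            uploaded = get_uploaded_size(limit.time_period_ms)
        if uploaded + content_length > limit.max_bytes:
            return False
    return True
```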

View File

@@ -55,7 +55,7 @@ from synapse.api.errors import NotFoundError
from synapse.logging.context import defer_to_thread, run_in_background
from synapse.logging.opentracing import start_active_span, trace, trace_with_opname
from synapse.media._base import ThreadedFileSender
from synapse.util.clock import Clock
from synapse.util import Clock
from synapse.util.file_consumer import BackgroundFileConsumer
from ..types import JsonDict

View File

@@ -27,7 +27,7 @@ import attr
from synapse.media.preview_html import parse_html_description
from synapse.types import JsonDict
from synapse.util.json import json_decoder
from synapse.util import json_decoder
if TYPE_CHECKING:
from lxml import etree

View File

@@ -46,9 +46,9 @@ from synapse.media.oembed import OEmbedProvider
from synapse.media.preview_html import decode_body, parse_html_to_open_graph
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import JsonDict, UserID
from synapse.util import json_encoder
from synapse.util.async_helpers import ObservableDeferred
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.json import json_encoder
from synapse.util.stringutils import random_string
if TYPE_CHECKING:

View File

@@ -286,11 +286,9 @@ def run_as_background_process(
).dec()
# To explain how the log contexts work here:
# - When `run_as_background_process` is called, the current context is stored
# (using `PreserveLoggingContext`), we kick off the background task, and we
# restore the original context before returning (also part of
# `PreserveLoggingContext`).
# - The background task runs in its own new logcontext named after `desc`
# - When this function is called, the current context is stored (using
# `PreserveLoggingContext`), we kick off the background task, and we restore the
# original context before returning (also part of `PreserveLoggingContext`).
# - When the background task finishes, we don't want to leak our background context
# into the reactor which would erroneously get attached to the next operation
# picked up by the event loop. We use `PreserveLoggingContext` to set the

View File

@@ -50,7 +50,6 @@ from synapse.api.constants import ProfileFields
from synapse.api.errors import SynapseError
from synapse.api.presence import UserPresenceState
from synapse.config import ConfigError
from synapse.config.repository import MediaUploadLimit
from synapse.events import EventBase
from synapse.events.presence_router import (
GET_INTERESTED_USERS_CALLBACK,
@@ -95,9 +94,7 @@ from synapse.module_api.callbacks.account_validity_callbacks import (
)
from synapse.module_api.callbacks.media_repository_callbacks import (
GET_MEDIA_CONFIG_FOR_USER_CALLBACK,
GET_MEDIA_UPLOAD_LIMITS_FOR_USER_CALLBACK,
IS_USER_ALLOWED_TO_UPLOAD_MEDIA_OF_SIZE_CALLBACK,
ON_MEDIA_UPLOAD_LIMIT_EXCEEDED_CALLBACK,
)
from synapse.module_api.callbacks.ratelimit_callbacks import (
GET_RATELIMIT_OVERRIDE_FOR_USER_CALLBACK,
@@ -158,9 +155,9 @@ from synapse.types import (
create_requester,
)
from synapse.types.state import StateFilter
from synapse.util import Clock
from synapse.util.async_helpers import maybe_awaitable
from synapse.util.caches.descriptors import CachedFunction, cached as _cached
from synapse.util.clock import Clock
from synapse.util.frozenutils import freeze
if TYPE_CHECKING:
@@ -208,7 +205,6 @@ __all__ = [
"RoomAlias",
"UserProfile",
"RatelimitOverride",
"MediaUploadLimit",
]
logger = logging.getLogger(__name__)
@@ -466,12 +462,6 @@ class ModuleApi:
is_user_allowed_to_upload_media_of_size: Optional[
IS_USER_ALLOWED_TO_UPLOAD_MEDIA_OF_SIZE_CALLBACK
] = None,
get_media_upload_limits_for_user: Optional[
GET_MEDIA_UPLOAD_LIMITS_FOR_USER_CALLBACK
] = None,
on_media_upload_limit_exceeded: Optional[
ON_MEDIA_UPLOAD_LIMIT_EXCEEDED_CALLBACK
] = None,
) -> None:
"""Registers callbacks for media repository capabilities.
Added in Synapse v1.132.0.
@@ -479,8 +469,6 @@ class ModuleApi:
return self._callbacks.media_repository.register_callbacks(
get_media_config_for_user=get_media_config_for_user,
is_user_allowed_to_upload_media_of_size=is_user_allowed_to_upload_media_of_size,
get_media_upload_limits_for_user=get_media_upload_limits_for_user,
on_media_upload_limit_exceeded=on_media_upload_limit_exceeded,
)
def register_third_party_rules_callbacks(

View File

@@ -15,7 +15,6 @@
import logging
from typing import TYPE_CHECKING, Awaitable, Callable, List, Optional
from synapse.config.repository import MediaUploadLimit
from synapse.types import JsonDict
from synapse.util.async_helpers import delay_cancellation
from synapse.util.metrics import Measure
@@ -29,14 +28,6 @@ GET_MEDIA_CONFIG_FOR_USER_CALLBACK = Callable[[str], Awaitable[Optional[JsonDict
IS_USER_ALLOWED_TO_UPLOAD_MEDIA_OF_SIZE_CALLBACK = Callable[[str, int], Awaitable[bool]]
GET_MEDIA_UPLOAD_LIMITS_FOR_USER_CALLBACK = Callable[
[str], Awaitable[Optional[List[MediaUploadLimit]]]
]
ON_MEDIA_UPLOAD_LIMIT_EXCEEDED_CALLBACK = Callable[
[str, MediaUploadLimit, int, int], Awaitable[None]
]
class MediaRepositoryModuleApiCallbacks:
def __init__(self, hs: "HomeServer") -> None:
@@ -48,12 +39,6 @@ class MediaRepositoryModuleApiCallbacks:
self._is_user_allowed_to_upload_media_of_size_callbacks: List[
IS_USER_ALLOWED_TO_UPLOAD_MEDIA_OF_SIZE_CALLBACK
] = []
self._get_media_upload_limits_for_user_callbacks: List[
GET_MEDIA_UPLOAD_LIMITS_FOR_USER_CALLBACK
] = []
self._on_media_upload_limit_exceeded_callbacks: List[
ON_MEDIA_UPLOAD_LIMIT_EXCEEDED_CALLBACK
] = []
def register_callbacks(
self,
@@ -61,12 +46,6 @@ class MediaRepositoryModuleApiCallbacks:
is_user_allowed_to_upload_media_of_size: Optional[
IS_USER_ALLOWED_TO_UPLOAD_MEDIA_OF_SIZE_CALLBACK
] = None,
get_media_upload_limits_for_user: Optional[
GET_MEDIA_UPLOAD_LIMITS_FOR_USER_CALLBACK
] = None,
on_media_upload_limit_exceeded: Optional[
ON_MEDIA_UPLOAD_LIMIT_EXCEEDED_CALLBACK
] = None,
) -> None:
"""Register callbacks from module for each hook."""
if get_media_config_for_user is not None:
@@ -77,16 +56,6 @@ class MediaRepositoryModuleApiCallbacks:
is_user_allowed_to_upload_media_of_size
)
if get_media_upload_limits_for_user is not None:
self._get_media_upload_limits_for_user_callbacks.append(
get_media_upload_limits_for_user
)
if on_media_upload_limit_exceeded is not None:
self._on_media_upload_limit_exceeded_callbacks.append(
on_media_upload_limit_exceeded
)
async def get_media_config_for_user(self, user_id: str) -> Optional[JsonDict]:
for callback in self._get_media_config_for_user_callbacks:
with Measure(
@@ -114,47 +83,3 @@ class MediaRepositoryModuleApiCallbacks:
return res
return True
async def get_media_upload_limits_for_user(
self, user_id: str
) -> Optional[List[MediaUploadLimit]]:
"""
Get the first non-None list of MediaUploadLimits for the user from the registered callbacks.
If a list is returned it will be sorted in descending order of duration.
"""
for callback in self._get_media_upload_limits_for_user_callbacks:
with Measure(
self.clock,
name=f"{callback.__module__}.{callback.__qualname__}",
server_name=self.server_name,
):
res: Optional[List[MediaUploadLimit]] = await delay_cancellation(
callback(user_id)
)
if res is not None: # to allow [] to be returned meaning no limit
# We sort them in descending order of time period
res.sort(key=lambda limit: limit.time_period_ms, reverse=True)
return res
return None
async def on_media_upload_limit_exceeded(
self,
user_id: str,
limit: MediaUploadLimit,
sent_bytes: int,
attempted_bytes: int,
) -> None:
for callback in self._on_media_upload_limit_exceeded_callbacks:
with Measure(
self.clock,
name=f"{callback.__module__}.{callback.__qualname__}",
server_name=self.server_name,
):
# Use a copy of the data in case the module modifies it
limit_copy = MediaUploadLimit(
max_bytes=limit.max_bytes, time_period_ms=limit.time_period_ms
)
await delay_cancellation(
callback(user_id, limit_copy, sent_bytes, attempted_bytes)
)
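A hypothetical module hook matching the `ON_MEDIA_UPLOAD_LIMIT_EXCEEDED_CALLBACK` signature above, for example to surface rejected uploads to an operator; the logging behaviour is illustrative only.

```python
import logging

from synapse.config.repository import MediaUploadLimit

logger = logging.getLogger(__name__)


async def on_media_upload_limit_exceeded(
    user_id: str,
    limit: MediaUploadLimit,
    sent_bytes: int,
    attempted_bytes: int,
) -> None:
    # Illustrative only: log the rejection somewhere an operator will see it.
    logger.warning(
        "%s exceeded the %d-byte upload limit for the last %dms: "
        "already sent %d bytes, attempted to add %d more",
        user_id,
        limit.max_bytes,
        limit.time_period_ms,
        sent_bytes,
        attempted_bytes,
    )
```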

View File

@@ -532,7 +532,6 @@ class Notifier:
StreamKeyType.TO_DEVICE,
StreamKeyType.TYPING,
StreamKeyType.UN_PARTIAL_STATED_ROOMS,
StreamKeyType.THREAD_SUBSCRIPTIONS,
],
new_token: int,
users: Optional[Collection[Union[str, UserID]]] = None,

View File

@@ -44,7 +44,6 @@ from synapse.replication.tcp.streams import (
UnPartialStatedEventStream,
UnPartialStatedRoomStream,
)
from synapse.replication.tcp.streams._base import ThreadSubscriptionsStream
from synapse.replication.tcp.streams.events import (
EventsStream,
EventsStreamEventRow,
@@ -256,12 +255,6 @@ class ReplicationDataHandler:
self._state_storage_controller.notify_event_un_partial_stated(
row.event_id
)
elif stream_name == ThreadSubscriptionsStream.NAME:
self.notifier.on_new_event(
StreamKeyType.THREAD_SUBSCRIPTIONS,
token,
users=[row.user_id for row in rows],
)
await self._presence_handler.process_replication_rows(
stream_name, instance_name, token, rows

View File

@@ -29,7 +29,7 @@ import logging
from typing import List, Optional, Tuple, Type, TypeVar
from synapse.replication.tcp.streams._base import StreamRow
from synapse.util.json import json_decoder, json_encoder
from synapse.util import json_decoder, json_encoder
logger = logging.getLogger(__name__)

View File

@@ -27,7 +27,7 @@ from prometheus_client import Counter, Histogram
from synapse.logging import opentracing
from synapse.logging.context import make_deferred_yieldable
from synapse.metrics import SERVER_NAME_LABEL
from synapse.util.json import json_decoder, json_encoder
from synapse.util import json_decoder, json_encoder
if TYPE_CHECKING:
from txredisapi import ConnectionHandler

View File

@@ -55,7 +55,7 @@ from synapse.replication.tcp.commands import (
ServerCommand,
parse_command_from_line,
)
from synapse.util.clock import Clock
from synapse.util import Clock
from synapse.util.stringutils import random_string
if TYPE_CHECKING:

View File

@@ -23,19 +23,10 @@
import logging
import re
from collections import Counter
from http import HTTPStatus
from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Optional, Tuple, Union
from typing import TYPE_CHECKING, Any, Dict, Optional, Tuple
from typing_extensions import Self
from synapse._pydantic_compat import (
StrictBool,
StrictStr,
validator,
)
from synapse.api.auth.mas import MasDelegatedAuth
from synapse.api.errors import (
Codes,
InteractiveAuthIncompleteError,
InvalidAPICallError,
SynapseError,
@@ -46,13 +37,11 @@ from synapse.http.servlet import (
parse_integer,
parse_json_object_from_request,
parse_string,
validate_json_object,
)
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import log_kv, set_tag
from synapse.rest.client._base import client_patterns, interactive_auth_handler
from synapse.types import JsonDict, StreamToken
from synapse.types.rest import RequestBodyModel
from synapse.util.cancellation import cancellable
if TYPE_CHECKING:
@@ -70,6 +59,7 @@ class KeyUploadServlet(RestServlet):
"device_keys": {
"user_id": "<user_id>",
"device_id": "<device_id>",
"valid_until_ts": <millisecond_timestamp>,
"algorithms": [
"m.olm.curve25519-aes-sha2",
]
@@ -121,123 +111,12 @@ class KeyUploadServlet(RestServlet):
self._clock = hs.get_clock()
self._store = hs.get_datastores().main
class KeyUploadRequestBody(RequestBodyModel):
"""
The body of a `POST /_matrix/client/v3/keys/upload` request.
Based on https://spec.matrix.org/v1.16/client-server-api/#post_matrixclientv3keysupload.
"""
class DeviceKeys(RequestBodyModel):
algorithms: List[StrictStr]
"""The encryption algorithms supported by this device."""
device_id: StrictStr
"""The ID of the device these keys belong to. Must match the device ID used when logging in."""
keys: Mapping[StrictStr, StrictStr]
"""
Public identity keys. The names of the properties should be in the
format `<algorithm>:<device_id>`. The keys themselves should be encoded as
specified by the key algorithm.
"""
signatures: Mapping[StrictStr, Mapping[StrictStr, StrictStr]]
"""Signatures for the device key object. A map from user ID, to a map from "<algorithm>:<device_id>" to the signature."""
user_id: StrictStr
"""The ID of the user the device belongs to. Must match the user ID used when logging in."""
class KeyObject(RequestBodyModel):
key: StrictStr
"""The key, encoded using unpadded base64."""
fallback: Optional[StrictBool] = False
"""Whether this is a fallback key. Only used when handling fallback keys."""
signatures: Mapping[StrictStr, Mapping[StrictStr, StrictStr]]
"""Signature for the device. Mapped from user ID to another map of key signing identifier to the signature itself.
See the following for more detail: https://spec.matrix.org/v1.16/appendices/#signing-details
"""
device_keys: Optional[DeviceKeys] = None
"""Identity keys for the device. May be absent if no new identity keys are required."""
fallback_keys: Optional[Mapping[StrictStr, Union[StrictStr, KeyObject]]]
"""
The public key which should be used if the device's one-time keys are
exhausted. The fallback key is not deleted once used, but should be
replaced when additional one-time keys are being uploaded. The server
will notify the client of the fallback key being used through `/sync`.
There can only be at most one key per algorithm uploaded, and the server
will only persist one key per algorithm.
When uploading a signed key, an additional fallback: true key should be
included to denote that the key is a fallback key.
May be absent if a new fallback key is not required.
"""
@validator("fallback_keys", pre=True)
def validate_fallback_keys(cls: Self, v: Any) -> Any:
if v is None:
return v
if not isinstance(v, dict):
raise TypeError("fallback_keys must be a mapping")
for k in v.keys():
if not len(k.split(":")) == 2:
raise SynapseError(
code=HTTPStatus.BAD_REQUEST,
errcode=Codes.BAD_JSON,
msg=f"Invalid fallback_keys key {k!r}. "
'Expected "<algorithm>:<device_id>".',
)
return v
one_time_keys: Optional[Mapping[StrictStr, Union[StrictStr, KeyObject]]] = None
"""
One-time public keys for "pre-key" messages. The names of the properties
should be in the format `<algorithm>:<key_id>`.
The format of the key is determined by the key algorithm, see:
https://spec.matrix.org/v1.16/client-server-api/#key-algorithms.
"""
@validator("one_time_keys", pre=True)
def validate_one_time_keys(cls: Self, v: Any) -> Any:
if v is None:
return v
if not isinstance(v, dict):
raise TypeError("one_time_keys must be a mapping")
for k, _ in v.items():
if not len(k.split(":")) == 2:
raise SynapseError(
code=HTTPStatus.BAD_REQUEST,
errcode=Codes.BAD_JSON,
msg=f"Invalid one_time_keys key {k!r}. "
'Expected "<algorithm>:<device_id>".',
)
return v
async def on_POST(
self, request: SynapseRequest, device_id: Optional[str]
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request, allow_guest=True)
user_id = requester.user.to_string()
# Parse the request body. Validate separately, as the handler expects a
# plain dict, rather than any parsed object.
#
# Note: It would be nice to work with a parsed object, but the handler
# needs to encode portions of the request body as canonical JSON before
# storing the result in the DB. There's little point in converting to a
# parsed object and then back to a dict.
body = parse_json_object_from_request(request)
validate_json_object(body, self.KeyUploadRequestBody)
if device_id is not None:
# Providing the device_id should only be done for setting keys
@@ -270,31 +149,8 @@ class KeyUploadServlet(RestServlet):
400, "To upload keys, you must pass device_id when authenticating"
)
if "device_keys" in body and isinstance(body["device_keys"], dict):
# Validate the provided `user_id` and `device_id` fields in
# `device_keys` match that of the requesting user. We can't do
# this directly in the pydantic model as we don't have access
# to the requester yet.
#
# TODO: We could use ValidationInfo when we switch to Pydantic v2.
# https://docs.pydantic.dev/latest/concepts/validators/#validation-info
if body["device_keys"].get("user_id") != user_id:
raise SynapseError(
code=HTTPStatus.BAD_REQUEST,
errcode=Codes.BAD_JSON,
msg="Provided `user_id` in `device_keys` does not match that of the authenticated user",
)
if body["device_keys"].get("device_id") != device_id:
raise SynapseError(
code=HTTPStatus.BAD_REQUEST,
errcode=Codes.BAD_JSON,
msg="Provided `device_id` in `device_keys` does not match that of the authenticated user device",
)
result = await self.e2e_keys_handler.upload_keys_for_user(
user_id=user_id,
device_id=device_id,
keys=body,
user_id=user_id, device_id=device_id, keys=body
)
return 200, result
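An illustrative request body fragment (all identifiers and key material are placeholders) showing the `<algorithm>:<key_id>` property naming that these validators enforce, and the explicit `fallback: true` flag on a signed fallback key.

```python
example_keys_upload_body = {
    "one_time_keys": {
        # Property names must take the "<algorithm>:<key_id>" form.
        "signed_curve25519:AAAAHg": {
            "key": "placeholderOneTimeKeyBase64",
            "signatures": {
                "@alice:example.org": {"ed25519:DEVICEID": "placeholderSignatureBase64"}
            },
        },
    },
    "fallback_keys": {
        "signed_curve25519:AAAAGj": {
            "key": "placeholderFallbackKeyBase64",
            # Marks this as a fallback key, per the KeyObject model above.
            "fallback": True,
            "signatures": {
                "@alice:example.org": {"ed25519:DEVICEID": "placeholderSignatureBase64"}
            },
        },
    },
}
```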
@@ -543,15 +399,10 @@ class SigningKeyUploadServlet(RestServlet):
if not keys_are_different:
return 200, {}
# MSC4190 can skip UIA for replacing cross-signing keys as well.
is_appservice_with_msc4190 = (
requester.app_service and requester.app_service.msc4190_device_management
)
# The keys are different; is x-signing set up? If no, then this is first-time
# setup, and that is allowed without UIA, per MSC3967.
# If yes, then we need to authenticate the change.
if is_cross_signing_setup and not is_appservice_with_msc4190:
if is_cross_signing_setup:
# With MSC3861, UIA is not possible. Instead, the auth service has to
# explicitly mark the master key as replaceable.
if self.hs.config.mas.enabled:

View File

@@ -216,13 +216,6 @@ class LoginRestServlet(RestServlet):
"This login method is only valid for application services"
)
if appservice.msc4190_device_management:
raise SynapseError(
400,
"This appservice has MSC4190 enabled, so appservice login cannot be used.",
errcode=Codes.APPSERVICE_LOGIN_UNSUPPORTED,
)
if appservice.is_rate_limited():
await self._address_ratelimiter.ratelimit(
None, request.getClientAddress().host

View File

@@ -782,12 +782,8 @@ class RegisterRestServlet(RestServlet):
user_id, appservice = await self.registration_handler.appservice_register(
username, as_token
)
if appservice.msc4190_device_management and not body.get("inhibit_login"):
raise SynapseError(
400,
"This appservice has MSC4190 enabled, so the inhibit_login parameter must be set to true.",
errcode=Codes.APPSERVICE_LOGIN_UNSUPPORTED,
)
if appservice.msc4190_device_management:
body["inhibit_login"] = True
return await self._create_registration_details(
user_id,
@@ -927,12 +923,6 @@ class RegisterAppServiceOnlyRestServlet(RestServlet):
"Registration has been disabled. Only m.login.application_service registrations are allowed.",
errcode=Codes.FORBIDDEN,
)
if not body.get("inhibit_login"):
raise SynapseError(
400,
"This server uses OAuth2, so the inhibit_login parameter must be set to true for appservice registrations.",
errcode=Codes.APPSERVICE_LOGIN_UNSUPPORTED,
)
kind = parse_string(request, "kind", default="user")
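An illustrative appservice registration body (values are placeholders): with MSC4190 device management enabled, `inhibit_login` must be set to true or the request is rejected as shown above; the appservice token is supplied in the `Authorization` header as usual.

```python
# Hypothetical registration request body for an appservice sender or ghost user.
example_register_body = {
    "type": "m.login.application_service",
    "username": "_bridge_alice",
    "inhibit_login": True,  # required when MSC4190 device management is enabled
}
```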

View File

@@ -23,8 +23,6 @@ import logging
from collections import defaultdict
from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Optional, Tuple, Union
import attr
from synapse.api.constants import AccountDataTypes, EduTypes, Membership, PresenceState
from synapse.api.errors import Codes, StoreError, SynapseError
from synapse.api.filtering import FilterCollection
@@ -58,8 +56,8 @@ from synapse.logging.opentracing import log_kv, set_tag, trace_with_opname
from synapse.rest.admin.experimental_features import ExperimentalFeature
from synapse.types import JsonDict, Requester, SlidingSyncStreamToken, StreamToken
from synapse.types.rest.client import SlidingSyncBody
from synapse.util import json_decoder
from synapse.util.caches.lrucache import LruCache
from synapse.util.json import json_decoder
from ._base import client_patterns, set_timeline_upper_limit
@@ -363,6 +361,9 @@ class SyncRestServlet(RestServlet):
# https://github.com/matrix-org/matrix-doc/blob/54255851f642f84a4f1aaf7bc063eebe3d76752b/proposals/2732-olm-fallback-keys.md
# states that this field should always be included, as long as the server supports the feature.
response["org.matrix.msc2732.device_unused_fallback_key_types"] = (
sync_result.device_unused_fallback_key_types
)
response["device_unused_fallback_key_types"] = (
sync_result.device_unused_fallback_key_types
)
@@ -631,21 +632,12 @@ class SyncRestServlet(RestServlet):
class SlidingSyncRestServlet(RestServlet):
"""
API endpoint for MSC4186 Simplified Sliding Sync `/sync`, which was historically derived
from MSC3575 (Sliding Sync; now abandoned). Allows for clients to request a
API endpoint for MSC3575 Sliding Sync `/sync`. Allows for clients to request a
subset (sliding window) of rooms, state, and timeline events (just what they need)
in order to bootstrap quickly and subscribe to only what the client cares about.
Because the client can specify what it cares about, we can respond quickly and skip
all of the work we would normally have to do with a sync v2 response.
Extensions of various features are defined in:
- to-device messaging (MSC3885)
- end-to-end encryption (MSC3884)
- typing notifications (MSC3961)
- receipts (MSC3960)
- account data (MSC3959)
- thread subscriptions (MSC4308)
Request query parameters:
timeout: How long to wait for new events in milliseconds.
pos: Stream position token when asking for incremental deltas.
@@ -1082,48 +1074,9 @@ class SlidingSyncRestServlet(RestServlet):
"rooms": extensions.typing.room_id_to_typing_map,
}
# excludes both None and falsy `thread_subscriptions`
if extensions.thread_subscriptions:
serialized_extensions["io.element.msc4308.thread_subscriptions"] = (
_serialise_thread_subscriptions(extensions.thread_subscriptions)
)
return serialized_extensions
def _serialise_thread_subscriptions(
thread_subscriptions: SlidingSyncResult.Extensions.ThreadSubscriptionsExtension,
) -> JsonDict:
out: JsonDict = {}
if thread_subscriptions.subscribed:
out["subscribed"] = {
room_id: {
thread_root_id: attr.asdict(
change, filter=lambda _attr, v: v is not None
)
for thread_root_id, change in room_threads.items()
}
for room_id, room_threads in thread_subscriptions.subscribed.items()
}
if thread_subscriptions.unsubscribed:
out["unsubscribed"] = {
room_id: {
thread_root_id: attr.asdict(
change, filter=lambda _attr, v: v is not None
)
for thread_root_id, change in room_threads.items()
}
for room_id, room_threads in thread_subscriptions.unsubscribed.items()
}
if thread_subscriptions.prev_batch:
out["prev_batch"] = thread_subscriptions.prev_batch.to_string()
return out
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
SyncRestServlet(hs).register(http_server)
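
For reference, a rough sketch of the JSON shape that `_serialise_thread_subscriptions` produces under the `io.element.msc4308.thread_subscriptions` extension key; the room IDs, thread root IDs, bump stamps, and token string are invented for illustration.

```python
# Approximate output shape of _serialise_thread_subscriptions (illustrative
# values only; the prev_batch token string is a placeholder).
example_thread_subscriptions_extension = {
    "subscribed": {
        "!room:example.org": {
            "$thread_root_a": {"automatic": False, "bump_stamp": 42},
        },
    },
    "unsubscribed": {
        "!room:example.org": {
            "$thread_root_b": {"bump_stamp": 41},
        },
    },
    # Only present when there is older data to page back through.
    "prev_batch": "example_token",
}
```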


@@ -1,39 +1,21 @@
from http import HTTPStatus
from typing import TYPE_CHECKING, Dict, Optional, Tuple
import attr
from typing_extensions import TypeAlias
from typing import TYPE_CHECKING, Optional, Tuple
from synapse.api.errors import Codes, NotFoundError, SynapseError
from synapse.http.server import HttpServer
from synapse.http.servlet import (
RestServlet,
parse_and_validate_json_object_from_request,
parse_integer,
parse_string,
)
from synapse.http.site import SynapseRequest
from synapse.rest.client._base import client_patterns
from synapse.types import (
JsonDict,
RoomID,
SlidingSyncStreamToken,
ThreadSubscriptionsToken,
)
from synapse.types.handlers.sliding_sync import SlidingSyncResult
from synapse.types import JsonDict, RoomID
from synapse.types.rest import RequestBodyModel
from synapse.util.pydantic_models import AnyEventId
if TYPE_CHECKING:
from synapse.server import HomeServer
_ThreadSubscription: TypeAlias = (
SlidingSyncResult.Extensions.ThreadSubscriptionsExtension.ThreadSubscription
)
_ThreadUnsubscription: TypeAlias = (
SlidingSyncResult.Extensions.ThreadSubscriptionsExtension.ThreadUnsubscription
)
class ThreadSubscriptionsRestServlet(RestServlet):
PATTERNS = client_patterns(
@@ -118,130 +100,6 @@ class ThreadSubscriptionsRestServlet(RestServlet):
return HTTPStatus.OK, {}
class ThreadSubscriptionsPaginationRestServlet(RestServlet):
PATTERNS = client_patterns(
"/io.element.msc4308/thread_subscriptions$",
unstable=True,
releases=(),
)
CATEGORY = "Thread Subscriptions requests (unstable)"
# Maximum number of thread subscriptions to return in one request.
MAX_LIMIT = 512
def __init__(self, hs: "HomeServer"):
self.auth = hs.get_auth()
self.is_mine = hs.is_mine
self.store = hs.get_datastores().main
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
limit = min(
parse_integer(request, "limit", default=100, negative=False),
ThreadSubscriptionsPaginationRestServlet.MAX_LIMIT,
)
from_end_opt = parse_string(request, "from", required=False)
to_start_opt = parse_string(request, "to", required=False)
_direction = parse_string(request, "dir", required=True, allowed_values=("b",))
if limit <= 0:
# condition needed because `negative=False` still allows 0
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"limit must be greater than 0",
errcode=Codes.INVALID_PARAM,
)
if from_end_opt is not None:
try:
# because of backwards pagination, the `from` token is actually the
# bound closest to the end of the stream
end_stream_id = ThreadSubscriptionsToken.from_string(
from_end_opt
).stream_id
except ValueError:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"`from` is not a valid token",
errcode=Codes.INVALID_PARAM,
)
else:
end_stream_id = self.store.get_max_thread_subscriptions_stream_id()
if to_start_opt is not None:
# because of backwards pagination, the `to` token is actually the
# bound closest to the start of the stream
try:
start_stream_id = ThreadSubscriptionsToken.from_string(
to_start_opt
).stream_id
except ValueError:
# we also accept sliding sync `pos` tokens on this parameter
try:
sliding_sync_pos = await SlidingSyncStreamToken.from_string(
self.store, to_start_opt
)
start_stream_id = (
sliding_sync_pos.stream_token.thread_subscriptions_key
)
except ValueError:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"`to` is not a valid token",
errcode=Codes.INVALID_PARAM,
)
else:
# the start of time is ID 1; the lower bound is exclusive though
start_stream_id = 0
subscriptions = (
await self.store.get_latest_updated_thread_subscriptions_for_user(
requester.user.to_string(),
from_id=start_stream_id,
to_id=end_stream_id,
limit=limit,
)
)
subscribed_threads: Dict[str, Dict[str, JsonDict]] = {}
unsubscribed_threads: Dict[str, Dict[str, JsonDict]] = {}
for stream_id, room_id, thread_root_id, subscribed, automatic in subscriptions:
if subscribed:
subscribed_threads.setdefault(room_id, {})[thread_root_id] = (
attr.asdict(
_ThreadSubscription(
automatic=automatic,
bump_stamp=stream_id,
)
)
)
else:
unsubscribed_threads.setdefault(room_id, {})[thread_root_id] = (
attr.asdict(_ThreadUnsubscription(bump_stamp=stream_id))
)
result: JsonDict = {}
if subscribed_threads:
result["subscribed"] = subscribed_threads
if unsubscribed_threads:
result["unsubscribed"] = unsubscribed_threads
if len(subscriptions) == limit:
# We hit the limit, so there might be more entries to return.
# Generate a new token that has moved backwards, ready for the next
# request.
min_returned_stream_id, _, _, _, _ = subscriptions[0]
result["end"] = ThreadSubscriptionsToken(
# We subtract one because the 'later in the stream' bound is inclusive,
# and we already saw the element at index 0.
stream_id=min_returned_stream_id - 1
).to_string()
return HTTPStatus.OK, result
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
if hs.config.experimental.msc4306_enabled:
ThreadSubscriptionsRestServlet(hs).register(http_server)
ThreadSubscriptionsPaginationRestServlet(hs).register(http_server)
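
A minimal client-side sketch (not part of this change) of how the backwards pagination above can be driven, feeding each returned `end` token back in as the next `from`; the base URL and access token are placeholders.

```python
# Page backwards through thread subscriptions until the server stops
# returning an "end" token. BASE_URL and ACCESS_TOKEN are placeholders.
import requests

BASE_URL = "https://synapse.example.com/_matrix/client/unstable"
ACCESS_TOKEN = "access_token_placeholder"

params = {"dir": "b", "limit": 100}
while True:
    resp = requests.get(
        f"{BASE_URL}/io.element.msc4308/thread_subscriptions",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params=params,
        timeout=10,
    )
    resp.raise_for_status()
    page = resp.json()
    # ... handle page.get("subscribed", {}) and page.get("unsubscribed", {}) ...
    if "end" not in page:
        break  # fewer results than the limit: nothing further back in the stream
    params["from"] = page["end"]  # continue from just before what we have seen
```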


@@ -180,8 +180,6 @@ class VersionsRestServlet(RestServlet):
"org.matrix.msc4155": self.config.experimental.msc4155_enabled,
# MSC4306: Support for thread subscriptions
"org.matrix.msc4306": self.config.experimental.msc4306_enabled,
# MSC4169: Backwards-compatible redaction sending using `/send`
"com.beeper.msc4169": self.config.experimental.msc4169_enabled,
},
},
)
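
For completeness, a small sketch (not from this diff) of how a client might read these advertised flags; the homeserver URL is a placeholder.

```python
# Check whether the homeserver advertises MSC4306 thread subscription support
# in its /_matrix/client/versions response. BASE_URL is a placeholder.
import requests

BASE_URL = "https://synapse.example.com"

versions = requests.get(f"{BASE_URL}/_matrix/client/versions", timeout=10).json()
msc4306_supported = versions.get("unstable_features", {}).get("org.matrix.msc4306", False)
print("MSC4306 thread subscriptions advertised:", msc4306_supported)
```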


@@ -38,8 +38,8 @@ from synapse.http.servlet import (
from synapse.storage.keys import FetchKeyResultForRemote
from synapse.types import JsonDict
from synapse.types.rest import RequestBodyModel
from synapse.util import json_decoder
from synapse.util.async_helpers import yieldable_gather_results
from synapse.util.json import json_decoder
if TYPE_CHECKING:
from synapse.server import HomeServer


@@ -63,22 +63,6 @@ class PickIdpResource(DirectServeHtmlResource):
if not idp:
return await self._serve_id_picker(request, client_redirect_url)
# Validate the `idp` query parameter. We should only be working with known IdPs.
# No need to waste further effort if we don't know about it.
#
# Although we primarily prevent open redirect attacks by URL-encoding all of
# the parameters we use in the redirect URL below, this validation also helps
# prevent Synapse from crafting arbitrary URLs and being used in open redirect
# attacks (defense in depth).
providers = self._sso_handler.get_identity_providers()
auth_provider = providers.get(idp)
if not auth_provider:
logger.info("Unknown idp %r", idp)
self._sso_handler.render_error(
request, "unknown_idp", "Unknown identity provider ID"
)
return
# Otherwise, redirect to the login SSO redirect endpoint for the given IdP
# (which will in turn take us to the IdP's redirect URI).
#
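
A generic sketch of the defense-in-depth check described in the comment above, outside of Synapse's own classes; the provider map, endpoint URLs, and function name are placeholders, not Synapse internals.

```python
# Only redirect to identity providers the server actually knows about, and
# URL-encode the caller-supplied redirect URL before embedding it, so a
# user-supplied `idp` value cannot be abused as an open redirect.
# KNOWN_IDPS and the URLs are placeholders for illustration.
from typing import Optional
from urllib.parse import quote

KNOWN_IDPS = {
    "oidc-example": "https://synapse.example.com/_matrix/client/v3/login/sso/redirect/oidc-example",
}

def build_sso_redirect(idp: str, client_redirect_url: str) -> Optional[str]:
    base = KNOWN_IDPS.get(idp)
    if base is None:
        return None  # unknown IdP: render an error page instead of redirecting
    return f"{base}?redirectUrl={quote(client_redirect_url, safe='')}"
```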


@@ -28,7 +28,7 @@ from synapse.api.errors import NotFoundError
from synapse.http.server import DirectServeJsonResource
from synapse.http.site import SynapseRequest
from synapse.types import JsonDict
from synapse.util.json import json_encoder
from synapse.util import json_encoder
from synapse.util.stringutils import parse_server_name
if TYPE_CHECKING:

Some files were not shown because too many files have changed in this diff.