Compare commits


2 Commits

Author         SHA1        Message                                                                              Date
Andrew Morgan  6d091f6e12  newsfile                                                                             2025-10-07 16:53:47 +01:00
Andrew Morgan  6e349db4da  Allow Synapse modules to specify their own threadpool when calling defer_to_thread  2025-10-07 16:50:59 +01:00
89 changed files with 465 additions and 1471 deletions


@@ -114,8 +114,8 @@ jobs:
os:
- ubuntu-24.04
- ubuntu-24.04-arm
- macos-13 # This uses x86-64
- macos-14 # This uses arm64
- macos-15-intel # This uses x86-64
# is_pr is a flag used to exclude certain jobs from the matrix on PRs.
# It is not read by the rest of the workflow.
is_pr:
@@ -124,7 +124,7 @@ jobs:
exclude:
# Don't build macos wheels on PR CI.
- is_pr: true
os: "macos-15-intel"
os: "macos-13"
- is_pr: true
os: "macos-14"
# Don't build aarch64 wheels on PR CI.


@@ -1,157 +1,3 @@
# Synapse 1.141.0rc2 (2025-10-28)
## Deprecation of MacOS Python wheels
The team has decided to deprecate and eventually stop publishing Python wheels
for MacOS. Maintaining them is a burden on the team, and we're not aware of any parties
that use them. Synapse docker images will continue to work on MacOS, as will
building Synapse from source (though note this requires a Rust compiler).
Publishing MacOS Python wheels will continue for the next few releases. If you
do make use of these wheels downstream, please reach out to us in
[#synapse-dev:matrix.org](https://matrix.to/#/#synapse-dev:matrix.org). We'd
love to hear from you!
## Bugfixes
- Fix users being unable to log in if their password, or the server's configured pepper, was too long. ([\#19101](https://github.com/element-hq/synapse/issues/19101))
# Synapse 1.141.0rc1 (2025-10-21)
## Features
- Allow using [MSC4190](https://github.com/matrix-org/matrix-spec-proposals/pull/4190) behavior without the opt-in registration flag. Contributed by @tulir @ Beeper. ([\#19031](https://github.com/element-hq/synapse/issues/19031))
- Stabilized support for [MSC4326](https://github.com/matrix-org/matrix-spec-proposals/pull/4326): Device masquerading for appservices. Contributed by @tulir @ Beeper. ([\#19033](https://github.com/element-hq/synapse/issues/19033))
## Bugfixes
- Fix a bug introduced in 1.136.0 that would prevent Synapse from being able to be `reload`-ed more than once when running under systemd. ([\#19060](https://github.com/element-hq/synapse/issues/19060))
- Fix a bug introduced in 1.140.0 where an internal server error could be raised when hashing user passwords that are too long. ([\#19078](https://github.com/element-hq/synapse/issues/19078))
## Updates to the Docker image
- Update docker image to use Debian trixie as the base and thus Python 3.13. ([\#19064](https://github.com/element-hq/synapse/issues/19064))
## Internal Changes
- Move unique snowflake homeserver background tasks to `start_background_tasks` (the standard pattern for this kind of thing). ([\#19037](https://github.com/element-hq/synapse/issues/19037))
- Drop a deprecated field of the `PyGitHub` dependency in the release script and raise the dependency's minimum version to `1.59.0`. ([\#19039](https://github.com/element-hq/synapse/issues/19039))
- Update TODO list of conflicting areas where we encounter metrics being clobbered (`ApplicationService`). ([\#19040](https://github.com/element-hq/synapse/issues/19040))
# Synapse 1.140.0 (2025-10-14)
## Compatibility notice for users of `synapse-s3-storage-provider`
Deployments that make use of the
[synapse-s3-storage-provider](https://github.com/matrix-org/synapse-s3-storage-provider)
module must upgrade to
[v1.6.0](https://github.com/matrix-org/synapse-s3-storage-provider/releases/tag/v1.6.0).
Using older versions of the module with this release of Synapse will prevent
users from being able to upload or download media.
No significant changes since 1.140.0rc1.
# Synapse 1.140.0rc1 (2025-10-10)
## Features
- Add [a new Media Query by ID Admin API](https://element-hq.github.io/synapse/v1.140/admin_api/media_admin_api.html#query-a-piece-of-media-by-id) that allows server admins to query and investigate the metadata of local or cached remote media via
the `origin/media_id` identifier found in a [Matrix Content URI](https://spec.matrix.org/v1.14/client-server-api/#matrix-content-mxc-uris). ([\#18911](https://github.com/element-hq/synapse/issues/18911))
- Add [a new Fetch Event Admin API](https://element-hq.github.io/synapse/v1.140/admin_api/fetch_event.html) to fetch an event by ID. ([\#18963](https://github.com/element-hq/synapse/issues/18963))
- Update [MSC4284: Policy Servers](https://github.com/matrix-org/matrix-spec-proposals/pull/4284) implementation to support signatures when available. ([\#18934](https://github.com/element-hq/synapse/issues/18934))
- Add experimental implementation of the `GET /_matrix/client/v1/rtc/transports` endpoint for the latest draft of [MSC4143: MatrixRTC](https://github.com/matrix-org/matrix-spec-proposals/pull/4143). ([\#18967](https://github.com/element-hq/synapse/issues/18967))
- Expose a `defer_to_threadpool` function in the Synapse Module API that allows modules to run a function on a separate thread in a custom threadpool. ([\#19032](https://github.com/element-hq/synapse/issues/19032))
## Bugfixes
- Fix room upgrade `room_config` argument and documentation for `user_may_create_room` spam-checker callback. ([\#18721](https://github.com/element-hq/synapse/issues/18721))
- Compute a user's last seen timestamp from their devices' last seen timestamps instead of IPs, because the latter are automatically cleared according to `user_ips_max_age`. ([\#18948](https://github.com/element-hq/synapse/issues/18948))
- Fix bug where ephemeral events were not filtered by room ID. Contributed by @frastefanini. ([\#19002](https://github.com/element-hq/synapse/issues/19002))
- Update Synapse main process version string to include git info. ([\#19011](https://github.com/element-hq/synapse/issues/19011))
## Improved Documentation
- Explain how `Deferred` callbacks interact with logcontexts. ([\#18914](https://github.com/element-hq/synapse/issues/18914))
- Fix documentation for `rc_room_creation` and `rc_reports` to clarify that a `per_user` rate limit is not supported. ([\#18998](https://github.com/element-hq/synapse/issues/18998))
## Deprecations and Removals
- Remove deprecated `LoggingContext.set_current_context`/`LoggingContext.current_context` methods which already have equivalent bare methods in `synapse.logging.context`. ([\#18989](https://github.com/element-hq/synapse/issues/18989))
- Drop support for unstable field names from the long-accepted [MSC2732](https://github.com/matrix-org/matrix-spec-proposals/pull/2732) (Olm fallback keys) proposal. ([\#18996](https://github.com/element-hq/synapse/issues/18996))
## Internal Changes
- Cleanly shut down the `SynapseHomeServer` object, allowing artifacts of embedded small hosts to be properly garbage collected. ([\#18828](https://github.com/element-hq/synapse/issues/18828))
- Update OEmbed providers to use 'X' instead of 'Twitter' in URL previews, following a rebrand. Contributed by @HammyHavoc. ([\#18767](https://github.com/element-hq/synapse/issues/18767))
- Fix `server_name` in logging context for multiple Synapse instances in one process. ([\#18868](https://github.com/element-hq/synapse/issues/18868))
- Wrap the Rust HTTP client with `make_deferred_yieldable` so it follows Synapse logcontext rules. ([\#18903](https://github.com/element-hq/synapse/issues/18903))
- Fix the GitHub Actions workflow that moves issues labeled "X-Needs-Info" to the "Needs info" column on the team's internal triage board. ([\#18913](https://github.com/element-hq/synapse/issues/18913))
- Disconnect background process work from request trace. ([\#18932](https://github.com/element-hq/synapse/issues/18932))
- Reduce overall number of calls to `_get_e2e_cross_signing_signatures_for_devices` by increasing the batch size of devices the query is called with, reducing DB load. ([\#18939](https://github.com/element-hq/synapse/issues/18939))
- Update error code used when an appservice tries to masquerade as an unknown device using [MSC4326](https://github.com/matrix-org/matrix-spec-proposals/pull/4326). Contributed by @tulir @ Beeper. ([\#18947](https://github.com/element-hq/synapse/issues/18947))
- Fix `no active span when trying to log` tracing error on startup (when OpenTracing is enabled). ([\#18959](https://github.com/element-hq/synapse/issues/18959))
- Fix `run_coroutine_in_background(...)` incorrectly handling logcontext. ([\#18964](https://github.com/element-hq/synapse/issues/18964))
- Add debug logs wherever we change current logcontext. ([\#18966](https://github.com/element-hq/synapse/issues/18966))
- Update dockerfile metadata to fix broken link; point to documentation website. ([\#18971](https://github.com/element-hq/synapse/issues/18971))
- Note that the code is additionally licensed under the [Element Commercial license](https://github.com/element-hq/synapse/blob/develop/LICENSE-COMMERCIAL) in SPDX expression field configs. ([\#18973](https://github.com/element-hq/synapse/issues/18973))
- Fix logcontext handling in `timeout_deferred` tests. ([\#18974](https://github.com/element-hq/synapse/issues/18974))
- Remove internal `ReplicationUploadKeysForUserRestServlet` as a follow-up to the work in https://github.com/element-hq/synapse/pull/18581 that moved device changes off the main process. ([\#18988](https://github.com/element-hq/synapse/issues/18988))
- Switch task scheduler from raw logcontext manipulation to using the dedicated logcontext utils. ([\#18990](https://github.com/element-hq/synapse/issues/18990))
- Remove `MockClock()` in tests. ([\#18992](https://github.com/element-hq/synapse/issues/18992))
- Switch back to our own custom `LogContextScopeManager` instead of OpenTracing's `ContextVarsScopeManager` which was causing problems when using the experimental `SYNAPSE_ASYNC_IO_REACTOR` option with tracing enabled. ([\#19007](https://github.com/element-hq/synapse/issues/19007))
- Remove `version_string` argument from `HomeServer` since it's always the same. ([\#19012](https://github.com/element-hq/synapse/issues/19012))
- Remove duplicate call to `hs.start_background_tasks()` introduced from a bad merge. ([\#19013](https://github.com/element-hq/synapse/issues/19013))
- Split homeserver creation (`create_homeserver`) and setup (`setup`). ([\#19015](https://github.com/element-hq/synapse/issues/19015))
- Swap near-end-of-life `macos-13` GitHub Actions runner for the `macos-15-intel` variant. ([\#19025](https://github.com/element-hq/synapse/issues/19025))
- Introduce `RootConfig.validate_config()` which can be subclassed in `HomeServerConfig` to do cross-config class validation. ([\#19027](https://github.com/element-hq/synapse/issues/19027))
- Allow any command of the `release.py` script to accept a `--gh-token` argument. ([\#19035](https://github.com/element-hq/synapse/issues/19035))
### Updates to locked dependencies
* Bump Swatinem/rust-cache from 2.8.0 to 2.8.1. ([\#18949](https://github.com/element-hq/synapse/issues/18949))
* Bump actions/cache from 4.2.4 to 4.3.0. ([\#18983](https://github.com/element-hq/synapse/issues/18983))
* Bump anyhow from 1.0.99 to 1.0.100. ([\#18950](https://github.com/element-hq/synapse/issues/18950))
* Bump authlib from 1.6.3 to 1.6.4. ([\#18957](https://github.com/element-hq/synapse/issues/18957))
* Bump authlib from 1.6.4 to 1.6.5. ([\#19019](https://github.com/element-hq/synapse/issues/19019))
* Bump bcrypt from 4.3.0 to 5.0.0. ([\#18984](https://github.com/element-hq/synapse/issues/18984))
* Bump docker/login-action from 3.5.0 to 3.6.0. ([\#18978](https://github.com/element-hq/synapse/issues/18978))
* Bump lxml from 6.0.0 to 6.0.2. ([\#18979](https://github.com/element-hq/synapse/issues/18979))
* Bump phonenumbers from 9.0.13 to 9.0.14. ([\#18954](https://github.com/element-hq/synapse/issues/18954))
* Bump phonenumbers from 9.0.14 to 9.0.15. ([\#18991](https://github.com/element-hq/synapse/issues/18991))
* Bump prometheus-client from 0.22.1 to 0.23.1. ([\#19016](https://github.com/element-hq/synapse/issues/19016))
* Bump pydantic from 2.11.9 to 2.11.10. ([\#19017](https://github.com/element-hq/synapse/issues/19017))
* Bump pygithub from 2.7.0 to 2.8.1. ([\#18952](https://github.com/element-hq/synapse/issues/18952))
* Bump regex from 1.11.2 to 1.11.3. ([\#18981](https://github.com/element-hq/synapse/issues/18981))
* Bump serde from 1.0.224 to 1.0.226. ([\#18953](https://github.com/element-hq/synapse/issues/18953))
* Bump serde from 1.0.226 to 1.0.228. ([\#18982](https://github.com/element-hq/synapse/issues/18982))
* Bump setuptools-rust from 1.11.1 to 1.12.0. ([\#18980](https://github.com/element-hq/synapse/issues/18980))
* Bump twine from 6.1.0 to 6.2.0. ([\#18985](https://github.com/element-hq/synapse/issues/18985))
* Bump types-pyyaml from 6.0.12.20250809 to 6.0.12.20250915. ([\#19018](https://github.com/element-hq/synapse/issues/19018))
* Bump types-requests from 2.32.4.20250809 to 2.32.4.20250913. ([\#18951](https://github.com/element-hq/synapse/issues/18951))
* Bump typing-extensions from 4.14.1 to 4.15.0. ([\#18956](https://github.com/element-hq/synapse/issues/18956))
# Synapse 1.139.2 (2025-10-07)
## Bugfixes
- Fix a bug introduced in 1.139.1 where a client could receive an Internal Server Error if they set `device_keys: null` in the request to [`POST /_matrix/client/v3/keys/upload`](https://spec.matrix.org/v1.16/client-server-api/#post_matrixclientv3keysupload). ([\#19023](https://github.com/element-hq/synapse/issues/19023))
# Synapse 1.139.1 (2025-10-07)
## Security Fixes
@@ -162,17 +8,6 @@ No significant changes since 1.140.0rc1.
- Drop support for unstable field names from the long-accepted [MSC2732](https://github.com/matrix-org/matrix-spec-proposals/pull/2732) (Olm fallback keys) proposal. This change allows unit tests to pass following the security patch above. ([\#18996](https://github.com/element-hq/synapse/issues/18996))
# Synapse 1.138.4 (2025-10-07)
## Bugfixes
- Fix a bug introduced in 1.138.3 where a client could receive an Internal Server Error if they set `device_keys: null` in the request to [`POST /_matrix/client/v3/keys/upload`](https://spec.matrix.org/v1.16/client-server-api/#post_matrixclientv3keysupload). ([\#19023](https://github.com/element-hq/synapse/issues/19023))
# Synapse 1.138.3 (2025-10-07)
## Security Fixes

changelog.d/17097.misc (new file)

@@ -0,0 +1 @@
Extend validation of uploaded device keys.

changelog.d/18721.bugfix (new file)

@@ -0,0 +1 @@
Fix room upgrade `room_config` argument and documentation for `user_may_create_room` spam-checker callback.

changelog.d/18767.misc (new file)

@@ -0,0 +1 @@
Update OEmbed providers to use 'X' instead of 'Twitter' in URL previews, following a rebrand. Contributed by @HammyHavoc.


@@ -0,0 +1 @@
Cleanly shut down the `SynapseHomeServer` object.

changelog.d/18868.misc (new file)

@@ -0,0 +1 @@
Fix `server_name` in logging context for multiple Synapse instances in one process.

changelog.d/18903.misc (new file)

@@ -0,0 +1 @@
Wrap the Rust HTTP client with `make_deferred_yieldable` so it follows Synapse logcontext rules.
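For illustration, the wrapping pattern looks roughly like the sketch below; `rust_client` is a hypothetical stand-in for the internal Rust HTTP client, while `make_deferred_yieldable` is the real helper from `synapse.logging.context`:

```python
# Illustrative only, not the actual patch: wrap a Deferred produced by
# logcontext-unaware code so it can be awaited without losing the caller's
# logcontext.
from synapse.logging.context import make_deferred_yieldable


async def fetch_url(rust_client, url: str) -> bytes:
    d = rust_client.get(url)  # hypothetical call returning a plain Deferred
    # Restores the calling logcontext once the Deferred resolves.
    return await make_deferred_yieldable(d)
```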


@@ -0,0 +1,2 @@
Add an Admin API that allows server admins to query and investigate the metadata of local or cached remote media via
the `origin/media_id` identifier found in a [Matrix Content URI](https://spec.matrix.org/v1.14/client-server-api/#matrix-content-mxc-uris).
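For illustration, splitting a Matrix Content URI into the `origin/media_id` pair the API expects is simple string handling; this helper is hypothetical, not part of Synapse:

```python
def parse_mxc_uri(mxc: str) -> tuple[str, str]:
    """Split an mxc://<origin>/<media_id> URI into its two components."""
    if not mxc.startswith("mxc://"):
        raise ValueError(f"not an MXC URI: {mxc!r}")
    origin, _, media_id = mxc[len("mxc://"):].partition("/")
    if not origin or not media_id:
        raise ValueError(f"malformed MXC URI: {mxc!r}")
    return origin, media_id


# parse_mxc_uri("mxc://example.org/abc123") -> ("example.org", "abc123")
```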

changelog.d/18913.misc (new file)

@@ -0,0 +1 @@
Fix the GitHub Actions workflow that moves issues labeled "X-Needs-Info" to the "Needs info" column on the team's internal triage board.

changelog.d/18914.doc (new file)

@@ -0,0 +1 @@
Explain how Deferred callbacks interact with logcontexts.

changelog.d/18932.misc (new file)

@@ -0,0 +1 @@
Disconnect background process work from request trace.


@@ -0,0 +1 @@
Update [MSC4284: Policy Servers](https://github.com/matrix-org/matrix-spec-proposals/pull/4284) implementation to support signatures when available.

changelog.d/18939.misc (new file)

@@ -0,0 +1 @@
Reduce overall number of calls to `_get_e2e_cross_signing_signatures_for_devices` by increasing the batch size of devices the query is called with, reducing DB load.

changelog.d/18947.misc (new file)

@@ -0,0 +1 @@
Update error code used when an appservice tries to masquerade as an unknown device using [MSC4326](https://github.com/matrix-org/matrix-spec-proposals/pull/4326). Contributed by @tulir @ Beeper.

changelog.d/18948.bugfix (new file)

@@ -0,0 +1 @@
Compute a user's last seen timestamp from their devices' last seen timestamps instead of IPs, because the latter are automatically cleared according to `user_ips_max_age`.

changelog.d/18959.misc (new file)

@@ -0,0 +1 @@
Fix `no active span when trying to log` tracing error on startup (when OpenTracing is enabled).

changelog.d/18964.misc (new file)

@@ -0,0 +1 @@
Fix `run_coroutine_in_background(...)` incorrectly handling logcontext.

changelog.d/18966.misc (new file)

@@ -0,0 +1 @@
Add debug logs wherever we change current logcontext.

changelog.d/18971.misc (new file)

@@ -0,0 +1 @@
Update dockerfile metadata to fix broken link; point to documentation website.

changelog.d/18973.misc (new file)

@@ -0,0 +1 @@
Note that the code is additionally licensed under the [Element Commercial license](https://github.com/element-hq/synapse/blob/develop/LICENSE-COMMERCIAL) in SPDX expression field configs.

changelog.d/18974.misc (new file)

@@ -0,0 +1 @@
Fix logcontext handling in `timeout_deferred` tests.

changelog.d/18988.misc (new file)

@@ -0,0 +1 @@
Remove internal `ReplicationUploadKeysForUserRestServlet` as a follow-up to the work in https://github.com/element-hq/synapse/pull/18581 that moved device changes off the main process.


@@ -0,0 +1 @@
Remove deprecated `LoggingContext.set_current_context`/`LoggingContext.current_context` methods which already have equivalent bare methods in `synapse.logging.context`.

changelog.d/18990.misc (new file)

@@ -0,0 +1 @@
Switch task scheduler from raw logcontext manipulation to using the dedicated logcontext utils.

changelog.d/18992.misc (new file)

@@ -0,0 +1 @@
Remove `MockClock()` in tests.


@@ -0,0 +1 @@
Drop support for unstable field names from the long-accepted [MSC2732](https://github.com/matrix-org/matrix-spec-proposals/pull/2732) (Olm fallback keys) proposal.

changelog.d/18998.doc (new file)

@@ -0,0 +1 @@
Fix documentation for `rc_room_creation` and `rc_reports` to clarify that a `per_user` rate limit is not supported.

changelog.d/19002.bugfix (new file)

@@ -0,0 +1 @@
Fix bug where ephemeral events were not filtered by room ID. Contributed by @frastefanini.

changelog.d/19007.misc (new file)

@@ -0,0 +1 @@
Switch back to our own custom `LogContextScopeManager` instead of OpenTracing's `ContextVarsScopeManager` which was causing problems when using the experimental `SYNAPSE_ASYNC_IO_REACTOR` option with tracing enabled.

changelog.d/19023.bugfix (new file)

@@ -0,0 +1 @@
Fix a bug introduced in 1.139.1 where a client could receive an Internal Server Error if they set `device_keys: null` in the request to [`POST /_matrix/client/v3/keys/upload`](https://spec.matrix.org/v1.16/client-server-api/#post_matrixclientv3keysupload).


@@ -0,0 +1 @@
Allow Synapse modules to specify a custom threadpool when calling `defer_to_thread`.
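For context, a minimal sketch of a module using the new hook follows. The method name `defer_to_threadpool` and its argument order are assumed from the changelog entries in this comparison; check the Module API documentation for the final signature:

```python
# Hypothetical module that keeps its blocking work on a private threadpool
# instead of the reactor's shared one.
from twisted.python.threadpool import ThreadPool


class FileScannerModule:
    def __init__(self, config: dict, api):
        self._api = api
        # A dedicated pool so this module's blocking calls don't compete
        # with Synapse's own threadpool.
        self._pool = ThreadPool(minthreads=1, maxthreads=4, name="file-scanner")
        self._pool.start()

    async def scan(self, path: str) -> bool:
        # Assumed signature: defer_to_threadpool(threadpool, fn, *args).
        return await self._api.defer_to_threadpool(self._pool, _blocking_scan, path)


def _blocking_scan(path: str) -> bool:
    # Blocking I/O stays off the reactor thread.
    with open(path, "rb") as f:
        return b"malware-signature" not in f.read()
```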

debian/changelog (vendored)

@@ -1,45 +1,9 @@
matrix-synapse-py3 (1.141.0~rc2) stable; urgency=medium
* New Synapse release 1.141.0rc2.
-- Synapse Packaging team <packages@matrix.org> Tue, 28 Oct 2025 10:20:26 +0000
matrix-synapse-py3 (1.141.0~rc1) stable; urgency=medium
* New Synapse release 1.141.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 21 Oct 2025 11:01:44 +0100
matrix-synapse-py3 (1.140.0) stable; urgency=medium
* New Synapse release 1.140.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 14 Oct 2025 15:22:36 +0100
matrix-synapse-py3 (1.140.0~rc1) stable; urgency=medium
* New Synapse release 1.140.0rc1.
-- Synapse Packaging team <packages@matrix.org> Fri, 10 Oct 2025 10:56:51 +0100
matrix-synapse-py3 (1.139.2) stable; urgency=medium
* New Synapse release 1.139.2.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Oct 2025 16:29:47 +0100
matrix-synapse-py3 (1.139.1) stable; urgency=medium
* New Synapse release 1.139.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Oct 2025 11:46:51 +0100
matrix-synapse-py3 (1.138.4) stable; urgency=medium
* New Synapse release 1.138.4.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Oct 2025 16:28:38 +0100
matrix-synapse-py3 (1.138.3) stable; urgency=medium
* New Synapse release 1.138.3.


@@ -20,8 +20,8 @@
# `poetry export | pip install -r /dev/stdin`, but beware: we have experienced bugs
# in `poetry export` in the past.
ARG DEBIAN_VERSION=trixie
ARG PYTHON_VERSION=3.13
ARG DEBIAN_VERSION=bookworm
ARG PYTHON_VERSION=3.12
ARG POETRY_VERSION=2.1.1
###
@@ -142,10 +142,10 @@ RUN \
libwebp7 \
xmlsec1 \
libjemalloc2 \
libicu \
| grep '^\w' > /tmp/pkg-list && \
for arch in arm64 amd64; do \
mkdir -p /tmp/debs-${arch} && \
chown _apt:root /tmp/debs-${arch} && \
cd /tmp/debs-${arch} && \
apt-get -o APT::Architecture="${arch}" download $(cat /tmp/pkg-list); \
done
@@ -176,6 +176,11 @@ LABEL org.opencontainers.image.documentation='https://element-hq.github.io/synap
LABEL org.opencontainers.image.source='https://github.com/element-hq/synapse.git'
LABEL org.opencontainers.image.licenses='AGPL-3.0-or-later OR LicenseRef-Element-Commercial'
# On the runtime image, /lib is a symlink to /usr/lib, so we need to copy the
# libraries to the right place, else the `COPY` won't work.
# On amd64, we'll also have a /lib64 folder with ld-linux-x86-64.so.2, which is
# already present in the runtime image.
COPY --from=runtime-deps /install-${TARGETARCH}/lib /usr/lib
COPY --from=runtime-deps /install-${TARGETARCH}/etc /etc
COPY --from=runtime-deps /install-${TARGETARCH}/usr /usr
COPY --from=runtime-deps /install-${TARGETARCH}/var /var


@@ -1,10 +1,9 @@
# syntax=docker/dockerfile:1-labs
# syntax=docker/dockerfile:1
ARG SYNAPSE_VERSION=latest
ARG FROM=matrixdotorg/synapse:$SYNAPSE_VERSION
ARG DEBIAN_VERSION=trixie
ARG PYTHON_VERSION=3.13
ARG REDIS_VERSION=7.2
ARG DEBIAN_VERSION=bookworm
ARG PYTHON_VERSION=3.12
# first of all, we create a base image with dependencies which we can copy into the
# target image. For repeated rebuilds, this is much faster than apt installing
@@ -12,27 +11,15 @@ ARG REDIS_VERSION=7.2
FROM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-${DEBIAN_VERSION} AS deps_base
ARG DEBIAN_VERSION
ARG REDIS_VERSION
# Tell apt to keep downloaded package files, as we're using cache mounts.
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
# The upstream redis-server deb has fewer dynamic libraries than Debian's package which makes it easier to copy later on
RUN \
curl -fsSL https://packages.redis.io/gpg | gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg && \
chmod 644 /usr/share/keyrings/redis-archive-keyring.gpg && \
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb ${DEBIAN_VERSION} main" | tee /etc/apt/sources.list.d/redis.list
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && \
DEBIAN_FRONTEND=noninteractive apt-get install -yqq --no-install-recommends \
nginx-light \
redis-server="6:${REDIS_VERSION}.*" redis-tools="6:${REDIS_VERSION}.*" \
# libicu is required by postgres, see `docker/complement/Dockerfile`
libicu76
nginx-light
RUN \
# remove default page
@@ -48,12 +35,19 @@ FROM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-${DEBIAN_VERSION} AS deps_base
RUN mkdir -p /uv/etc/supervisor/conf.d
# Similarly, a base to copy the redis server from.
#
# The redis docker image has fewer dynamic libraries than the debian package,
# which makes it much easier to copy (but we need to make sure we use an image
# based on the same debian version as the synapse image, to make sure we get
# the expected version of libc).
FROM docker.io/library/redis:7-${DEBIAN_VERSION} AS redis_base
# now build the final image, based on the regular Synapse docker image
FROM $FROM
# Copy over dependencies
COPY --from=deps_base --parents /usr/lib/*-linux-gnu/libicu* /
COPY --from=deps_base /usr/bin/redis-server /usr/local/bin
COPY --from=redis_base /usr/local/bin/redis-server /usr/local/bin
COPY --from=deps_base /uv /
COPY --from=deps_base /usr/sbin/nginx /usr/sbin
COPY --from=deps_base /usr/share/nginx /usr/share/nginx


@@ -9,7 +9,7 @@
ARG SYNAPSE_VERSION=latest
# This is an intermediate image, to be built locally (not pulled from a registry).
ARG FROM=matrixdotorg/synapse-workers:$SYNAPSE_VERSION
ARG DEBIAN_VERSION=trixie
ARG DEBIAN_VERSION=bookworm
FROM docker.io/library/postgres:13-${DEBIAN_VERSION} AS postgres_base
@@ -18,10 +18,10 @@ FROM $FROM
# since for repeated rebuilds, this is much faster than apt installing
# postgres each time.
# This trick only works because we use a postgres image based on the same
# debian version as Synapse's docker image (so the versions of the shared
# libraries match). Any missing libraries need to be added to either the
# Synapse image or docker/Dockerfile-workers.
# This trick only works because (a) the Synapse image happens to have all the
# shared libraries that postgres wants, (b) we use a postgres image based on
# the same debian version as Synapse's docker image (so the versions of the
# shared libraries match).
RUN adduser --system --uid 999 postgres --home /var/lib/postgresql
COPY --from=postgres_base /usr/lib/postgresql /usr/lib/postgresql
COPY --from=postgres_base /usr/share/postgresql /usr/share/postgresql


@@ -8,9 +8,9 @@ ARG PYTHON_VERSION=3.9
###
### Stage 0: generate requirements.txt
###
# We hardcode the use of Debian trixie here because this could change upstream
# and other Dockerfiles used for testing are expecting trixie.
FROM docker.io/library/python:${PYTHON_VERSION}-slim-trixie
# We hardcode the use of Debian bookworm here because this could change upstream
# and other Dockerfiles used for testing are expecting bookworm.
FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm
# Install Rust and other dependencies (stolen from normal Dockerfile)
# install the OS build deps


@@ -60,7 +60,6 @@
- [Admin API](usage/administration/admin_api/README.md)
- [Account Validity](admin_api/account_validity.md)
- [Background Updates](usage/administration/admin_api/background_updates.md)
- [Fetch Event](admin_api/fetch_event.md)
- [Event Reports](admin_api/event_reports.md)
- [Experimental Features](admin_api/experimental_features.md)
- [Media](admin_api/media_admin_api.md)


@@ -1,53 +0,0 @@
# Fetch Event API
The fetch event API allows admins to fetch an event regardless of their membership in the room it
originated in.
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api/).
Request:
```http
GET /_synapse/admin/v1/fetch_event/<event_id>
```
The API returns a JSON body like the following:
Response:
```json
{
"event": {
"auth_events": [
"$WhLChbYg6atHuFRP7cUd95naUtc8L0f7fqeizlsUVvc",
"$9Wj8dt02lrNEWweeq-KjRABUYKba0K9DL2liRvsAdtQ",
"$qJxBFxBt8_ODd9b3pgOL_jXP98S_igc1_kizuPSZFi4"
],
"content": {
"body": "Hey now",
"msgtype": "m.text"
},
"depth": 6,
"event_id": "$hJ_kcXbVMcI82JDrbqfUJIHu61tJD86uIFJ_8hNHi7s",
"hashes": {
"sha256": "LiNw8DtrRVf55EgAH8R42Wz7WCJUqGsPt2We6qZO5Rg"
},
"origin_server_ts": 799,
"prev_events": [
"$cnSUrNMnC3Ywh9_W7EquFxYQjC_sT3BAAVzcUVxZq1g"
],
"room_id": "!aIhKToCqgPTBloWMpf:test",
"sender": "@user:test",
"signatures": {
"test": {
"ed25519:a_lPym": "7mqSDwK1k7rnw34Dd8Fahu0rhPW7jPmcWPRtRDoEN9Yuv+BCM2+Rfdpv2MjxNKy3AYDEBwUwYEuaKMBaEMiKAQ"
}
},
"type": "m.room.message",
"unsigned": {
"age_ts": 799
}
}
}
```
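For illustration, calling the endpoint from Python could look like the sketch below; the homeserver URL and admin token are placeholders, and `requests` is used purely for brevity:

```python
import requests


def fetch_event(base_url: str, admin_token: str, event_id: str) -> dict:
    # GET /_synapse/admin/v1/fetch_event/<event_id>, authenticated as a
    # server admin via the usual access token.
    resp = requests.get(
        f"{base_url}/_synapse/admin/v1/fetch_event/{event_id}",
        headers={"Authorization": f"Bearer {admin_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["event"]
```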


@@ -117,24 +117,6 @@ each upgrade are complete before moving on to the next upgrade, to avoid
stacking them up. You can monitor the currently running background updates with
[the Admin API](usage/administration/admin_api/background_updates.html#status).
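For illustration, a poll of that status endpoint from Python might look like this sketch (URL and token are placeholders; the endpoint path is the one documented for the Background Updates Admin API):

```python
import requests


def background_updates_running(base_url: str, admin_token: str) -> bool:
    # Reports whether any background updates are currently in flight.
    resp = requests.get(
        f"{base_url}/_synapse/admin/v1/background_updates/status",
        headers={"Authorization": f"Bearer {admin_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return bool(resp.json().get("current_updates"))
```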
# Upgrading to v1.141.0
## Docker images now based on Debian `trixie` with Python 3.13
The Docker images are now based on Debian `trixie` and use Python 3.13. If you
are using the Docker images as a base image you may need to e.g. adjust the
paths you mount any additional Python packages at.
# Upgrading to v1.140.0
## Users of `synapse-s3-storage-provider` must update the module to `v1.6.0`
Deployments that make use of the
[synapse-s3-storage-provider](https://github.com/matrix-org/synapse-s3-storage-provider/)
module must update it to
[v1.6.0](https://github.com/matrix-org/synapse-s3-storage-provider/releases/tag/v1.6.0),
otherwise users will be unable to upload or download media.
# Upgrading to v1.139.0
## `/register` requests from old application service implementations may break when using MAS


@@ -2573,28 +2573,6 @@ Example configuration:
turn_allow_guests: false
```
---
### `matrix_rtc`
*(object)* Options related to MatrixRTC. Defaults to `{}`.
This setting has the following sub-options:
* `transports` (array): A list of transport types and arguments to use for MatrixRTC connections. Defaults to `[]`.
Options for each entry include:
* `type` (string): The type of transport to use to connect to the selective forwarding unit (SFU).
* `livekit_service_url` (string): The base URL of the LiveKit service. Should only be used with LiveKit-based transports.
Example configuration:
```yaml
matrix_rtc:
transports:
- type: livekit
livekit_service_url: https://matrix-rtc.example.com/livekit/jwt
```
---
## Registration
Registration can be rate-limited using the parameters in the [Ratelimiting](#ratelimiting) section of this manual.
@@ -3815,7 +3793,7 @@ This setting has the following sub-options:
* `localdb_enabled` (boolean): Set to false to disable authentication against the local password database. This is ignored if `enabled` is false, and is only useful if you have other `password_providers`. Defaults to `true`.
* `pepper` (string|null): A secret random string that will be appended to user's passwords before they are hashed. This improves the security of short passwords. DO NOT CHANGE THIS AFTER INITIAL SETUP! Defaults to `null`.
* `pepper` (string|null): Set the value here to a secret random string for extra security. DO NOT CHANGE THIS AFTER INITIAL SETUP! Defaults to `null`.
* `policy` (object): Define and enforce a password policy, such as minimum lengths for passwords, etc. This is an implementation of MSC2000.
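For context on the pepper wording above and the length-related bugfixes earlier in this comparison: the pepper is appended to the password before bcrypt hashing, and bcrypt implementations only consider the first 72 bytes of input (recent versions of the `bcrypt` library reject longer input outright, which was the failure mode of the fixed bug). A conceptual sketch, not Synapse's actual code:

```python
import bcrypt

PEPPER = "a-long-random-server-secret"  # placeholder; never change after setup


def _prepare(password: str) -> bytes:
    # Cap input at 72 bytes, since bcrypt rejects (or, in older versions,
    # silently truncates) anything longer.
    return (password + PEPPER).encode("utf-8")[:72]


def hash_password(password: str) -> bytes:
    # Appending the pepper means changing it invalidates every stored hash.
    return bcrypt.hashpw(_prepare(password), bcrypt.gensalt())


def check_password(password: str, stored: bytes) -> bool:
    return bcrypt.checkpw(_prepare(password), stored)
```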

poetry.lock (generated)

@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.2.1 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.2.0 and should not be changed by hand.
[[package]]
name = "annotated-types"
@@ -34,15 +34,15 @@ tests-mypy = ["mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" a
[[package]]
name = "authlib"
version = "1.6.5"
version = "1.6.4"
description = "The ultimate Python library in building OAuth and OpenID Connect servers and clients."
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"oidc\" or extra == \"jwt\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"jwt\" or extra == \"oidc\""
files = [
{file = "authlib-1.6.5-py2.py3-none-any.whl", hash = "sha256:3e0e0507807f842b02175507bdee8957a1d5707fd4afb17c32fb43fee90b6e3a"},
{file = "authlib-1.6.5.tar.gz", hash = "sha256:6aaf9c79b7cc96c900f0b284061691c5d4e61221640a948fe690b556a6d6d10b"},
{file = "authlib-1.6.4-py2.py3-none-any.whl", hash = "sha256:39313d2a2caac3ecf6d8f95fbebdfd30ae6ea6ae6a6db794d976405fdd9aa796"},
{file = "authlib-1.6.4.tar.gz", hash = "sha256:104b0442a43061dc8bc23b133d1d06a2b0a9c2e3e33f34c4338929e816287649"},
]
[package.dependencies]
@@ -447,7 +447,7 @@ description = "XML bomb protection for Python stdlib modules"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
groups = ["main"]
markers = "extra == \"saml2\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"saml2\""
files = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
@@ -472,7 +472,7 @@ description = "XPath 1.0/2.0/3.0/3.1 parsers and selectors for ElementTree and l
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"saml2\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"saml2\""
files = [
{file = "elementpath-4.1.5-py3-none-any.whl", hash = "sha256:2ac1a2fb31eb22bbbf817f8cf6752f844513216263f0e3892c8e79782fe4bb55"},
{file = "elementpath-4.1.5.tar.gz", hash = "sha256:c2d6dc524b29ef751ecfc416b0627668119d8812441c555d7471da41d4bacb8d"},
@@ -523,7 +523,7 @@ description = "Python wrapper for hiredis"
optional = true
python-versions = ">=3.8"
groups = ["main"]
markers = "extra == \"redis\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"redis\""
files = [
{file = "hiredis-3.2.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:add17efcbae46c5a6a13b244ff0b4a8fa079602ceb62290095c941b42e9d5dec"},
{file = "hiredis-3.2.1-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:5fe955cc4f66c57df1ae8e5caf4de2925d43b5efab4e40859662311d1bcc5f54"},
@@ -860,7 +860,7 @@ description = "Jaeger Python OpenTracing Tracer implementation"
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"opentracing\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"opentracing\""
files = [
{file = "jaeger-client-4.8.0.tar.gz", hash = "sha256:3157836edab8e2c209bd2d6ae61113db36f7ee399e66b1dcbb715d87ab49bfe0"},
]
@@ -998,7 +998,7 @@ description = "A strictly RFC 4510 conforming LDAP V3 pure Python client library
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"matrix-synapse-ldap3\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"matrix-synapse-ldap3\""
files = [
{file = "ldap3-2.9.1-py2.py3-none-any.whl", hash = "sha256:5869596fc4948797020d3f03b7939da938778a0f9e2009f7a072ccf92b8e8d70"},
{file = "ldap3-2.9.1.tar.gz", hash = "sha256:f3e7fc4718e3f09dda568b57100095e0ce58633bcabbed8667ce3f8fbaa4229f"},
@@ -1014,7 +1014,7 @@ description = "Powerful and Pythonic XML processing library combining libxml2/li
optional = true
python-versions = ">=3.8"
groups = ["main"]
markers = "extra == \"url-preview\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"url-preview\""
files = [
{file = "lxml-6.0.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e77dd455b9a16bbd2a5036a63ddbd479c19572af81b624e79ef422f929eef388"},
{file = "lxml-6.0.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:5d444858b9f07cefff6455b983aea9a67f7462ba1f6cbe4a21e8bf6791bf2153"},
@@ -1301,7 +1301,7 @@ description = "An LDAP3 auth provider for Synapse"
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"matrix-synapse-ldap3\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"matrix-synapse-ldap3\""
files = [
{file = "matrix-synapse-ldap3-0.3.0.tar.gz", hash = "sha256:8bb6517173164d4b9cc44f49de411d8cebdb2e705d5dd1ea1f38733c4a009e1d"},
{file = "matrix_synapse_ldap3-0.3.0-py3-none-any.whl", hash = "sha256:8b4d701f8702551e98cc1d8c20dbed532de5613584c08d0df22de376ba99159d"},
@@ -1540,7 +1540,7 @@ description = "OpenTracing API for Python. See documentation at http://opentraci
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"opentracing\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"opentracing\""
files = [
{file = "opentracing-2.4.0.tar.gz", hash = "sha256:a173117e6ef580d55874734d1fa7ecb6f3655160b8b8974a2a1e98e5ec9c840d"},
]
@@ -1609,6 +1609,8 @@ groups = ["main"]
files = [
{file = "pillow-11.3.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:1b9c17fd4ace828b3003dfd1e30bff24863e0eb59b535e8f80194d9cc7ecf860"},
{file = "pillow-11.3.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:65dc69160114cdd0ca0f35cb434633c75e8e7fad4cf855177a05bf38678f73ad"},
{file = "pillow-11.3.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7107195ddc914f656c7fc8e4a5e1c25f32e9236ea3ea860f257b0436011fddd0"},
{file = "pillow-11.3.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:cc3e831b563b3114baac7ec2ee86819eb03caa1a2cef0b481a5675b59c4fe23b"},
{file = "pillow-11.3.0-cp310-cp310-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f1f182ebd2303acf8c380a54f615ec883322593320a9b00438eb842c1f37ae50"},
{file = "pillow-11.3.0-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4445fa62e15936a028672fd48c4c11a66d641d2c05726c7ec1f8ba6a572036ae"},
{file = "pillow-11.3.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:71f511f6b3b91dd543282477be45a033e4845a40278fa8dcdbfdb07109bf18f9"},
@@ -1618,6 +1620,8 @@ files = [
{file = "pillow-11.3.0-cp310-cp310-win_arm64.whl", hash = "sha256:819931d25e57b513242859ce1876c58c59dc31587847bf74cfe06b2e0cb22d2f"},
{file = "pillow-11.3.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:1cd110edf822773368b396281a2293aeb91c90a2db00d78ea43e7e861631b722"},
{file = "pillow-11.3.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:9c412fddd1b77a75aa904615ebaa6001f169b26fd467b4be93aded278266b288"},
{file = "pillow-11.3.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7d1aa4de119a0ecac0a34a9c8bde33f34022e2e8f99104e47a3ca392fd60e37d"},
{file = "pillow-11.3.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:91da1d88226663594e3f6b4b8c3c8d85bd504117d043740a8e0ec449087cc494"},
{file = "pillow-11.3.0-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:643f189248837533073c405ec2f0bb250ba54598cf80e8c1e043381a60632f58"},
{file = "pillow-11.3.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:106064daa23a745510dabce1d84f29137a37224831d88eb4ce94bb187b1d7e5f"},
{file = "pillow-11.3.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:cd8ff254faf15591e724dc7c4ddb6bf4793efcbe13802a4ae3e863cd300b493e"},
@@ -1627,6 +1631,8 @@ files = [
{file = "pillow-11.3.0-cp311-cp311-win_arm64.whl", hash = "sha256:30807c931ff7c095620fe04448e2c2fc673fcbb1ffe2a7da3fb39613489b1ddd"},
{file = "pillow-11.3.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:fdae223722da47b024b867c1ea0be64e0df702c5e0a60e27daad39bf960dd1e4"},
{file = "pillow-11.3.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:921bd305b10e82b4d1f5e802b6850677f965d8394203d182f078873851dada69"},
{file = "pillow-11.3.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:eb76541cba2f958032d79d143b98a3a6b3ea87f0959bbe256c0b5e416599fd5d"},
{file = "pillow-11.3.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:67172f2944ebba3d4a7b54f2e95c786a3a50c21b88456329314caaa28cda70f6"},
{file = "pillow-11.3.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:97f07ed9f56a3b9b5f49d3661dc9607484e85c67e27f3e8be2c7d28ca032fec7"},
{file = "pillow-11.3.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:676b2815362456b5b3216b4fd5bd89d362100dc6f4945154ff172e206a22c024"},
{file = "pillow-11.3.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3e184b2f26ff146363dd07bde8b711833d7b0202e27d13540bfe2e35a323a809"},
@@ -1639,6 +1645,8 @@ files = [
{file = "pillow-11.3.0-cp313-cp313-ios_13_0_x86_64_iphonesimulator.whl", hash = "sha256:7859a4cc7c9295f5838015d8cc0a9c215b77e43d07a25e460f35cf516df8626f"},
{file = "pillow-11.3.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ec1ee50470b0d050984394423d96325b744d55c701a439d2bd66089bff963d3c"},
{file = "pillow-11.3.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:7db51d222548ccfd274e4572fdbf3e810a5e66b00608862f947b163e613b67dd"},
{file = "pillow-11.3.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:2d6fcc902a24ac74495df63faad1884282239265c6839a0a6416d33faedfae7e"},
{file = "pillow-11.3.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f0f5d8f4a08090c6d6d578351a2b91acf519a54986c055af27e7a93feae6d3f1"},
{file = "pillow-11.3.0-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c37d8ba9411d6003bba9e518db0db0c58a680ab9fe5179f040b0463644bc9805"},
{file = "pillow-11.3.0-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:13f87d581e71d9189ab21fe0efb5a23e9f28552d5be6979e84001d3b8505abe8"},
{file = "pillow-11.3.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:023f6d2d11784a465f09fd09a34b150ea4672e85fb3d05931d89f373ab14abb2"},
@@ -1648,6 +1656,8 @@ files = [
{file = "pillow-11.3.0-cp313-cp313-win_arm64.whl", hash = "sha256:1904e1264881f682f02b7f8167935cce37bc97db457f8e7849dc3a6a52b99580"},
{file = "pillow-11.3.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:4c834a3921375c48ee6b9624061076bc0a32a60b5532b322cc0ea64e639dd50e"},
{file = "pillow-11.3.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:5e05688ccef30ea69b9317a9ead994b93975104a677a36a8ed8106be9260aa6d"},
{file = "pillow-11.3.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:1019b04af07fc0163e2810167918cb5add8d74674b6267616021ab558dc98ced"},
{file = "pillow-11.3.0-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f944255db153ebb2b19c51fe85dd99ef0ce494123f21b9db4877ffdfc5590c7c"},
{file = "pillow-11.3.0-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1f85acb69adf2aaee8b7da124efebbdb959a104db34d3a2cb0f3793dbae422a8"},
{file = "pillow-11.3.0-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:05f6ecbeff5005399bb48d198f098a9b4b6bdf27b8487c7f38ca16eeb070cd59"},
{file = "pillow-11.3.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:a7bc6e6fd0395bc052f16b1a8670859964dbd7003bd0af2ff08342eb6e442cfe"},
@@ -1657,6 +1667,8 @@ files = [
{file = "pillow-11.3.0-cp313-cp313t-win_arm64.whl", hash = "sha256:8797edc41f3e8536ae4b10897ee2f637235c94f27404cac7297f7b607dd0716e"},
{file = "pillow-11.3.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:d9da3df5f9ea2a89b81bb6087177fb1f4d1c7146d583a3fe5c672c0d94e55e12"},
{file = "pillow-11.3.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:0b275ff9b04df7b640c59ec5a3cb113eefd3795a8df80bac69646ef699c6981a"},
{file = "pillow-11.3.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0743841cabd3dba6a83f38a92672cccbd69af56e3e91777b0ee7f4dba4385632"},
{file = "pillow-11.3.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:2465a69cf967b8b49ee1b96d76718cd98c4e925414ead59fdf75cf0fd07df673"},
{file = "pillow-11.3.0-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:41742638139424703b4d01665b807c6468e23e699e8e90cffefe291c5832b027"},
{file = "pillow-11.3.0-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:93efb0b4de7e340d99057415c749175e24c8864302369e05914682ba642e5d77"},
{file = "pillow-11.3.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7966e38dcd0fa11ca390aed7c6f20454443581d758242023cf36fcb319b1a874"},
@@ -1666,6 +1678,8 @@ files = [
{file = "pillow-11.3.0-cp314-cp314-win_arm64.whl", hash = "sha256:155658efb5e044669c08896c0c44231c5e9abcaadbc5cd3648df2f7c0b96b9a6"},
{file = "pillow-11.3.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:59a03cdf019efbfeeed910bf79c7c93255c3d54bc45898ac2a4140071b02b4ae"},
{file = "pillow-11.3.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:f8a5827f84d973d8636e9dc5764af4f0cf2318d26744b3d902931701b0d46653"},
{file = "pillow-11.3.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ee92f2fd10f4adc4b43d07ec5e779932b4eb3dbfbc34790ada5a6669bc095aa6"},
{file = "pillow-11.3.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c96d333dcf42d01f47b37e0979b6bd73ec91eae18614864622d9b87bbd5bbf36"},
{file = "pillow-11.3.0-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4c96f993ab8c98460cd0c001447bff6194403e8b1d7e149ade5f00594918128b"},
{file = "pillow-11.3.0-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:41342b64afeba938edb034d122b2dda5db2139b9a4af999729ba8818e0056477"},
{file = "pillow-11.3.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:068d9c39a2d1b358eb9f245ce7ab1b5c3246c7c8c7d9ba58cfa5b43146c06e50"},
@@ -1675,6 +1689,8 @@ files = [
{file = "pillow-11.3.0-cp314-cp314t-win_arm64.whl", hash = "sha256:79ea0d14d3ebad43ec77ad5272e6ff9bba5b679ef73375ea760261207fa8e0aa"},
{file = "pillow-11.3.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:48d254f8a4c776de343051023eb61ffe818299eeac478da55227d96e241de53f"},
{file = "pillow-11.3.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:7aee118e30a4cf54fdd873bd3a29de51e29105ab11f9aad8c32123f58c8f8081"},
{file = "pillow-11.3.0-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:23cff760a9049c502721bdb743a7cb3e03365fafcdfc2ef9784610714166e5a4"},
{file = "pillow-11.3.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:6359a3bc43f57d5b375d1ad54a0074318a0844d11b76abccf478c37c986d3cfc"},
{file = "pillow-11.3.0-cp39-cp39-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:092c80c76635f5ecb10f3f83d76716165c96f5229addbd1ec2bdbbda7d496e06"},
{file = "pillow-11.3.0-cp39-cp39-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:cadc9e0ea0a2431124cde7e1697106471fc4c1da01530e679b2391c37d3fbb3a"},
{file = "pillow-11.3.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:6a418691000f2a418c9135a7cf0d797c1bb7d9a485e61fe8e7722845b95ef978"},
@@ -1684,11 +1700,15 @@ files = [
{file = "pillow-11.3.0-cp39-cp39-win_arm64.whl", hash = "sha256:6abdbfd3aea42be05702a8dd98832329c167ee84400a1d1f61ab11437f1717eb"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:3cee80663f29e3843b68199b9d6f4f54bd1d4a6b59bdd91bceefc51238bcb967"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:b5f56c3f344f2ccaf0dd875d3e180f631dc60a51b314295a3e681fe8cf851fbe"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:e67d793d180c9df62f1f40aee3accca4829d3794c95098887edc18af4b8b780c"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d000f46e2917c705e9fb93a3606ee4a819d1e3aa7a9b442f6444f07e77cf5e25"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:527b37216b6ac3a12d7838dc3bd75208ec57c1c6d11ef01902266a5a0c14fc27"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:be5463ac478b623b9dd3937afd7fb7ab3d79dd290a28e2b6df292dc75063eb8a"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:8dc70ca24c110503e16918a658b869019126ecfe03109b754c402daff12b3d9f"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:7c8ec7a017ad1bd562f93dbd8505763e688d388cde6e4a010ae1486916e713e6"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:9ab6ae226de48019caa8074894544af5b53a117ccb9d3b3dcb2871464c829438"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:fe27fb049cdcca11f11a7bfda64043c37b30e6b91f10cb5bab275806c32f6ab3"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:465b9e8844e3c3519a983d58b80be3f668e2a7a5db97f2784e7079fbc9f9822c"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5418b53c0d59b3824d05e029669efa023bbef0f3e92e75ec8428f3799487f361"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:504b6f59505f08ae014f724b6207ff6222662aab5cc9542577fb084ed0676ac7"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:c84d689db21a1c397d001aa08241044aa2069e7587b398c8cc63020390b1c1b8"},
@@ -1706,14 +1726,14 @@ xmp = ["defusedxml"]
[[package]]
name = "prometheus-client"
version = "0.23.1"
version = "0.22.1"
description = "Python client for the Prometheus monitoring system."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "prometheus_client-0.23.1-py3-none-any.whl", hash = "sha256:dd1913e6e76b59cfe44e7a4b83e01afc9873c1bdfd2ed8739f1e76aeca115f99"},
{file = "prometheus_client-0.23.1.tar.gz", hash = "sha256:6ae8f9081eaaaf153a2e959d2e6c4f4fb57b12ef76c8c7980202f1e57b48b2ce"},
{file = "prometheus_client-0.22.1-py3-none-any.whl", hash = "sha256:cca895342e308174341b2cbf99a56bef291fbc0ef7b9e5412a0f26d653ba7094"},
{file = "prometheus_client-0.22.1.tar.gz", hash = "sha256:190f1331e783cf21eb60bca559354e0a4d4378facecf78f5428c39b675d20d28"},
]
[package.extras]
@@ -1726,7 +1746,7 @@ description = "psycopg2 - Python-PostgreSQL Database Adapter"
optional = true
python-versions = ">=3.8"
groups = ["main"]
markers = "extra == \"postgres\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"postgres\""
files = [
{file = "psycopg2-2.9.10-cp310-cp310-win32.whl", hash = "sha256:5df2b672140f95adb453af93a7d669d7a7bf0a56bcd26f1502329166f4a61716"},
{file = "psycopg2-2.9.10-cp310-cp310-win_amd64.whl", hash = "sha256:c6f7b8561225f9e711a9c47087388a97fdc948211c10a4bccbf0ba68ab7b3b5a"},
@@ -1734,6 +1754,7 @@ files = [
{file = "psycopg2-2.9.10-cp311-cp311-win_amd64.whl", hash = "sha256:0435034157049f6846e95103bd8f5a668788dd913a7c30162ca9503fdf542cb4"},
{file = "psycopg2-2.9.10-cp312-cp312-win32.whl", hash = "sha256:65a63d7ab0e067e2cdb3cf266de39663203d38d6a8ed97f5ca0cb315c73fe067"},
{file = "psycopg2-2.9.10-cp312-cp312-win_amd64.whl", hash = "sha256:4a579d6243da40a7b3182e0430493dbd55950c493d8c68f4eec0b302f6bbf20e"},
{file = "psycopg2-2.9.10-cp313-cp313-win_amd64.whl", hash = "sha256:91fd603a2155da8d0cfcdbf8ab24a2d54bca72795b90d2a3ed2b6da8d979dee2"},
{file = "psycopg2-2.9.10-cp39-cp39-win32.whl", hash = "sha256:9d5b3b94b79a844a986d029eee38998232451119ad653aea42bb9220a8c5066b"},
{file = "psycopg2-2.9.10-cp39-cp39-win_amd64.whl", hash = "sha256:88138c8dedcbfa96408023ea2b0c369eda40fe5d75002c0964c78f46f11fa442"},
{file = "psycopg2-2.9.10.tar.gz", hash = "sha256:12ec0b40b0273f95296233e8750441339298e6a572f7039da5b260e3c8b60e11"},
@@ -1746,7 +1767,7 @@ description = ".. image:: https://travis-ci.org/chtd/psycopg2cffi.svg?branch=mas
optional = true
python-versions = "*"
groups = ["main"]
markers = "platform_python_implementation == \"PyPy\" and (extra == \"postgres\" or extra == \"all\")"
markers = "platform_python_implementation == \"PyPy\" and (extra == \"all\" or extra == \"postgres\")"
files = [
{file = "psycopg2cffi-2.9.0.tar.gz", hash = "sha256:7e272edcd837de3a1d12b62185eb85c45a19feda9e62fa1b120c54f9e8d35c52"},
]
@@ -1762,7 +1783,7 @@ description = "A Simple library to enable psycopg2 compatability"
optional = true
python-versions = "*"
groups = ["main"]
markers = "platform_python_implementation == \"PyPy\" and (extra == \"postgres\" or extra == \"all\")"
markers = "platform_python_implementation == \"PyPy\" and (extra == \"all\" or extra == \"postgres\")"
files = [
{file = "psycopg2cffi-compat-1.1.tar.gz", hash = "sha256:d25e921748475522b33d13420aad5c2831c743227dc1f1f2585e0fdb5c914e05"},
]
@@ -1811,14 +1832,14 @@ files = [
[[package]]
name = "pydantic"
version = "2.11.10"
version = "2.11.9"
description = "Data validation using Python type hints"
optional = false
python-versions = ">=3.9"
groups = ["main", "dev"]
files = [
{file = "pydantic-2.11.10-py3-none-any.whl", hash = "sha256:802a655709d49bd004c31e865ef37da30b540786a46bfce02333e0e24b5fe29a"},
{file = "pydantic-2.11.10.tar.gz", hash = "sha256:dc280f0982fbda6c38fada4e476dc0a4f3aeaf9c6ad4c28df68a666ec3c61423"},
{file = "pydantic-2.11.9-py3-none-any.whl", hash = "sha256:c42dd626f5cfc1c6950ce6205ea58c93efa406da65f479dcb4029d5934857da2"},
{file = "pydantic-2.11.9.tar.gz", hash = "sha256:6b8ffda597a14812a7975c90b82a8a2e777d9257aba3453f973acd3c032a18e2"},
]
[package.dependencies]
@@ -2021,7 +2042,7 @@ description = "A development tool to measure, monitor and analyze the memory beh
optional = true
python-versions = ">=3.6"
groups = ["main"]
markers = "extra == \"cache-memory\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"cache-memory\""
files = [
{file = "Pympler-1.0.1-py3-none-any.whl", hash = "sha256:d260dda9ae781e1eab6ea15bacb84015849833ba5555f141d2d9b7b7473b307d"},
{file = "Pympler-1.0.1.tar.gz", hash = "sha256:993f1a3599ca3f4fcd7160c7545ad06310c9e12f70174ae7ae8d4e25f6c5d3fa"},
@@ -2081,7 +2102,7 @@ description = "Python implementation of SAML Version 2 Standard"
optional = true
python-versions = ">=3.9,<4.0"
groups = ["main"]
markers = "extra == \"saml2\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"saml2\""
files = [
{file = "pysaml2-7.5.0-py3-none-any.whl", hash = "sha256:bc6627cc344476a83c757f440a73fda1369f13b6fda1b4e16bca63ffbabb5318"},
{file = "pysaml2-7.5.0.tar.gz", hash = "sha256:f36871d4e5ee857c6b85532e942550d2cf90ea4ee943d75eb681044bbc4f54f7"},
@@ -2106,7 +2127,7 @@ description = "Extensions to the standard Python datetime module"
optional = true
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
groups = ["main"]
markers = "extra == \"saml2\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"saml2\""
files = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
@@ -2134,7 +2155,7 @@ description = "World timezone definitions, modern and historical"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"saml2\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"saml2\""
files = [
{file = "pytz-2022.7.1-py2.py3-none-any.whl", hash = "sha256:78f4f37d8198e0627c5f1143240bb0206b8691d8d7ac6d78fee88b78733f8c4a"},
{file = "pytz-2022.7.1.tar.gz", hash = "sha256:01a0681c4b9684a28304615eba55d1ab31ae00bf68ec157ec3708a8182dbbcd0"},
@@ -2500,7 +2521,7 @@ description = "Python client for Sentry (https://sentry.io)"
optional = true
python-versions = ">=3.6"
groups = ["main"]
markers = "extra == \"sentry\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"sentry\""
files = [
{file = "sentry_sdk-2.34.1-py2.py3-none-any.whl", hash = "sha256:b7a072e1cdc5abc48101d5146e1ae680fa81fe886d8d95aaa25a0b450c818d32"},
{file = "sentry_sdk-2.34.1.tar.gz", hash = "sha256:69274eb8c5c38562a544c3e9f68b5be0a43be4b697f5fd385bf98e4fbe672687"},
@@ -2688,7 +2709,7 @@ description = "Tornado IOLoop Backed Concurrent Futures"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"opentracing\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"opentracing\""
files = [
{file = "threadloop-1.0.2-py2-none-any.whl", hash = "sha256:5c90dbefab6ffbdba26afb4829d2a9df8275d13ac7dc58dccb0e279992679599"},
{file = "threadloop-1.0.2.tar.gz", hash = "sha256:8b180aac31013de13c2ad5c834819771992d350267bddb854613ae77ef571944"},
@@ -2704,7 +2725,7 @@ description = "Python bindings for the Apache Thrift RPC system"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"opentracing\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"opentracing\""
files = [
{file = "thrift-0.16.0.tar.gz", hash = "sha256:2b5b6488fcded21f9d312aa23c9ff6a0195d0f6ae26ddbd5ad9e3e25dfc14408"},
]
@@ -2766,7 +2787,7 @@ description = "Tornado is a Python web framework and asynchronous networking lib
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"opentracing\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"opentracing\""
files = [
{file = "tornado-6.5-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:f81067dad2e4443b015368b24e802d0083fecada4f0a4572fdb72fc06e54a9a6"},
{file = "tornado-6.5-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:9ac1cbe1db860b3cbb251e795c701c41d343f06a96049d6274e7c77559117e41"},
@@ -2903,7 +2924,7 @@ description = "non-blocking redis client for python"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"redis\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"redis\""
files = [
{file = "txredisapi-1.4.11-py3-none-any.whl", hash = "sha256:ac64d7a9342b58edca13ef267d4fa7637c1aa63f8595e066801c1e8b56b22d0b"},
{file = "txredisapi-1.4.11.tar.gz", hash = "sha256:3eb1af99aefdefb59eb877b1dd08861efad60915e30ad5bf3d5bf6c5cedcdbc6"},
@@ -3036,14 +3057,14 @@ types-cffi = "*"
[[package]]
name = "types-pyyaml"
version = "6.0.12.20250915"
version = "6.0.12.20250809"
description = "Typing stubs for PyYAML"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "types_pyyaml-6.0.12.20250915-py3-none-any.whl", hash = "sha256:e7d4d9e064e89a3b3cae120b4990cd370874d2bf12fa5f46c97018dd5d3c9ab6"},
{file = "types_pyyaml-6.0.12.20250915.tar.gz", hash = "sha256:0f8b54a528c303f0e6f7165687dd33fafa81c807fcac23f632b63aa624ced1d3"},
{file = "types_pyyaml-6.0.12.20250809-py3-none-any.whl", hash = "sha256:032b6003b798e7de1a1ddfeefee32fac6486bdfe4845e0ae0e7fb3ee4512b52f"},
{file = "types_pyyaml-6.0.12.20250809.tar.gz", hash = "sha256:af4a1aca028f18e75297da2ee0da465f799627370d74073e96fee876524f61b5"},
]
[[package]]
@@ -3149,7 +3170,7 @@ description = "An XML Schema validator and decoder"
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"saml2\" or extra == \"all\""
markers = "extra == \"all\" or extra == \"saml2\""
files = [
{file = "xmlschema-2.4.0-py3-none-any.whl", hash = "sha256:dc87be0caaa61f42649899189aab2fd8e0d567f2cf548433ba7b79278d231a4a"},
{file = "xmlschema-2.4.0.tar.gz", hash = "sha256:d74cd0c10866ac609e1ef94a5a69b018ad16e39077bc6393408b40c6babee793"},
@@ -3293,4 +3314,4 @@ url-preview = ["lxml"]
[metadata]
lock-version = "2.1"
python-versions = "^3.9.0"
content-hash = "0058b93ca13a3f2a0cfc28485ddd8202c42d0015dbaf3b9692e43f37fe2a0be6"
content-hash = "2e8ea085e1a0c6f0ac051d4bc457a96827d01f621b1827086de01a5ffa98cf79"

View File

@@ -101,7 +101,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.141.0rc2"
version = "1.139.1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later OR LicenseRef-Element-Commercial"
@@ -356,7 +356,7 @@ click = ">=8.1.3"
# GitPython was == 3.1.14; bumped to 3.1.20, the first release with type hints.
GitPython = ">=3.1.20"
markdown-it-py = ">=3.0.0"
pygithub = ">=1.59"
pygithub = ">=1.55"
# The following are executed as commands by the release script.
twine = "*"
# Towncrier min version comes from https://github.com/matrix-org/synapse/pull/3425. Rationale unclear.

View File

@@ -1,5 +1,5 @@
$schema: https://element-hq.github.io/synapse/latest/schema/v1/meta.schema.json
$id: https://element-hq.github.io/synapse/schema/synapse/v1.141/synapse-config.schema.json
$id: https://element-hq.github.io/synapse/schema/synapse/v1.139/synapse-config.schema.json
type: object
properties:
modules:
@@ -2884,35 +2884,6 @@ properties:
default: true
examples:
- false
matrix_rtc:
type: object
description: >-
Options related to MatrixRTC.
properties:
transports:
type: array
items:
type: object
required:
- type
properties:
type:
type: string
description: The type of transport to use to connect to the selective forwarding unit (SFU).
example: livekit
livekit_service_url:
type: string
description: >-
The base URL of the LiveKit service. Should only be used with LiveKit-based transports.
example: https://matrix-rtc.example.com/livekit/jwt
description:
A list of transport types and arguments to use for MatrixRTC connections.
default: []
default: {}
examples:
- transports:
- type: livekit
livekit_service_url: https://matrix-rtc.example.com/livekit/jwt
enable_registration:
type: boolean
description: >-
@@ -4695,9 +4666,8 @@ properties:
pepper:
type: ["string", "null"]
description: >-
A secret random string that will be appended to users' passwords
before they are hashed. This improves the security of short passwords.
DO NOT CHANGE THIS AFTER INITIAL SETUP!
Set the value here to a secret random string for extra security. DO
NOT CHANGE THIS AFTER INITIAL SETUP!
default: null
policy:
type: object

View File

@@ -37,7 +37,6 @@ from typing import Any, List, Match, Optional, Union
import attr
import click
import git
import github
from click.exceptions import ClickException
from git import GitCommandError, Repo
from github import BadCredentialsException, Github
@@ -398,7 +397,7 @@ def _tag(gh_token: Optional[str]) -> None:
return
# Create a new draft release
gh = Github(auth=github.Auth.Token(token=gh_token))
gh = Github(gh_token)
gh_repo = gh.get_repo("element-hq/synapse")
release = gh_repo.create_git_release(
tag=tag_name,
@@ -640,16 +639,7 @@ def _notify(message: str) -> None:
@cli.command()
# Although this option is not used, allow it anyway. Otherwise the user will
# receive an error when providing it, which is annoying as other commands accept
# it.
@click.option(
"--gh-token",
"_gh_token",
envvar=["GH_TOKEN", "GITHUB_TOKEN"],
required=False,
)
def merge_back(_gh_token: Optional[str]) -> None:
def merge_back() -> None:
_merge_back()
@@ -697,16 +687,7 @@ def _merge_back() -> None:
@cli.command()
# Although this option is not used, allow it anyway. Otherwise the user will
# receive an error when providing it, which is annoying as other commands accept
# it.
@click.option(
"--gh-token",
"_gh_token",
envvar=["GH_TOKEN", "GITHUB_TOKEN"],
required=False,
)
def announce(_gh_token: Optional[str]) -> None:
def announce() -> None:
_announce()

View File

@@ -73,18 +73,8 @@ def main() -> None:
pw = unicodedata.normalize("NFKC", password)
bytes_to_hash = pw.encode("utf8") + password_pepper.encode("utf8")
if len(bytes_to_hash) > 72:
# bcrypt only looks at the first 72 bytes
print(
f"Password + pepper is too long ({len(bytes_to_hash)} bytes); truncating to 72 bytes for bcrypt. "
"This is expected behaviour and will not affect a user's ability to log in. 72 bytes is "
"sufficient entropy for a password."
)
bytes_to_hash = bytes_to_hash[:72]
hashed = bcrypt.hashpw(
bytes_to_hash,
pw.encode("utf8") + password_pepper.encode("utf8"),
bcrypt.gensalt(bcrypt_rounds),
).decode("ascii")
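Both this script and `AuthHandler` later in this diff apply the same rule: bcrypt silently ignores everything past the first 72 bytes, so the input is truncated explicitly before hashing. Below is a minimal self-contained sketch of that behaviour, assuming only the `bcrypt` package; the helper names are illustrative. The important property is that validation truncates identically, so hashes produced from over-long password+pepper inputs keep verifying.

import unicodedata

import bcrypt

def hash_password(password: str, pepper: str, rounds: int = 12) -> str:
    # Normalise the Unicode in the password, mirroring the script above.
    pw = unicodedata.normalize("NFKC", password)
    bytes_to_hash = pw.encode("utf8") + pepper.encode("utf8")
    if len(bytes_to_hash) > 72:
        # bcrypt only looks at the first 72 bytes; truncate explicitly so
        # hashing and checking see the same input.
        bytes_to_hash = bytes_to_hash[:72]
    return bcrypt.hashpw(bytes_to_hash, bcrypt.gensalt(rounds)).decode("ascii")

def check_password(password: str, pepper: str, hashed: str) -> bool:
    # Apply the exact same normalisation and truncation when checking.
    pw = unicodedata.normalize("NFKC", password)
    bytes_to_check = (pw.encode("utf8") + pepper.encode("utf8"))[:72]
    return bcrypt.checkpw(bytes_to_check, hashed.encode("ascii"))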

View File

@@ -98,6 +98,7 @@ from synapse.storage.databases.state.bg_updates import StateBackgroundUpdateStor
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
from synapse.types import ISynapseReactor
from synapse.util import SYNAPSE_VERSION
# Cast safety: Twisted does some naughty magic which replaces the
# twisted.internet.reactor module with a Reactor instance at runtime.
@@ -324,6 +325,7 @@ class MockHomeserver(HomeServer):
hostname=config.server.server_name,
config=config,
reactor=reactor,
version_string=f"Synapse/{SYNAPSE_VERSION}",
)

View File

@@ -31,6 +31,7 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.types import ISynapseReactor
from synapse.util import SYNAPSE_VERSION
# Cast safety: Twisted does some naughty magic which replaces the
# twisted.internet.reactor module with a Reactor instance at runtime.
@@ -46,6 +47,7 @@ class MockHomeserver(HomeServer):
hostname=config.server.server_name,
config=config,
reactor=reactor,
version_string=f"Synapse/{SYNAPSE_VERSION}",
)

View File

@@ -302,9 +302,12 @@ class BaseAuth:
(the user_id URI parameter allows an application service to masquerade as
any applicable user in its namespace)
- what device the application service should be treated as controlling
(the device_id URI parameter allows an application service to masquerade
(the device_id[^1] URI parameter allows an application service to masquerade
as any device that exists for the relevant user)
[^1]: Unstable and provided by MSC3202.
Must use `org.matrix.msc3202.device_id` in place of `device_id` for now.
Returns:
the application service `Requester` of that request
@@ -316,8 +319,7 @@ class BaseAuth:
- The returned device ID, if present, has been checked to be a valid device ID
for the returned user ID.
"""
# TODO: We can drop unstable support after 2026-01-01 (a couple of months after stable support)
UNSTABLE_DEVICE_ID_ARG_NAME = b"org.matrix.msc3202.device_id"
DEVICE_ID_ARG_NAME = b"org.matrix.msc3202.device_id"
app_service = self.store.get_app_service_by_token(access_token)
if app_service is None:
@@ -339,11 +341,13 @@ class BaseAuth:
else:
effective_user_id = app_service.sender
effective_device_id_args = request.args.get(
b"device_id", request.args.get(UNSTABLE_DEVICE_ID_ARG_NAME)
)
if effective_device_id_args:
effective_device_id = effective_device_id_args[0].decode("utf8")
effective_device_id: Optional[str] = None
if (
self.hs.config.experimental.msc3202_device_masquerading_enabled
and DEVICE_ID_ARG_NAME in request.args
):
effective_device_id = request.args[DEVICE_ID_ARG_NAME][0].decode("utf8")
# We only just set this so it can't be None!
assert effective_device_id is not None
device_opt = await self.store.get_device(
@@ -355,8 +359,6 @@ class BaseAuth:
f"Application service trying to use a device that doesn't exist ('{effective_device_id}' for {effective_user_id})",
Codes.UNKNOWN_DEVICE,
)
else:
effective_device_id = None
return create_requester(
effective_user_id, app_service=app_service, device_id=effective_device_id
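As a rough illustration of what this authorises, an application service request using the masquerading parameters might look like the sketch below; the homeserver URL, room, user and device IDs, and token are placeholders, and only the query parameter names (`user_id`, `org.matrix.msc3202.device_id`) and the `msc3202_device_masquerading_enabled` gate come from the change above.

import requests

resp = requests.put(
    # Placeholder homeserver, room, and user.
    "https://synapse.example.com/_matrix/client/v3/rooms/!room:example.com"
    "/typing/@bot_alice:example.com",
    params={
        "user_id": "@bot_alice:example.com",
        # Unstable MSC3202 parameter name; only honoured when the server has
        # the msc3202_device_masquerading experimental flag enabled.
        "org.matrix.msc3202.device_id": "MASQDEVICE",
    },
    headers={"Authorization": "Bearer <as_token>"},
    json={"typing": True, "timeout": 30000},
)
resp.raise_for_status()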

View File

@@ -64,6 +64,7 @@ from twisted.web.resource import Resource
import synapse.util.caches
from synapse.api.constants import MAX_PDU_SIZE
from synapse.app import check_bind_error
from synapse.app.phone_stats_home import start_phone_stats_home
from synapse.config import ConfigError
from synapse.config._base import format_config_error
from synapse.config.homeserver import HomeServerConfig
@@ -420,26 +421,17 @@ def listen_http(
context_factory: Optional[IOpenSSLContextFactory],
reactor: ISynapseReactor = reactor,
) -> List[Port]:
"""
Args:
listener_config: TODO
root_resource: TODO
version_string: A string to present for the Server header
max_request_body_size: TODO
context_factory: TODO
reactor: TODO
"""
assert listener_config.http_options is not None
site_tag = listener_config.get_site_tag()
site = SynapseSite(
logger_name="synapse.access.%s.%s"
"synapse.access.%s.%s"
% ("https" if listener_config.is_tls() else "http", site_tag),
site_tag=site_tag,
config=listener_config,
resource=root_resource,
server_version_string=version_string,
site_tag,
listener_config,
root_resource,
version_string,
max_request_body_size=max_request_body_size,
reactor=reactor,
hs=hs,
@@ -591,9 +583,9 @@ async def start(hs: "HomeServer", freeze: bool = True) -> None:
# we're not using systemd.
sdnotify(b"RELOADING=1")
for sighup_callbacks in _instance_id_to_sighup_callbacks_map.values():
for func, args, kwargs in sighup_callbacks:
func(*args, **kwargs)
for sighup_callbacks in _instance_id_to_sighup_callbacks_map.values():
for func, args, kwargs in sighup_callbacks:
func(*args, **kwargs)
sdnotify(b"READY=1")
@@ -682,6 +674,15 @@ async def start(hs: "HomeServer", freeze: bool = True) -> None:
if hs.config.worker.run_background_tasks:
hs.start_background_tasks()
# TODO: This should be moved to same pattern we use for other background tasks:
# Add to `REQUIRED_ON_BACKGROUND_TASK_STARTUP` and rely on
# `start_background_tasks` to start it.
await hs.get_common_usage_metrics_manager().setup()
# TODO: This feels like another pattern that should be refactored as one of the
# `REQUIRED_ON_BACKGROUND_TASK_STARTUP`
start_phone_stats_home(hs)
if freeze:
# We now freeze all allocated objects in the hopes that (almost)
# everything currently allocated are things that will be used for the

View File

@@ -65,6 +65,7 @@ from synapse.storage.databases.main.stream import StreamWorkerStore
from synapse.storage.databases.main.tags import TagsWorkerStore
from synapse.storage.databases.main.user_erasure_store import UserErasureWorkerStore
from synapse.types import JsonMapping, StateMap
from synapse.util import SYNAPSE_VERSION
from synapse.util.logcontext import LoggingContext
logger = logging.getLogger("synapse.app.admin_cmd")
@@ -315,6 +316,7 @@ def start(config: HomeServerConfig, args: argparse.Namespace) -> None:
ss = AdminCmdServer(
config.server.server_name,
config=config,
version_string=f"Synapse/{SYNAPSE_VERSION}",
)
setup_logging(ss, config, use_worker_options=True)

View File

@@ -112,6 +112,7 @@ from synapse.storage.databases.main.transactions import TransactionWorkerStore
from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore
from synapse.storage.databases.main.user_directory import UserDirectoryStore
from synapse.storage.databases.main.user_erasure_store import UserErasureWorkerStore
from synapse.util import SYNAPSE_VERSION
from synapse.util.httpresourcetree import create_resource_tree
logger = logging.getLogger("synapse.app.generic_worker")
@@ -358,6 +359,7 @@ def start(config: HomeServerConfig) -> None:
hs = GenericWorkerServer(
config.server.server_name,
config=config,
version_string=f"Synapse/{SYNAPSE_VERSION}",
)
setup_logging(hs, config, use_worker_options=True)

View File

@@ -71,7 +71,7 @@ from synapse.rest.well_known import well_known_resource
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.types import ISynapseReactor
from synapse.util.check_dependencies import check_requirements
from synapse.util.check_dependencies import VERSION, check_requirements
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.module_loader import load_module
@@ -83,10 +83,6 @@ def gz_wrap(r: Resource) -> Resource:
class SynapseHomeServer(HomeServer):
"""
Homeserver class for the main Synapse process.
"""
DATASTORE_CLASS = DataStore
def _listener_http(
@@ -349,53 +345,18 @@ def load_or_generate_config(argv_options: List[str]) -> HomeServerConfig:
return config
def create_homeserver(
def setup(
config: HomeServerConfig,
reactor: Optional[ISynapseReactor] = None,
freeze: bool = True,
) -> SynapseHomeServer:
"""
Create a homeserver instance for the Synapse main process.
Create and set up a Synapse homeserver instance given a configuration.
Args:
config: The configuration for the homeserver.
reactor: Optionally provide a reactor to use. Can be useful in different
scenarios that you want control over the reactor, such as tests.
Returns:
A homeserver instance.
"""
if config.worker.worker_app:
raise ConfigError(
"You have specified `worker_app` in the config but are attempting to setup a non-worker "
"instance. Please use `python -m synapse.app.generic_worker` instead (or remove the option if this is the main process)."
)
events.USE_FROZEN_DICTS = config.server.use_frozen_dicts
synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage
if config.server.gc_seconds:
synapse.metrics.MIN_TIME_BETWEEN_GCS = config.server.gc_seconds
hs = SynapseHomeServer(
hostname=config.server.server_name,
config=config,
reactor=reactor,
)
return hs
def setup(
hs: SynapseHomeServer,
*,
freeze: bool = True,
) -> None:
"""
Set up a Synapse homeserver instance given a configuration.
Args:
hs: The homeserver to setup.
freeze: whether to freeze the homeserver base objects in the garbage collector.
May improve garbage collection performance by marking objects with an effectively
static lifetime as frozen so they don't need to be considered for cleanup.
@@ -406,22 +367,55 @@ def setup(
A homeserver instance.
"""
setup_logging(hs, hs.config, use_worker_options=False)
if config.worker.worker_app:
raise ConfigError(
"You have specified `worker_app` in the config but are attempting to start a non-worker "
"instance. Please use `python -m synapse.app.generic_worker` instead (or remove the option if this is the main process)."
)
sys.exit(1)
# Log after we've configured logging.
logger.info("Setting up server")
events.USE_FROZEN_DICTS = config.server.use_frozen_dicts
synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage
if config.server.gc_seconds:
synapse.metrics.MIN_TIME_BETWEEN_GCS = config.server.gc_seconds
if (
config.registration.enable_registration
and not config.registration.enable_registration_without_verification
):
if (
not config.captcha.enable_registration_captcha
and not config.registration.registrations_require_3pid
and not config.registration.registration_requires_token
):
raise ConfigError(
"You have enabled open registration without any verification. This is a known vector for "
"spam and abuse. If you would like to allow public registration, please consider adding email, "
"captcha, or token-based verification. Otherwise this check can be removed by setting the "
"`enable_registration_without_verification` config option to `true`."
)
hs = SynapseHomeServer(
config.server.server_name,
config=config,
version_string=f"Synapse/{VERSION}",
reactor=reactor,
)
setup_logging(hs, config, use_worker_options=False)
# Start the tracer
init_tracer(hs) # noqa
logger.info("Setting up server")
try:
hs.setup()
except Exception as e:
handle_startup_exception(e)
async def _start_when_reactor_running() -> None:
# TODO: Feels like this should be moved somewhere else.
#
async def start() -> None:
# Load the OIDC provider metadatas, if OIDC is enabled.
if hs.config.oidc.oidc_enabled:
oidc = hs.get_oidc_handler()
@@ -430,29 +424,21 @@ def setup(
await _base.start(hs, freeze)
# TODO: Feels like this should be moved somewhere else.
hs.get_datastores().main.db_pool.updates.start_doing_background_updates()
# Register a callback to be invoked once the reactor is running
register_start(hs, _start_when_reactor_running)
register_start(hs, start)
return hs
def start_reactor(
config: HomeServerConfig,
) -> None:
"""
Start the reactor (Twisted event-loop).
Args:
config: The configuration for the homeserver.
"""
def run(hs: HomeServer) -> None:
_base.start_reactor(
"synapse-homeserver",
soft_file_limit=config.server.soft_file_limit,
gc_thresholds=config.server.gc_thresholds,
pid_file=config.server.pid_file,
daemonize=config.server.daemonize,
print_pidfile=config.server.print_pidfile,
soft_file_limit=hs.config.server.soft_file_limit,
gc_thresholds=hs.config.server.gc_thresholds,
pid_file=hs.config.server.pid_file,
daemonize=hs.config.server.daemonize,
print_pidfile=hs.config.server.print_pidfile,
logger=logger,
)
@@ -463,14 +449,13 @@ def main() -> None:
with LoggingContext(name="main", server_name=homeserver_config.server.server_name):
# check base requirements
check_requirements()
hs = create_homeserver(homeserver_config)
setup(hs)
hs = setup(homeserver_config)
# redirect stdio to the logs, if configured.
if not hs.config.logging.no_redirect_stdio:
redirect_stdio_to_logs()
start_reactor(homeserver_config)
run(hs)
if __name__ == "__main__":

View File

@@ -545,22 +545,18 @@ class RootConfig:
@classmethod
def load_config(
cls: Type[TRootConfig], description: str, argv_options: List[str]
cls: Type[TRootConfig], description: str, argv: List[str]
) -> TRootConfig:
"""Parse the commandline and config files
Doesn't support config-file-generation: used by the worker apps.
Args:
description: TODO
argv_options: The options passed to Synapse. Usually `sys.argv[1:]`.
Returns:
Config object.
"""
config_parser = argparse.ArgumentParser(description=description)
cls.add_arguments_to_parser(config_parser)
obj, _ = cls.load_config_with_parser(config_parser, argv_options)
obj, _ = cls.load_config_with_parser(config_parser, argv)
return obj
@@ -613,10 +609,6 @@ class RootConfig:
Used for workers where we want to add extra flags/subcommands.
Note: This is the common denominator for loading config and is also used by
`load_config` and `load_or_generate_config`. Which is why we call
`validate_config()` here.
Args:
parser
argv_options: The options passed to Synapse. Usually `sys.argv[1:]`.
@@ -650,10 +642,6 @@ class RootConfig:
obj.invoke_all("read_arguments", config_args)
# Now that we finally have the full config sections parsed, allow subclasses to
# do some extra validation across the entire config.
obj.validate_config()
return obj, config_args
@classmethod
@@ -854,7 +842,15 @@ class RootConfig:
):
return None
return cls.load_config(description, argv_options)
obj.parse_config_dict(
config_dict,
config_dir_path=config_dir_path,
data_dir_path=data_dir_path,
allow_secrets_in_config=config_args.secrets_in_config,
)
obj.invoke_all("read_arguments", config_args)
return obj
def parse_config_dict(
self,
@@ -915,20 +911,6 @@ class RootConfig:
existing_config.root = None
return existing_config
def validate_config(self) -> None:
"""
Additional config validation across all config sections.
Override this in subclasses to add extra validation. This is called once all
config option values have been populated.
XXX: This should only validate, not modify the configuration, as the final
config state is required for proper validation across all config sections.
Raises:
ConfigError: if the config is invalid.
"""
def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]:
"""Read the config files and shallowly merge them into a dict.

View File

@@ -37,7 +37,6 @@ from synapse.config import ( # noqa: F401
key,
logger,
mas,
matrixrtc,
metrics,
modules,
oembed,
@@ -127,7 +126,6 @@ class RootConfig:
auto_accept_invites: auto_accept_invites.AutoAcceptInvitesConfig
user_types: user_types.UserTypesConfig
mas: mas.MasConfig
matrix_rtc: matrixrtc.MatrixRtcConfig
config_classes: List[Type["Config"]] = ...
config_files: List[str]
@@ -158,11 +156,11 @@ class RootConfig:
) -> str: ...
@classmethod
def load_or_generate_config(
cls: Type[TRootConfig], description: str, argv_options: List[str]
cls: Type[TRootConfig], description: str, argv: List[str]
) -> Optional[TRootConfig]: ...
@classmethod
def load_config(
cls: Type[TRootConfig], description: str, argv_options: List[str]
cls: Type[TRootConfig], description: str, argv: List[str]
) -> TRootConfig: ...
@classmethod
def add_arguments_to_parser(
@@ -170,7 +168,7 @@ class RootConfig:
) -> None: ...
@classmethod
def load_config_with_parser(
cls: Type[TRootConfig], parser: argparse.ArgumentParser, argv_options: List[str]
cls: Type[TRootConfig], parser: argparse.ArgumentParser, argv: List[str]
) -> Tuple[TRootConfig, argparse.Namespace]: ...
def generate_missing_files(
self, config_dict: dict, config_dir_path: str

View File

@@ -412,6 +412,11 @@ class ExperimentalConfig(Config):
"msc2409_to_device_messages_enabled", False
)
# The portion of MSC3202 which is related to device masquerading.
self.msc3202_device_masquerading_enabled: bool = experimental.get(
"msc3202_device_masquerading", False
)
# The portion of MSC3202 related to transaction extensions:
# sending device list changes, one-time key counts and fallback key
# usage to application services.
@@ -551,9 +556,6 @@ class ExperimentalConfig(Config):
# MSC4133: Custom profile fields
self.msc4133_enabled: bool = experimental.get("msc4133_enabled", False)
# MSC4143: Matrix RTC Transport using Livekit Backend
self.msc4143_enabled: bool = experimental.get("msc4143_enabled", False)
# MSC4169: Backwards-compatible redaction sending using `/send`
self.msc4169_enabled: bool = experimental.get("msc4169_enabled", False)

View File

@@ -18,8 +18,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
from ._base import ConfigError, RootConfig
from ._base import RootConfig
from .account_validity import AccountValidityConfig
from .api import ApiConfig
from .appservice import AppServiceConfig
@@ -38,7 +37,6 @@ from .jwt import JWTConfig
from .key import KeyConfig
from .logger import LoggingConfig
from .mas import MasConfig
from .matrixrtc import MatrixRtcConfig
from .metrics import MetricsConfig
from .modules import ModulesConfig
from .oembed import OembedConfig
@@ -68,10 +66,6 @@ from .workers import WorkerConfig
class HomeServerConfig(RootConfig):
"""
Top-level config object for Synapse homeserver (main process and workers).
"""
config_classes = [
ModulesConfig,
ServerConfig,
@@ -86,7 +80,6 @@ class HomeServerConfig(RootConfig):
OembedConfig,
CaptchaConfig,
VoipConfig,
MatrixRtcConfig,
RegistrationConfig,
AccountValidityConfig,
MetricsConfig,
@@ -120,22 +113,3 @@ class HomeServerConfig(RootConfig):
# This must be last, as it checks for conflicts with other config options.
MasConfig,
]
def validate_config(
self,
) -> None:
if (
self.registration.enable_registration
and not self.registration.enable_registration_without_verification
):
if (
not self.captcha.enable_registration_captcha
and not self.registration.registrations_require_3pid
and not self.registration.registration_requires_token
):
raise ConfigError(
"You have enabled open registration without any verification. This is a known vector for "
"spam and abuse. If you would like to allow public registration, please consider adding email, "
"captcha, or token-based verification. Otherwise this check can be removed by setting the "
"`enable_registration_without_verification` config option to `true`."
)

View File

@@ -1,67 +0,0 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2025 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# [This file includes modifications made by New Vector Limited]
#
#
from typing import Any, Optional
from pydantic import ValidationError
from synapse._pydantic_compat import Field, StrictStr, validator
from synapse.types import JsonDict
from synapse.util.pydantic_models import ParseModel
from ._base import Config, ConfigError
class TransportConfigModel(ParseModel):
type: StrictStr
livekit_service_url: Optional[StrictStr] = Field(default=None)
"""An optional livekit service URL. Only required if type is "livekit"."""
@validator("livekit_service_url", always=True)
def validate_livekit_service_url(cls, v: Any, values: dict) -> Any:
if values.get("type") == "livekit" and not v:
raise ValueError(
"You must set a `livekit_service_url` when using the 'livekit' transport."
)
return v
class MatrixRtcConfigModel(ParseModel):
transports: list = []
class MatrixRtcConfig(Config):
section = "matrix_rtc"
def read_config(
self, config: JsonDict, allow_secrets_in_config: bool, **kwargs: Any
) -> None:
matrix_rtc = config.get("matrix_rtc", {})
if matrix_rtc is None:
matrix_rtc = {}
try:
parsed = MatrixRtcConfigModel(**matrix_rtc)
except ValidationError as e:
raise ConfigError(
"Could not validate matrix_rtc config",
("matrix_rtc",),
) from e
self.transports = parsed.transports
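A brief sketch of how the validator in this (now removed) config model behaves, assuming `TransportConfigModel` is importable as defined above; the URL is a placeholder.

from pydantic import ValidationError

# Passes: a livekit transport with its required service URL.
TransportConfigModel(
    type="livekit",
    livekit_service_url="https://matrix-rtc.example.com/livekit/jwt",
)

# Fails: the always-run validator rejects a livekit transport without a URL.
try:
    TransportConfigModel(type="livekit")
except ValidationError as e:
    print(e)  # "You must set a `livekit_service_url` when using the 'livekit' transport."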

View File

@@ -1683,22 +1683,8 @@ class AuthHandler:
# Normalise the Unicode in the password
pw = unicodedata.normalize("NFKC", password)
bytes_to_hash = pw.encode(
"utf8"
) + self.hs.config.auth.password_pepper.encode("utf8")
if len(bytes_to_hash) > 72:
# bcrypt only looks at the first 72 bytes.
#
# Note: we explicitly DO NOT log the length of the user's password here.
logger.debug(
"Password + pepper is too long; truncating to 72 bytes for bcrypt. "
"This is expected behaviour and will not affect a user's ability to log in. 72 bytes is "
"sufficient entropy for a password."
)
bytes_to_hash = bytes_to_hash[:72]
return bcrypt.hashpw(
bytes_to_hash,
pw.encode("utf8") + self.hs.config.auth.password_pepper.encode("utf8"),
bcrypt.gensalt(self.bcrypt_rounds),
).decode("ascii")
@@ -1720,20 +1706,9 @@ class AuthHandler:
def _do_validate_hash(checked_hash: bytes) -> bool:
# Normalise the Unicode in the password
pw = unicodedata.normalize("NFKC", password)
password_pepper = self.hs.config.auth.password_pepper
bytes_to_hash = pw.encode("utf8") + password_pepper.encode("utf8")
if len(bytes_to_hash) > 72:
# bcrypt only looks at the first 72 bytes
logger.debug(
"Password + pepper is too long; truncating to 72 bytes for bcrypt. "
"This is expected behaviour and will not affect a user's ability to log in. 72 bytes is "
"sufficient entropy for a password."
)
bytes_to_hash = bytes_to_hash[:72]
return bcrypt.checkpw(
bytes_to_hash,
pw.encode("utf8") + self.hs.config.auth.password_pepper.encode("utf8"),
checked_hash,
)

View File

@@ -741,7 +741,6 @@ class SynapseSite(ProxySite):
def __init__(
self,
*,
logger_name: str,
site_tag: str,
config: ListenerConfig,

View File

@@ -62,7 +62,7 @@ class CommonUsageMetricsManager:
"""
return await self._collect()
def setup(self) -> None:
async def setup(self) -> None:
"""Keep the gauges for common usage metrics up to date."""
self._hs.run_as_background_process(
desc="common_usage_metrics_update_gauges",

View File

@@ -79,7 +79,6 @@ from synapse.http.server import (
from synapse.http.servlet import parse_json_object_from_request
from synapse.http.site import SynapseRequest
from synapse.logging.context import (
defer_to_thread,
defer_to_threadpool,
make_deferred_yieldable,
run_in_background,
@@ -1718,6 +1717,7 @@ class ModuleApi:
async def defer_to_thread(
self,
f: Callable[P, T],
threadpool: Optional[ThreadPool] = None,
*args: P.args,
**kwargs: P.kwargs,
) -> T:
@@ -1733,31 +1733,11 @@ class ModuleApi:
Returns:
The return value of the function once run in a thread.
"""
return await defer_to_thread(self._hs.get_reactor(), f, *args, **kwargs)
# If a threadpool is not provided by the module, then use the default
# reactor threadpool of the homeserver.
if threadpool is None:
threadpool = self._hs.get_reactor().getThreadPool()
async def defer_to_threadpool(
self,
threadpool: ThreadPool,
f: Callable[P, T],
*args: P.args,
**kwargs: P.kwargs,
) -> T:
"""Runs the given function in a separate thread from the given thread pool.
Allows specifying a custom thread pool instead of using the default Synapse
one. To use the default Synapse threadpool, use `defer_to_thread` instead.
Added in Synapse v1.140.0.
Args:
threadpool: The thread pool to use.
f: The function to run.
args: The function's arguments.
kwargs: The function's keyword arguments.
Returns:
The return value of the function once run in a thread.
"""
return await defer_to_threadpool(
self._hs.get_reactor(), threadpool, f, *args, **kwargs
)
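A hedged sketch of module code using the consolidated `defer_to_thread(f, threadpool, *args, **kwargs)` signature shown in this hunk; the module class, pool sizing, and worker function are illustrative and not part of this change.

from twisted.python.threadpool import ThreadPool

def blocking_work(value: str) -> bool:
    return len(value) > 0  # stand-in for CPU- or IO-heavy work

class MyModule:
    def __init__(self, config: dict, api) -> None:
        self._api = api
        # A dedicated pool so this module's work cannot starve the
        # reactor's default threadpool.
        self._pool = ThreadPool(minthreads=1, maxthreads=4, name="my_module")
        self._pool.start()

    async def check(self, value: str) -> bool:
        # Passing a pool is optional; with threadpool=None the call falls
        # back to the homeserver's default reactor threadpool.
        return await self._api.defer_to_thread(blocking_work, self._pool, value)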

View File

@@ -42,7 +42,6 @@ from synapse.rest.client import (
login,
login_token_request,
logout,
matrixrtc,
mutual_rooms,
notifications,
openid,
@@ -90,7 +89,6 @@ CLIENT_SERVLET_FUNCTIONS: Tuple[RegisterServletsFunc, ...] = (
presence.register_servlets,
directory.register_servlets,
voip.register_servlets,
matrixrtc.register_servlets,
pusher.register_servlets,
push_rule.register_servlets,
logout.register_servlets,

View File

@@ -57,9 +57,6 @@ from synapse.rest.admin.event_reports import (
EventReportDetailRestServlet,
EventReportsRestServlet,
)
from synapse.rest.admin.events import (
EventRestServlet,
)
from synapse.rest.admin.experimental_features import ExperimentalFeaturesRestServlet
from synapse.rest.admin.federation import (
DestinationMembershipRestServlet,
@@ -342,7 +339,6 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
ExperimentalFeaturesRestServlet(hs).register(http_server)
SuspendAccountRestServlet(hs).register(http_server)
ScheduledTasksRestServlet(hs).register(http_server)
EventRestServlet(hs).register(http_server)
def register_servlets_for_client_rest_resource(

View File

@@ -1,69 +0,0 @@
from http import HTTPStatus
from typing import TYPE_CHECKING, Tuple
from synapse.api.errors import NotFoundError
from synapse.events.utils import (
SerializeEventConfig,
format_event_raw,
serialize_event,
)
from synapse.http.servlet import RestServlet
from synapse.http.site import SynapseRequest
from synapse.rest.admin import admin_patterns
from synapse.rest.admin._base import assert_user_is_admin
from synapse.storage.databases.main.events_worker import EventRedactBehaviour
from synapse.types import JsonDict
if TYPE_CHECKING:
from synapse.server import HomeServer
class EventRestServlet(RestServlet):
"""
Get an event that is known to the homeserver.
The requester must have administrator access in Synapse.
GET /_synapse/admin/v1/fetch_event/<event_id>
returns:
200 OK with event json if the event is known to the homeserver. Otherwise raises
a NotFound error.
Args:
event_id: the id of the requested event.
Returns:
JSON blob of the event
"""
PATTERNS = admin_patterns("/fetch_event/(?P<event_id>[^/]*)$")
def __init__(self, hs: "HomeServer"):
self._auth = hs.get_auth()
self._store = hs.get_datastores().main
self._clock = hs.get_clock()
async def on_GET(
self, request: SynapseRequest, event_id: str
) -> Tuple[int, JsonDict]:
requester = await self._auth.get_user_by_req(request)
await assert_user_is_admin(self._auth, requester)
event = await self._store.get_event(
event_id,
EventRedactBehaviour.as_is,
allow_none=True,
)
if event is None:
raise NotFoundError("Event not found")
config = SerializeEventConfig(
as_client_event=False,
event_format=format_event_raw,
requester=requester,
only_event_fields=None,
include_stripped_room_state=True,
include_admin_metadata=True,
)
res = {"event": serialize_event(event, self._clock.time_msec(), config=config)}
return HTTPStatus.OK, res
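For illustration, calling the endpoint this (removed) servlet registers might look like the sketch below; the base URL, admin token, and event ID are placeholders, and real event IDs must be URL-encoded.

import requests

resp = requests.get(
    "https://synapse.example.com/_synapse/admin/v1/fetch_event/<event_id>",
    headers={"Authorization": "Bearer <admin_access_token>"},
)
resp.raise_for_status()
# The response wraps the raw, unredacted event under an "event" key.
print(resp.json()["event"])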

View File

@@ -112,7 +112,7 @@ class DeleteDevicesRestServlet(RestServlet):
else:
raise e
if requester.app_service:
if requester.app_service and requester.app_service.msc4190_device_management:
# MSC4190 can skip UIA for this endpoint
pass
else:
@@ -192,7 +192,7 @@ class DeviceRestServlet(RestServlet):
else:
raise
if requester.app_service:
if requester.app_service and requester.app_service.msc4190_device_management:
# MSC4190 allows appservices to delete devices through this endpoint without UIA
# It's also allowed with MSC3861 enabled
pass
@@ -227,7 +227,7 @@ class DeviceRestServlet(RestServlet):
body = parse_and_validate_json_object_from_request(request, self.PutBody)
# MSC4190 allows appservices to create devices through this endpoint
if requester.app_service:
if requester.app_service and requester.app_service.msc4190_device_management:
created = await self.device_handler.upsert_device(
user_id=requester.user.to_string(),
device_id=device_id,

View File

@@ -543,11 +543,15 @@ class SigningKeyUploadServlet(RestServlet):
if not keys_are_different:
return 200, {}
# MSC4190 can skip UIA for replacing cross-signing keys as well.
is_appservice_with_msc4190 = (
requester.app_service and requester.app_service.msc4190_device_management
)
# The keys are different; is x-signing set up? If no, then this is first-time
# setup, and that is allowed without UIA, per MSC3967.
# If yes, then we need to authenticate the change.
# MSC4190 can skip UIA for replacing cross-signing keys as well.
if is_cross_signing_setup and not requester.app_service:
if is_cross_signing_setup and not is_appservice_with_msc4190:
# With MSC3861, UIA is not possible. Instead, the auth service has to
# explicitly mark the master key as replaceable.
if self.hs.config.mas.enabled:

View File

@@ -1,52 +0,0 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2025 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# [This file includes modifications made by New Vector Limited]
#
#
from typing import TYPE_CHECKING, Tuple
from synapse.http.server import HttpServer
from synapse.http.servlet import RestServlet
from synapse.http.site import SynapseRequest
from synapse.rest.client._base import client_patterns
from synapse.types import JsonDict
if TYPE_CHECKING:
from synapse.server import HomeServer
class MatrixRTCRestServlet(RestServlet):
PATTERNS = client_patterns(r"/org\.matrix\.msc4143/rtc/transports$", releases=())
CATEGORY = "Client API requests"
def __init__(self, hs: "HomeServer"):
super().__init__()
self._hs = hs
self._auth = hs.get_auth()
self._transports = hs.config.matrix_rtc.transports
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
# Require authentication for this endpoint.
await self._auth.get_user_by_req(request)
if self._transports:
return 200, {"rtc_transports": self._transports}
return 200, {}
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
if hs.config.experimental.msc4143_enabled:
MatrixRTCRestServlet(hs).register(http_server)
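For reference, a sketch of the response shape the servlet above produces when one livekit transport is configured, assuming the default unstable client prefix; the URL is a placeholder.

# GET /_matrix/client/unstable/org.matrix.msc4143/rtc/transports
# -> 200 OK with a body shaped like:
{
    "rtc_transports": [
        {
            "type": "livekit",
            "livekit_service_url": "https://matrix-rtc.example.com/livekit/jwt",
        }
    ]
}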

View File

@@ -62,7 +62,6 @@ from synapse.api.auth_blocking import AuthBlocking
from synapse.api.filtering import Filtering
from synapse.api.ratelimiting import Ratelimiter, RequestRatelimiter
from synapse.app._base import unregister_sighups
from synapse.app.phone_stats_home import start_phone_stats_home
from synapse.appservice.api import ApplicationServiceApi
from synapse.appservice.scheduler import ApplicationServiceScheduler
from synapse.config.homeserver import HomeServerConfig
@@ -176,7 +175,6 @@ from synapse.storage.controllers import StorageControllers
from synapse.streams.events import EventSources
from synapse.synapse_rust.rendezvous import RendezvousHandler
from synapse.types import DomainSpecificString, ISynapseReactor
from synapse.util import SYNAPSE_VERSION
from synapse.util.caches import CACHE_METRIC_REGISTRY
from synapse.util.clock import Clock
from synapse.util.distributor import Distributor
@@ -324,6 +322,7 @@ class HomeServer(metaclass=abc.ABCMeta):
hostname: str,
config: HomeServerConfig,
reactor: Optional[ISynapseReactor] = None,
version_string: str = "Synapse",
):
"""
Args:
@@ -348,7 +347,7 @@ class HomeServer(metaclass=abc.ABCMeta):
self._instance_id = random_string(5)
self._instance_name = config.worker.instance_name
self.version_string = f"Synapse/{SYNAPSE_VERSION}"
self.version_string = version_string
self.datastores: Optional[Databases] = None
@@ -614,6 +613,12 @@ class HomeServer(metaclass=abc.ABCMeta):
self.datastores = Databases(self.DATASTORE_CLASS, self)
logger.info("Finished setting up.")
# Register background tasks required by this server. This must be done
# somewhat manually due to the background tasks not being registered
# unless handlers are instantiated.
if self.config.worker.run_background_tasks:
self.start_background_tasks()
# def __del__(self) -> None:
# """
# Called when the homeserver is garbage collected.
@@ -644,8 +649,6 @@ class HomeServer(metaclass=abc.ABCMeta):
for i in self.REQUIRED_ON_BACKGROUND_TASK_STARTUP:
getattr(self, "get_" + i + "_handler")()
self.get_task_scheduler()
self.get_common_usage_metrics_manager().setup()
start_phone_stats_home(self)
def get_reactor(self) -> ISynapseReactor:
"""

View File

@@ -323,13 +323,9 @@ class DynamicCollectorRegistry(CollectorRegistry):
if server_hooks.get(metric_name) is not None:
# TODO: This should be an `assert` since registering the same metric name
# multiple times will clobber the old metric.
#
# We currently rely on this behaviour in a few places:
# - We instantiate multiple `SyncRestServlet`, one per listener, and in the
# `__init__` we setup a new `LruCache`.
# - We instantiate multiple `ApplicationService` (one per configured
# application service) which use the `@cached` decorator on some methods.
#
# We currently rely on this behaviour as we instantiate multiple
# `SyncRestServlet`, one per listener, and in the `__init__` we setup a new
# LruCache.
# Once the above behaviour is changed, this should be changed to an `assert`.
logger.error(
"Metric named %s already registered for server %s",

View File

@@ -42,6 +42,7 @@ from synapse.types import Requester, UserID
from synapse.util.clock import Clock
from tests import unittest
from tests.unittest import override_config
from tests.utils import mock_getRawHeaders
@@ -236,6 +237,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
request.requestHeaders.getRawHeaders = mock_getRawHeaders()
self.get_failure(self.auth.get_user_by_req(request), AuthError)
@override_config({"experimental_features": {"msc3202_device_masquerading": True}})
def test_get_user_by_req_appservice_valid_token_valid_device_id(self) -> None:
"""
Tests that when an application service passes the device_id URL parameter
@@ -262,7 +264,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
request.getClientAddress.return_value.host = "127.0.0.1"
request.args[b"access_token"] = [self.test_token]
request.args[b"user_id"] = [masquerading_user_id]
request.args[b"device_id"] = [masquerading_device_id]
request.args[b"org.matrix.msc3202.device_id"] = [masquerading_device_id]
request.requestHeaders.getRawHeaders = mock_getRawHeaders()
requester = self.get_success(self.auth.get_user_by_req(request))
self.assertEqual(
@@ -270,6 +272,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
)
self.assertEqual(requester.device_id, masquerading_device_id.decode("utf8"))
@override_config({"experimental_features": {"msc3202_device_masquerading": True}})
def test_get_user_by_req_appservice_valid_token_invalid_device_id(self) -> None:
"""
Tests that when an application service passes the device_id URL parameter
@@ -296,7 +299,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
request.getClientAddress.return_value.host = "127.0.0.1"
request.args[b"access_token"] = [self.test_token]
request.args[b"user_id"] = [masquerading_user_id]
request.args[b"device_id"] = [masquerading_device_id]
request.args[b"org.matrix.msc3202.device_id"] = [masquerading_device_id]
request.requestHeaders.getRawHeaders = mock_getRawHeaders()
failure = self.get_failure(self.auth.get_user_by_req(request), AuthError)

View File

@@ -37,13 +37,7 @@ class HomeserverAppStartTestCase(ConfigFileTestCase):
self.add_lines_to_config([" main:", " host: 127.0.0.1", " port: 1234"])
# Ensure that starting master process with worker config raises an exception
with self.assertRaises(ConfigError):
# Do a normal homeserver creation and setup
homeserver_config = synapse.app.homeserver.load_or_generate_config(
["-c", self.config_file]
)
# XXX: The error will be raised at this point
hs = synapse.app.homeserver.create_homeserver(homeserver_config)
# Continue with the setup. We don't expect this to run because we raised
# earlier, but in the future, the code could be refactored to raise the
# error in a different place.
synapse.app.homeserver.setup(hs)
synapse.app.homeserver.setup(homeserver_config)

View File

@@ -99,14 +99,7 @@ class ConfigLoadingFileTestCase(ConfigFileTestCase):
def test_disable_registration(self) -> None:
self.generate_config()
self.add_lines_to_config(
[
"enable_registration: true",
"disable_registration: true",
# We're not worried about open registration in this test. This test is
# focused on making sure that enable/disable_registration properly
# override each other.
"enable_registration_without_verification: true",
]
["enable_registration: true", "disable_registration: true"]
)
# Check that disable_registration clobbers enable_registration.
config = HomeServerConfig.load_config("", ["-c", self.config_file])

View File

@@ -19,8 +19,6 @@
#
#
import argparse
import synapse.app.homeserver
from synapse.config import ConfigError
from synapse.config.homeserver import HomeServerConfig
@@ -101,10 +99,6 @@ class RegistrationConfigTestCase(ConfigFileTestCase):
)
def test_refuse_to_start_if_open_registration_and_no_verification(self) -> None:
"""
Test that our utilities to start the main Synapse homeserver process refuse
to start if we detect open registration.
"""
self.generate_config()
self.add_lines_to_config(
[
@@ -117,87 +111,8 @@ class RegistrationConfigTestCase(ConfigFileTestCase):
)
# Test that allowing open registration without verification raises an error
with self.assertRaises(SystemExit):
# Do a normal homeserver creation and setup
with self.assertRaises(ConfigError):
homeserver_config = synapse.app.homeserver.load_or_generate_config(
["-c", self.config_file]
)
# XXX: The error will be raised at this point
hs = synapse.app.homeserver.create_homeserver(homeserver_config)
# Continue with the setup. We don't expect this to run because we raised
# earlier, but in the future, the code could be refactored to raise the
# error in a different place.
synapse.app.homeserver.setup(hs)
def test_load_config_error_if_open_registration_and_no_verification(self) -> None:
"""
Test that `HomeServerConfig.load_config(...)` raises an exception when we detect open
registration.
"""
self.generate_config()
self.add_lines_to_config(
[
" ",
"enable_registration: true",
"registrations_require_3pid: []",
"enable_registration_captcha: false",
"registration_requires_token: false",
]
)
# Test that allowing open registration without verification raises an error
with self.assertRaises(ConfigError):
_homeserver_config = HomeServerConfig.load_config(
description="test", argv_options=["-c", self.config_file]
)
def test_load_or_generate_config_error_if_open_registration_and_no_verification(
self,
) -> None:
"""
Test that `HomeServerConfig.load_or_generate_config(...)` raises an exception when we detect open
registration.
"""
self.generate_config()
self.add_lines_to_config(
[
" ",
"enable_registration: true",
"registrations_require_3pid: []",
"enable_registration_captcha: false",
"registration_requires_token: false",
]
)
# Test that allowing open registration without verification raises an error
with self.assertRaises(ConfigError):
_homeserver_config = HomeServerConfig.load_or_generate_config(
description="test", argv_options=["-c", self.config_file]
)
def test_load_config_with_parser_error_if_open_registration_and_no_verification(
self,
) -> None:
"""
Test that `HomeServerConfig.load_config_with_parser(...)` raises an exception when we detect open
registration.
"""
self.generate_config()
self.add_lines_to_config(
[
" ",
"enable_registration: true",
"registrations_require_3pid: []",
"enable_registration_captcha: false",
"registration_requires_token: false",
]
)
# Test that allowing open registration without verification raises an error
with self.assertRaises(ConfigError):
config_parser = argparse.ArgumentParser(description="test")
HomeServerConfig.add_arguments_to_parser(config_parser)
_homeserver_config = HomeServerConfig.load_config_with_parser(
parser=config_parser, argv_options=["-c", self.config_file]
)
synapse.app.homeserver.setup(homeserver_config)

View File

@@ -214,12 +214,7 @@ class BaseStreamTestCase(unittest.HomeserverTestCase):
client_to_server_transport.loseConnection()
# there should have been exactly one request
self.assertEqual(
len(requests),
1,
"Expected to handle exactly one HTTP replication request but saw %d - requests=%s"
% (len(requests), requests),
)
self.assertEqual(len(requests), 1)
return requests[0]

View File

@@ -46,39 +46,28 @@ class AccountDataStreamTestCase(BaseStreamTestCase):
# check we're testing what we think we are: no rows should yet have been
# received
received_account_data_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == AccountDataStream.NAME
]
self.assertEqual([], received_account_data_rows)
self.assertEqual([], self.test_handler.received_rdata_rows)
# now reconnect to pull the updates
self.reconnect()
self.replicate()
# We should have received all the expected rows in the right order
#
# Filter the updates to only include account data changes
received_account_data_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == AccountDataStream.NAME
]
# we should have received all the expected rows in the right order
received_rows = self.test_handler.received_rdata_rows
for t in updates:
(stream_name, token, row) = received_account_data_rows.pop(0)
(stream_name, token, row) = received_rows.pop(0)
self.assertEqual(stream_name, AccountDataStream.NAME)
self.assertIsInstance(row, AccountDataStream.AccountDataStreamRow)
self.assertEqual(row.data_type, t)
self.assertEqual(row.room_id, "test_room")
(stream_name, token, row) = received_account_data_rows.pop(0)
(stream_name, token, row) = received_rows.pop(0)
self.assertIsInstance(row, AccountDataStream.AccountDataStreamRow)
self.assertEqual(row.data_type, "m.global")
self.assertIsNone(row.room_id)
self.assertEqual([], received_account_data_rows)
self.assertEqual([], received_rows)
def test_update_function_global_account_data_limit(self) -> None:
"""Test replication with many global account data updates"""
@@ -96,38 +85,32 @@ class AccountDataStreamTestCase(BaseStreamTestCase):
store.add_account_data_to_room("test_user", "test_room", "m.per_room", {})
)
# tell the notifier to catch up to avoid duplicate rows.
# workaround for https://github.com/matrix-org/synapse/issues/7360
# FIXME remove this when the above is fixed
self.replicate()
# check we're testing what we think we are: no rows should yet have been
# received
received_account_data_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == AccountDataStream.NAME
]
self.assertEqual([], received_account_data_rows)
self.assertEqual([], self.test_handler.received_rdata_rows)
# now reconnect to pull the updates
self.reconnect()
self.replicate()
# we should have received all the expected rows in the right order
#
# Filter the updates to only include account data changes
received_account_data_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == AccountDataStream.NAME
]
received_rows = self.test_handler.received_rdata_rows
for t in updates:
(stream_name, token, row) = received_account_data_rows.pop(0)
(stream_name, token, row) = received_rows.pop(0)
self.assertEqual(stream_name, AccountDataStream.NAME)
self.assertIsInstance(row, AccountDataStream.AccountDataStreamRow)
self.assertEqual(row.data_type, t)
self.assertIsNone(row.room_id)
(stream_name, token, row) = received_account_data_rows.pop(0)
(stream_name, token, row) = received_rows.pop(0)
self.assertIsInstance(row, AccountDataStream.AccountDataStreamRow)
self.assertEqual(row.data_type, "m.per_room")
self.assertEqual(row.room_id, "test_room")
self.assertEqual([], received_account_data_rows)
self.assertEqual([], received_rows)

View File

@@ -30,7 +30,6 @@ from synapse.replication.tcp.commands import RdataCommand
from synapse.replication.tcp.streams._base import _STREAM_UPDATE_TARGET_ROW_COUNT
from synapse.replication.tcp.streams.events import (
_MAX_STATE_UPDATES_PER_ROOM,
EventsStream,
EventsStreamAllStateRow,
EventsStreamCurrentStateRow,
EventsStreamEventRow,
@@ -83,12 +82,7 @@ class EventsStreamTestCase(BaseStreamTestCase):
# check we're testing what we think we are: no rows should yet have been
# received
received_event_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == EventsStream.NAME
]
self.assertEqual([], received_event_rows)
self.assertEqual([], self.test_handler.received_rdata_rows)
# now reconnect to pull the updates
self.reconnect()
@@ -96,34 +90,31 @@ class EventsStreamTestCase(BaseStreamTestCase):
# we should have received all the expected rows in the right order (as
# well as various cache invalidation updates which we ignore)
#
# Filter the updates to only include event changes
received_event_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == EventsStream.NAME
received_rows = [
row for row in self.test_handler.received_rdata_rows if row[0] == "events"
]
for event in events:
stream_name, token, row = received_event_rows.pop(0)
self.assertEqual(EventsStream.NAME, stream_name)
stream_name, token, row = received_rows.pop(0)
self.assertEqual("events", stream_name)
self.assertIsInstance(row, EventsStreamRow)
self.assertEqual(row.type, "ev")
self.assertIsInstance(row.data, EventsStreamEventRow)
self.assertEqual(row.data.event_id, event.event_id)
stream_name, token, row = received_event_rows.pop(0)
stream_name, token, row = received_rows.pop(0)
self.assertIsInstance(row, EventsStreamRow)
self.assertIsInstance(row.data, EventsStreamEventRow)
self.assertEqual(row.data.event_id, state_event.event_id)
stream_name, token, row = received_event_rows.pop(0)
stream_name, token, row = received_rows.pop(0)
self.assertEqual("events", stream_name)
self.assertIsInstance(row, EventsStreamRow)
self.assertEqual(row.type, "state")
self.assertIsInstance(row.data, EventsStreamCurrentStateRow)
self.assertEqual(row.data.event_id, state_event.event_id)
self.assertEqual([], received_event_rows)
self.assertEqual([], received_rows)
@parameterized.expand(
[(_STREAM_UPDATE_TARGET_ROW_COUNT, False), (_MAX_STATE_UPDATES_PER_ROOM, True)]
@@ -179,12 +170,9 @@ class EventsStreamTestCase(BaseStreamTestCase):
self.replicate()
# all those events and state changes should have landed
received_event_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == EventsStream.NAME
]
self.assertGreaterEqual(len(received_event_rows), 2 * len(events))
self.assertGreaterEqual(
len(self.test_handler.received_rdata_rows), 2 * len(events)
)
# disconnect, so that we can stack up the changes
self.disconnect()
@@ -214,12 +202,7 @@ class EventsStreamTestCase(BaseStreamTestCase):
# check we're testing what we think we are: no rows should yet have been
# received
received_event_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == EventsStream.NAME
]
self.assertEqual([], received_event_rows)
self.assertEqual([], self.test_handler.received_rdata_rows)
# now reconnect to pull the updates
self.reconnect()
@@ -235,34 +218,33 @@ class EventsStreamTestCase(BaseStreamTestCase):
# of the states that got reverted.
# - two rows for state2
received_event_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == EventsStream.NAME
received_rows = [
row for row in self.test_handler.received_rdata_rows if row[0] == "events"
]
# first check the first two rows, which should be the state1 event.
stream_name, token, row = received_event_rows.pop(0)
stream_name, token, row = received_rows.pop(0)
self.assertEqual("events", stream_name)
self.assertIsInstance(row, EventsStreamRow)
self.assertEqual(row.type, "ev")
self.assertIsInstance(row.data, EventsStreamEventRow)
self.assertEqual(row.data.event_id, state1.event_id)
stream_name, token, row = received_event_rows.pop(0)
stream_name, token, row = received_rows.pop(0)
self.assertIsInstance(row, EventsStreamRow)
self.assertEqual(row.type, "state")
self.assertIsInstance(row.data, EventsStreamCurrentStateRow)
self.assertEqual(row.data.event_id, state1.event_id)
# now the last two rows, which should be the state2 event.
stream_name, token, row = received_event_rows.pop(-2)
stream_name, token, row = received_rows.pop(-2)
self.assertEqual("events", stream_name)
self.assertIsInstance(row, EventsStreamRow)
self.assertEqual(row.type, "ev")
self.assertIsInstance(row.data, EventsStreamEventRow)
self.assertEqual(row.data.event_id, state2.event_id)
stream_name, token, row = received_event_rows.pop(-1)
stream_name, token, row = received_rows.pop(-1)
self.assertIsInstance(row, EventsStreamRow)
self.assertEqual(row.type, "state")
self.assertIsInstance(row.data, EventsStreamCurrentStateRow)
@@ -272,16 +254,16 @@ class EventsStreamTestCase(BaseStreamTestCase):
if collapse_state_changes:
# that should leave us with the rows for the PL event, the state changes
# get collapsed into a single row.
self.assertEqual(len(received_event_rows), 2)
self.assertEqual(len(received_rows), 2)
stream_name, token, row = received_event_rows.pop(0)
stream_name, token, row = received_rows.pop(0)
self.assertEqual("events", stream_name)
self.assertIsInstance(row, EventsStreamRow)
self.assertEqual(row.type, "ev")
self.assertIsInstance(row.data, EventsStreamEventRow)
self.assertEqual(row.data.event_id, pl_event.event_id)
stream_name, token, row = received_event_rows.pop(0)
stream_name, token, row = received_rows.pop(0)
self.assertIsInstance(row, EventsStreamRow)
self.assertEqual(row.type, "state-all")
self.assertIsInstance(row.data, EventsStreamAllStateRow)
@@ -289,9 +271,9 @@ class EventsStreamTestCase(BaseStreamTestCase):
else:
# that should leave us with the rows for the PL event
self.assertEqual(len(received_event_rows), len(events) + 2)
self.assertEqual(len(received_rows), len(events) + 2)
stream_name, token, row = received_event_rows.pop(0)
stream_name, token, row = received_rows.pop(0)
self.assertEqual("events", stream_name)
self.assertIsInstance(row, EventsStreamRow)
self.assertEqual(row.type, "ev")
@@ -300,7 +282,7 @@ class EventsStreamTestCase(BaseStreamTestCase):
# the state rows are unsorted
state_rows: List[EventsStreamCurrentStateRow] = []
for stream_name, _, row in received_event_rows:
for stream_name, _, row in received_rows:
self.assertEqual("events", stream_name)
self.assertIsInstance(row, EventsStreamRow)
self.assertEqual(row.type, "state")
@@ -364,12 +346,9 @@ class EventsStreamTestCase(BaseStreamTestCase):
self.replicate()
# all those events and state changes should have landed
received_event_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == EventsStream.NAME
]
self.assertGreaterEqual(len(received_event_rows), 2 * len(events))
self.assertGreaterEqual(
len(self.test_handler.received_rdata_rows), 2 * len(events)
)
# disconnect, so that we can stack up the changes
self.disconnect()
@@ -396,12 +375,7 @@ class EventsStreamTestCase(BaseStreamTestCase):
# check we're testing what we think we are: no rows should yet have been
# received
received_event_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == EventsStream.NAME
]
self.assertEqual([], received_event_rows)
self.assertEqual([], self.test_handler.received_rdata_rows)
# now reconnect to pull the updates
self.reconnect()
@@ -409,16 +383,14 @@ class EventsStreamTestCase(BaseStreamTestCase):
# we should have received all the expected rows in the right order (as
# well as various cache invalidation updates which we ignore)
received_event_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == EventsStream.NAME
received_rows = [
row for row in self.test_handler.received_rdata_rows if row[0] == "events"
]
self.assertGreaterEqual(len(received_event_rows), len(events))
self.assertGreaterEqual(len(received_rows), len(events))
for i in range(NUM_USERS):
# for each user, we expect the PL event row, followed by state rows for
# the PL event and each of the states that got reverted.
stream_name, token, row = received_event_rows.pop(0)
stream_name, token, row = received_rows.pop(0)
self.assertEqual("events", stream_name)
self.assertIsInstance(row, EventsStreamRow)
self.assertEqual(row.type, "ev")
@@ -428,7 +400,7 @@ class EventsStreamTestCase(BaseStreamTestCase):
# the state rows are unsorted
state_rows: List[EventsStreamCurrentStateRow] = []
for _ in range(STATES_PER_USER + 1):
stream_name, token, row = received_event_rows.pop(0)
stream_name, token, row = received_rows.pop(0)
self.assertEqual("events", stream_name)
self.assertIsInstance(row, EventsStreamRow)
self.assertEqual(row.type, "state")
@@ -445,7 +417,7 @@ class EventsStreamTestCase(BaseStreamTestCase):
# "None" indicates the state has been deleted
self.assertIsNone(sr.event_id)
self.assertEqual([], received_event_rows)
self.assertEqual([], received_rows)
def test_backwards_stream_id(self) -> None:
"""
@@ -460,12 +432,7 @@ class EventsStreamTestCase(BaseStreamTestCase):
# check we're testing what we think we are: no rows should yet have been
# received
received_event_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == EventsStream.NAME
]
self.assertEqual([], received_event_rows)
self.assertEqual([], self.test_handler.received_rdata_rows)
# now reconnect to pull the updates
self.reconnect()
@@ -473,16 +440,14 @@ class EventsStreamTestCase(BaseStreamTestCase):
# We should have received the expected single row (as well as various
# cache invalidation updates which we ignore).
received_event_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == EventsStream.NAME
received_rows = [
row for row in self.test_handler.received_rdata_rows if row[0] == "events"
]
# There should be a single received row.
self.assertEqual(len(received_event_rows), 1)
self.assertEqual(len(received_rows), 1)
stream_name, token, row = received_event_rows[0]
stream_name, token, row = received_rows[0]
self.assertEqual("events", stream_name)
self.assertIsInstance(row, EventsStreamRow)
self.assertEqual(row.type, "ev")
@@ -503,12 +468,10 @@ class EventsStreamTestCase(BaseStreamTestCase):
)
# No updates have been received (because it was discarded as old).
received_event_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == EventsStream.NAME
received_rows = [
row for row in self.test_handler.received_rdata_rows if row[0] == "events"
]
self.assertEqual(len(received_event_rows), 0)
self.assertEqual(len(received_rows), 0)
# Ensure the stream has not gone backwards.
current_token = worker_events_stream.current_token("master")

View File

@@ -38,45 +38,24 @@ class FederationStreamTestCase(BaseStreamTestCase):
Makes sure that updates sent while we are offline are received later.
"""
fed_sender = self.hs.get_federation_sender()
received_rows = self.test_handler.received_rdata_rows
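# NB: this aliases the handler's live list rather than copying it, so rows
# received after this point will appear in `received_rows` too.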
# Send an update before we connect
fed_sender.build_and_send_edu("testdest", "m.test_edu", {"a": "b"})
# Now reconnect and pull the updates
self.reconnect()
# FIXME: This seems odd: why aren't we calling `self.replicate()` here? But
# doing so causes other assumptions to fail (multiple HTTP replication
# attempts are made).
self.reactor.advance(0)
# Check we're testing what we think we are: no rows should yet have been
# check we're testing what we think we are: no rows should yet have been
# received
#
# Filter the updates to only include federation changes
received_federation_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == FederationStream.NAME
]
self.assertEqual(received_federation_rows, [])
self.assertEqual(received_rows, [])
# We should now see an attempt to connect to the master
request = self.handle_http_replication_attempt()
self.assert_request_is_get_repl_stream_updates(request, FederationStream.NAME)
self.assert_request_is_get_repl_stream_updates(request, "federation")
# we should have received an update row
received_federation_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == FederationStream.NAME
]
self.assertEqual(
len(received_federation_rows),
1,
"Expected exactly one row for the federation stream",
)
(stream_name, token, row) = received_federation_rows[0]
self.assertEqual(stream_name, FederationStream.NAME)
stream_name, token, row = received_rows.pop()
self.assertEqual(stream_name, "federation")
self.assertIsInstance(row, FederationStream.FederationStreamRow)
self.assertEqual(row.type, EduRow.TypeId)
edurow = EduRow.from_data(row.data)
@@ -84,30 +63,19 @@ class FederationStreamTestCase(BaseStreamTestCase):
self.assertEqual(edurow.edu.origin, self.hs.hostname)
self.assertEqual(edurow.edu.destination, "testdest")
self.assertEqual(edurow.edu.content, {"a": "b"})
# Clear out the received rows that we've checked so we can check for new ones later
self.test_handler.received_rdata_rows.clear()
self.assertEqual(received_rows, [])
# additional updates should be transferred without an HTTP hit
fed_sender.build_and_send_edu("testdest", "m.test1", {"c": "d"})
# Pull in the updates
self.replicate()
self.reactor.advance(0)
# there should be no http hit
self.assertEqual(len(self.reactor.tcpClients), 0)
# ... but we should have a row
received_federation_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == FederationStream.NAME
]
self.assertEqual(
len(received_federation_rows),
1,
"Expected exactly one row for the federation stream",
)
(stream_name, token, row) = received_federation_rows[0]
self.assertEqual(stream_name, FederationStream.NAME)
self.assertEqual(len(received_rows), 1)
stream_name, token, row = received_rows.pop()
self.assertEqual(stream_name, "federation")
self.assertIsInstance(row, FederationStream.FederationStreamRow)
self.assertEqual(row.type, EduRow.TypeId)
edurow = EduRow.from_data(row.data)

View File

@@ -20,6 +20,7 @@
# type: ignore
from unittest.mock import Mock
from synapse.replication.tcp.streams._base import ReceiptsStream
@@ -29,6 +30,9 @@ USER_ID = "@feeling:blue"
class ReceiptsStreamTestCase(BaseStreamTestCase):
def _build_replication_data_handler(self):
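# Wrap the real handler in a Mock so the test can spy on calls such as
# on_rdata while the wrapped handler still does the real work.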
return Mock(wraps=super()._build_replication_data_handler())
def test_receipt(self):
self.reconnect()
@@ -46,30 +50,23 @@ class ReceiptsStreamTestCase(BaseStreamTestCase):
self.replicate()
# there should be one RDATA command
received_receipt_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == ReceiptsStream.NAME
]
self.assertEqual(
len(received_receipt_rows),
1,
"Expected exactly one row for the receipts stream",
)
(stream_name, token, row) = received_receipt_rows[0]
self.assertEqual(stream_name, ReceiptsStream.NAME)
self.test_handler.on_rdata.assert_called_once()
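# on_rdata is invoked as (stream_name, instance_name, token, rows); the
# instance name is irrelevant here, hence the `_`.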
stream_name, _, token, rdata_rows = self.test_handler.on_rdata.call_args[0]
self.assertEqual(stream_name, "receipts")
self.assertEqual(1, len(rdata_rows))
row: ReceiptsStream.ReceiptsStreamRow = rdata_rows[0]
self.assertEqual("!room:blue", row.room_id)
self.assertEqual("m.read", row.receipt_type)
self.assertEqual(USER_ID, row.user_id)
self.assertEqual("$event:blue", row.event_id)
self.assertIsNone(row.thread_id)
self.assertEqual({"a": 1}, row.data)
# Clear out the received rows that we've checked so we can check for new ones later
self.test_handler.received_rdata_rows.clear()
# Now let's disconnect and insert some data.
self.disconnect()
self.test_handler.on_rdata.reset_mock()
self.get_success(
self.hs.get_datastores().main.insert_receipt(
"!room2:blue",
@@ -82,27 +79,20 @@ class ReceiptsStreamTestCase(BaseStreamTestCase):
)
self.replicate()
# Not yet connected: no rows should yet have been received
self.assertEqual([], self.test_handler.received_rdata_rows)
# Nothing should have happened as we are disconnected
self.test_handler.on_rdata.assert_not_called()
# Now reconnect and pull the updates
self.reconnect()
self.replicate()
self.pump(0.1)
# We should now have caught up and get the missing data
received_receipt_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == ReceiptsStream.NAME
]
self.assertEqual(
len(received_receipt_rows),
1,
"Expected exactly one row for the receipts stream",
)
(stream_name, token, row) = received_receipt_rows[0]
self.assertEqual(stream_name, ReceiptsStream.NAME)
self.test_handler.on_rdata.assert_called_once()
stream_name, _, token, rdata_rows = self.test_handler.on_rdata.call_args[0]
self.assertEqual(stream_name, "receipts")
self.assertEqual(token, 3)
self.assertEqual(1, len(rdata_rows))
row: ReceiptsStream.ReceiptsStreamRow = rdata_rows[0]
self.assertEqual("!room2:blue", row.room_id)
self.assertEqual("m.read", row.receipt_type)
self.assertEqual(USER_ID, row.user_id)

View File

@@ -88,15 +88,15 @@ class ThreadSubscriptionsStreamTestCase(BaseStreamTestCase):
# We should have received all the expected rows in the right order
# Filter the updates to only include thread subscription changes
received_thread_subscription_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == ThreadSubscriptionsStream.NAME
received_rows = [
upd
for upd in self.test_handler.received_rdata_rows
if upd[0] == ThreadSubscriptionsStream.NAME
]
# Verify all the thread subscription updates
for thread_id in updates:
(stream_name, token, row) = received_thread_subscription_rows.pop(0)
(stream_name, token, row) = received_rows.pop(0)
self.assertEqual(stream_name, ThreadSubscriptionsStream.NAME)
self.assertIsInstance(row, ThreadSubscriptionsStream.ROW_TYPE)
self.assertEqual(row.user_id, "@test_user:example.org")
@@ -104,14 +104,14 @@ class ThreadSubscriptionsStreamTestCase(BaseStreamTestCase):
self.assertEqual(row.event_id, thread_id)
# Verify the last update in the different room
(stream_name, token, row) = received_thread_subscription_rows.pop(0)
(stream_name, token, row) = received_rows.pop(0)
self.assertEqual(stream_name, ThreadSubscriptionsStream.NAME)
self.assertIsInstance(row, ThreadSubscriptionsStream.ROW_TYPE)
self.assertEqual(row.user_id, "@test_user:example.org")
self.assertEqual(row.room_id, other_room_id)
self.assertEqual(row.event_id, other_thread_root_id)
self.assertEqual([], received_thread_subscription_rows)
self.assertEqual([], received_rows)
def test_multiple_users_thread_subscription_updates(self) -> None:
"""Test replication with thread subscription updates for multiple users"""
@@ -138,18 +138,18 @@ class ThreadSubscriptionsStreamTestCase(BaseStreamTestCase):
# We should have received all the expected rows
# Filter the updates to only include thread subscription changes
received_thread_subscription_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == ThreadSubscriptionsStream.NAME
received_rows = [
upd
for upd in self.test_handler.received_rdata_rows
if upd[0] == ThreadSubscriptionsStream.NAME
]
# Should have one update per user
self.assertEqual(len(received_thread_subscription_rows), len(users))
self.assertEqual(len(received_rows), len(users))
# Verify all updates
for i, user_id in enumerate(users):
(stream_name, token, row) = received_thread_subscription_rows[i]
(stream_name, token, row) = received_rows[i]
self.assertEqual(stream_name, ThreadSubscriptionsStream.NAME)
self.assertIsInstance(row, ThreadSubscriptionsStream.ROW_TYPE)
self.assertEqual(row.user_id, user_id)

View File

@@ -21,10 +21,7 @@
import logging
import synapse
from synapse.replication.tcp.streams._base import (
_STREAM_UPDATE_TARGET_ROW_COUNT,
ToDeviceStream,
)
from synapse.replication.tcp.streams._base import _STREAM_UPDATE_TARGET_ROW_COUNT
from synapse.types import JsonDict
from tests.replication._base import BaseStreamTestCase
@@ -85,12 +82,7 @@ class ToDeviceStreamTestCase(BaseStreamTestCase):
)
# replication is disconnected so we shouldn't get any updates yet
received_to_device_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == ToDeviceStream.NAME
]
self.assertEqual([], received_to_device_rows)
self.assertEqual([], self.test_handler.received_rdata_rows)
# now reconnect to pull the updates
self.reconnect()
@@ -98,15 +90,7 @@ class ToDeviceStreamTestCase(BaseStreamTestCase):
# we should receive the fact that we have to_device updates
# for user1 and user2
received_to_device_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == ToDeviceStream.NAME
]
self.assertEqual(
len(received_to_device_rows),
2,
"Expected two rows in the to_device stream",
)
self.assertEqual(received_to_device_rows[0][2].entity, user1)
self.assertEqual(received_to_device_rows[1][2].entity, user2)
received_rows = self.test_handler.received_rdata_rows
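# each entry of received_rdata_rows is a (stream_name, token, row) tuple,
# so index 2 is the row itself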
self.assertEqual(len(received_rows), 2)
self.assertEqual(received_rows[0][2].entity, user1)
self.assertEqual(received_rows[1][2].entity, user2)

View File

@@ -19,6 +19,7 @@
#
#
import logging
from unittest.mock import Mock
from synapse.handlers.typing import RoomMember, TypingWriterHandler
from synapse.replication.tcp.streams import TypingStream
@@ -26,8 +27,6 @@ from synapse.util.caches.stream_change_cache import StreamChangeCache
from tests.replication._base import BaseStreamTestCase
logger = logging.getLogger(__name__)
USER_ID = "@feeling:blue"
USER_ID_2 = "@da-ba-dee:blue"
@@ -36,6 +35,10 @@ ROOM_ID_2 = "!foo:blue"
class TypingStreamTestCase(BaseStreamTestCase):
def _build_replication_data_handler(self) -> Mock:
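# Keep a reference to the wrapping Mock so tests can assert on on_rdata
# calls while real replication processing still happens underneath.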
self.mock_handler = Mock(wraps=super()._build_replication_data_handler())
return self.mock_handler
def test_typing(self) -> None:
typing = self.hs.get_typing_handler()
assert isinstance(typing, TypingWriterHandler)
@@ -44,74 +47,51 @@ class TypingStreamTestCase(BaseStreamTestCase):
# update to fetch.
typing._push_update(member=RoomMember(ROOM_ID, USER_ID), typing=True)
# Not yet connected: no rows should yet have been received
self.assertEqual([], self.test_handler.received_rdata_rows)
# Reconnect
self.reconnect()
typing._push_update(member=RoomMember(ROOM_ID, USER_ID), typing=True)
# Pull in the updates
self.replicate()
self.reactor.advance(0)
# We should now see an attempt to connect to the master
request = self.handle_http_replication_attempt()
self.assert_request_is_get_repl_stream_updates(request, TypingStream.NAME)
self.assert_request_is_get_repl_stream_updates(request, "typing")
# Filter the updates to only include typing changes
received_typing_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == TypingStream.NAME
]
self.assertEqual(
len(received_typing_rows),
1,
"Expected exactly one row for the typing stream",
)
(stream_name, token, row) = received_typing_rows[0]
self.assertEqual(stream_name, TypingStream.NAME)
self.assertIsInstance(row, TypingStream.ROW_TYPE)
self.assertEqual(row.room_id, ROOM_ID)
self.assertEqual(row.user_ids, [USER_ID])
# Clear out the received rows that we've checked so we can check for new ones later
self.test_handler.received_rdata_rows.clear()
self.mock_handler.on_rdata.assert_called_once()
stream_name, _, token, rdata_rows = self.mock_handler.on_rdata.call_args[0]
self.assertEqual(stream_name, "typing")
self.assertEqual(1, len(rdata_rows))
row: TypingStream.TypingStreamRow = rdata_rows[0]
self.assertEqual(ROOM_ID, row.room_id)
self.assertEqual([USER_ID], row.user_ids)
# Now let's disconnect and insert some data.
self.disconnect()
self.mock_handler.on_rdata.reset_mock()
typing._push_update(member=RoomMember(ROOM_ID, USER_ID), typing=False)
# Not yet connected: no rows should yet have been received
self.assertEqual([], self.test_handler.received_rdata_rows)
self.mock_handler.on_rdata.assert_not_called()
# Now reconnect and pull the updates
self.reconnect()
self.replicate()
self.pump(0.1)
# We should now see an attempt to connect to the master
request = self.handle_http_replication_attempt()
self.assert_request_is_get_repl_stream_updates(request, TypingStream.NAME)
self.assert_request_is_get_repl_stream_updates(request, "typing")
# The from token should be the token from the last RDATA we got.
assert request.args is not None
self.assertEqual(int(request.args[b"from_token"][0]), token)
received_typing_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == TypingStream.NAME
]
self.assertEqual(
len(received_typing_rows),
1,
"Expected exactly one row for the typing stream",
)
(stream_name, token, row) = received_typing_rows[0]
self.assertEqual(stream_name, TypingStream.NAME)
self.assertIsInstance(row, TypingStream.ROW_TYPE)
self.assertEqual(row.room_id, ROOM_ID)
self.assertEqual(row.user_ids, [])
self.mock_handler.on_rdata.assert_called_once()
stream_name, _, token, rdata_rows = self.mock_handler.on_rdata.call_args[0]
self.assertEqual(stream_name, "typing")
self.assertEqual(1, len(rdata_rows))
row = rdata_rows[0]
self.assertEqual(ROOM_ID, row.room_id)
self.assertEqual([], row.user_ids)
def test_reset(self) -> None:
"""
@@ -136,47 +116,33 @@ class TypingStreamTestCase(BaseStreamTestCase):
# update to fetch.
typing._push_update(member=RoomMember(ROOM_ID, USER_ID), typing=True)
# Not yet connected: no rows should yet have been received
self.assertEqual([], self.test_handler.received_rdata_rows)
# Now reconnect to pull the updates
self.reconnect()
typing._push_update(member=RoomMember(ROOM_ID, USER_ID), typing=True)
# Pull in the updates
self.replicate()
self.reactor.advance(0)
# We should now see an attempt to connect to the master
request = self.handle_http_replication_attempt()
self.assert_request_is_get_repl_stream_updates(request, "typing")
received_typing_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == TypingStream.NAME
]
self.assertEqual(
len(received_typing_rows),
1,
"Expected exactly one row for the typing stream",
)
(stream_name, token, row) = received_typing_rows[0]
self.assertEqual(stream_name, TypingStream.NAME)
self.assertIsInstance(row, TypingStream.ROW_TYPE)
self.assertEqual(row.room_id, ROOM_ID)
self.assertEqual(row.user_ids, [USER_ID])
self.mock_handler.on_rdata.assert_called_once()
stream_name, _, token, rdata_rows = self.mock_handler.on_rdata.call_args[0]
self.assertEqual(stream_name, "typing")
self.assertEqual(1, len(rdata_rows))
row: TypingStream.TypingStreamRow = rdata_rows[0]
self.assertEqual(ROOM_ID, row.room_id)
self.assertEqual([USER_ID], row.user_ids)
# Push the stream forward a bunch so it can be reset.
for i in range(100):
typing._push_update(
member=RoomMember(ROOM_ID, "@test%s:blue" % i), typing=True
)
# Pull in the updates
self.replicate()
self.reactor.advance(0)
# Disconnect.
self.disconnect()
self.test_handler.received_rdata_rows.clear()
# Reset the typing handler
self.hs.get_replication_streams()["typing"].last_token = 0
@@ -189,34 +155,30 @@ class TypingStreamTestCase(BaseStreamTestCase):
)
typing._reset()
# Now reconnect and pull the updates
# Reconnect.
self.reconnect()
self.replicate()
self.pump(0.1)
# We should now see an attempt to connect to the master
request = self.handle_http_replication_attempt()
self.assert_request_is_get_repl_stream_updates(request, "typing")
# Reset the test code.
self.mock_handler.on_rdata.reset_mock()
self.mock_handler.on_rdata.assert_not_called()
# Push additional data.
typing._push_update(member=RoomMember(ROOM_ID_2, USER_ID_2), typing=False)
# Pull the updates
self.replicate()
self.reactor.advance(0)
self.mock_handler.on_rdata.assert_called_once()
stream_name, _, token, rdata_rows = self.mock_handler.on_rdata.call_args[0]
self.assertEqual(stream_name, "typing")
self.assertEqual(1, len(rdata_rows))
row = rdata_rows[0]
self.assertEqual(ROOM_ID_2, row.room_id)
self.assertEqual([], row.user_ids)
received_typing_rows = [
row
for row in self.test_handler.received_rdata_rows
if row[0] == TypingStream.NAME
]
self.assertEqual(
len(received_typing_rows),
1,
"Expected exactly one row for the typing stream",
)
(stream_name, token, row) = received_typing_rows[0]
self.assertEqual(stream_name, TypingStream.NAME)
self.assertIsInstance(row, TypingStream.ROW_TYPE)
self.assertEqual(row.room_id, ROOM_ID_2)
self.assertEqual(row.user_ids, [])
# The token should have been reset.
self.assertEqual(token, 1)
finally:

View File

@@ -1,74 +0,0 @@
from twisted.internet.testing import MemoryReactor
import synapse.rest.admin
from synapse.api.errors import Codes
from synapse.rest.client import login, room
from synapse.server import HomeServer
from synapse.util.clock import Clock
from tests import unittest
class FetchEventTestCase(unittest.HomeserverTestCase):
servlets = [
synapse.rest.admin.register_servlets,
login.register_servlets,
room.register_servlets,
]
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.admin_user = self.register_user("admin", "pass", admin=True)
self.admin_user_tok = self.login("admin", "pass")
self.other_user = self.register_user("user", "pass")
self.other_user_tok = self.login("user", "pass")
self.room_id1 = self.helper.create_room_as(
self.other_user, tok=self.other_user_tok, is_public=True
)
resp = self.helper.send(self.room_id1, body="Hey now", tok=self.other_user_tok)
self.event_id = resp["event_id"]
def test_no_auth(self) -> None:
"""
Try to get an event without authentication.
"""
channel = self.make_request(
"GET",
f"/_synapse/admin/v1/fetch_event/{self.event_id}",
)
self.assertEqual(401, channel.code, msg=channel.json_body)
self.assertEqual(Codes.MISSING_TOKEN, channel.json_body["errcode"])
def test_requester_is_not_admin(self) -> None:
"""
If the user is not a server admin, an error 403 is returned.
"""
channel = self.make_request(
"GET",
f"/_synapse/admin/v1/fetch_event/{self.event_id}",
access_token=self.other_user_tok,
)
self.assertEqual(403, channel.code, msg=channel.json_body)
self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
def test_fetch_event(self) -> None:
"""
Test that we can successfully fetch an event
"""
channel = self.make_request(
"GET",
f"/_synapse/admin/v1/fetch_event/{self.event_id}",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(
channel.json_body["event"]["content"],
{"body": "Hey now", "msgtype": "m.text"},
)
self.assertEqual(channel.json_body["event"]["event_id"], self.event_id)
self.assertEqual(channel.json_body["event"]["type"], "m.room.message")
self.assertEqual(channel.json_body["event"]["sender"], self.other_user)

View File

@@ -533,6 +533,16 @@ class MSC4190AppserviceDevicesTestCase(unittest.HomeserverTestCase):
)
self.assertEqual(channel.code, 200, channel.json_body)
# On the regular service, that API should not allow for the
# creation of new devices.
channel = self.make_request(
"PUT",
"/_matrix/client/v3/devices/AABBCCDD?user_id=@bob:test",
content={"display_name": "Bob's device"},
access_token=self.pre_msc_service.token,
)
self.assertEqual(channel.code, 404, channel.json_body)
def test_DELETE_device(self) -> None:
self.register_appservice_user(
"alice", self.msc4190_service.token, inhibit_login=True

View File

@@ -1,105 +0,0 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2025 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# [This file includes modifications made by New Vector Limited]
#
#
"""Tests REST events for /rtc/endpoints path."""
from twisted.internet.testing import MemoryReactor
from synapse.rest import admin
from synapse.rest.client import login, matrixrtc, register, room
from synapse.server import HomeServer
from synapse.util.clock import Clock
from tests.unittest import HomeserverTestCase, override_config
PATH_PREFIX = "/_matrix/client/unstable/org.matrix.msc4143"
RTC_ENDPOINT = {"type": "focusA", "required_field": "theField"}
LIVEKIT_ENDPOINT = {
"type": "livekit",
"livekit_service_url": "https://livekit.example.com",
}
class MatrixRtcTestCase(HomeserverTestCase):
"""Tests /rtc/transports Client-Server REST API."""
servlets = [
admin.register_servlets,
room.register_servlets,
login.register_servlets,
register.register_servlets,
matrixrtc.register_servlets,
]
def prepare(
self, reactor: MemoryReactor, clock: Clock, homeserver: HomeServer
) -> None:
self.register_user("alice", "password")
self._alice_tok = self.login("alice", "password")
def test_matrixrtc_endpoint_not_enabled(self) -> None:
channel = self.make_request(
"GET", f"{PATH_PREFIX}/rtc/transports", access_token=self._alice_tok
)
self.assertEqual(404, channel.code, channel.json_body)
self.assertEqual(
"M_UNRECOGNIZED", channel.json_body["errcode"], channel.json_body
)
@override_config({"experimental_features": {"msc4143_enabled": True}})
def test_matrixrtc_endpoint_requires_authentication(self) -> None:
channel = self.make_request("GET", f"{PATH_PREFIX}/rtc/transports")
self.assertEqual(401, channel.code, channel.json_body)
@override_config(
{
"experimental_features": {"msc4143_enabled": True},
"matrix_rtc": {"transports": [RTC_ENDPOINT]},
}
)
def test_matrixrtc_endpoint_contains_expected_transport(self) -> None:
channel = self.make_request(
"GET", f"{PATH_PREFIX}/rtc/transports", access_token=self._alice_tok
)
self.assertEqual(200, channel.code, channel.json_body)
self.assert_dict({"rtc_transports": [RTC_ENDPOINT]}, channel.json_body)
@override_config(
{
"experimental_features": {"msc4143_enabled": True},
"matrix_rtc": {"transports": []},
}
)
def test_matrixrtc_endpoint_no_transports_configured(self) -> None:
channel = self.make_request(
"GET", f"{PATH_PREFIX}/rtc/transports", access_token=self._alice_tok
)
self.assertEqual(200, channel.code, channel.json_body)
self.assert_dict({}, channel.json_body)
@override_config(
{
"experimental_features": {"msc4143_enabled": True},
"matrix_rtc": {"transports": [LIVEKIT_ENDPOINT]},
}
)
def test_matrixrtc_endpoint_livekit_transport(self) -> None:
channel = self.make_request(
"GET", f"{PATH_PREFIX}/rtc/transports", access_token=self._alice_tok
)
self.assertEqual(200, channel.code, channel.json_body)
self.assert_dict({"rtc_transports": [LIVEKIT_ENDPOINT]}, channel.json_body)

View File

@@ -1198,6 +1198,7 @@ def setup_test_homeserver(
hs = homeserver_to_use(
server_name,
config=config,
version_string="Synapse/tests",
reactor=reactor,
)

View File

@@ -110,13 +110,13 @@ class MonthlyActiveUsersTestCase(unittest.HomeserverTestCase):
self.assertGreater(timestamp, 0)
# Test that users with reserved 3pids are not removed from the MAU table
#
# The `start_phone_stats_home()` looping call will cause us to run
# `reap_monthly_active_users` after the time has advanced
# XXX some of this is redundant. Poking things into the config shouldn't
# work, and in any case it's not obvious what we expect to happen when
# we advance the reactor.
self.hs.config.server.max_mau_value = 0
self.reactor.advance(FORTY_DAYS)
self.hs.config.server.max_mau_value = 5
# We seem to call this one more time for good measure, perhaps because the
# phone home stats previously weren't running in tests.
self.get_success(self.store.reap_monthly_active_users())
active_count = self.get_success(self.store.get_monthly_active_count())

View File

@@ -75,7 +75,7 @@ class CommonMetricsTestCase(HomeserverTestCase):
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.metrics_manager = hs.get_common_usage_metrics_manager()
self.metrics_manager.setup()
self.get_success(self.metrics_manager.setup())
def test_dau(self) -> None:
"""Tests that the daily active users count is correctly updated."""

View File

@@ -236,17 +236,17 @@ class OptionsResourceTests(unittest.TestCase):
"""Create a request from the method/path and return a channel with the response."""
# Create a site and query for the resource.
site = SynapseSite(
logger_name="test",
site_tag="site_tag",
config=parse_listener_def(
"test",
"site_tag",
parse_listener_def(
0,
{
"type": "http",
"port": 0,
},
),
resource=self.resource,
server_version_string="1",
self.resource,
"1.0",
max_request_body_size=4096,
reactor=self.reactor,
hs=self.homeserver,