Mirror of https://github.com/element-hq/synapse.git, synced 2025-12-15 02:00:21 +00:00
Compare commits: anoa/updat...anoa/bcryp (24 commits)
| SHA1 |
|---|
| 932e5e89b1 |
| d422cc9366 |
| f7155754c2 |
| 9f28b6ed35 |
| 1271e896b5 |
| 418c9f3fe5 |
| eac862629f |
| 67f22a200d |
| da6c0cae96 |
| b8f6ad2736 |
| ecc90593cb |
| a4f9274107 |
| ec7554b768 |
| d2c582ef3c |
| 2d07bd7fd2 |
| a7303c5311 |
| 690b3a4fcc |
| d399d7649a |
| 9d9275da5a |
| ef80338c2d |
| be75de2cfc |
| 07cfb69778 |
| c0d6998dea |
| 627be7e0a7 |
CHANGES.md (135 lines changed)
@@ -1,3 +1,138 @@
# Synapse 1.141.0rc1 (2025-10-21)

## Deprecation of macOS Python wheels

The team has decided to deprecate and eventually stop publishing Python wheels
for macOS. This is a burden on the team, and we're not aware of any parties
that use them. Synapse docker images will continue to work on macOS, as will
building Synapse from source (though note this requires a Rust compiler).

Publishing macOS Python wheels will continue for the next few releases. If you
do make use of these wheels downstream, please reach out to us in
[#synapse-dev:matrix.org](https://matrix.to/#/#synapse-dev:matrix.org). We'd
love to hear from you!

## Features

- Allow using [MSC4190](https://github.com/matrix-org/matrix-spec-proposals/pull/4190) behavior without the opt-in registration flag. Contributed by @tulir @ Beeper. ([\#19031](https://github.com/element-hq/synapse/issues/19031))
- Stabilized support for [MSC4326](https://github.com/matrix-org/matrix-spec-proposals/pull/4326): Device masquerading for appservices. Contributed by @tulir @ Beeper. ([\#19033](https://github.com/element-hq/synapse/issues/19033))

## Bugfixes

- Fix a bug introduced in 1.136.0 that would prevent Synapse from being able to be `reload`-ed more than once when running under systemd. ([\#19060](https://github.com/element-hq/synapse/issues/19060))
- Fix a bug introduced in 1.140.0 where an internal server error could be raised when hashing user passwords that are too long. ([\#19078](https://github.com/element-hq/synapse/issues/19078))

## Updates to the Docker image

- Update docker image to use Debian trixie as the base and thus Python 3.13. ([\#19064](https://github.com/element-hq/synapse/issues/19064))

## Internal Changes

- Move unique snowflake homeserver background tasks to `start_background_tasks` (the standard pattern for this kind of thing). ([\#19037](https://github.com/element-hq/synapse/issues/19037))
- Drop a deprecated field of the `PyGitHub` dependency in the release script and raise the dependency's minimum version to `1.59.0`. ([\#19039](https://github.com/element-hq/synapse/issues/19039))
- Update TODO list of conflicting areas where we encounter metrics being clobbered (`ApplicationService`). ([\#19040](https://github.com/element-hq/synapse/issues/19040))
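Background on the long-password bugfix ([\#19078]) above: it pairs with the bcrypt 4.3.0 to 5.0.0 bump listed under 1.140.0rc1 below, where overlong inputs started being rejected rather than silently truncated. A minimal sketch of the failure mode, with the explicit truncation the fix applies:

```python
import bcrypt

secret = b"p" * 80  # password + pepper, over bcrypt's 72-byte limit
try:
    bcrypt.hashpw(secret, bcrypt.gensalt())  # raises ValueError on bcrypt 5.x
except ValueError:
    hashed = bcrypt.hashpw(secret[:72], bcrypt.gensalt())  # truncate first, as the fix does
```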
# Synapse 1.140.0 (2025-10-14)

## Compatibility notice for users of `synapse-s3-storage-provider`

Deployments that make use of the
[synapse-s3-storage-provider](https://github.com/matrix-org/synapse-s3-storage-provider)
module must upgrade to
[v1.6.0](https://github.com/matrix-org/synapse-s3-storage-provider/releases/tag/v1.6.0).
Using older versions of the module with this release of Synapse will prevent
users from being able to upload or download media.

No significant changes since 1.140.0rc1.

# Synapse 1.140.0rc1 (2025-10-10)

## Features

- Add [a new Media Query by ID Admin API](https://element-hq.github.io/synapse/v1.140/admin_api/media_admin_api.html#query-a-piece-of-media-by-id) that allows server admins to query and investigate the metadata of local or cached remote media via the `origin/media_id` identifier found in a [Matrix Content URI](https://spec.matrix.org/v1.14/client-server-api/#matrix-content-mxc-uris). ([\#18911](https://github.com/element-hq/synapse/issues/18911))
- Add [a new Fetch Event Admin API](https://element-hq.github.io/synapse/v1.140/admin_api/fetch_event.html) to fetch an event by ID. ([\#18963](https://github.com/element-hq/synapse/issues/18963))
- Update [MSC4284: Policy Servers](https://github.com/matrix-org/matrix-spec-proposals/pull/4284) implementation to support signatures when available. ([\#18934](https://github.com/element-hq/synapse/issues/18934))
- Add experimental implementation of the `GET /_matrix/client/v1/rtc/transports` endpoint for the latest draft of [MSC4143: MatrixRTC](https://github.com/matrix-org/matrix-spec-proposals/pull/4143). ([\#18967](https://github.com/element-hq/synapse/issues/18967))
- Expose a `defer_to_threadpool` function in the Synapse Module API that allows modules to run a function on a separate thread in a custom threadpool. ([\#19032](https://github.com/element-hq/synapse/issues/19032))
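A sketch of driving the new media-by-ID Admin API ([\#18911]) from a script. The endpoint path is an assumption based on the linked documentation page; the homeserver URL and token are placeholders:

```python
import requests

mxc = "mxc://matrix.org/SEsfnsuifSDFSSEF"  # mxc://{origin}/{media_id}
origin, media_id = mxc.removeprefix("mxc://").split("/", 1)

resp = requests.get(
    f"https://synapse.example.com/_synapse/admin/v1/media/{origin}/{media_id}",
    headers={"Authorization": "Bearer <admin_access_token>"},
)
print(resp.json())  # metadata of the local or cached remote media item
```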
## Bugfixes

- Fix room upgrade `room_config` argument and documentation for `user_may_create_room` spam-checker callback. ([\#18721](https://github.com/element-hq/synapse/issues/18721))
- Compute a user's last seen timestamp from their devices' last seen timestamps instead of IPs, because the latter are automatically cleared according to `user_ips_max_age`. ([\#18948](https://github.com/element-hq/synapse/issues/18948))
- Fix bug where ephemeral events were not filtered by room ID. Contributed by @frastefanini. ([\#19002](https://github.com/element-hq/synapse/issues/19002))
- Update Synapse main process version string to include git info. ([\#19011](https://github.com/element-hq/synapse/issues/19011))

## Improved Documentation

- Explain how `Deferred` callbacks interact with logcontexts. ([\#18914](https://github.com/element-hq/synapse/issues/18914))
- Fix documentation for `rc_room_creation` and `rc_reports` to clarify that a `per_user` rate limit is not supported. ([\#18998](https://github.com/element-hq/synapse/issues/18998))

## Deprecations and Removals

- Remove deprecated `LoggingContext.set_current_context`/`LoggingContext.current_context` methods which already have equivalent bare methods in `synapse.logging.context`. ([\#18989](https://github.com/element-hq/synapse/issues/18989))
- Drop support for unstable field names from the long-accepted [MSC2732](https://github.com/matrix-org/matrix-spec-proposals/pull/2732) (Olm fallback keys) proposal. ([\#18996](https://github.com/element-hq/synapse/issues/18996))

## Internal Changes

- Cleanly shutdown `SynapseHomeServer` object, allowing artifacts of embedded small hosts to be properly garbage collected. ([\#18828](https://github.com/element-hq/synapse/issues/18828))
- Update OEmbed providers to use 'X' instead of 'Twitter' in URL previews, following a rebrand. Contributed by @HammyHavoc. ([\#18767](https://github.com/element-hq/synapse/issues/18767))
- Fix `server_name` in logging context for multiple Synapse instances in one process. ([\#18868](https://github.com/element-hq/synapse/issues/18868))
- Wrap the Rust HTTP client with `make_deferred_yieldable` so it follows Synapse logcontext rules. ([\#18903](https://github.com/element-hq/synapse/issues/18903))
- Fix the GitHub Actions workflow that moves issues labeled "X-Needs-Info" to the "Needs info" column on the team's internal triage board. ([\#18913](https://github.com/element-hq/synapse/issues/18913))
- Disconnect background process work from request trace. ([\#18932](https://github.com/element-hq/synapse/issues/18932))
- Reduce overall number of calls to `_get_e2e_cross_signing_signatures_for_devices` by increasing the batch size of devices the query is called with, reducing DB load. ([\#18939](https://github.com/element-hq/synapse/issues/18939))
- Update error code used when an appservice tries to masquerade as an unknown device using [MSC4326](https://github.com/matrix-org/matrix-spec-proposals/pull/4326). Contributed by @tulir @ Beeper. ([\#18947](https://github.com/element-hq/synapse/issues/18947))
- Fix `no active span when trying to log` tracing error on startup (when OpenTracing is enabled). ([\#18959](https://github.com/element-hq/synapse/issues/18959))
- Fix `run_coroutine_in_background(...)` incorrectly handling logcontext. ([\#18964](https://github.com/element-hq/synapse/issues/18964))
- Add debug logs wherever we change current logcontext. ([\#18966](https://github.com/element-hq/synapse/issues/18966))
- Update dockerfile metadata to fix broken link; point to documentation website. ([\#18971](https://github.com/element-hq/synapse/issues/18971))
- Note that the code is additionally licensed under the [Element Commercial license](https://github.com/element-hq/synapse/blob/develop/LICENSE-COMMERCIAL) in SPDX expression field configs. ([\#18973](https://github.com/element-hq/synapse/issues/18973))
- Fix logcontext handling in `timeout_deferred` tests. ([\#18974](https://github.com/element-hq/synapse/issues/18974))
- Remove internal `ReplicationUploadKeysForUserRestServlet` as a follow-up to the work in https://github.com/element-hq/synapse/pull/18581 that moved device changes off the main process. ([\#18988](https://github.com/element-hq/synapse/issues/18988))
- Switch task scheduler from raw logcontext manipulation to using the dedicated logcontext utils. ([\#18990](https://github.com/element-hq/synapse/issues/18990))
- Remove `MockClock()` in tests. ([\#18992](https://github.com/element-hq/synapse/issues/18992))
- Switch back to our own custom `LogContextScopeManager` instead of OpenTracing's `ContextVarsScopeManager` which was causing problems when using the experimental `SYNAPSE_ASYNC_IO_REACTOR` option with tracing enabled. ([\#19007](https://github.com/element-hq/synapse/issues/19007))
- Remove `version_string` argument from `HomeServer` since it's always the same. ([\#19012](https://github.com/element-hq/synapse/issues/19012))
- Remove duplicate call to `hs.start_background_tasks()` introduced from a bad merge. ([\#19013](https://github.com/element-hq/synapse/issues/19013))
- Split homeserver creation (`create_homeserver`) and setup (`setup`). ([\#19015](https://github.com/element-hq/synapse/issues/19015))
- Swap near-end-of-life `macos-13` GitHub Actions runner for the `macos-15-intel` variant. ([\#19025](https://github.com/element-hq/synapse/issues/19025))
- Introduce `RootConfig.validate_config()` which can be subclassed in `HomeServerConfig` to do cross-config class validation. ([\#19027](https://github.com/element-hq/synapse/issues/19027))
- Allow any command of the `release.py` script to accept a `--gh-token` argument. ([\#19035](https://github.com/element-hq/synapse/issues/19035))
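A hypothetical use of the new `RootConfig.validate_config()` hook ([\#19027]); the imports point at Synapse's real config base module, but the specific invariant checked below is invented for illustration:

```python
from synapse.config._base import ConfigError, RootConfig

class HomeServerConfig(RootConfig):
    def validate_config(self) -> None:
        # a cross-config-class invariant that no single config class can
        # verify on its own (hypothetical example check)
        if self.worker.worker_app and not self.redis.redis_enabled:
            raise ConfigError("worker deployments require redis to be enabled")
```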
### Updates to locked dependencies

* Bump Swatinem/rust-cache from 2.8.0 to 2.8.1. ([\#18949](https://github.com/element-hq/synapse/issues/18949))
* Bump actions/cache from 4.2.4 to 4.3.0. ([\#18983](https://github.com/element-hq/synapse/issues/18983))
* Bump anyhow from 1.0.99 to 1.0.100. ([\#18950](https://github.com/element-hq/synapse/issues/18950))
* Bump authlib from 1.6.3 to 1.6.4. ([\#18957](https://github.com/element-hq/synapse/issues/18957))
* Bump authlib from 1.6.4 to 1.6.5. ([\#19019](https://github.com/element-hq/synapse/issues/19019))
* Bump bcrypt from 4.3.0 to 5.0.0. ([\#18984](https://github.com/element-hq/synapse/issues/18984))
* Bump docker/login-action from 3.5.0 to 3.6.0. ([\#18978](https://github.com/element-hq/synapse/issues/18978))
* Bump lxml from 6.0.0 to 6.0.2. ([\#18979](https://github.com/element-hq/synapse/issues/18979))
* Bump phonenumbers from 9.0.13 to 9.0.14. ([\#18954](https://github.com/element-hq/synapse/issues/18954))
* Bump phonenumbers from 9.0.14 to 9.0.15. ([\#18991](https://github.com/element-hq/synapse/issues/18991))
* Bump prometheus-client from 0.22.1 to 0.23.1. ([\#19016](https://github.com/element-hq/synapse/issues/19016))
* Bump pydantic from 2.11.9 to 2.11.10. ([\#19017](https://github.com/element-hq/synapse/issues/19017))
* Bump pygithub from 2.7.0 to 2.8.1. ([\#18952](https://github.com/element-hq/synapse/issues/18952))
* Bump regex from 1.11.2 to 1.11.3. ([\#18981](https://github.com/element-hq/synapse/issues/18981))
* Bump serde from 1.0.224 to 1.0.226. ([\#18953](https://github.com/element-hq/synapse/issues/18953))
* Bump serde from 1.0.226 to 1.0.228. ([\#18982](https://github.com/element-hq/synapse/issues/18982))
* Bump setuptools-rust from 1.11.1 to 1.12.0. ([\#18980](https://github.com/element-hq/synapse/issues/18980))
* Bump twine from 6.1.0 to 6.2.0. ([\#18985](https://github.com/element-hq/synapse/issues/18985))
* Bump types-pyyaml from 6.0.12.20250809 to 6.0.12.20250915. ([\#19018](https://github.com/element-hq/synapse/issues/19018))
* Bump types-requests from 2.32.4.20250809 to 2.32.4.20250913. ([\#18951](https://github.com/element-hq/synapse/issues/18951))
* Bump typing-extensions from 4.14.1 to 4.15.0. ([\#18956](https://github.com/element-hq/synapse/issues/18956))
# Synapse 1.139.2 (2025-10-07)

## Bugfixes
The release also deletes the `changelog.d` entries that were folded into `CHANGES.md` above. Each was a one-line file removed in its own hunk (`@@ -1 +0,0 @@`, or `@@ -1,2 +0,0 @@` for the single two-line entry):

- Extend validation of uploaded device keys.
- Fix room upgrade `room_config` argument and documentation for `user_may_create_room` spam-checker callback.
- Update OEmbed providers to use 'X' instead of 'Twitter' in URL previews, following a rebrand. Contributed by @HammyHavoc.
- Cleanly shutdown `SynapseHomeServer` object.
- Fix `server_name` in logging context for multiple Synapse instances in one process.
- Wrap the Rust HTTP client with `make_deferred_yieldable` so it follows Synapse logcontext rules.
- Add an Admin API that allows server admins to query and investigate the metadata of local or cached remote media via the `origin/media_id` identifier found in a [Matrix Content URI](https://spec.matrix.org/v1.14/client-server-api/#matrix-content-mxc-uris).
- Fix the GitHub Actions workflow that moves issues labeled "X-Needs-Info" to the "Needs info" column on the team's internal triage board.
- Explain how Deferred callbacks interact with logcontexts.
- Disconnect background process work from request trace.
- Update [MSC4284: Policy Servers](https://github.com/matrix-org/matrix-spec-proposals/pull/4284) implementation to support signatures when available.
- Reduce overall number of calls to `_get_e2e_cross_signing_signatures_for_devices` by increasing the batch size of devices the query is called with, reducing DB load.
- Update error code used when an appservice tries to masquerade as an unknown device using [MSC4326](https://github.com/matrix-org/matrix-spec-proposals/pull/4326). Contributed by @tulir @ Beeper.
- Compute a user's last seen timestamp from their devices' last seen timestamps instead of IPs, because the latter are automatically cleared according to `user_ips_max_age`.
- Fix `no active span when trying to log` tracing error on startup (when OpenTracing is enabled).
- Add an Admin API to fetch an event by ID.
- Fix `run_coroutine_in_background(...)` incorrectly handling logcontext.
- Add debug logs wherever we change current logcontext.
- Add experimental implementation for the latest draft of [MSC4143](https://github.com/matrix-org/matrix-spec-proposals/pull/4143).
- Update dockerfile metadata to fix broken link; point to documentation website.
- Note that the code is additionally licensed under the [Element Commercial license](https://github.com/element-hq/synapse/blob/develop/LICENSE-COMMERCIAL) in SPDX expression field configs.
- Fix logcontext handling in `timeout_deferred` tests.
- Remove internal `ReplicationUploadKeysForUserRestServlet` as a follow-up to the work in https://github.com/element-hq/synapse/pull/18581 that moved device changes off the main process.
- Remove deprecated `LoggingContext.set_current_context`/`LoggingContext.current_context` methods which already have equivalent bare methods in `synapse.logging.context`.
- Switch task scheduler from raw logcontext manipulation to using the dedicated logcontext utils.
- Remove `MockClock()` in tests.
- Drop support for unstable field names from the long-accepted [MSC2732](https://github.com/matrix-org/matrix-spec-proposals/pull/2732) (Olm fallback keys) proposal.
- Fix documentation for `rc_room_creation` and `rc_reports` to clarify that a `per_user` rate limit is not supported.
- Fix bug where ephemeral events were not filtered by room ID. Contributed by @frastefanini.
- Switch back to our own custom `LogContextScopeManager` instead of OpenTracing's `ContextVarsScopeManager` which was causing problems when using the experimental `SYNAPSE_ASYNC_IO_REACTOR` option with tracing enabled.
- Update Synapse main process version string to include git info.
- Remove `version_string` argument from `HomeServer` since it's always the same.
- Remove duplicate call to `hs.start_background_tasks()` introduced from a bad merge.
- Split homeserver creation (`create_homeserver`) and setup (`setup`).
- Fix a bug introduced in 1.139.1 where a client could receive an Internal Server Error if they set `device_keys: null` in the request to [`POST /_matrix/client/v3/keys/upload`](https://spec.matrix.org/v1.16/client-server-api/#post_matrixclientv3keysupload).
- Swap near-end-of-life `macos-13` GitHub Actions runner for the `macos-15-intel` variant.
- Introduce `RootConfig.validate_config()` which can be subclassed in `HomeServerConfig` to do cross-config class validation.
- Expose a `defer_to_threadpool` function in the Synapse Module API that allows modules to run a function on a separate thread in a custom threadpool.
- Allow any command of the `release.py` script to accept a `--gh-token` argument.
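The `defer_to_threadpool` entry above is the most module-facing of these. A hypothetical sketch of how a module might use it; the call shape (a threadpool, a blocking callable, its arguments) is an assumption, since only the function's purpose is stated here:

```python
from twisted.python.threadpool import ThreadPool

def expensive_blocking_check(blob: bytes) -> bool:
    return len(blob) % 2 == 0  # stand-in for real blocking work

class MyModule:
    def __init__(self, config, api):
        self._api = api
        # a custom threadpool so blocking work never ties up the reactor
        self._pool = ThreadPool(minthreads=1, maxthreads=4, name="my_module")
        self._pool.start()

    async def check(self, blob: bytes) -> bool:
        return await self._api.defer_to_threadpool(
            self._pool, expensive_blocking_check, blob
        )
```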
changelog.d/19101.bugfix (new file)

@@ -0,0 +1 @@
+Fix users being unable to log in if their password, or the server's configured pepper, was too long.
debian/changelog (18 lines changed, vendored)
@@ -1,3 +1,21 @@

matrix-synapse-py3 (1.141.0~rc1) stable; urgency=medium

  * New Synapse release 1.141.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 21 Oct 2025 11:01:44 +0100

matrix-synapse-py3 (1.140.0) stable; urgency=medium

  * New Synapse release 1.140.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 14 Oct 2025 15:22:36 +0100

matrix-synapse-py3 (1.140.0~rc1) stable; urgency=medium

  * New Synapse release 1.140.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Fri, 10 Oct 2025 10:56:51 +0100

matrix-synapse-py3 (1.139.2) stable; urgency=medium

  * New Synapse release 1.139.2.
docker/Dockerfile

@@ -20,8 +20,8 @@
 # `poetry export | pip install -r /dev/stdin`, but beware: we have experienced bugs in
 # in `poetry export` in the past.

-ARG DEBIAN_VERSION=bookworm
-ARG PYTHON_VERSION=3.12
+ARG DEBIAN_VERSION=trixie
+ARG PYTHON_VERSION=3.13
 ARG POETRY_VERSION=2.1.1

 ###

@@ -142,10 +142,10 @@ RUN \
    libwebp7 \
    xmlsec1 \
    libjemalloc2 \
    libicu \
  | grep '^\w' > /tmp/pkg-list && \
  for arch in arm64 amd64; do \
    mkdir -p /tmp/debs-${arch} && \
    chown _apt:root /tmp/debs-${arch} && \
    cd /tmp/debs-${arch} && \
    apt-get -o APT::Architecture="${arch}" download $(cat /tmp/pkg-list); \
  done

@@ -176,11 +176,6 @@ LABEL org.opencontainers.image.documentation='https://element-hq.github.io/synap
 LABEL org.opencontainers.image.source='https://github.com/element-hq/synapse.git'
 LABEL org.opencontainers.image.licenses='AGPL-3.0-or-later OR LicenseRef-Element-Commercial'

-# On the runtime image, /lib is a symlink to /usr/lib, so we need to copy the
-# libraries to the right place, else the `COPY` won't work.
-# On amd64, we'll also have a /lib64 folder with ld-linux-x86-64.so.2, which is
-# already present in the runtime image.
-COPY --from=runtime-deps /install-${TARGETARCH}/lib /usr/lib
 COPY --from=runtime-deps /install-${TARGETARCH}/etc /etc
 COPY --from=runtime-deps /install-${TARGETARCH}/usr /usr
 COPY --from=runtime-deps /install-${TARGETARCH}/var /var
docker/Dockerfile-workers

@@ -1,9 +1,10 @@
-# syntax=docker/dockerfile:1
+# syntax=docker/dockerfile:1-labs

 ARG SYNAPSE_VERSION=latest
 ARG FROM=matrixdotorg/synapse:$SYNAPSE_VERSION
-ARG DEBIAN_VERSION=bookworm
-ARG PYTHON_VERSION=3.12
+ARG DEBIAN_VERSION=trixie
+ARG PYTHON_VERSION=3.13
+ARG REDIS_VERSION=7.2

 # first of all, we create a base image with dependencies which we can copy into the
 # target image. For repeated rebuilds, this is much faster than apt installing

@@ -11,15 +12,27 @@ ARG PYTHON_VERSION=3.12
 FROM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-${DEBIAN_VERSION} AS deps_base

+ARG DEBIAN_VERSION
+ARG REDIS_VERSION
+
 # Tell apt to keep downloaded package files, as we're using cache mounts.
 RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache

+# The upstream redis-server deb has fewer dynamic libraries than Debian's package which makes it easier to copy later on
+RUN \
+    curl -fsSL https://packages.redis.io/gpg | gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg && \
+    chmod 644 /usr/share/keyrings/redis-archive-keyring.gpg && \
+    echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb ${DEBIAN_VERSION} main" | tee /etc/apt/sources.list.d/redis.list
+
 RUN \
   --mount=type=cache,target=/var/cache/apt,sharing=locked \
   --mount=type=cache,target=/var/lib/apt,sharing=locked \
   apt-get update -qq && \
   DEBIAN_FRONTEND=noninteractive apt-get install -yqq --no-install-recommends \
-    nginx-light
+    nginx-light \
+    redis-server="6:${REDIS_VERSION}.*" redis-tools="6:${REDIS_VERSION}.*" \
+    # libicu is required by postgres, see `docker/complement/Dockerfile`
+    libicu76

 RUN \
   # remove default page

@@ -35,19 +48,12 @@ FROM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-${DEBIAN_VERSION} AS deps_base
 RUN mkdir -p /uv/etc/supervisor/conf.d

-# Similarly, a base to copy the redis server from.
-#
-# The redis docker image has fewer dynamic libraries than the debian package,
-# which makes it much easier to copy (but we need to make sure we use an image
-# based on the same debian version as the synapse image, to make sure we get
-# the expected version of libc.
-FROM docker.io/library/redis:7-${DEBIAN_VERSION} AS redis_base
-
 # now build the final image, based on the the regular Synapse docker image
 FROM $FROM

 # Copy over dependencies
-COPY --from=redis_base /usr/local/bin/redis-server /usr/local/bin
+COPY --from=deps_base --parents /usr/lib/*-linux-gnu/libicu* /
+COPY --from=deps_base /usr/bin/redis-server /usr/local/bin
 COPY --from=deps_base /uv /
 COPY --from=deps_base /usr/sbin/nginx /usr/sbin
 COPY --from=deps_base /usr/share/nginx /usr/share/nginx
docker/complement/Dockerfile

@@ -9,7 +9,7 @@
 ARG SYNAPSE_VERSION=latest
 # This is an intermediate image, to be built locally (not pulled from a registry).
 ARG FROM=matrixdotorg/synapse-workers:$SYNAPSE_VERSION
-ARG DEBIAN_VERSION=bookworm
+ARG DEBIAN_VERSION=trixie

 FROM docker.io/library/postgres:13-${DEBIAN_VERSION} AS postgres_base

@@ -18,10 +18,10 @@ FROM $FROM
 # since for repeated rebuilds, this is much faster than apt installing
 # postgres each time.

-# This trick only works because (a) the Synapse image happens to have all the
-# shared libraries that postgres wants, (b) we use a postgres image based on
-# the same debian version as Synapse's docker image (so the versions of the
-# shared libraries match).
+# This trick only works because we use a postgres image based on the same
+# debian version as Synapse's docker image (so the versions of the shared
+# libraries match). Any missing libraries need to be added to either the
+# Synapse image or docker/Dockerfile-workers.
 RUN adduser --system --uid 999 postgres --home /var/lib/postgresql
 COPY --from=postgres_base /usr/lib/postgresql /usr/lib/postgresql
 COPY --from=postgres_base /usr/share/postgresql /usr/share/postgresql
@@ -8,9 +8,9 @@ ARG PYTHON_VERSION=3.9
 ###
 ### Stage 0: generate requirements.txt
 ###
-# We hardcode the use of Debian bookworm here because this could change upstream
-# and other Dockerfiles used for testing are expecting bookworm.
-FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm
+# We hardcode the use of Debian trixie here because this could change upstream
+# and other Dockerfiles used for testing are expecting trixie.
+FROM docker.io/library/python:${PYTHON_VERSION}-slim-trixie

 # Install Rust and other dependencies (stolen from normal Dockerfile)
 # install the OS build deps
docs/upgrade.md

@@ -117,6 +117,24 @@
 each upgrade are complete before moving on to the next upgrade, to avoid
 stacking them up. You can monitor the currently running background updates with
 [the Admin API](usage/administration/admin_api/background_updates.html#status).

+# Upgrading to v1.141.0
+
+## Docker images now based on Debian `trixie` with Python 3.13
+
+The Docker images are now based on Debian `trixie` and use Python 3.13. If you
+are using the Docker images as a base image you may need to e.g. adjust the
+paths you mount any additional Python packages at.
+
+# Upgrading to v1.140.0
+
+## Users of `synapse-s3-storage-provider` must update the module to `v1.6.0`
+
+Deployments that make use of the
+[synapse-s3-storage-provider](https://github.com/matrix-org/synapse-s3-storage-provider/)
+module must update it to
+[v1.6.0](https://github.com/matrix-org/synapse-s3-storage-provider/releases/tag/v1.6.0),
+otherwise users will be unable to upload or download media.
+
 # Upgrading to v1.139.0

 ## `/register` requests from old application service implementations may break when using MAS
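The practical effect of the Python bump for images built on top of the Synapse image: per-version interpreter paths move from `python3.12` to `python3.13`. A quick way to find the new site-packages location (the paths in the comments are illustrative):

```python
import sysconfig

print(sysconfig.get_path("purelib"))
# old image: /usr/local/lib/python3.12/site-packages
# new image: /usr/local/lib/python3.13/site-packages
```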
docs/usage/configuration/config_documentation.md

@@ -3815,7 +3815,7 @@
 * `localdb_enabled` (boolean): Set to false to disable authentication against the local password database. This is ignored if `enabled` is false, and is only useful if you have other `password_providers`. Defaults to `true`.

-* `pepper` (string|null): Set the value here to a secret random string for extra security. DO NOT CHANGE THIS AFTER INITIAL SETUP! Defaults to `null`.
+* `pepper` (string|null): A secret random string that will be appended to user's passwords before they are hashed. This improves the security of short passwords. DO NOT CHANGE THIS AFTER INITIAL SETUP! Defaults to `null`.

 * `policy` (object): Define and enforce a password policy, such as minimum lengths for passwords, etc. This is an implementation of MSC2000.
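The reworded `pepper` description matches the hashing scheme visible in the `hash_password.py` diff further down. As a minimal sketch: the pepper is appended to the NFKC-normalised password before bcrypt hashing, so it must stay constant for existing hashes to keep verifying:

```python
import unicodedata

import bcrypt

def hash_password(password: str, pepper: str, rounds: int = 12) -> str:
    pw = unicodedata.normalize("NFKC", password)
    # bcrypt reads at most 72 bytes, hence the truncation
    bytes_to_hash = (pw.encode("utf8") + pepper.encode("utf8"))[:72]
    return bcrypt.hashpw(bytes_to_hash, bcrypt.gensalt(rounds)).decode("ascii")
```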
poetry.lock (69 lines changed, generated)

Most of the churn below comes from regenerating the lock file with Poetry 2.2.1, which reorders the extras inside `markers` strings without changing their meaning.
@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 2.2.0 and should not be changed by hand.
+# This file is automatically @generated by Poetry 2.2.1 and should not be changed by hand.

 [[package]]
 name = "annotated-types"

@@ -39,7 +39,7 @@ description = "The ultimate Python library in building OAuth and OpenID Connect
 optional = true
 python-versions = ">=3.9"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"jwt\" or extra == \"oidc\""
+markers = "extra == \"oidc\" or extra == \"jwt\" or extra == \"all\""
 files = [
     {file = "authlib-1.6.5-py2.py3-none-any.whl", hash = "sha256:3e0e0507807f842b02175507bdee8957a1d5707fd4afb17c32fb43fee90b6e3a"},
     {file = "authlib-1.6.5.tar.gz", hash = "sha256:6aaf9c79b7cc96c900f0b284061691c5d4e61221640a948fe690b556a6d6d10b"},

@@ -447,7 +447,7 @@ description = "XML bomb protection for Python stdlib modules"
 optional = true
 python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"saml2\""
+markers = "extra == \"saml2\" or extra == \"all\""
 files = [
     {file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
     {file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},

@@ -472,7 +472,7 @@ description = "XPath 1.0/2.0/3.0/3.1 parsers and selectors for ElementTree and l
 optional = true
 python-versions = ">=3.7"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"saml2\""
+markers = "extra == \"saml2\" or extra == \"all\""
 files = [
     {file = "elementpath-4.1.5-py3-none-any.whl", hash = "sha256:2ac1a2fb31eb22bbbf817f8cf6752f844513216263f0e3892c8e79782fe4bb55"},
     {file = "elementpath-4.1.5.tar.gz", hash = "sha256:c2d6dc524b29ef751ecfc416b0627668119d8812441c555d7471da41d4bacb8d"},

@@ -523,7 +523,7 @@ description = "Python wrapper for hiredis"
 optional = true
 python-versions = ">=3.8"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"redis\""
+markers = "extra == \"redis\" or extra == \"all\""
 files = [
     {file = "hiredis-3.2.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:add17efcbae46c5a6a13b244ff0b4a8fa079602ceb62290095c941b42e9d5dec"},
     {file = "hiredis-3.2.1-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:5fe955cc4f66c57df1ae8e5caf4de2925d43b5efab4e40859662311d1bcc5f54"},

@@ -860,7 +860,7 @@ description = "Jaeger Python OpenTracing Tracer implementation"
 optional = true
 python-versions = ">=3.7"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"opentracing\""
+markers = "extra == \"opentracing\" or extra == \"all\""
 files = [
     {file = "jaeger-client-4.8.0.tar.gz", hash = "sha256:3157836edab8e2c209bd2d6ae61113db36f7ee399e66b1dcbb715d87ab49bfe0"},
 ]

@@ -998,7 +998,7 @@ description = "A strictly RFC 4510 conforming LDAP V3 pure Python client library
 optional = true
 python-versions = "*"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"matrix-synapse-ldap3\""
+markers = "extra == \"matrix-synapse-ldap3\" or extra == \"all\""
 files = [
     {file = "ldap3-2.9.1-py2.py3-none-any.whl", hash = "sha256:5869596fc4948797020d3f03b7939da938778a0f9e2009f7a072ccf92b8e8d70"},
     {file = "ldap3-2.9.1.tar.gz", hash = "sha256:f3e7fc4718e3f09dda568b57100095e0ce58633bcabbed8667ce3f8fbaa4229f"},

@@ -1014,7 +1014,7 @@ description = "Powerful and Pythonic XML processing library combining libxml2/li
 optional = true
 python-versions = ">=3.8"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"url-preview\""
+markers = "extra == \"url-preview\" or extra == \"all\""
 files = [
     {file = "lxml-6.0.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e77dd455b9a16bbd2a5036a63ddbd479c19572af81b624e79ef422f929eef388"},
     {file = "lxml-6.0.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:5d444858b9f07cefff6455b983aea9a67f7462ba1f6cbe4a21e8bf6791bf2153"},

@@ -1301,7 +1301,7 @@ description = "An LDAP3 auth provider for Synapse"
 optional = true
 python-versions = ">=3.7"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"matrix-synapse-ldap3\""
+markers = "extra == \"matrix-synapse-ldap3\" or extra == \"all\""
 files = [
     {file = "matrix-synapse-ldap3-0.3.0.tar.gz", hash = "sha256:8bb6517173164d4b9cc44f49de411d8cebdb2e705d5dd1ea1f38733c4a009e1d"},
     {file = "matrix_synapse_ldap3-0.3.0-py3-none-any.whl", hash = "sha256:8b4d701f8702551e98cc1d8c20dbed532de5613584c08d0df22de376ba99159d"},

@@ -1540,7 +1540,7 @@ description = "OpenTracing API for Python. See documentation at http://opentraci
 optional = true
 python-versions = "*"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"opentracing\""
+markers = "extra == \"opentracing\" or extra == \"all\""
 files = [
     {file = "opentracing-2.4.0.tar.gz", hash = "sha256:a173117e6ef580d55874734d1fa7ecb6f3655160b8b8974a2a1e98e5ec9c840d"},
 ]

The remaining hunks trim the `pillow` 11.3.0 file list; in each hunk the pair of `manylinux2014_*.manylinux_2_17_*` wheel entries is dropped (consistent with the two-line shrink in every hunk header):

@@ -1609,8 +1609,6 @@ groups = ["main"]
 files = [
     {file = "pillow-11.3.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:1b9c17fd4ace828b3003dfd1e30bff24863e0eb59b535e8f80194d9cc7ecf860"},
     {file = "pillow-11.3.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:65dc69160114cdd0ca0f35cb434633c75e8e7fad4cf855177a05bf38678f73ad"},
-    {file = "pillow-11.3.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7107195ddc914f656c7fc8e4a5e1c25f32e9236ea3ea860f257b0436011fddd0"},
-    {file = "pillow-11.3.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:cc3e831b563b3114baac7ec2ee86819eb03caa1a2cef0b481a5675b59c4fe23b"},
     {file = "pillow-11.3.0-cp310-cp310-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f1f182ebd2303acf8c380a54f615ec883322593320a9b00438eb842c1f37ae50"},
     {file = "pillow-11.3.0-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4445fa62e15936a028672fd48c4c11a66d641d2c05726c7ec1f8ba6a572036ae"},
     {file = "pillow-11.3.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:71f511f6b3b91dd543282477be45a033e4845a40278fa8dcdbfdb07109bf18f9"},

@@ -1620,8 +1618,6 @@ files = [
     {file = "pillow-11.3.0-cp310-cp310-win_arm64.whl", hash = "sha256:819931d25e57b513242859ce1876c58c59dc31587847bf74cfe06b2e0cb22d2f"},
     {file = "pillow-11.3.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:1cd110edf822773368b396281a2293aeb91c90a2db00d78ea43e7e861631b722"},
     {file = "pillow-11.3.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:9c412fddd1b77a75aa904615ebaa6001f169b26fd467b4be93aded278266b288"},
-    {file = "pillow-11.3.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7d1aa4de119a0ecac0a34a9c8bde33f34022e2e8f99104e47a3ca392fd60e37d"},
-    {file = "pillow-11.3.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:91da1d88226663594e3f6b4b8c3c8d85bd504117d043740a8e0ec449087cc494"},
     {file = "pillow-11.3.0-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:643f189248837533073c405ec2f0bb250ba54598cf80e8c1e043381a60632f58"},
     {file = "pillow-11.3.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:106064daa23a745510dabce1d84f29137a37224831d88eb4ce94bb187b1d7e5f"},
     {file = "pillow-11.3.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:cd8ff254faf15591e724dc7c4ddb6bf4793efcbe13802a4ae3e863cd300b493e"},

@@ -1631,8 +1627,6 @@ files = [
     {file = "pillow-11.3.0-cp311-cp311-win_arm64.whl", hash = "sha256:30807c931ff7c095620fe04448e2c2fc673fcbb1ffe2a7da3fb39613489b1ddd"},
     {file = "pillow-11.3.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:fdae223722da47b024b867c1ea0be64e0df702c5e0a60e27daad39bf960dd1e4"},
     {file = "pillow-11.3.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:921bd305b10e82b4d1f5e802b6850677f965d8394203d182f078873851dada69"},
-    {file = "pillow-11.3.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:eb76541cba2f958032d79d143b98a3a6b3ea87f0959bbe256c0b5e416599fd5d"},
-    {file = "pillow-11.3.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:67172f2944ebba3d4a7b54f2e95c786a3a50c21b88456329314caaa28cda70f6"},
     {file = "pillow-11.3.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:97f07ed9f56a3b9b5f49d3661dc9607484e85c67e27f3e8be2c7d28ca032fec7"},
     {file = "pillow-11.3.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:676b2815362456b5b3216b4fd5bd89d362100dc6f4945154ff172e206a22c024"},
     {file = "pillow-11.3.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3e184b2f26ff146363dd07bde8b711833d7b0202e27d13540bfe2e35a323a809"},

@@ -1645,8 +1639,6 @@ files = [
     {file = "pillow-11.3.0-cp313-cp313-ios_13_0_x86_64_iphonesimulator.whl", hash = "sha256:7859a4cc7c9295f5838015d8cc0a9c215b77e43d07a25e460f35cf516df8626f"},
     {file = "pillow-11.3.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ec1ee50470b0d050984394423d96325b744d55c701a439d2bd66089bff963d3c"},
     {file = "pillow-11.3.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:7db51d222548ccfd274e4572fdbf3e810a5e66b00608862f947b163e613b67dd"},
-    {file = "pillow-11.3.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:2d6fcc902a24ac74495df63faad1884282239265c6839a0a6416d33faedfae7e"},
-    {file = "pillow-11.3.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f0f5d8f4a08090c6d6d578351a2b91acf519a54986c055af27e7a93feae6d3f1"},
     {file = "pillow-11.3.0-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c37d8ba9411d6003bba9e518db0db0c58a680ab9fe5179f040b0463644bc9805"},
     {file = "pillow-11.3.0-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:13f87d581e71d9189ab21fe0efb5a23e9f28552d5be6979e84001d3b8505abe8"},
     {file = "pillow-11.3.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:023f6d2d11784a465f09fd09a34b150ea4672e85fb3d05931d89f373ab14abb2"},

@@ -1656,8 +1648,6 @@ files = [
     {file = "pillow-11.3.0-cp313-cp313-win_arm64.whl", hash = "sha256:1904e1264881f682f02b7f8167935cce37bc97db457f8e7849dc3a6a52b99580"},
     {file = "pillow-11.3.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:4c834a3921375c48ee6b9624061076bc0a32a60b5532b322cc0ea64e639dd50e"},
     {file = "pillow-11.3.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:5e05688ccef30ea69b9317a9ead994b93975104a677a36a8ed8106be9260aa6d"},
-    {file = "pillow-11.3.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:1019b04af07fc0163e2810167918cb5add8d74674b6267616021ab558dc98ced"},
-    {file = "pillow-11.3.0-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f944255db153ebb2b19c51fe85dd99ef0ce494123f21b9db4877ffdfc5590c7c"},
     {file = "pillow-11.3.0-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1f85acb69adf2aaee8b7da124efebbdb959a104db34d3a2cb0f3793dbae422a8"},
     {file = "pillow-11.3.0-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:05f6ecbeff5005399bb48d198f098a9b4b6bdf27b8487c7f38ca16eeb070cd59"},
     {file = "pillow-11.3.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:a7bc6e6fd0395bc052f16b1a8670859964dbd7003bd0af2ff08342eb6e442cfe"},

@@ -1667,8 +1657,6 @@ files = [
     {file = "pillow-11.3.0-cp313-cp313t-win_arm64.whl", hash = "sha256:8797edc41f3e8536ae4b10897ee2f637235c94f27404cac7297f7b607dd0716e"},
     {file = "pillow-11.3.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:d9da3df5f9ea2a89b81bb6087177fb1f4d1c7146d583a3fe5c672c0d94e55e12"},
     {file = "pillow-11.3.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:0b275ff9b04df7b640c59ec5a3cb113eefd3795a8df80bac69646ef699c6981a"},
-    {file = "pillow-11.3.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0743841cabd3dba6a83f38a92672cccbd69af56e3e91777b0ee7f4dba4385632"},
-    {file = "pillow-11.3.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:2465a69cf967b8b49ee1b96d76718cd98c4e925414ead59fdf75cf0fd07df673"},
     {file = "pillow-11.3.0-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:41742638139424703b4d01665b807c6468e23e699e8e90cffefe291c5832b027"},
     {file = "pillow-11.3.0-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:93efb0b4de7e340d99057415c749175e24c8864302369e05914682ba642e5d77"},
     {file = "pillow-11.3.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7966e38dcd0fa11ca390aed7c6f20454443581d758242023cf36fcb319b1a874"},

@@ -1678,8 +1666,6 @@ files = [
     {file = "pillow-11.3.0-cp314-cp314-win_arm64.whl", hash = "sha256:155658efb5e044669c08896c0c44231c5e9abcaadbc5cd3648df2f7c0b96b9a6"},
     {file = "pillow-11.3.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:59a03cdf019efbfeeed910bf79c7c93255c3d54bc45898ac2a4140071b02b4ae"},
     {file = "pillow-11.3.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:f8a5827f84d973d8636e9dc5764af4f0cf2318d26744b3d902931701b0d46653"},
-    {file = "pillow-11.3.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ee92f2fd10f4adc4b43d07ec5e779932b4eb3dbfbc34790ada5a6669bc095aa6"},
-    {file = "pillow-11.3.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c96d333dcf42d01f47b37e0979b6bd73ec91eae18614864622d9b87bbd5bbf36"},
     {file = "pillow-11.3.0-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4c96f993ab8c98460cd0c001447bff6194403e8b1d7e149ade5f00594918128b"},
     {file = "pillow-11.3.0-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:41342b64afeba938edb034d122b2dda5db2139b9a4af999729ba8818e0056477"},
     {file = "pillow-11.3.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:068d9c39a2d1b358eb9f245ce7ab1b5c3246c7c8c7d9ba58cfa5b43146c06e50"},

@@ -1689,8 +1675,6 @@ files = [
     {file = "pillow-11.3.0-cp314-cp314t-win_arm64.whl", hash = "sha256:79ea0d14d3ebad43ec77ad5272e6ff9bba5b679ef73375ea760261207fa8e0aa"},
     {file = "pillow-11.3.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:48d254f8a4c776de343051023eb61ffe818299eeac478da55227d96e241de53f"},
     {file = "pillow-11.3.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:7aee118e30a4cf54fdd873bd3a29de51e29105ab11f9aad8c32123f58c8f8081"},
-    {file = "pillow-11.3.0-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:23cff760a9049c502721bdb743a7cb3e03365fafcdfc2ef9784610714166e5a4"},
-    {file = "pillow-11.3.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:6359a3bc43f57d5b375d1ad54a0074318a0844d11b76abccf478c37c986d3cfc"},
     {file = "pillow-11.3.0-cp39-cp39-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:092c80c76635f5ecb10f3f83d76716165c96f5229addbd1ec2bdbbda7d496e06"},
     {file = "pillow-11.3.0-cp39-cp39-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:cadc9e0ea0a2431124cde7e1697106471fc4c1da01530e679b2391c37d3fbb3a"},
     {file = "pillow-11.3.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:6a418691000f2a418c9135a7cf0d797c1bb7d9a485e61fe8e7722845b95ef978"},

@@ -1700,15 +1684,11 @@ files = [
     {file = "pillow-11.3.0-cp39-cp39-win_arm64.whl", hash = "sha256:6abdbfd3aea42be05702a8dd98832329c167ee84400a1d1f61ab11437f1717eb"},
     {file = "pillow-11.3.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:3cee80663f29e3843b68199b9d6f4f54bd1d4a6b59bdd91bceefc51238bcb967"},
     {file = "pillow-11.3.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:b5f56c3f344f2ccaf0dd875d3e180f631dc60a51b314295a3e681fe8cf851fbe"},
-    {file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:e67d793d180c9df62f1f40aee3accca4829d3794c95098887edc18af4b8b780c"},
-    {file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d000f46e2917c705e9fb93a3606ee4a819d1e3aa7a9b442f6444f07e77cf5e25"},
     {file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:527b37216b6ac3a12d7838dc3bd75208ec57c1c6d11ef01902266a5a0c14fc27"},
     {file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:be5463ac478b623b9dd3937afd7fb7ab3d79dd290a28e2b6df292dc75063eb8a"},
     {file = "pillow-11.3.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:8dc70ca24c110503e16918a658b869019126ecfe03109b754c402daff12b3d9f"},
     {file = "pillow-11.3.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:7c8ec7a017ad1bd562f93dbd8505763e688d388cde6e4a010ae1486916e713e6"},
     {file = "pillow-11.3.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:9ab6ae226de48019caa8074894544af5b53a117ccb9d3b3dcb2871464c829438"},
-    {file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:fe27fb049cdcca11f11a7bfda64043c37b30e6b91f10cb5bab275806c32f6ab3"},
-    {file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:465b9e8844e3c3519a983d58b80be3f668e2a7a5db97f2784e7079fbc9f9822c"},
     {file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5418b53c0d59b3824d05e029669efa023bbef0f3e92e75ec8428f3799487f361"},
     {file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:504b6f59505f08ae014f724b6207ff6222662aab5cc9542577fb084ed0676ac7"},
     {file = "pillow-11.3.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:c84d689db21a1c397d001aa08241044aa2069e7587b398c8cc63020390b1c1b8"},

@@ -1746,7 +1726,7 @@ description = "psycopg2 - Python-PostgreSQL Database Adapter"
 optional = true
 python-versions = ">=3.8"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"postgres\""
+markers = "extra == \"postgres\" or extra == \"all\""
 files = [
     {file = "psycopg2-2.9.10-cp310-cp310-win32.whl", hash = "sha256:5df2b672140f95adb453af93a7d669d7a7bf0a56bcd26f1502329166f4a61716"},
     {file = "psycopg2-2.9.10-cp310-cp310-win_amd64.whl", hash = "sha256:c6f7b8561225f9e711a9c47087388a97fdc948211c10a4bccbf0ba68ab7b3b5a"},

@@ -1754,7 +1734,6 @@ files = [
    {file = "psycopg2-2.9.10-cp311-cp311-win_amd64.whl", hash = "sha256:0435034157049f6846e95103bd8f5a668788dd913a7c30162ca9503fdf542cb4"},
    {file = "psycopg2-2.9.10-cp312-cp312-win32.whl", hash = "sha256:65a63d7ab0e067e2cdb3cf266de39663203d38d6a8ed97f5ca0cb315c73fe067"},
    {file = "psycopg2-2.9.10-cp312-cp312-win_amd64.whl", hash = "sha256:4a579d6243da40a7b3182e0430493dbd55950c493d8c68f4eec0b302f6bbf20e"},
    {file = "psycopg2-2.9.10-cp313-cp313-win_amd64.whl", hash = "sha256:91fd603a2155da8d0cfcdbf8ab24a2d54bca72795b90d2a3ed2b6da8d979dee2"},
    {file = "psycopg2-2.9.10-cp39-cp39-win32.whl", hash = "sha256:9d5b3b94b79a844a986d029eee38998232451119ad653aea42bb9220a8c5066b"},
    {file = "psycopg2-2.9.10-cp39-cp39-win_amd64.whl", hash = "sha256:88138c8dedcbfa96408023ea2b0c369eda40fe5d75002c0964c78f46f11fa442"},
    {file = "psycopg2-2.9.10.tar.gz", hash = "sha256:12ec0b40b0273f95296233e8750441339298e6a572f7039da5b260e3c8b60e11"},

@@ -1767,7 +1746,7 @@ description = ".. image:: https://travis-ci.org/chtd/psycopg2cffi.svg?branch=mas
 optional = true
 python-versions = "*"
 groups = ["main"]
-markers = "platform_python_implementation == \"PyPy\" and (extra == \"all\" or extra == \"postgres\")"
+markers = "platform_python_implementation == \"PyPy\" and (extra == \"postgres\" or extra == \"all\")"
 files = [
     {file = "psycopg2cffi-2.9.0.tar.gz", hash = "sha256:7e272edcd837de3a1d12b62185eb85c45a19feda9e62fa1b120c54f9e8d35c52"},
 ]

@@ -1783,7 +1762,7 @@ description = "A Simple library to enable psycopg2 compatability"
 optional = true
 python-versions = "*"
 groups = ["main"]
-markers = "platform_python_implementation == \"PyPy\" and (extra == \"all\" or extra == \"postgres\")"
+markers = "platform_python_implementation == \"PyPy\" and (extra == \"postgres\" or extra == \"all\")"
 files = [
     {file = "psycopg2cffi-compat-1.1.tar.gz", hash = "sha256:d25e921748475522b33d13420aad5c2831c743227dc1f1f2585e0fdb5c914e05"},
 ]

@@ -2042,7 +2021,7 @@ description = "A development tool to measure, monitor and analyze the memory beh
 optional = true
 python-versions = ">=3.6"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"cache-memory\""
+markers = "extra == \"cache-memory\" or extra == \"all\""
 files = [
     {file = "Pympler-1.0.1-py3-none-any.whl", hash = "sha256:d260dda9ae781e1eab6ea15bacb84015849833ba5555f141d2d9b7b7473b307d"},
     {file = "Pympler-1.0.1.tar.gz", hash = "sha256:993f1a3599ca3f4fcd7160c7545ad06310c9e12f70174ae7ae8d4e25f6c5d3fa"},

@@ -2102,7 +2081,7 @@ description = "Python implementation of SAML Version 2 Standard"
 optional = true
 python-versions = ">=3.9,<4.0"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"saml2\""
+markers = "extra == \"saml2\" or extra == \"all\""
 files = [
     {file = "pysaml2-7.5.0-py3-none-any.whl", hash = "sha256:bc6627cc344476a83c757f440a73fda1369f13b6fda1b4e16bca63ffbabb5318"},
     {file = "pysaml2-7.5.0.tar.gz", hash = "sha256:f36871d4e5ee857c6b85532e942550d2cf90ea4ee943d75eb681044bbc4f54f7"},

@@ -2127,7 +2106,7 @@ description = "Extensions to the standard Python datetime module"
 optional = true
 python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"saml2\""
+markers = "extra == \"saml2\" or extra == \"all\""
 files = [
     {file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
     {file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},

@@ -2155,7 +2134,7 @@ description = "World timezone definitions, modern and historical"
 optional = true
 python-versions = "*"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"saml2\""
+markers = "extra == \"saml2\" or extra == \"all\""
 files = [
     {file = "pytz-2022.7.1-py2.py3-none-any.whl", hash = "sha256:78f4f37d8198e0627c5f1143240bb0206b8691d8d7ac6d78fee88b78733f8c4a"},
     {file = "pytz-2022.7.1.tar.gz", hash = "sha256:01a0681c4b9684a28304615eba55d1ab31ae00bf68ec157ec3708a8182dbbcd0"},

@@ -2521,7 +2500,7 @@ description = "Python client for Sentry (https://sentry.io)"
 optional = true
 python-versions = ">=3.6"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"sentry\""
+markers = "extra == \"sentry\" or extra == \"all\""
 files = [
     {file = "sentry_sdk-2.34.1-py2.py3-none-any.whl", hash = "sha256:b7a072e1cdc5abc48101d5146e1ae680fa81fe886d8d95aaa25a0b450c818d32"},
     {file = "sentry_sdk-2.34.1.tar.gz", hash = "sha256:69274eb8c5c38562a544c3e9f68b5be0a43be4b697f5fd385bf98e4fbe672687"},

@@ -2709,7 +2688,7 @@ description = "Tornado IOLoop Backed Concurrent Futures"
 optional = true
 python-versions = "*"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"opentracing\""
+markers = "extra == \"opentracing\" or extra == \"all\""
 files = [
     {file = "threadloop-1.0.2-py2-none-any.whl", hash = "sha256:5c90dbefab6ffbdba26afb4829d2a9df8275d13ac7dc58dccb0e279992679599"},
     {file = "threadloop-1.0.2.tar.gz", hash = "sha256:8b180aac31013de13c2ad5c834819771992d350267bddb854613ae77ef571944"},

@@ -2725,7 +2704,7 @@ description = "Python bindings for the Apache Thrift RPC system"
 optional = true
 python-versions = "*"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"opentracing\""
+markers = "extra == \"opentracing\" or extra == \"all\""
 files = [
     {file = "thrift-0.16.0.tar.gz", hash = "sha256:2b5b6488fcded21f9d312aa23c9ff6a0195d0f6ae26ddbd5ad9e3e25dfc14408"},
 ]

@@ -2787,7 +2766,7 @@ description = "Tornado is a Python web framework and asynchronous networking lib
 optional = true
 python-versions = ">=3.9"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"opentracing\""
+markers = "extra == \"opentracing\" or extra == \"all\""
 files = [
     {file = "tornado-6.5-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:f81067dad2e4443b015368b24e802d0083fecada4f0a4572fdb72fc06e54a9a6"},
     {file = "tornado-6.5-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:9ac1cbe1db860b3cbb251e795c701c41d343f06a96049d6274e7c77559117e41"},

@@ -2924,7 +2903,7 @@ description = "non-blocking redis client for python"
 optional = true
 python-versions = "*"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"redis\""
+markers = "extra == \"redis\" or extra == \"all\""
 files = [
     {file = "txredisapi-1.4.11-py3-none-any.whl", hash = "sha256:ac64d7a9342b58edca13ef267d4fa7637c1aa63f8595e066801c1e8b56b22d0b"},
     {file = "txredisapi-1.4.11.tar.gz", hash = "sha256:3eb1af99aefdefb59eb877b1dd08861efad60915e30ad5bf3d5bf6c5cedcdbc6"},

@@ -3170,7 +3149,7 @@ description = "An XML Schema validator and decoder"
 optional = true
 python-versions = ">=3.7"
 groups = ["main"]
-markers = "extra == \"all\" or extra == \"saml2\""
+markers = "extra == \"saml2\" or extra == \"all\""
 files = [
     {file = "xmlschema-2.4.0-py3-none-any.whl", hash = "sha256:dc87be0caaa61f42649899189aab2fd8e0d567f2cf548433ba7b79278d231a4a"},
     {file = "xmlschema-2.4.0.tar.gz", hash = "sha256:d74cd0c10866ac609e1ef94a5a69b018ad16e39077bc6393408b40c6babee793"},

@@ -3314,4 +3293,4 @@ url-preview = ["lxml"]
 [metadata]
 lock-version = "2.1"
 python-versions = "^3.9.0"
-content-hash = "2e8ea085e1a0c6f0ac051d4bc457a96827d01f621b1827086de01a5ffa98cf79"
+content-hash = "0058b93ca13a3f2a0cfc28485ddd8202c42d0015dbaf3b9692e43f37fe2a0be6"
pyproject.toml

@@ -101,7 +101,7 @@ module-name = "synapse.synapse_rust"
 [tool.poetry]
 name = "matrix-synapse"
-version = "1.139.2"
+version = "1.141.0rc1"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "AGPL-3.0-or-later OR LicenseRef-Element-Commercial"

@@ -356,7 +356,7 @@ click = ">=8.1.3"
 # GitPython was == 3.1.14; bumped to 3.1.20, the first release with type hints.
 GitPython = ">=3.1.20"
 markdown-it-py = ">=3.0.0"
-pygithub = ">=1.55"
+pygithub = ">=1.59"
 # The following are executed as commands by the release script.
 twine = "*"
 # Towncrier min version comes from https://github.com/matrix-org/synapse/pull/3425. Rationale unclear.
@@ -1,5 +1,5 @@
 $schema: https://element-hq.github.io/synapse/latest/schema/v1/meta.schema.json
-$id: https://element-hq.github.io/synapse/schema/synapse/v1.139/synapse-config.schema.json
+$id: https://element-hq.github.io/synapse/schema/synapse/v1.141/synapse-config.schema.json
 type: object
 properties:
   modules:
@@ -4695,8 +4695,9 @@ properties:
   pepper:
     type: ["string", "null"]
     description: >-
-      Set the value here to a secret random string for extra security. DO
-      NOT CHANGE THIS AFTER INITIAL SETUP!
+      A secret random string that will be appended to user's passwords
+      before they are hashed. This improves the security of short passwords.
+      DO NOT CHANGE THIS AFTER INITIAL SETUP!
     default: null
   policy:
     type: object

@@ -37,6 +37,7 @@ from typing import Any, List, Match, Optional, Union
 import attr
 import click
 import git
+import github
 from click.exceptions import ClickException
 from git import GitCommandError, Repo
 from github import BadCredentialsException, Github
@@ -397,7 +398,7 @@ def _tag(gh_token: Optional[str]) -> None:
         return

     # Create a new draft release
-    gh = Github(gh_token)
+    gh = Github(auth=github.Auth.Token(token=gh_token))
     gh_repo = gh.get_repo("element-hq/synapse")
     release = gh_repo.create_git_release(
         tag=tag_name,

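The release-script hunk above tracks a PyGithub API change: constructing `Github` with a bare token string is deprecated in favour of an explicit credential object, which is also why the dependency floor moves to 1.59 (the first release shipping the `github.Auth` module). A minimal sketch of the new call style, assuming only a personal access token:

```python
# Sketch of the PyGithub >= 1.59 authentication style used in the hunk above.
import github
from github import Github


def get_synapse_repo(gh_token: str):
    # Wrap the raw token in an Auth.Token credential instead of passing it
    # positionally (the deprecated form this diff removes).
    gh = Github(auth=github.Auth.Token(token=gh_token))
    return gh.get_repo("element-hq/synapse")
```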
@@ -73,8 +73,18 @@ def main() -> None:

     pw = unicodedata.normalize("NFKC", password)

+    bytes_to_hash = pw.encode("utf8") + password_pepper.encode("utf8")
+    if len(bytes_to_hash) > 72:
+        # bcrypt only looks at the first 72 bytes
+        print(
+            f"Password + pepper is too long ({len(bytes_to_hash)} bytes); truncating to 72 bytes for bcrypt. "
+            "This is expected behaviour and will not affect a user's ability to log in. 72 bytes is "
+            "sufficient entropy for a password."
+        )
+        bytes_to_hash = bytes_to_hash[:72]
+
     hashed = bcrypt.hashpw(
-        pw.encode("utf8") + password_pepper.encode("utf8"),
+        bytes_to_hash,
         bcrypt.gensalt(bcrypt_rounds),
     ).decode("ascii")

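The hunk above exists because bcrypt only ever uses the first 72 bytes of its input, and recent releases of the Python `bcrypt` package reject longer inputs outright rather than truncating silently — hence the internal server error this release fixes when hashing very long passwords. A self-contained sketch of the workaround (hypothetical helper name):

```python
import bcrypt


def hash_with_pepper(password: str, pepper: str, rounds: int = 12) -> str:
    # bcrypt ignores everything past 72 bytes, and newer bcrypt releases
    # raise ValueError for longer inputs, so truncate explicitly up front.
    to_hash = (password.encode("utf8") + pepper.encode("utf8"))[:72]
    return bcrypt.hashpw(to_hash, bcrypt.gensalt(rounds)).decode("ascii")
```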
@@ -302,12 +302,9 @@ class BaseAuth:
           (the user_id URI parameter allows an application service to masquerade
           any applicable user in its namespace)
         - what device the application service should be treated as controlling
-          (the device_id[^1] URI parameter allows an application service to masquerade
+          (the device_id URI parameter allows an application service to masquerade
           as any device that exists for the relevant user)

-        [^1] Unstable and provided by MSC3202.
-             Must use `org.matrix.msc3202.device_id` in place of `device_id` for now.
-
         Returns:
             the application service `Requester` of that request

@@ -319,7 +316,8 @@ class BaseAuth:
         - The returned device ID, if present, has been checked to be a valid device ID
           for the returned user ID.
         """
-        DEVICE_ID_ARG_NAME = b"org.matrix.msc3202.device_id"
+        # TODO: We can drop unstable support after 2026-01-01 (couple months after stable support)
+        UNSTABLE_DEVICE_ID_ARG_NAME = b"org.matrix.msc3202.device_id"

         app_service = self.store.get_app_service_by_token(access_token)
         if app_service is None:
@@ -341,13 +339,11 @@ class BaseAuth:
         else:
             effective_user_id = app_service.sender

         effective_device_id: Optional[str] = None

-        if (
-            self.hs.config.experimental.msc3202_device_masquerading_enabled
-            and DEVICE_ID_ARG_NAME in request.args
-        ):
-            effective_device_id = request.args[DEVICE_ID_ARG_NAME][0].decode("utf8")
-            # We only just set this so it can't be None!
-            assert effective_device_id is not None
+        effective_device_id_args = request.args.get(
+            b"device_id", request.args.get(UNSTABLE_DEVICE_ID_ARG_NAME)
+        )
+        if effective_device_id_args:
+            effective_device_id = effective_device_id_args[0].decode("utf8")
             device_opt = await self.store.get_device(
@@ -359,6 +355,8 @@ class BaseAuth:
                     f"Application service trying to use a device that doesn't exist ('{effective_device_id}' for {effective_user_id})",
                     Codes.UNKNOWN_DEVICE,
                 )
+        else:
+            effective_device_id = None

         return create_requester(
             effective_user_id, app_service=app_service, device_id=effective_device_id

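The masquerading hunk above swaps the experimental-flag gate for a stable query parameter with a fallback to the unstable MSC3202 name. The lookup idiom is just a nested `dict.get`; a sketch with hypothetical names:

```python
from typing import Dict, List, Optional


def effective_device_id(args: Dict[bytes, List[bytes]]) -> Optional[str]:
    # Prefer the stable `device_id` parameter; fall back to the unstable
    # MSC3202 name while it is still accepted.
    values = args.get(b"device_id", args.get(b"org.matrix.msc3202.device_id"))
    return values[0].decode("utf8") if values else None
```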
@@ -64,7 +64,6 @@ from twisted.web.resource import Resource
 import synapse.util.caches
 from synapse.api.constants import MAX_PDU_SIZE
 from synapse.app import check_bind_error
-from synapse.app.phone_stats_home import start_phone_stats_home
 from synapse.config import ConfigError
 from synapse.config._base import format_config_error
 from synapse.config.homeserver import HomeServerConfig
@@ -592,9 +591,9 @@ async def start(hs: "HomeServer", freeze: bool = True) -> None:
     # we're not using systemd.
     sdnotify(b"RELOADING=1")

-    for sighup_callbacks in _instance_id_to_sighup_callbacks_map.values():
-        for func, args, kwargs in sighup_callbacks:
-            func(*args, **kwargs)
+    for sighup_callbacks in _instance_id_to_sighup_callbacks_map.values():
+        for func, args, kwargs in sighup_callbacks:
+            func(*args, **kwargs)

     sdnotify(b"READY=1")

@@ -683,15 +682,6 @@ async def start(hs: "HomeServer", freeze: bool = True) -> None:
     if hs.config.worker.run_background_tasks:
         hs.start_background_tasks()

-    # TODO: This should be moved to same pattern we use for other background tasks:
-    # Add to `REQUIRED_ON_BACKGROUND_TASK_STARTUP` and rely on
-    # `start_background_tasks` to start it.
-    await hs.get_common_usage_metrics_manager().setup()
-
-    # TODO: This feels like another pattern that should refactored as one of the
-    # `REQUIRED_ON_BACKGROUND_TASK_STARTUP`
-    start_phone_stats_home(hs)
-
     if freeze:
         # We now freeze all allocated objects in the hopes that (almost)
         # everything currently allocated are things that will be used for the

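The sdnotify hunk above touches the systemd reload handshake: a unit that advertises reload support must send `RELOADING=1` before re-running its SIGHUP callbacks and `READY=1` afterwards on every reload, otherwise systemd treats the service as stuck after the first one. A minimal sketch of the notify protocol itself (not Synapse's implementation):

```python
import os
import socket


def sdnotify(state: bytes) -> None:
    """Best-effort sd_notify(3): silently do nothing outside systemd."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return
    if addr.startswith("@"):
        # A leading '@' denotes a Linux abstract-namespace socket.
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.connect(addr)
        sock.send(state)
```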
@@ -430,9 +430,7 @@ def setup(

     await _base.start(hs, freeze)

-    # TODO: This should be moved to `SynapseHomeServer.start_background_tasks` (not
-    # `HomeServer.start_background_tasks`) (this way it matches the behavior of only
-    # running on `main`)
+    # TODO: Feels like this should be moved somewhere else.
     hs.get_datastores().main.db_pool.updates.start_doing_background_updates()

     # Register a callback to be invoked once the reactor is running

@@ -412,11 +412,6 @@ class ExperimentalConfig(Config):
             "msc2409_to_device_messages_enabled", False
         )

-        # The portion of MSC3202 which is related to device masquerading.
-        self.msc3202_device_masquerading_enabled: bool = experimental.get(
-            "msc3202_device_masquerading", False
-        )
-
         # The portion of MSC3202 related to transaction extensions:
         # sending device list changes, one-time key counts and fallback key
         # usage to application services.

@@ -1683,8 +1683,22 @@ class AuthHandler:
             # Normalise the Unicode in the password
             pw = unicodedata.normalize("NFKC", password)

+            bytes_to_hash = pw.encode(
+                "utf8"
+            ) + self.hs.config.auth.password_pepper.encode("utf8")
+            if len(bytes_to_hash) > 72:
+                # bcrypt only looks at the first 72 bytes.
+                #
+                # Note: we explicitly DO NOT log the length of the user's password here.
+                logger.debug(
+                    "Password + pepper is too long; truncating to 72 bytes for bcrypt. "
+                    "This is expected behaviour and will not affect a user's ability to log in. 72 bytes is "
+                    "sufficient entropy for a password."
+                )
+                bytes_to_hash = bytes_to_hash[:72]
+
             return bcrypt.hashpw(
-                pw.encode("utf8") + self.hs.config.auth.password_pepper.encode("utf8"),
+                bytes_to_hash,
                 bcrypt.gensalt(self.bcrypt_rounds),
             ).decode("ascii")
@@ -1706,9 +1720,20 @@ class AuthHandler:
         def _do_validate_hash(checked_hash: bytes) -> bool:
             # Normalise the Unicode in the password
             pw = unicodedata.normalize("NFKC", password)
+            password_pepper = self.hs.config.auth.password_pepper

+            bytes_to_hash = pw.encode("utf8") + password_pepper.encode("utf8")
+            if len(bytes_to_hash) > 72:
+                # bcrypt only looks at the first 72 bytes
+                logger.debug(
+                    "Password + pepper is too long; truncating to 72 bytes for bcrypt. "
+                    "This is expected behaviour and will not affect a user's ability to log in. 72 bytes is "
+                    "sufficient entropy for a password."
+                )
+                bytes_to_hash = bytes_to_hash[:72]
+
             return bcrypt.checkpw(
-                pw.encode("utf8") + self.hs.config.auth.password_pepper.encode("utf8"),
+                bytes_to_hash,
                 checked_hash,
             )

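Both `AuthHandler` hunks above apply the same truncation, and that symmetry matters: a password hashed after truncation only verifies if validation truncates identically. A sketch of the paired operations, with hypothetical helper names:

```python
import bcrypt


def _truncate(password: str, pepper: str) -> bytes:
    # Must match exactly between hashing and checking, or long passwords
    # hashed under this scheme would stop verifying.
    return (password.encode("utf8") + pepper.encode("utf8"))[:72]


def hash_password(password: str, pepper: str, rounds: int = 12) -> str:
    return bcrypt.hashpw(
        _truncate(password, pepper), bcrypt.gensalt(rounds)
    ).decode("ascii")


def validate_password(password: str, pepper: str, stored_hash: bytes) -> bool:
    return bcrypt.checkpw(_truncate(password, pepper), stored_hash)
```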
@@ -62,7 +62,7 @@ class CommonUsageMetricsManager:
         """
         return await self._collect()

-    async def setup(self) -> None:
+    def setup(self) -> None:
         """Keep the gauges for common usage metrics up to date."""
         self._hs.run_as_background_process(
             desc="common_usage_metrics_update_gauges",

@@ -112,7 +112,7 @@ class DeleteDevicesRestServlet(RestServlet):
         else:
             raise e

-        if requester.app_service and requester.app_service.msc4190_device_management:
+        if requester.app_service:
             # MSC4190 can skip UIA for this endpoint
             pass
         else:
@@ -192,7 +192,7 @@ class DeviceRestServlet(RestServlet):
         else:
             raise

-        if requester.app_service and requester.app_service.msc4190_device_management:
+        if requester.app_service:
             # MSC4190 allows appservices to delete devices through this endpoint without UIA
             # It's also allowed with MSC3861 enabled
             pass
@@ -227,7 +227,7 @@ class DeviceRestServlet(RestServlet):
         body = parse_and_validate_json_object_from_request(request, self.PutBody)

         # MSC4190 allows appservices to create devices through this endpoint
-        if requester.app_service and requester.app_service.msc4190_device_management:
+        if requester.app_service:
             created = await self.device_handler.upsert_device(
                 user_id=requester.user.to_string(),
                 device_id=device_id,
@@ -543,15 +543,11 @@ class SigningKeyUploadServlet(RestServlet):
         if not keys_are_different:
             return 200, {}

-        # MSC4190 can skip UIA for replacing cross-signing keys as well.
-        is_appservice_with_msc4190 = (
-            requester.app_service and requester.app_service.msc4190_device_management
-        )
-
         # The keys are different; is x-signing set up? If no, then this is first-time
         # setup, and that is allowed without UIA, per MSC3967.
         # If yes, then we need to authenticate the change.
-        if is_cross_signing_setup and not is_appservice_with_msc4190:
+        # MSC4190 can skip UIA for replacing cross-signing keys as well.
+        if is_cross_signing_setup and not requester.app_service:
             # With MSC3861, UIA is not possible. Instead, the auth service has to
             # explicitly mark the master key as replaceable.
             if self.hs.config.mas.enabled:

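The servlet hunks above all drop the `msc4190_device_management` opt-in: with MSC4190 behaviour now available unconditionally, being an application-service requester is enough to skip user-interactive auth (UIA) on these endpoints. Condensed to its essence (hypothetical helper):

```python
def can_skip_uia(requester) -> bool:
    # Before: requester.app_service and requester.app_service.msc4190_device_management
    # After:  any application-service requester qualifies.
    return requester.app_service is not None
```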
@@ -62,6 +62,7 @@ from synapse.api.auth_blocking import AuthBlocking
 from synapse.api.filtering import Filtering
 from synapse.api.ratelimiting import Ratelimiter, RequestRatelimiter
 from synapse.app._base import unregister_sighups
+from synapse.app.phone_stats_home import start_phone_stats_home
 from synapse.appservice.api import ApplicationServiceApi
 from synapse.appservice.scheduler import ApplicationServiceScheduler
 from synapse.config.homeserver import HomeServerConfig
@@ -643,6 +644,8 @@ class HomeServer(metaclass=abc.ABCMeta):
         for i in self.REQUIRED_ON_BACKGROUND_TASK_STARTUP:
             getattr(self, "get_" + i + "_handler")()
         self.get_task_scheduler()
+        self.get_common_usage_metrics_manager().setup()
+        start_phone_stats_home(self)

     def get_reactor(self) -> ISynapseReactor:
         """

@@ -323,9 +323,13 @@ class DynamicCollectorRegistry(CollectorRegistry):
         if server_hooks.get(metric_name) is not None:
             # TODO: This should be an `assert` since registering the same metric name
             # multiple times will clobber the old metric.
-            # We currently rely on this behaviour as we instantiate multiple
-            # `SyncRestServlet`, one per listener, and in the `__init__` we setup a new
-            # LruCache.
+            # We currently rely on this behaviour in a few places:
+            # - We instantiate multiple `SyncRestServlet`, one per listener, and in the
+            #   `__init__` we setup a new `LruCache`.
+            # - We instantiate multiple `ApplicationService` (one per configured
+            #   application service) which use the `@cached` decorator on some methods.
             #
+            # Once the above behaviour is changed, this should be changed to an `assert`.
             logger.error(
                 "Metric named %s already registered for server %s",

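For contrast with the clobbering behaviour the TODO above describes, the plain `prometheus_client` registry refuses duplicate names outright, which is what the eventual `assert` would emulate. A runnable sketch:

```python
from prometheus_client import CollectorRegistry, Gauge

registry = CollectorRegistry()
Gauge("synapse_example_metric", "first registration", registry=registry)
try:
    Gauge("synapse_example_metric", "second registration", registry=registry)
except ValueError:
    # prometheus_client rejects duplicated timeseries; the dynamic registry
    # in the hunk above instead overwrites the earlier hook silently.
    print("duplicate metric name rejected")
```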
@@ -42,7 +42,6 @@ from synapse.types import Requester, UserID
 from synapse.util.clock import Clock

 from tests import unittest
-from tests.unittest import override_config
 from tests.utils import mock_getRawHeaders


@@ -237,7 +236,6 @@ class AuthTestCase(unittest.HomeserverTestCase):
         request.requestHeaders.getRawHeaders = mock_getRawHeaders()
         self.get_failure(self.auth.get_user_by_req(request), AuthError)

-    @override_config({"experimental_features": {"msc3202_device_masquerading": True}})
     def test_get_user_by_req_appservice_valid_token_valid_device_id(self) -> None:
         """
         Tests that when an application service passes the device_id URL parameter
@@ -264,7 +262,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
         request.getClientAddress.return_value.host = "127.0.0.1"
         request.args[b"access_token"] = [self.test_token]
         request.args[b"user_id"] = [masquerading_user_id]
-        request.args[b"org.matrix.msc3202.device_id"] = [masquerading_device_id]
+        request.args[b"device_id"] = [masquerading_device_id]
         request.requestHeaders.getRawHeaders = mock_getRawHeaders()
         requester = self.get_success(self.auth.get_user_by_req(request))
         self.assertEqual(
@@ -272,7 +270,6 @@ class AuthTestCase(unittest.HomeserverTestCase):
         )
         self.assertEqual(requester.device_id, masquerading_device_id.decode("utf8"))

-    @override_config({"experimental_features": {"msc3202_device_masquerading": True}})
     def test_get_user_by_req_appservice_valid_token_invalid_device_id(self) -> None:
         """
         Tests that when an application service passes the device_id URL parameter
@@ -299,7 +296,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
         request.getClientAddress.return_value.host = "127.0.0.1"
         request.args[b"access_token"] = [self.test_token]
         request.args[b"user_id"] = [masquerading_user_id]
-        request.args[b"org.matrix.msc3202.device_id"] = [masquerading_device_id]
+        request.args[b"device_id"] = [masquerading_device_id]
         request.requestHeaders.getRawHeaders = mock_getRawHeaders()

         failure = self.get_failure(self.auth.get_user_by_req(request), AuthError)

@@ -214,7 +214,12 @@ class BaseStreamTestCase(unittest.HomeserverTestCase):
         client_to_server_transport.loseConnection()

         # there should have been exactly one request
-        self.assertEqual(len(requests), 1)
+        self.assertEqual(
+            len(requests),
+            1,
+            "Expected to handle exactly one HTTP replication request but saw %d - requests=%s"
+            % (len(requests), requests),
+        )

         return requests[0]

@@ -46,28 +46,39 @@ class AccountDataStreamTestCase(BaseStreamTestCase):

         # check we're testing what we think we are: no rows should yet have been
         # received
-        self.assertEqual([], self.test_handler.received_rdata_rows)
+        received_account_data_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == AccountDataStream.NAME
+        ]
+        self.assertEqual([], received_account_data_rows)

         # now reconnect to pull the updates
         self.reconnect()
         self.replicate()

-        # we should have received all the expected rows in the right order
-        received_rows = self.test_handler.received_rdata_rows
+        # We should have received all the expected rows in the right order
+        #
+        # Filter the updates to only include account data changes
+        received_account_data_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == AccountDataStream.NAME
+        ]

         for t in updates:
-            (stream_name, token, row) = received_rows.pop(0)
+            (stream_name, token, row) = received_account_data_rows.pop(0)
             self.assertEqual(stream_name, AccountDataStream.NAME)
             self.assertIsInstance(row, AccountDataStream.AccountDataStreamRow)
             self.assertEqual(row.data_type, t)
             self.assertEqual(row.room_id, "test_room")

-        (stream_name, token, row) = received_rows.pop(0)
+        (stream_name, token, row) = received_account_data_rows.pop(0)
         self.assertIsInstance(row, AccountDataStream.AccountDataStreamRow)
         self.assertEqual(row.data_type, "m.global")
         self.assertIsNone(row.room_id)

-        self.assertEqual([], received_rows)
+        self.assertEqual([], received_account_data_rows)

     def test_update_function_global_account_data_limit(self) -> None:
         """Test replication with many global account data updates"""
@@ -85,32 +96,38 @@ class AccountDataStreamTestCase(BaseStreamTestCase):
             store.add_account_data_to_room("test_user", "test_room", "m.per_room", {})
         )

         # tell the notifier to catch up to avoid duplicate rows.
         # workaround for https://github.com/matrix-org/synapse/issues/7360
         # FIXME remove this when the above is fixed
         self.replicate()

         # check we're testing what we think we are: no rows should yet have been
         # received
-        self.assertEqual([], self.test_handler.received_rdata_rows)
+        received_account_data_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == AccountDataStream.NAME
+        ]
+        self.assertEqual([], received_account_data_rows)

         # now reconnect to pull the updates
         self.reconnect()
         self.replicate()

         # we should have received all the expected rows in the right order
-        received_rows = self.test_handler.received_rdata_rows
+        #
+        # Filter the updates to only include typing changes
+        received_account_data_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == AccountDataStream.NAME
+        ]

         for t in updates:
-            (stream_name, token, row) = received_rows.pop(0)
+            (stream_name, token, row) = received_account_data_rows.pop(0)
             self.assertEqual(stream_name, AccountDataStream.NAME)
             self.assertIsInstance(row, AccountDataStream.AccountDataStreamRow)
             self.assertEqual(row.data_type, t)
             self.assertIsNone(row.room_id)

-        (stream_name, token, row) = received_rows.pop(0)
+        (stream_name, token, row) = received_account_data_rows.pop(0)
         self.assertIsInstance(row, AccountDataStream.AccountDataStreamRow)
         self.assertEqual(row.data_type, "m.per_room")
         self.assertEqual(row.room_id, "test_room")

-        self.assertEqual([], received_rows)
+        self.assertEqual([], received_account_data_rows)

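The same filter-by-stream-name idiom recurs in every replication stream test touched below; extracted as a hypothetical helper it is simply:

```python
from typing import Any, List, Tuple


def rows_for_stream(
    received: List[Tuple[str, int, Any]], stream_name: str
) -> List[Tuple[str, int, Any]]:
    """Keep only the (stream, token, row) tuples belonging to one stream."""
    return [row for row in received if row[0] == stream_name]
```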
@@ -30,6 +30,7 @@ from synapse.replication.tcp.commands import RdataCommand
 from synapse.replication.tcp.streams._base import _STREAM_UPDATE_TARGET_ROW_COUNT
 from synapse.replication.tcp.streams.events import (
     _MAX_STATE_UPDATES_PER_ROOM,
+    EventsStream,
     EventsStreamAllStateRow,
     EventsStreamCurrentStateRow,
     EventsStreamEventRow,
@@ -82,7 +83,12 @@ class EventsStreamTestCase(BaseStreamTestCase):

         # check we're testing what we think we are: no rows should yet have been
         # received
-        self.assertEqual([], self.test_handler.received_rdata_rows)
+        received_event_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == EventsStream.NAME
+        ]
+        self.assertEqual([], received_event_rows)

         # now reconnect to pull the updates
         self.reconnect()
@@ -90,31 +96,34 @@ class EventsStreamTestCase(BaseStreamTestCase):

         # we should have received all the expected rows in the right order (as
         # well as various cache invalidation updates which we ignore)
-        received_rows = [
-            row for row in self.test_handler.received_rdata_rows if row[0] == "events"
+        #
+        # Filter the updates to only include event changes
+        received_event_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == EventsStream.NAME
         ]

         for event in events:
-            stream_name, token, row = received_rows.pop(0)
-            self.assertEqual("events", stream_name)
+            stream_name, token, row = received_event_rows.pop(0)
+            self.assertEqual(EventsStream.NAME, stream_name)
             self.assertIsInstance(row, EventsStreamRow)
             self.assertEqual(row.type, "ev")
             self.assertIsInstance(row.data, EventsStreamEventRow)
             self.assertEqual(row.data.event_id, event.event_id)

-        stream_name, token, row = received_rows.pop(0)
+        stream_name, token, row = received_event_rows.pop(0)
         self.assertIsInstance(row, EventsStreamRow)
         self.assertIsInstance(row.data, EventsStreamEventRow)
         self.assertEqual(row.data.event_id, state_event.event_id)

-        stream_name, token, row = received_rows.pop(0)
+        stream_name, token, row = received_event_rows.pop(0)
         self.assertEqual("events", stream_name)
         self.assertIsInstance(row, EventsStreamRow)
         self.assertEqual(row.type, "state")
         self.assertIsInstance(row.data, EventsStreamCurrentStateRow)
         self.assertEqual(row.data.event_id, state_event.event_id)

-        self.assertEqual([], received_rows)
+        self.assertEqual([], received_event_rows)

     @parameterized.expand(
         [(_STREAM_UPDATE_TARGET_ROW_COUNT, False), (_MAX_STATE_UPDATES_PER_ROOM, True)]
@@ -170,9 +179,12 @@ class EventsStreamTestCase(BaseStreamTestCase):

         self.replicate()
         # all those events and state changes should have landed
-        self.assertGreaterEqual(
-            len(self.test_handler.received_rdata_rows), 2 * len(events)
-        )
+        received_event_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == EventsStream.NAME
+        ]
+        self.assertGreaterEqual(len(received_event_rows), 2 * len(events))

         # disconnect, so that we can stack up the changes
         self.disconnect()
@@ -202,7 +214,12 @@ class EventsStreamTestCase(BaseStreamTestCase):

         # check we're testing what we think we are: no rows should yet have been
         # received
-        self.assertEqual([], self.test_handler.received_rdata_rows)
+        received_event_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == EventsStream.NAME
+        ]
+        self.assertEqual([], received_event_rows)

         # now reconnect to pull the updates
         self.reconnect()
@@ -218,33 +235,34 @@ class EventsStreamTestCase(BaseStreamTestCase):
         # of the states that got reverted.
         # - two rows for state2

-        received_rows = [
-            row for row in self.test_handler.received_rdata_rows if row[0] == "events"
+        received_event_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == EventsStream.NAME
         ]

         # first check the first two rows, which should be the state1 event.
-        stream_name, token, row = received_rows.pop(0)
+        stream_name, token, row = received_event_rows.pop(0)
         self.assertEqual("events", stream_name)
         self.assertIsInstance(row, EventsStreamRow)
         self.assertEqual(row.type, "ev")
         self.assertIsInstance(row.data, EventsStreamEventRow)
         self.assertEqual(row.data.event_id, state1.event_id)

-        stream_name, token, row = received_rows.pop(0)
+        stream_name, token, row = received_event_rows.pop(0)
         self.assertIsInstance(row, EventsStreamRow)
         self.assertEqual(row.type, "state")
         self.assertIsInstance(row.data, EventsStreamCurrentStateRow)
         self.assertEqual(row.data.event_id, state1.event_id)

         # now the last two rows, which should be the state2 event.
-        stream_name, token, row = received_rows.pop(-2)
+        stream_name, token, row = received_event_rows.pop(-2)
         self.assertEqual("events", stream_name)
         self.assertIsInstance(row, EventsStreamRow)
         self.assertEqual(row.type, "ev")
         self.assertIsInstance(row.data, EventsStreamEventRow)
         self.assertEqual(row.data.event_id, state2.event_id)

-        stream_name, token, row = received_rows.pop(-1)
+        stream_name, token, row = received_event_rows.pop(-1)
         self.assertIsInstance(row, EventsStreamRow)
         self.assertEqual(row.type, "state")
         self.assertIsInstance(row.data, EventsStreamCurrentStateRow)
@@ -254,16 +272,16 @@ class EventsStreamTestCase(BaseStreamTestCase):
         if collapse_state_changes:
             # that should leave us with the rows for the PL event, the state changes
             # get collapsed into a single row.
-            self.assertEqual(len(received_rows), 2)
+            self.assertEqual(len(received_event_rows), 2)

-            stream_name, token, row = received_rows.pop(0)
+            stream_name, token, row = received_event_rows.pop(0)
             self.assertEqual("events", stream_name)
             self.assertIsInstance(row, EventsStreamRow)
             self.assertEqual(row.type, "ev")
             self.assertIsInstance(row.data, EventsStreamEventRow)
             self.assertEqual(row.data.event_id, pl_event.event_id)

-            stream_name, token, row = received_rows.pop(0)
+            stream_name, token, row = received_event_rows.pop(0)
             self.assertIsInstance(row, EventsStreamRow)
             self.assertEqual(row.type, "state-all")
             self.assertIsInstance(row.data, EventsStreamAllStateRow)
@@ -271,9 +289,9 @@ class EventsStreamTestCase(BaseStreamTestCase):

         else:
             # that should leave us with the rows for the PL event
-            self.assertEqual(len(received_rows), len(events) + 2)
+            self.assertEqual(len(received_event_rows), len(events) + 2)

-            stream_name, token, row = received_rows.pop(0)
+            stream_name, token, row = received_event_rows.pop(0)
             self.assertEqual("events", stream_name)
             self.assertIsInstance(row, EventsStreamRow)
             self.assertEqual(row.type, "ev")
@@ -282,7 +300,7 @@ class EventsStreamTestCase(BaseStreamTestCase):

         # the state rows are unsorted
         state_rows: List[EventsStreamCurrentStateRow] = []
-        for stream_name, _, row in received_rows:
+        for stream_name, _, row in received_event_rows:
             self.assertEqual("events", stream_name)
             self.assertIsInstance(row, EventsStreamRow)
             self.assertEqual(row.type, "state")
@@ -346,9 +364,12 @@ class EventsStreamTestCase(BaseStreamTestCase):
         self.replicate()

         # all those events and state changes should have landed
-        self.assertGreaterEqual(
-            len(self.test_handler.received_rdata_rows), 2 * len(events)
-        )
+        received_event_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == EventsStream.NAME
+        ]
+        self.assertGreaterEqual(len(received_event_rows), 2 * len(events))

         # disconnect, so that we can stack up the changes
         self.disconnect()
@@ -375,7 +396,12 @@ class EventsStreamTestCase(BaseStreamTestCase):

         # check we're testing what we think we are: no rows should yet have been
         # received
-        self.assertEqual([], self.test_handler.received_rdata_rows)
+        received_event_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == EventsStream.NAME
+        ]
+        self.assertEqual([], received_event_rows)

         # now reconnect to pull the updates
         self.reconnect()
@@ -383,14 +409,16 @@ class EventsStreamTestCase(BaseStreamTestCase):

         # we should have received all the expected rows in the right order (as
         # well as various cache invalidation updates which we ignore)
-        received_rows = [
-            row for row in self.test_handler.received_rdata_rows if row[0] == "events"
+        received_event_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == EventsStream.NAME
         ]
-        self.assertGreaterEqual(len(received_rows), len(events))
+        self.assertGreaterEqual(len(received_event_rows), len(events))
         for i in range(NUM_USERS):
             # for each user, we expect the PL event row, followed by state rows for
             # the PL event and each of the states that got reverted.
-            stream_name, token, row = received_rows.pop(0)
+            stream_name, token, row = received_event_rows.pop(0)
             self.assertEqual("events", stream_name)
             self.assertIsInstance(row, EventsStreamRow)
             self.assertEqual(row.type, "ev")
@@ -400,7 +428,7 @@ class EventsStreamTestCase(BaseStreamTestCase):
         # the state rows are unsorted
         state_rows: List[EventsStreamCurrentStateRow] = []
         for _ in range(STATES_PER_USER + 1):
-            stream_name, token, row = received_rows.pop(0)
+            stream_name, token, row = received_event_rows.pop(0)
             self.assertEqual("events", stream_name)
             self.assertIsInstance(row, EventsStreamRow)
             self.assertEqual(row.type, "state")
@@ -417,7 +445,7 @@ class EventsStreamTestCase(BaseStreamTestCase):
             # "None" indicates the state has been deleted
             self.assertIsNone(sr.event_id)

-        self.assertEqual([], received_rows)
+        self.assertEqual([], received_event_rows)

     def test_backwards_stream_id(self) -> None:
         """
@@ -432,7 +460,12 @@ class EventsStreamTestCase(BaseStreamTestCase):

         # check we're testing what we think we are: no rows should yet have been
         # received
-        self.assertEqual([], self.test_handler.received_rdata_rows)
+        received_event_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == EventsStream.NAME
+        ]
+        self.assertEqual([], received_event_rows)

         # now reconnect to pull the updates
         self.reconnect()
@@ -440,14 +473,16 @@ class EventsStreamTestCase(BaseStreamTestCase):

         # We should have received the expected single row (as well as various
         # cache invalidation updates which we ignore).
-        received_rows = [
-            row for row in self.test_handler.received_rdata_rows if row[0] == "events"
+        received_event_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == EventsStream.NAME
         ]

         # There should be a single received row.
-        self.assertEqual(len(received_rows), 1)
+        self.assertEqual(len(received_event_rows), 1)

-        stream_name, token, row = received_rows[0]
+        stream_name, token, row = received_event_rows[0]
         self.assertEqual("events", stream_name)
         self.assertIsInstance(row, EventsStreamRow)
         self.assertEqual(row.type, "ev")
@@ -468,10 +503,12 @@ class EventsStreamTestCase(BaseStreamTestCase):
         )

         # No updates have been received (because it was discard as old).
-        received_rows = [
-            row for row in self.test_handler.received_rdata_rows if row[0] == "events"
+        received_event_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == EventsStream.NAME
         ]
-        self.assertEqual(len(received_rows), 0)
+        self.assertEqual(len(received_event_rows), 0)

         # Ensure the stream has not gone backwards.
         current_token = worker_events_stream.current_token("master")

@@ -38,24 +38,45 @@ class FederationStreamTestCase(BaseStreamTestCase):
         Makes sure that updates sent while we are offline are received later.
         """
         fed_sender = self.hs.get_federation_sender()
-        received_rows = self.test_handler.received_rdata_rows

         # Send an update before we connect
         fed_sender.build_and_send_edu("testdest", "m.test_edu", {"a": "b"})

         # Now reconnect and pull the updates
         self.reconnect()
+        # FIXME: This seems odd, why aren't we calling `self.replicate()` here? but also
+        # doing so, causes other assumptions to fail (multiple HTTP replication attempts
+        # are made).
         self.reactor.advance(0)

-        # check we're testing what we think we are: no rows should yet have been
+        # Check we're testing what we think we are: no rows should yet have been
         # received
-        self.assertEqual(received_rows, [])
+        #
+        # Filter the updates to only include typing changes
+        received_federation_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == FederationStream.NAME
+        ]
+        self.assertEqual(received_federation_rows, [])

         # We should now see an attempt to connect to the master
         request = self.handle_http_replication_attempt()
-        self.assert_request_is_get_repl_stream_updates(request, "federation")
+        self.assert_request_is_get_repl_stream_updates(request, FederationStream.NAME)

-        # we should have received an update row
-        stream_name, token, row = received_rows.pop()
-        self.assertEqual(stream_name, "federation")
+        received_federation_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == FederationStream.NAME
+        ]
+        self.assertEqual(
+            len(received_federation_rows),
+            1,
+            "Expected exactly one row for the federation stream",
+        )
+        (stream_name, token, row) = received_federation_rows[0]
+        self.assertEqual(stream_name, FederationStream.NAME)
         self.assertIsInstance(row, FederationStream.FederationStreamRow)
         self.assertEqual(row.type, EduRow.TypeId)
         edurow = EduRow.from_data(row.data)
@@ -63,19 +84,30 @@ class FederationStreamTestCase(BaseStreamTestCase):
         self.assertEqual(edurow.edu.origin, self.hs.hostname)
         self.assertEqual(edurow.edu.destination, "testdest")
         self.assertEqual(edurow.edu.content, {"a": "b"})

-        self.assertEqual(received_rows, [])
+        # Clear out the received rows that we've checked so we can check for new ones later
+        self.test_handler.received_rdata_rows.clear()

         # additional updates should be transferred without an HTTP hit
         fed_sender.build_and_send_edu("testdest", "m.test1", {"c": "d"})
-        self.reactor.advance(0)
+        # Pull in the updates
+        self.replicate()

         # there should be no http hit
         self.assertEqual(len(self.reactor.tcpClients), 0)
         # ... but we should have a row
-        self.assertEqual(len(received_rows), 1)
-
-        stream_name, token, row = received_rows.pop()
-        self.assertEqual(stream_name, "federation")
+        received_federation_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == FederationStream.NAME
+        ]
+        self.assertEqual(
+            len(received_federation_rows),
+            1,
+            "Expected exactly one row for the federation stream",
+        )
+        (stream_name, token, row) = received_federation_rows[0]
+        self.assertEqual(stream_name, FederationStream.NAME)
         self.assertIsInstance(row, FederationStream.FederationStreamRow)
         self.assertEqual(row.type, EduRow.TypeId)
         edurow = EduRow.from_data(row.data)

@@ -20,7 +20,6 @@

 # type: ignore

-from unittest.mock import Mock

 from synapse.replication.tcp.streams._base import ReceiptsStream

@@ -30,9 +29,6 @@ USER_ID = "@feeling:blue"


 class ReceiptsStreamTestCase(BaseStreamTestCase):
-    def _build_replication_data_handler(self):
-        return Mock(wraps=super()._build_replication_data_handler())
-
     def test_receipt(self):
         self.reconnect()

@@ -50,23 +46,30 @@ class ReceiptsStreamTestCase(BaseStreamTestCase):
         self.replicate()

         # there should be one RDATA command
-        self.test_handler.on_rdata.assert_called_once()
-        stream_name, _, token, rdata_rows = self.test_handler.on_rdata.call_args[0]
-        self.assertEqual(stream_name, "receipts")
-        self.assertEqual(1, len(rdata_rows))
-        row: ReceiptsStream.ReceiptsStreamRow = rdata_rows[0]
+        received_receipt_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == ReceiptsStream.NAME
+        ]
+        self.assertEqual(
+            len(received_receipt_rows),
+            1,
+            "Expected exactly one row for the receipts stream",
+        )
+        (stream_name, token, row) = received_receipt_rows[0]
+        self.assertEqual(stream_name, ReceiptsStream.NAME)
         self.assertEqual("!room:blue", row.room_id)
         self.assertEqual("m.read", row.receipt_type)
         self.assertEqual(USER_ID, row.user_id)
         self.assertEqual("$event:blue", row.event_id)
         self.assertIsNone(row.thread_id)
         self.assertEqual({"a": 1}, row.data)
+        # Clear out the received rows that we've checked so we can check for new ones later
+        self.test_handler.received_rdata_rows.clear()

         # Now let's disconnect and insert some data.
         self.disconnect()

-        self.test_handler.on_rdata.reset_mock()
-
         self.get_success(
             self.hs.get_datastores().main.insert_receipt(
                 "!room2:blue",
@@ -79,20 +82,27 @@ class ReceiptsStreamTestCase(BaseStreamTestCase):
         )
         self.replicate()

-        # Nothing should have happened as we are disconnected
-        self.test_handler.on_rdata.assert_not_called()
+        # Not yet connected: no rows should yet have been received
+        self.assertEqual([], self.test_handler.received_rdata_rows)

         # Now reconnect and pull the updates
         self.reconnect()
-        self.pump(0.1)
+        self.replicate()

         # We should now have caught up and get the missing data
-        self.test_handler.on_rdata.assert_called_once()
-        stream_name, _, token, rdata_rows = self.test_handler.on_rdata.call_args[0]
-        self.assertEqual(stream_name, "receipts")
+        received_receipt_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == ReceiptsStream.NAME
+        ]
+        self.assertEqual(
+            len(received_receipt_rows),
+            1,
+            "Expected exactly one row for the receipts stream",
+        )
+        (stream_name, token, row) = received_receipt_rows[0]
+        self.assertEqual(stream_name, ReceiptsStream.NAME)
         self.assertEqual(token, 3)
-        self.assertEqual(1, len(rdata_rows))
-
-        row: ReceiptsStream.ReceiptsStreamRow = rdata_rows[0]
         self.assertEqual("!room2:blue", row.room_id)
         self.assertEqual("m.read", row.receipt_type)
         self.assertEqual(USER_ID, row.user_id)

@@ -88,15 +88,15 @@ class ThreadSubscriptionsStreamTestCase(BaseStreamTestCase):

         # We should have received all the expected rows in the right order
         # Filter the updates to only include thread subscription changes
-        received_rows = [
-            upd
-            for upd in self.test_handler.received_rdata_rows
-            if upd[0] == ThreadSubscriptionsStream.NAME
+        received_thread_subscription_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == ThreadSubscriptionsStream.NAME
         ]

         # Verify all the thread subscription updates
         for thread_id in updates:
-            (stream_name, token, row) = received_rows.pop(0)
+            (stream_name, token, row) = received_thread_subscription_rows.pop(0)
             self.assertEqual(stream_name, ThreadSubscriptionsStream.NAME)
             self.assertIsInstance(row, ThreadSubscriptionsStream.ROW_TYPE)
             self.assertEqual(row.user_id, "@test_user:example.org")
@@ -104,14 +104,14 @@ class ThreadSubscriptionsStreamTestCase(BaseStreamTestCase):
             self.assertEqual(row.event_id, thread_id)

         # Verify the last update in the different room
-        (stream_name, token, row) = received_rows.pop(0)
+        (stream_name, token, row) = received_thread_subscription_rows.pop(0)
         self.assertEqual(stream_name, ThreadSubscriptionsStream.NAME)
         self.assertIsInstance(row, ThreadSubscriptionsStream.ROW_TYPE)
         self.assertEqual(row.user_id, "@test_user:example.org")
         self.assertEqual(row.room_id, other_room_id)
         self.assertEqual(row.event_id, other_thread_root_id)

-        self.assertEqual([], received_rows)
+        self.assertEqual([], received_thread_subscription_rows)

     def test_multiple_users_thread_subscription_updates(self) -> None:
         """Test replication with thread subscription updates for multiple users"""
@@ -138,18 +138,18 @@ class ThreadSubscriptionsStreamTestCase(BaseStreamTestCase):

         # We should have received all the expected rows
         # Filter the updates to only include thread subscription changes
-        received_rows = [
-            upd
-            for upd in self.test_handler.received_rdata_rows
-            if upd[0] == ThreadSubscriptionsStream.NAME
+        received_thread_subscription_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == ThreadSubscriptionsStream.NAME
         ]

         # Should have one update per user
-        self.assertEqual(len(received_rows), len(users))
+        self.assertEqual(len(received_thread_subscription_rows), len(users))

         # Verify all updates
         for i, user_id in enumerate(users):
-            (stream_name, token, row) = received_rows[i]
+            (stream_name, token, row) = received_thread_subscription_rows[i]
             self.assertEqual(stream_name, ThreadSubscriptionsStream.NAME)
             self.assertIsInstance(row, ThreadSubscriptionsStream.ROW_TYPE)
             self.assertEqual(row.user_id, user_id)

@@ -21,7 +21,10 @@
 import logging

 import synapse
-from synapse.replication.tcp.streams._base import _STREAM_UPDATE_TARGET_ROW_COUNT
+from synapse.replication.tcp.streams._base import (
+    _STREAM_UPDATE_TARGET_ROW_COUNT,
+    ToDeviceStream,
+)
 from synapse.types import JsonDict

 from tests.replication._base import BaseStreamTestCase
@@ -82,7 +85,12 @@ class ToDeviceStreamTestCase(BaseStreamTestCase):
         )

         # replication is disconnected so we shouldn't get any updates yet
-        self.assertEqual([], self.test_handler.received_rdata_rows)
+        received_to_device_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == ToDeviceStream.NAME
+        ]
+        self.assertEqual([], received_to_device_rows)

         # now reconnect to pull the updates
         self.reconnect()
@@ -90,7 +98,15 @@ class ToDeviceStreamTestCase(BaseStreamTestCase):

         # we should receive the fact that we have to_device updates
         # for user1 and user2
-        received_rows = self.test_handler.received_rdata_rows
-        self.assertEqual(len(received_rows), 2)
-        self.assertEqual(received_rows[0][2].entity, user1)
-        self.assertEqual(received_rows[1][2].entity, user2)
+        received_to_device_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == ToDeviceStream.NAME
+        ]
+        self.assertEqual(
+            len(received_to_device_rows),
+            2,
+            "Expected two rows in the to_device stream",
+        )
+        self.assertEqual(received_to_device_rows[0][2].entity, user1)
+        self.assertEqual(received_to_device_rows[1][2].entity, user2)

@@ -19,7 +19,6 @@
 #
 #
 import logging
-from unittest.mock import Mock

 from synapse.handlers.typing import RoomMember, TypingWriterHandler
 from synapse.replication.tcp.streams import TypingStream
@@ -27,6 +26,8 @@ from synapse.util.caches.stream_change_cache import StreamChangeCache

 from tests.replication._base import BaseStreamTestCase

+logger = logging.getLogger(__name__)
+
 USER_ID = "@feeling:blue"
 USER_ID_2 = "@da-ba-dee:blue"
@@ -35,10 +36,6 @@ ROOM_ID_2 = "!foo:blue"


 class TypingStreamTestCase(BaseStreamTestCase):
-    def _build_replication_data_handler(self) -> Mock:
-        self.mock_handler = Mock(wraps=super()._build_replication_data_handler())
-        return self.mock_handler
-
     def test_typing(self) -> None:
         typing = self.hs.get_typing_handler()
         assert isinstance(typing, TypingWriterHandler)
@@ -47,51 +44,74 @@ class TypingStreamTestCase(BaseStreamTestCase):
         # update to fetch.
         typing._push_update(member=RoomMember(ROOM_ID, USER_ID), typing=True)

+        # Not yet connected: no rows should yet have been received
+        self.assertEqual([], self.test_handler.received_rdata_rows)
+
+        # Reconnect
         self.reconnect()

         typing._push_update(member=RoomMember(ROOM_ID, USER_ID), typing=True)

-        self.reactor.advance(0)
+        # Pull in the updates
+        self.replicate()

         # We should now see an attempt to connect to the master
         request = self.handle_http_replication_attempt()
-        self.assert_request_is_get_repl_stream_updates(request, "typing")
+        self.assert_request_is_get_repl_stream_updates(request, TypingStream.NAME)

-        self.mock_handler.on_rdata.assert_called_once()
-        stream_name, _, token, rdata_rows = self.mock_handler.on_rdata.call_args[0]
-        self.assertEqual(stream_name, "typing")
-        self.assertEqual(1, len(rdata_rows))
-        row: TypingStream.TypingStreamRow = rdata_rows[0]
-        self.assertEqual(ROOM_ID, row.room_id)
-        self.assertEqual([USER_ID], row.user_ids)
+        # Filter the updates to only include typing changes
+        received_typing_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == TypingStream.NAME
+        ]
+        self.assertEqual(
+            len(received_typing_rows),
+            1,
+            "Expected exactly one row for the typing stream",
+        )
+        (stream_name, token, row) = received_typing_rows[0]
+        self.assertEqual(stream_name, TypingStream.NAME)
+        self.assertIsInstance(row, TypingStream.ROW_TYPE)
+        self.assertEqual(row.room_id, ROOM_ID)
+        self.assertEqual(row.user_ids, [USER_ID])
+        # Clear out the received rows that we've checked so we can check for new ones later
+        self.test_handler.received_rdata_rows.clear()

         # Now let's disconnect and insert some data.
         self.disconnect()

-        self.mock_handler.on_rdata.reset_mock()
-
         typing._push_update(member=RoomMember(ROOM_ID, USER_ID), typing=False)

-        self.mock_handler.on_rdata.assert_not_called()
+        # Not yet connected: no rows should yet have been received
+        self.assertEqual([], self.test_handler.received_rdata_rows)

         # Now reconnect and pull the updates
         self.reconnect()
-        self.pump(0.1)
+        self.replicate()

         # We should now see an attempt to connect to the master
         request = self.handle_http_replication_attempt()
-        self.assert_request_is_get_repl_stream_updates(request, "typing")
+        self.assert_request_is_get_repl_stream_updates(request, TypingStream.NAME)

         # The from token should be the token from the last RDATA we got.
         assert request.args is not None
         self.assertEqual(int(request.args[b"from_token"][0]), token)

-        self.mock_handler.on_rdata.assert_called_once()
-        stream_name, _, token, rdata_rows = self.mock_handler.on_rdata.call_args[0]
-        self.assertEqual(stream_name, "typing")
-        self.assertEqual(1, len(rdata_rows))
-        row = rdata_rows[0]
-        self.assertEqual(ROOM_ID, row.room_id)
-        self.assertEqual([], row.user_ids)
+        received_typing_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == TypingStream.NAME
+        ]
+        self.assertEqual(
+            len(received_typing_rows),
+            1,
+            "Expected exactly one row for the typing stream",
+        )
+        (stream_name, token, row) = received_typing_rows[0]
+        self.assertEqual(stream_name, TypingStream.NAME)
+        self.assertIsInstance(row, TypingStream.ROW_TYPE)
+        self.assertEqual(row.room_id, ROOM_ID)
+        self.assertEqual(row.user_ids, [])

     def test_reset(self) -> None:
         """
@@ -116,33 +136,47 @@ class TypingStreamTestCase(BaseStreamTestCase):
         # update to fetch.
         typing._push_update(member=RoomMember(ROOM_ID, USER_ID), typing=True)

+        # Not yet connected: no rows should yet have been received
+        self.assertEqual([], self.test_handler.received_rdata_rows)
+
         # Now reconnect to pull the updates
         self.reconnect()

         typing._push_update(member=RoomMember(ROOM_ID, USER_ID), typing=True)

-        self.reactor.advance(0)
+        # Pull in the updates
+        self.replicate()

         # We should now see an attempt to connect to the master
         request = self.handle_http_replication_attempt()
         self.assert_request_is_get_repl_stream_updates(request, "typing")

-        self.mock_handler.on_rdata.assert_called_once()
-        stream_name, _, token, rdata_rows = self.mock_handler.on_rdata.call_args[0]
-        self.assertEqual(stream_name, "typing")
-        self.assertEqual(1, len(rdata_rows))
-        row: TypingStream.TypingStreamRow = rdata_rows[0]
-        self.assertEqual(ROOM_ID, row.room_id)
-        self.assertEqual([USER_ID], row.user_ids)
+        received_typing_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == TypingStream.NAME
+        ]
+        self.assertEqual(
+            len(received_typing_rows),
+            1,
+            "Expected exactly one row for the typing stream",
+        )
+        (stream_name, token, row) = received_typing_rows[0]
+        self.assertEqual(stream_name, TypingStream.NAME)
+        self.assertIsInstance(row, TypingStream.ROW_TYPE)
+        self.assertEqual(row.room_id, ROOM_ID)
+        self.assertEqual(row.user_ids, [USER_ID])

         # Push the stream forward a bunch so it can be reset.
         for i in range(100):
             typing._push_update(
                 member=RoomMember(ROOM_ID, "@test%s:blue" % i), typing=True
             )
-        self.reactor.advance(0)
+        # Pull in the updates
+        self.replicate()

         # Disconnect.
         self.disconnect()
+        self.test_handler.received_rdata_rows.clear()

         # Reset the typing handler
         self.hs.get_replication_streams()["typing"].last_token = 0
@@ -155,30 +189,34 @@ class TypingStreamTestCase(BaseStreamTestCase):
         )
         typing._reset()

-        # Reconnect.
+        # Now reconnect and pull the updates
         self.reconnect()
-        self.pump(0.1)
+        self.replicate()

         # We should now see an attempt to connect to the master
         request = self.handle_http_replication_attempt()
         self.assert_request_is_get_repl_stream_updates(request, "typing")

-        # Reset the test code.
-        self.mock_handler.on_rdata.reset_mock()
-        self.mock_handler.on_rdata.assert_not_called()
-
         # Push additional data.
         typing._push_update(member=RoomMember(ROOM_ID_2, USER_ID_2), typing=False)
         self.reactor.advance(0)

-        self.mock_handler.on_rdata.assert_called_once()
-        stream_name, _, token, rdata_rows = self.mock_handler.on_rdata.call_args[0]
-        self.assertEqual(stream_name, "typing")
-        self.assertEqual(1, len(rdata_rows))
-        row = rdata_rows[0]
-        self.assertEqual(ROOM_ID_2, row.room_id)
-        self.assertEqual([], row.user_ids)
+        # Pull the updates
+        self.replicate()
+
+        received_typing_rows = [
+            row
+            for row in self.test_handler.received_rdata_rows
+            if row[0] == TypingStream.NAME
+        ]
+        self.assertEqual(
+            len(received_typing_rows),
+            1,
+            "Expected exactly one row for the typing stream",
+        )
+        (stream_name, token, row) = received_typing_rows[0]
+        self.assertEqual(stream_name, TypingStream.NAME)
+        self.assertIsInstance(row, TypingStream.ROW_TYPE)
+        self.assertEqual(row.room_id, ROOM_ID_2)
+        self.assertEqual(row.user_ids, [])
         # The token should have been reset.
         self.assertEqual(token, 1)
     finally:

@@ -533,16 +533,6 @@ class MSC4190AppserviceDevicesTestCase(unittest.HomeserverTestCase):
         )
         self.assertEqual(channel.code, 200, channel.json_body)

-        # On the regular service, that API should not allow for the
-        # creation of new devices.
-        channel = self.make_request(
-            "PUT",
-            "/_matrix/client/v3/devices/AABBCCDD?user_id=@bob:test",
-            content={"display_name": "Bob's device"},
-            access_token=self.pre_msc_service.token,
-        )
-        self.assertEqual(channel.code, 404, channel.json_body)
-
     def test_DELETE_device(self) -> None:
         self.register_appservice_user(
             "alice", self.msc4190_service.token, inhibit_login=True

@@ -110,13 +110,13 @@ class MonthlyActiveUsersTestCase(unittest.HomeserverTestCase):
         self.assertGreater(timestamp, 0)

         # Test that users with reserved 3pids are not removed from the MAU table
         # XXX some of this is redundant. poking things into the config shouldn't
         # work, and in any case it's not obvious what we expect to happen when
         # we advance the reactor.
         self.hs.config.server.max_mau_value = 0
+        #
+        # The `start_phone_stats_home()` looping call will cause us to run
+        # `reap_monthly_active_users` after the time has advanced
         self.reactor.advance(FORTY_DAYS)
         self.hs.config.server.max_mau_value = 5

+        # I guess we call this one more time for good measure? Perhaps because
+        # previously, the phone home stats weren't running in tests?
         self.get_success(self.store.reap_monthly_active_users())

         active_count = self.get_success(self.store.get_monthly_active_count())

@@ -75,7 +75,7 @@ class CommonMetricsTestCase(HomeserverTestCase):

     def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
         self.metrics_manager = hs.get_common_usage_metrics_manager()
-        self.get_success(self.metrics_manager.setup())
+        self.metrics_manager.setup()

     def test_dau(self) -> None:
        """Tests that the daily active users count is correctly updated."""