Compare commits

...

37 Commits

Author SHA1 Message Date
Andrew Morgan
abe974cd2b 1.138.4 2025-10-07 16:28:59 +01:00
Andrew Morgan
0ae1f105b2 Update KeyUploadServlet to handle case where client sends device_keys: null (#19023) 2025-10-07 16:27:58 +01:00
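For context, a hedged sketch of the null-handling this commit describes (the real fix lives in #19023, which is not shown here; `extract_device_keys` is a hypothetical stand-in): an explicit `device_keys: null` is treated like an omitted field instead of failing with an Internal Server Error.

```python
# Hypothetical sketch of tolerating an explicit null; not the code from #19023.
from typing import Any, Dict, Optional

def extract_device_keys(body: Dict[str, Any]) -> Optional[Dict[str, Any]]:
    # Treat `"device_keys": null` the same as the field being absent.
    device_keys = body.get("device_keys")
    if device_keys is None:
        return None
    if not isinstance(device_keys, dict):
        raise ValueError("device_keys must be an object if present")
    return device_keys

# e.g. extract_device_keys({"device_keys": None}) -> None (accepted, not a 500)
```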
Andrew Morgan
527e831b61 1.138.3 2025-10-07 12:54:43 +01:00
Till
7069636c2d Validate the body of requests to /keys/upload (#17097)
Co-authored-by: Andrew Morgan <andrew@amorgan.xyz>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
Co-authored-by: Eric Eastwood <erice@element.io>
2025-10-07 11:41:00 +01:00
Andrew Morgan
dde1e012a4 Remove unstable prefixes for MSC2732: Olm fallback keys (#18996)
Co-authored-by: Eric Eastwood <erice@element.io>
2025-10-07 11:40:55 +01:00
Andrew Morgan
533d5e0a7a Remove unstable prefixes for MSC2732
This MSC was accepted in 2022. We shouldn't need to continue supporting the unstable field names.
2025-10-07 11:40:50 +01:00
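For reference, a sketch of the field names involved, taken from the MSC rather than from this diff (so treat them as an assumption): the stable names replaced the `org.matrix.msc2732.`-prefixed unstable ones.

```python
# MSC2732 field names (per the MSC; an assumption here, not shown in this diff).
UNSTABLE_FALLBACK_KEYS = "org.matrix.msc2732.fallback_keys"
STABLE_FALLBACK_KEYS = "fallback_keys"

UNSTABLE_UNUSED_FALLBACK_KEY_TYPES = "org.matrix.msc2732.device_unused_fallback_key_types"
STABLE_UNUSED_FALLBACK_KEY_TYPES = "device_unused_fallback_key_types"
```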
Andrew Morgan
6c292dc4ee 1.138.2 2025-09-24 12:26:49 +01:00
Andrew Morgan
120389b077 Note ubuntu release support update in the upgrade notes 2025-09-24 12:25:41 +01:00
Andrew Morgan
71b34b3a07 Drop support for Ubuntu 24.10 'Oracular Oriole', add support for Ubuntu 25.04 'Plucky Puffin' (#18962) 2025-09-24 12:24:32 +01:00
Andrew Morgan
0fbf296c99 1.138.1 2025-09-24 11:32:48 +01:00
Andrew Morgan
0c8594c9a8 Fix performance regression related to delayed events processing (#18926) 2025-09-24 11:30:47 +01:00
Andrew Morgan
fcffd2e897 1.138.0 2025-09-09 11:21:30 +01:00
Quentin Gliech
09a489e198 1.138.0rc1 2025-09-02 14:16:55 +02:00
Quentin Gliech
537e14169e Support stable endpoint and scopes from the MSC3861 family (#18549)
This adds stable APIs for both MSC2965 and MSC2967.
2025-09-02 13:55:12 +02:00
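The scope strings involved appear in the auth hunks further down this page; as a minimal sketch of the accept-either behaviour this commit introduces:

```python
# Scope names as defined in the auth hunks below (MSC2967 unstable vs stable).
UNSTABLE_SCOPE_MATRIX_API = "urn:matrix:org.matrix.msc2967.client:api:*"
STABLE_SCOPE_MATRIX_API = "urn:matrix:client:api:*"

def grants_cs_api_access(scope: set) -> bool:
    # A token is accepted if it carries either the unstable or the stable scope.
    return UNSTABLE_SCOPE_MATRIX_API in scope or STABLE_SCOPE_MATRIX_API in scope
```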
Eric Eastwood
68068de3a4 Trace how much work is being done while "recursively fetching redactions" (#18854)
Spawning from observing this trace for a `/messages` request
(`RoomMessageListRestServlet`). We don't know if it took a while for the
database to fetch a single redaction or a whole chain of redactions.
2025-08-27 12:27:33 -05:00
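A rough illustration of the kind of instrumentation this adds, using plain `opentracing` rather than Synapse's own tracing helpers (the function and tag names here are illustrative):

```python
from opentracing.mocktracer import MockTracer

tracer = MockTracer()

def fetch_redaction_chain(fetch_one, event_id):
    # Wrap the recursive fetch in a span and record the chain length, so a
    # trace can distinguish one slow fetch from a long chain of redactions.
    with tracer.start_active_span("recursively_fetch_redactions") as scope:
        chain = []
        next_id = event_id
        while next_id is not None:
            redaction = fetch_one(next_id)
            if redaction is None:
                break
            chain.append(redaction)
            next_id = redaction.get("redacts")
        scope.span.set_tag("redaction_chain_length", len(chain))
        return chain
```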
Eric Eastwood
356cc4a0a1 Instrument _ByteProducer with tracing to measure potential dead time while writing bytes to the request (#18804)
This will allow us to easily see how much time is taken up here,
since we can filter by the `write_bytes_to_request` operation
in Jaeger.

Spawning from https://github.com/element-hq/synapse/issues/17722

The `write_bytes_to_request` span won't show up in the trace until
https://github.com/element-hq/synapse/pull/18849 is merged.

Note: It's totally fine for a span child to finish after the parent. See
https://opentracing.io/specification/#references-between-spans which
shows "Child Span D" outliving the "Parent Span"
2025-08-27 12:26:42 -05:00
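A minimal sketch of the span this adds, again with plain `opentracing` (Synapse wraps this in its own helpers, so the shape here is an assumption):

```python
from opentracing.mocktracer import MockTracer

tracer = MockTracer()

def write_bytes_to_request(request_write, chunks):
    # Time the whole write loop under one operation name, so it can be
    # filtered on in Jaeger as described above.
    with tracer.start_active_span("write_bytes_to_request"):
        for chunk in chunks:
            request_write(chunk)
```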
Eric Eastwood
27fc3389f3 Switch to OpenTracing's ContextVarsScopeManager (#18849)
Switch to OpenTracing's `ContextVarsScopeManager` instead of our own
custom `LogContextScopeManager`.

This is now possible because the linked Twisted issue from the comment
in our custom `LogContextScopeManager` is resolved:
https://twistedmatrix.com/trac/ticket/10301

This PR is spawning from exploring different possibilities to solve the
`scope` loss problem I was encountering in
https://github.com/element-hq/synapse/pull/18804#discussion_r2268254424.
This appears to solve the problem, and I've added the additional test
from there to this PR.
2025-08-27 11:41:00 -05:00
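For reference, a sketch of what the switch amounts to, assuming the stock `opentracing` package: `ContextVarsScopeManager` keeps the active scope in a `contextvar`, so it follows the current task across awaits instead of relying on custom log-context plumbing.

```python
from opentracing.mocktracer import MockTracer
from opentracing.scope_managers.contextvars import ContextVarsScopeManager

tracer = MockTracer(scope_manager=ContextVarsScopeManager())

with tracer.start_active_span("parent"):
    with tracer.start_active_span("child"):
        pass  # "child" is automatically parented to "parent" via the contextvar
```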
Eric Eastwood
df2cfb3932 Link upstream Twisted bug: Idle connection timeout incorrectly enforced while sending large response with Request.write(...) (#18855)
Link upstream Twisted bug ->
https://github.com/twisted/twisted/issues/12498

Spawning from https://github.com/element-hq/synapse/pull/18852
2025-08-27 11:25:57 -05:00
Andrew Ferrazzutti
c339021ce8 Reduce strictness of delayed event delta fetching (#18858) 2025-08-27 13:26:10 +01:00
dependabot[bot]
499f947c67 Bump actions/checkout from 4.3.0 to 5.0.0 (#18834)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 07:47:19 +01:00
dependabot[bot]
e76a9af4d7 Bump types-jsonschema from 4.25.0.20250720 to 4.25.1.20250822 (#18867)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 07:28:01 +01:00
dependabot[bot]
eec1ca6e93 Bump serde_json from 1.0.142 to 1.0.143 (#18866)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 07:04:50 +01:00
dependabot[bot]
56b5759c0f Bump ruff from 0.12.7 to 0.12.10 (#18865)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 07:03:41 +01:00
dependabot[bot]
767177ca5a Bump regex from 1.11.1 to 1.11.2 (#18864)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 07:01:59 +01:00
dependabot[bot]
5b8e6e7911 Bump actions/add-to-project from c0c5949b017d0d4a39f7ba888255881bdac2a823 to 4515659e2b458b27365e167605ac44f219494b66 (#18863)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 07:01:12 +01:00
dependabot[bot]
6a6be6fbe2 Bump dtolnay/rust-toolchain from b3b07ba8b418998c39fb20f53e8b695cdcc8de1b to e97e2d8cc328f1b50210efc529dca0028893a2d9 (#18862)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 07:00:07 +01:00
dependabot[bot]
21c7841228 Bump reqwest from 0.12.22 to 0.12.23 (#18842)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 06:57:17 +01:00
dependabot[bot]
5b55e3f15d Bump anyhow from 1.0.98 to 1.0.99 (#18841)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 06:55:31 +01:00
dependabot[bot]
0e2b92bcbc Bump types-bleach from 6.2.0.20250514 to 6.2.0.20250809 (#18838)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 06:54:32 +01:00
dependabot[bot]
481987eb83 Bump phonenumbers from 9.0.11 to 9.0.12 (#18837)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 06:53:52 +01:00
dependabot[bot]
5fd30c7ea7 Bump types-psycopg2 from 2.9.21.20250718 to 2.9.21.20250809 (#18836)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 06:53:28 +01:00
dependabot[bot]
d527c794fb Bump docker/login-action from 3.4.0 to 3.5.0 (#18835)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 06:52:28 +01:00
Erik Johnston
19fe3f001e Merge branch 'master' into develop 2025-08-26 10:54:46 +01:00
Erik Johnston
f8a44638eb 1.137.0 2025-08-26 10:23:44 +01:00
Richard van der Hoff
7ec5e60671 Introduce EventPersistencePair type (#18857)
`Tuple[EventBase, EventContext]` is everywhere and I keep misspelling
it. Let's just define a type for it.
2025-08-26 10:15:03 +01:00
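As the `snapshot.py` hunk further down this page shows, the alias is simply a named tuple type (sketched here out of context, with string forward references so the snippet stands alone):

```python
from typing import Tuple

# Defined next to EventBase/EventContext in synapse/events/snapshot.py.
EventPersistencePair = Tuple["EventBase", "EventContext"]
"""The combination of an event to be persisted and its context."""
```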
Ben Banfield-Zanin
48184eefa3 Fix worker documentation around room Admin APIs (#18853)
Discovered via https://github.com/element-hq/ess-helm/issues/677.
Looking at
https://github.com/element-hq/synapse/blob/v1.136.0/synapse/rest/admin/__init__.py#L266
only `RoomRestServlet` is generally worker-capable. This is just the
Room Details API and the v1 Room Delete API, not all the APIs
documented at
https://element-hq.github.io/synapse/latest/admin_api/rooms.html
2025-08-26 10:04:47 +02:00
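The corresponding docs fix (see the `workers.md` hunk below) tightens the advertised route; a quick check of the new pattern:

```python
import re

# The tightened route from the workers.md hunk: `[^/]+$` limits matches to a
# single path segment after /rooms/, i.e. Room Details and v1 Room Delete.
pattern = re.compile(r"^/_synapse/admin/v1/rooms/[^/]+$")

assert pattern.match("/_synapse/admin/v1/rooms/!abc:example.org")
assert not pattern.match("/_synapse/admin/v1/rooms/!abc:example.org/members")
```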
Shay
205d9e4fc4 Improve redact_on_ban performance (#18851)
Co-authored-by: Erik Johnston <erikj@jki.re>
2025-08-23 11:43:50 +01:00
55 changed files with 1166 additions and 575 deletions

View File

@@ -31,7 +31,7 @@ jobs:
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Extract version from pyproject.toml
# Note: explicitly requesting bash will mean bash is invoked with `-eo pipefail`, see
@@ -41,13 +41,13 @@ jobs:
echo "SYNAPSE_VERSION=$(grep "^version" pyproject.toml | sed -E 's/version\s*=\s*["]([^"]*)["]/\1/')" >> $GITHUB_ENV
- name: Log in to DockerHub
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Log in to GHCR
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -102,14 +102,14 @@ jobs:
merge-multiple: true
- name: Log in to DockerHub
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
if: ${{ startsWith(matrix.repository, 'docker.io') }}
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Log in to GHCR
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
if: ${{ startsWith(matrix.repository, 'ghcr.io') }}
with:
registry: ghcr.io

View File

@@ -13,7 +13,7 @@ jobs:
name: GitHub Pages
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
# Fetch all history so that the schema_versions script works.
fetch-depth: 0
@@ -50,7 +50,7 @@ jobs:
name: Check links in documentation
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup mdbook
uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08 # v2.0.0

View File

@@ -50,7 +50,7 @@ jobs:
needs:
- pre
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
# Fetch all history so that the schema_versions script works.
fetch-depth: 0

View File

@@ -18,10 +18,10 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
components: clippy, rustfmt

View File

@@ -42,9 +42,9 @@ jobs:
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -77,10 +77,10 @@ jobs:
postgres-version: "14"
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -152,10 +152,10 @@ jobs:
BLACKLIST: ${{ matrix.workers && 'synapse-blacklist-with-workers' }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -202,7 +202,7 @@ jobs:
steps:
- name: Check out synapse codebase
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: synapse
@@ -234,7 +234,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: JasonEtco/create-an-issue@1b14a70e4d8dc185e5cc76d3bec9eab20257b2c5 # v2.9.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -16,7 +16,7 @@ jobs:
name: "Check locked dependencies have sdists"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: '3.x'

View File

@@ -33,22 +33,22 @@ jobs:
packages: write
steps:
- name: Checkout specific branch (debug build)
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
if: github.event_name == 'workflow_dispatch'
with:
ref: ${{ inputs.branch }}
- name: Checkout clean copy of develop (scheduled build)
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
if: github.event_name == 'schedule'
with:
ref: develop
- name: Checkout clean copy of master (on-push)
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
if: github.event_name == 'push'
with:
ref: master
- name: Login to registry
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.actor }}

View File

@@ -27,7 +27,7 @@ jobs:
name: "Calculate list of debian distros"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
@@ -55,7 +55,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: src
@@ -132,7 +132,7 @@ jobs:
os: "ubuntu-24.04-arm"
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
@@ -165,7 +165,7 @@ jobs:
if: ${{ !startsWith(github.ref, 'refs/pull/') }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.10"

View File

@@ -14,7 +14,7 @@ jobs:
name: Ensure Synapse config schema is valid
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
@@ -40,7 +40,7 @@ jobs:
name: Ensure generated documentation is up-to-date
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"

View File

@@ -86,9 +86,9 @@ jobs:
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -106,7 +106,7 @@ jobs:
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
@@ -116,7 +116,7 @@ jobs:
check-lockfile:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
@@ -129,7 +129,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup Poetry
uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
@@ -151,10 +151,10 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -187,7 +187,7 @@ jobs:
lint-crlf:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Check line endings
run: scripts-dev/check_line_terminators.sh
@@ -195,7 +195,7 @@ jobs:
if: ${{ (github.base_ref == 'develop' || contains(github.base_ref, 'release-')) && github.actor != 'dependabot[bot]' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -213,11 +213,11 @@ jobs:
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -233,10 +233,10 @@ jobs:
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
components: clippy
toolchain: ${{ env.RUST_VERSION }}
@@ -252,10 +252,10 @@ jobs:
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: nightly-2025-04-23
components: clippy
@@ -270,10 +270,10 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -306,10 +306,10 @@ jobs:
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
# We use nightly so that we can use some unstable options that we use in
# `.rustfmt.toml`.
@@ -326,7 +326,7 @@ jobs:
needs: changes
if: ${{ needs.changes.outputs.linting_readme == 'true' }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
@@ -376,7 +376,7 @@ jobs:
needs: linting-done
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
@@ -397,7 +397,7 @@ jobs:
job: ${{ fromJson(needs.calculate-test-jobs.outputs.trial_test_matrix) }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- run: sudo apt-get -qq install xmlsec1
- name: Set up PostgreSQL ${{ matrix.job.postgres-version }}
if: ${{ matrix.job.postgres-version }}
@@ -412,7 +412,7 @@ jobs:
postgres:${{ matrix.job.postgres-version }}
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -453,10 +453,10 @@ jobs:
- changes
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -518,7 +518,7 @@ jobs:
extras: ["all"]
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
# Install libs necessary for PyPy to build binary wheels for dependencies
- run: sudo apt-get -qq install xmlsec1 libxml2-dev libxslt-dev
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
@@ -568,12 +568,12 @@ jobs:
job: ${{ fromJson(needs.calculate-test-jobs.outputs.sytest_test_matrix) }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Prepare test blacklist
run: cat sytest-blacklist .ci/worker-blacklist > synapse-blacklist-with-workers
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -615,7 +615,7 @@ jobs:
--health-retries 5
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- run: sudo apt-get -qq install xmlsec1 postgresql-client
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
@@ -659,7 +659,7 @@ jobs:
--health-retries 5
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Add PostgreSQL apt repository
# We need a version of pg_dump that can handle the version of
# PostgreSQL being tested against. The Ubuntu package repository lags
@@ -714,12 +714,12 @@ jobs:
steps:
- name: Checkout synapse codebase
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: synapse
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -750,10 +750,10 @@ jobs:
- changes
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -770,10 +770,10 @@ jobs:
- changes
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: nightly-2022-12-01
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0

View File

@@ -11,7 +11,7 @@ jobs:
if: >
contains(github.event.issue.labels.*.name, 'X-Needs-Info')
steps:
- uses: actions/add-to-project@c0c5949b017d0d4a39f7ba888255881bdac2a823 # v1.0.2
- uses: actions/add-to-project@4515659e2b458b27365e167605ac44f219494b66 # v1.0.2
id: add_project
with:
project-url: "https://github.com/orgs/matrix-org/projects/67"

View File

@@ -43,10 +43,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -70,11 +70,11 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- run: sudo apt-get -qq install xmlsec1
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -117,10 +117,10 @@ jobs:
- ${{ github.workspace }}:/src
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -175,7 +175,7 @@ jobs:
steps:
- name: Run actions/checkout@v4 for synapse
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: synapse
@@ -217,7 +217,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: JasonEtco/create-an-issue@1b14a70e4d8dc185e5cc76d3bec9eab20257b2c5 # v2.9.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -1,3 +1,98 @@
# Synapse 1.138.4 (2025-10-07)
### Bugfixes
- Fix a bug introduced in 1.138.3 where a client could receive an Internal Server Error if they set `device_keys: null` in the request to [`POST /_matrix/client/v3/keys/upload`](https://spec.matrix.org/v1.16/client-server-api/#post_matrixclientv3keysupload). ([\#19023](https://github.com/element-hq/synapse/issues/19023))
# Synapse 1.138.3 (2025-10-07)
### Security Fixes
- Fix [CVE-2025-61672](https://www.cve.org/CVERecord?id=CVE-2025-61672) / [GHSA-fh66-fcv5-jjfr](https://github.com/element-hq/synapse/security/advisories/GHSA-fh66-fcv5-jjfr). Lack of validation for device keys in Synapse before 1.139.1 allows an attacker registered on the victim homeserver to degrade federation functionality, unpredictably breaking outbound federation to other homeservers. ([\#17097](https://github.com/element-hq/synapse/issues/17097))
### Deprecations and Removals
- Drop support for unstable field names from the long-accepted [MSC2732](https://github.com/matrix-org/matrix-spec-proposals/pull/2732) (Olm fallback keys) proposal. This change allows unit tests to pass following the security patch above. ([\#18996](https://github.com/element-hq/synapse/issues/18996))
# Synapse 1.138.2 (2025-09-24)
### Internal Changes
- Drop support for Ubuntu 24.10 Oracular Oriole, and add support for Ubuntu 25.04 Plucky Puffin. ([\#18962](https://github.com/element-hq/synapse/issues/18962))
# Synapse 1.138.1 (2025-09-24)
### Bugfixes
- Fix a performance regression related to the experimental Delayed Events ([MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140)) feature. ([\#18926](https://github.com/element-hq/synapse/issues/18926))
# Synapse 1.138.0 (2025-09-09)
No significant changes since 1.138.0rc1.
# Synapse 1.138.0rc1 (2025-09-02)
### Features
- Support for the stable endpoint and scopes of [MSC3861](https://github.com/matrix-org/matrix-spec-proposals/pull/3861) & co. ([\#18549](https://github.com/element-hq/synapse/issues/18549))
### Bugfixes
- Improve database performance of [MSC4293](https://github.com/matrix-org/matrix-spec-proposals/pull/4293) - Redact on Kick/Ban. ([\#18851](https://github.com/element-hq/synapse/issues/18851))
- Do not throw an error when fetching a rejected delayed state event on startup. ([\#18858](https://github.com/element-hq/synapse/issues/18858))
### Improved Documentation
- Fix worker documentation incorrectly indicating all room Admin API requests were capable of being handled by workers. ([\#18853](https://github.com/element-hq/synapse/issues/18853))
### Internal Changes
- Instrument `_ByteProducer` with tracing to measure potential dead time while writing bytes to the request. ([\#18804](https://github.com/element-hq/synapse/issues/18804))
- Switch to OpenTracing's `ContextVarsScopeManager` instead of our own custom `LogContextScopeManager`. ([\#18849](https://github.com/element-hq/synapse/issues/18849))
- Trace how much work is being done while "recursively fetching redactions". ([\#18854](https://github.com/element-hq/synapse/issues/18854))
- Link [upstream Twisted bug](https://github.com/twisted/twisted/issues/12498) tracking the problem that explains why we have to use a `Producer` to write bytes to the request. ([\#18855](https://github.com/element-hq/synapse/issues/18855))
- Introduce `EventPersistencePair` type. ([\#18857](https://github.com/element-hq/synapse/issues/18857))
### Updates to locked dependencies
* Bump actions/add-to-project from c0c5949b017d0d4a39f7ba888255881bdac2a823 to 4515659e2b458b27365e167605ac44f219494b66. ([\#18863](https://github.com/element-hq/synapse/issues/18863))
* Bump actions/checkout from 4.3.0 to 5.0.0. ([\#18834](https://github.com/element-hq/synapse/issues/18834))
* Bump anyhow from 1.0.98 to 1.0.99. ([\#18841](https://github.com/element-hq/synapse/issues/18841))
* Bump docker/login-action from 3.4.0 to 3.5.0. ([\#18835](https://github.com/element-hq/synapse/issues/18835))
* Bump dtolnay/rust-toolchain from b3b07ba8b418998c39fb20f53e8b695cdcc8de1b to e97e2d8cc328f1b50210efc529dca0028893a2d9. ([\#18862](https://github.com/element-hq/synapse/issues/18862))
* Bump phonenumbers from 9.0.11 to 9.0.12. ([\#18837](https://github.com/element-hq/synapse/issues/18837))
* Bump regex from 1.11.1 to 1.11.2. ([\#18864](https://github.com/element-hq/synapse/issues/18864))
* Bump reqwest from 0.12.22 to 0.12.23. ([\#18842](https://github.com/element-hq/synapse/issues/18842))
* Bump ruff from 0.12.7 to 0.12.10. ([\#18865](https://github.com/element-hq/synapse/issues/18865))
* Bump serde_json from 1.0.142 to 1.0.143. ([\#18866](https://github.com/element-hq/synapse/issues/18866))
* Bump types-bleach from 6.2.0.20250514 to 6.2.0.20250809. ([\#18838](https://github.com/element-hq/synapse/issues/18838))
* Bump types-jsonschema from 4.25.0.20250720 to 4.25.1.20250822. ([\#18867](https://github.com/element-hq/synapse/issues/18867))
* Bump types-psycopg2 from 2.9.21.20250718 to 2.9.21.20250809. ([\#18836](https://github.com/element-hq/synapse/issues/18836))
# Synapse 1.137.0 (2025-08-26)
No significant changes since 1.137.0rc1.
# Synapse 1.137.0rc1 (2025-08-19)
### Bugfixes

Cargo.lock generated
View File

@@ -28,9 +28,9 @@ dependencies = [
[[package]]
name = "anyhow"
version = "1.0.98"
version = "1.0.99"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e16d2d3311acee920a9eb8d33b8cbc1787ce4a264e85f964c2404b969bdcd487"
checksum = "b0674a1ddeecb70197781e945de4b3b8ffb61fa939a5597bcf48503737663100"
[[package]]
name = "arc-swap"
@@ -1062,9 +1062,9 @@ dependencies = [
[[package]]
name = "regex"
version = "1.11.1"
version = "1.11.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b544ef1b4eac5dc2db33ea63606ae9ffcfac26c1416a2806ae0bf5f56b201191"
checksum = "23d7fd106d8c02486a8d64e778353d1cffe08ce79ac2e82f540c86d0facf6912"
dependencies = [
"aho-corasick",
"memchr",
@@ -1091,9 +1091,9 @@ checksum = "2b15c43186be67a4fd63bee50d0303afffcef381492ebe2c5d87f324e1b8815c"
[[package]]
name = "reqwest"
version = "0.12.22"
version = "0.12.23"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cbc931937e6ca3a06e3b6c0aa7841849b160a90351d6ab467a8b9b9959767531"
checksum = "d429f34c8092b2d42c7c93cec323bb4adeb7c67698f70839adec842ec10c7ceb"
dependencies = [
"base64",
"bytes",
@@ -1270,9 +1270,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.142"
version = "1.0.143"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "030fedb782600dcbd6f02d479bf0d817ac3bb40d644745b769d6a96bc3afc5a7"
checksum = "d401abef1d108fbd9cbaebc3e46611f4b1021f714a0597a71f41ee463f5f4a5a"
dependencies = [
"itoa",
"memchr",

debian/changelog vendored
View File

@@ -1,3 +1,45 @@
matrix-synapse-py3 (1.138.4) stable; urgency=medium
* New Synapse release 1.138.4.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Oct 2025 16:28:38 +0100
matrix-synapse-py3 (1.138.3) stable; urgency=medium
* New Synapse release 1.138.3.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Oct 2025 12:54:18 +0100
matrix-synapse-py3 (1.138.2) stable; urgency=medium
* New Synapse release 1.138.2.
-- Synapse Packaging team <packages@matrix.org> Wed, 24 Sep 2025 12:26:16 +0100
matrix-synapse-py3 (1.138.1) stable; urgency=medium
* New Synapse release 1.138.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 24 Sep 2025 11:32:38 +0100
matrix-synapse-py3 (1.138.0) stable; urgency=medium
* New Synapse release 1.138.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 09 Sep 2025 11:21:25 +0100
matrix-synapse-py3 (1.138.0~rc1) stable; urgency=medium
* New synapse release 1.138.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 02 Sep 2025 12:16:14 +0000
matrix-synapse-py3 (1.137.0) stable; urgency=medium
* New Synapse release 1.137.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 26 Aug 2025 10:23:41 +0100
matrix-synapse-py3 (1.137.0~rc1) stable; urgency=medium
* New Synapse release 1.137.0rc1.

View File

@@ -117,6 +117,16 @@ each upgrade are complete before moving on to the next upgrade, to avoid
stacking them up. You can monitor the currently running background updates with
[the Admin API](usage/administration/admin_api/background_updates.html#status).
# Upgrading to v1.138.1
## Drop support for Ubuntu 24.10 Oracular Oriole, and add support for Ubuntu 25.04 Plucky Puffin
Ubuntu 24.10 Oracular Oriole [has been end-of-life since 10 Jul
2025](https://endoflife.date/ubuntu). This release drops support for Ubuntu
24.10, and in its place adds support for Ubuntu 25.04 Plucky Puffin.
This notice also applies to the v1.139.0 release.
# Upgrading to v1.136.0
## Deprecate `run_as_background_process` exported as part of the module API interface in favor of `ModuleApi.run_as_background_process`

View File

@@ -252,7 +252,7 @@ information.
^/_matrix/client/(api/v1|r0|v3|unstable)/directory/room/.*$
^/_matrix/client/(r0|v3|unstable)/capabilities$
^/_matrix/client/(r0|v3|unstable)/notifications$
^/_synapse/admin/v1/rooms/
^/_synapse/admin/v1/rooms/[^/]+$
# Encryption requests
^/_matrix/client/(r0|v3|unstable)/keys/query$

poetry.lock generated
View File

@@ -1531,14 +1531,14 @@ files = [
[[package]]
name = "phonenumbers"
version = "9.0.11"
version = "9.0.12"
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "phonenumbers-9.0.11-py2.py3-none-any.whl", hash = "sha256:a8ebb2136f1f14dfdbadb98be01cb71b96f880dea011eb5e0921967fe3a23abf"},
{file = "phonenumbers-9.0.11.tar.gz", hash = "sha256:6573858dcf0a7a2753a071375e154d9fc11791546c699b575af95d2ba7d84a1d"},
{file = "phonenumbers-9.0.12-py2.py3-none-any.whl", hash = "sha256:900633afc3e12191458d710262df5efc117838bd1e2e613b64fa254a86bb20a1"},
{file = "phonenumbers-9.0.12.tar.gz", hash = "sha256:ccadff6b949494bd606836d8c9678bee5b55cb1cbad1e98bf7adae108e6fd0be"},
]
[[package]]
@@ -2396,30 +2396,31 @@ files = [
[[package]]
name = "ruff"
version = "0.12.7"
version = "0.12.10"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "ruff-0.12.7-py3-none-linux_armv6l.whl", hash = "sha256:76e4f31529899b8c434c3c1dede98c4483b89590e15fb49f2d46183801565303"},
{file = "ruff-0.12.7-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:789b7a03e72507c54fb3ba6209e4bb36517b90f1a3569ea17084e3fd295500fb"},
{file = "ruff-0.12.7-py3-none-macosx_11_0_arm64.whl", hash = "sha256:2e1c2a3b8626339bb6369116e7030a4cf194ea48f49b64bb505732a7fce4f4e3"},
{file = "ruff-0.12.7-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:32dec41817623d388e645612ec70d5757a6d9c035f3744a52c7b195a57e03860"},
{file = "ruff-0.12.7-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:47ef751f722053a5df5fa48d412dbb54d41ab9b17875c6840a58ec63ff0c247c"},
{file = "ruff-0.12.7-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a828a5fc25a3efd3e1ff7b241fd392686c9386f20e5ac90aa9234a5faa12c423"},
{file = "ruff-0.12.7-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:5726f59b171111fa6a69d82aef48f00b56598b03a22f0f4170664ff4d8298efb"},
{file = "ruff-0.12.7-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:74e6f5c04c4dd4aba223f4fe6e7104f79e0eebf7d307e4f9b18c18362124bccd"},
{file = "ruff-0.12.7-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5d0bfe4e77fba61bf2ccadf8cf005d6133e3ce08793bbe870dd1c734f2699a3e"},
{file = "ruff-0.12.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:06bfb01e1623bf7f59ea749a841da56f8f653d641bfd046edee32ede7ff6c606"},
{file = "ruff-0.12.7-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:e41df94a957d50083fd09b916d6e89e497246698c3f3d5c681c8b3e7b9bb4ac8"},
{file = "ruff-0.12.7-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:4000623300563c709458d0ce170c3d0d788c23a058912f28bbadc6f905d67afa"},
{file = "ruff-0.12.7-py3-none-musllinux_1_2_i686.whl", hash = "sha256:69ffe0e5f9b2cf2b8e289a3f8945b402a1b19eff24ec389f45f23c42a3dd6fb5"},
{file = "ruff-0.12.7-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:a07a5c8ffa2611a52732bdc67bf88e243abd84fe2d7f6daef3826b59abbfeda4"},
{file = "ruff-0.12.7-py3-none-win32.whl", hash = "sha256:c928f1b2ec59fb77dfdf70e0419408898b63998789cc98197e15f560b9e77f77"},
{file = "ruff-0.12.7-py3-none-win_amd64.whl", hash = "sha256:9c18f3d707ee9edf89da76131956aba1270c6348bfee8f6c647de841eac7194f"},
{file = "ruff-0.12.7-py3-none-win_arm64.whl", hash = "sha256:dfce05101dbd11833a0776716d5d1578641b7fddb537fe7fa956ab85d1769b69"},
{file = "ruff-0.12.7.tar.gz", hash = "sha256:1fc3193f238bc2d7968772c82831a4ff69252f673be371fb49663f0068b7ec71"},
{file = "ruff-0.12.10-py3-none-linux_armv6l.whl", hash = "sha256:8b593cb0fb55cc8692dac7b06deb29afda78c721c7ccfed22db941201b7b8f7b"},
{file = "ruff-0.12.10-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:ebb7333a45d56efc7c110a46a69a1b32365d5c5161e7244aaf3aa20ce62399c1"},
{file = "ruff-0.12.10-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d59e58586829f8e4a9920788f6efba97a13d1fa320b047814e8afede381c6839"},
{file = "ruff-0.12.10-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:822d9677b560f1fdeab69b89d1f444bf5459da4aa04e06e766cf0121771ab844"},
{file = "ruff-0.12.10-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:37b4a64f4062a50c75019c61c7017ff598cb444984b638511f48539d3a1c98db"},
{file = "ruff-0.12.10-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2c6f4064c69d2542029b2a61d39920c85240c39837599d7f2e32e80d36401d6e"},
{file = "ruff-0.12.10-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:059e863ea3a9ade41407ad71c1de2badfbe01539117f38f763ba42a1206f7559"},
{file = "ruff-0.12.10-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1bef6161e297c68908b7218fa6e0e93e99a286e5ed9653d4be71e687dff101cf"},
{file = "ruff-0.12.10-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4f1345fbf8fb0531cd722285b5f15af49b2932742fc96b633e883da8d841896b"},
{file = "ruff-0.12.10-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1f68433c4fbc63efbfa3ba5db31727db229fa4e61000f452c540474b03de52a9"},
{file = "ruff-0.12.10-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:141ce3d88803c625257b8a6debf4a0473eb6eed9643a6189b68838b43e78165a"},
{file = "ruff-0.12.10-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:f3fc21178cd44c98142ae7590f42ddcb587b8e09a3b849cbc84edb62ee95de60"},
{file = "ruff-0.12.10-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:7d1a4e0bdfafcd2e3e235ecf50bf0176f74dd37902f241588ae1f6c827a36c56"},
{file = "ruff-0.12.10-py3-none-musllinux_1_2_i686.whl", hash = "sha256:e67d96827854f50b9e3e8327b031647e7bcc090dbe7bb11101a81a3a2cbf1cc9"},
{file = "ruff-0.12.10-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:ae479e1a18b439c59138f066ae79cc0f3ee250712a873d00dbafadaad9481e5b"},
{file = "ruff-0.12.10-py3-none-win32.whl", hash = "sha256:9de785e95dc2f09846c5e6e1d3a3d32ecd0b283a979898ad427a9be7be22b266"},
{file = "ruff-0.12.10-py3-none-win_amd64.whl", hash = "sha256:7837eca8787f076f67aba2ca559cefd9c5cbc3a9852fd66186f4201b87c1563e"},
{file = "ruff-0.12.10-py3-none-win_arm64.whl", hash = "sha256:cc138cc06ed9d4bfa9d667a65af7172b47840e1a98b02ce7011c391e54635ffc"},
{file = "ruff-0.12.10.tar.gz", hash = "sha256:189ab65149d11ea69a2d775343adf5f49bb2426fc4780f65ee33b423ad2e47f9"},
]
[[package]]
@@ -2877,14 +2878,14 @@ twisted = "*"
[[package]]
name = "types-bleach"
version = "6.2.0.20250514"
version = "6.2.0.20250809"
description = "Typing stubs for bleach"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "types_bleach-6.2.0.20250514-py3-none-any.whl", hash = "sha256:380cb74f0db1e3c3b2e0cde217221108e975e07e95ef0970c9d41f7cd4e8ea3c"},
{file = "types_bleach-6.2.0.20250514.tar.gz", hash = "sha256:38c2e51d9cac51dc70c1b66121a11f4dad8bbf47fbad494bb7a77d8b8f3c4323"},
{file = "types_bleach-6.2.0.20250809-py3-none-any.whl", hash = "sha256:0b372a75117947d9ac8a31ae733fd0f8d92ec75c4772e7b37093ba3fa5b48fb9"},
{file = "types_bleach-6.2.0.20250809.tar.gz", hash = "sha256:188d7a1119f6c953140b513ed57ba4213755695815472c19d0c22ac09c79b90b"},
]
[package.dependencies]
@@ -2919,14 +2920,14 @@ files = [
[[package]]
name = "types-jsonschema"
version = "4.25.0.20250720"
version = "4.25.1.20250822"
description = "Typing stubs for jsonschema"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "types_jsonschema-4.25.0.20250720-py3-none-any.whl", hash = "sha256:7d7897c715310d8bf9ae27a2cedba78bbb09e4cad83ce06d2aa79b73a88941df"},
{file = "types_jsonschema-4.25.0.20250720.tar.gz", hash = "sha256:765a3b6144798fe3161fd8cbe570a756ed3e8c0e5adb7c09693eb49faad39dbd"},
{file = "types_jsonschema-4.25.1.20250822-py3-none-any.whl", hash = "sha256:f82c2d7fa1ce1c0b84ba1de4ed6798469768188884db04e66421913a4e181294"},
{file = "types_jsonschema-4.25.1.20250822.tar.gz", hash = "sha256:aac69ed4b23f49aaceb7fcb834141d61b9e4e6a7f6008cb2f0d3b831dfa8464a"},
]
[package.dependencies]
@@ -2970,14 +2971,14 @@ files = [
[[package]]
name = "types-psycopg2"
version = "2.9.21.20250718"
version = "2.9.21.20250809"
description = "Typing stubs for psycopg2"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "types_psycopg2-2.9.21.20250718-py3-none-any.whl", hash = "sha256:bcf085d4293bda48f5943a46dadf0389b2f98f7e8007722f7e1c12ee0f541858"},
{file = "types_psycopg2-2.9.21.20250718.tar.gz", hash = "sha256:dc09a97272ef67e739e57b9f4740b761208f4514257e311c0b05c8c7a37d04b4"},
{file = "types_psycopg2-2.9.21.20250809-py3-none-any.whl", hash = "sha256:59b7b0ed56dcae9efae62b8373497274fc1a0484bdc5135cdacbe5a8f44e1d7b"},
{file = "types_psycopg2-2.9.21.20250809.tar.gz", hash = "sha256:b7c2cbdcf7c0bd16240f59ba694347329b0463e43398de69784ea4dee45f3c6d"},
]
[[package]]
@@ -3255,4 +3256,4 @@ url-preview = ["lxml"]
[metadata]
lock-version = "2.1"
python-versions = "^3.9.0"
content-hash = "600a349d08dde732df251583094a121b5385eb43ae0c6ceff10dcf9749359446"
content-hash = "2e8ea085e1a0c6f0ac051d4bc457a96827d01f621b1827086de01a5ffa98cf79"

View File

@@ -101,7 +101,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.137.0rc1"
version = "1.138.4"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"
@@ -324,7 +324,7 @@ all = [
# failing on new releases. Keeping lower bounds loose here means that dependabot
# can bump versions without having to update the content-hash in the lockfile.
# This helps prevents merge conflicts when running a batch of dependabot updates.
ruff = "0.12.7"
ruff = "0.12.10"
# Type checking only works with the pydantic.v1 compat module from pydantic v2
pydantic = "^2"

View File

@@ -1,5 +1,5 @@
$schema: https://element-hq.github.io/synapse/latest/schema/v1/meta.schema.json
$id: https://element-hq.github.io/synapse/schema/synapse/v1.137/synapse-config.schema.json
$id: https://element-hq.github.io/synapse/schema/synapse/v1.138/synapse-config.schema.json
type: object
properties:
modules:

View File

@@ -32,7 +32,7 @@ DISTS = (
"debian:sid", # (rolling distro, no EOL)
"ubuntu:jammy", # 22.04 LTS (EOL 2027-04) (our EOL forced by Python 3.10 is 2026-10-04)
"ubuntu:noble", # 24.04 LTS (EOL 2029-06)
"ubuntu:oracular", # 24.10 (EOL 2025-07)
"ubuntu:plucky", # 25.04 (EOL 2026-01)
"debian:trixie", # (EOL not specified yet)
)

View File

@@ -13,7 +13,7 @@
#
#
import logging
from typing import TYPE_CHECKING, Optional
from typing import TYPE_CHECKING, Optional, Set
from urllib.parse import urlencode
from synapse._pydantic_compat import (
@@ -57,8 +57,10 @@ logger = logging.getLogger(__name__)
# Scope as defined by MSC2967
# https://github.com/matrix-org/matrix-spec-proposals/pull/2967
SCOPE_MATRIX_API = "urn:matrix:org.matrix.msc2967.client:api:*"
SCOPE_MATRIX_DEVICE_PREFIX = "urn:matrix:org.matrix.msc2967.client:device:"
UNSTABLE_SCOPE_MATRIX_API = "urn:matrix:org.matrix.msc2967.client:api:*"
UNSTABLE_SCOPE_MATRIX_DEVICE_PREFIX = "urn:matrix:org.matrix.msc2967.client:device:"
STABLE_SCOPE_MATRIX_API = "urn:matrix:client:api:*"
STABLE_SCOPE_MATRIX_DEVICE_PREFIX = "urn:matrix:client:device:"
class ServerMetadata(BaseModel):
@@ -334,7 +336,10 @@ class MasDelegatedAuth(BaseAuth):
scope = introspection_result.get_scope_set()
# Determine type of user based on presence of particular scopes
if SCOPE_MATRIX_API not in scope:
if (
UNSTABLE_SCOPE_MATRIX_API not in scope
and STABLE_SCOPE_MATRIX_API not in scope
):
raise InvalidClientTokenError(
"Token doesn't grant access to the Matrix C-S API"
)
@@ -366,11 +371,12 @@ class MasDelegatedAuth(BaseAuth):
# We only allow a single device_id in the scope, so we find them all in the
# scope list, and raise if there are more than one. The OIDC server should be
# the one enforcing valid scopes, so we raise a 500 if we find an invalid scope.
device_ids = [
tok[len(SCOPE_MATRIX_DEVICE_PREFIX) :]
for tok in scope
if tok.startswith(SCOPE_MATRIX_DEVICE_PREFIX)
]
device_ids: Set[str] = set()
for tok in scope:
if tok.startswith(UNSTABLE_SCOPE_MATRIX_DEVICE_PREFIX):
device_ids.add(tok[len(UNSTABLE_SCOPE_MATRIX_DEVICE_PREFIX) :])
elif tok.startswith(STABLE_SCOPE_MATRIX_DEVICE_PREFIX):
device_ids.add(tok[len(STABLE_SCOPE_MATRIX_DEVICE_PREFIX) :])
if len(device_ids) > 1:
raise AuthError(
@@ -378,7 +384,7 @@ class MasDelegatedAuth(BaseAuth):
"Multiple device IDs in scope",
)
device_id = device_ids[0] if device_ids else None
device_id = next(iter(device_ids), None)
if device_id is not None:
# Sanity check the device_id

View File

@@ -20,7 +20,7 @@
#
import logging
from dataclasses import dataclass
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Set
from urllib.parse import urlencode
from authlib.oauth2 import ClientAuth
@@ -34,7 +34,6 @@ from synapse.api.errors import (
AuthError,
HttpResponseException,
InvalidClientTokenError,
OAuthInsufficientScopeError,
SynapseError,
UnrecognizedRequestError,
)
@@ -63,9 +62,10 @@ logger = logging.getLogger(__name__)
# Scope as defined by MSC2967
# https://github.com/matrix-org/matrix-spec-proposals/pull/2967
SCOPE_MATRIX_API = "urn:matrix:org.matrix.msc2967.client:api:*"
SCOPE_MATRIX_GUEST = "urn:matrix:org.matrix.msc2967.client:api:guest"
SCOPE_MATRIX_DEVICE_PREFIX = "urn:matrix:org.matrix.msc2967.client:device:"
UNSTABLE_SCOPE_MATRIX_API = "urn:matrix:org.matrix.msc2967.client:api:*"
UNSTABLE_SCOPE_MATRIX_DEVICE_PREFIX = "urn:matrix:org.matrix.msc2967.client:device:"
STABLE_SCOPE_MATRIX_API = "urn:matrix:client:api:*"
STABLE_SCOPE_MATRIX_DEVICE_PREFIX = "urn:matrix:client:device:"
# Scope which allows access to the Synapse admin API
SCOPE_SYNAPSE_ADMIN = "urn:synapse:admin:*"
@@ -444,9 +444,6 @@ class MSC3861DelegatedAuth(BaseAuth):
if not self._is_access_token_the_admin_token(access_token):
await self._record_request(request, requester)
if not allow_guest and requester.is_guest:
raise OAuthInsufficientScopeError([SCOPE_MATRIX_API])
request.requester = requester
return requester
@@ -528,10 +525,11 @@ class MSC3861DelegatedAuth(BaseAuth):
scope: List[str] = introspection_result.get_scope_list()
# Determine type of user based on presence of particular scopes
has_user_scope = SCOPE_MATRIX_API in scope
has_guest_scope = SCOPE_MATRIX_GUEST in scope
has_user_scope = (
UNSTABLE_SCOPE_MATRIX_API in scope or STABLE_SCOPE_MATRIX_API in scope
)
if not has_user_scope and not has_guest_scope:
if not has_user_scope:
raise InvalidClientTokenError("No scope in token granting user rights")
# Match via the sub claim
@@ -579,11 +577,12 @@ class MSC3861DelegatedAuth(BaseAuth):
# We only allow a single device_id in the scope, so we find them all in the
# scope list, and raise if there are more than one. The OIDC server should be
# the one enforcing valid scopes, so we raise a 500 if we find an invalid scope.
device_ids = [
tok[len(SCOPE_MATRIX_DEVICE_PREFIX) :]
for tok in scope
if tok.startswith(SCOPE_MATRIX_DEVICE_PREFIX)
]
device_ids: Set[str] = set()
for tok in scope:
if tok.startswith(UNSTABLE_SCOPE_MATRIX_DEVICE_PREFIX):
device_ids.add(tok[len(UNSTABLE_SCOPE_MATRIX_DEVICE_PREFIX) :])
elif tok.startswith(STABLE_SCOPE_MATRIX_DEVICE_PREFIX):
device_ids.add(tok[len(STABLE_SCOPE_MATRIX_DEVICE_PREFIX) :])
if len(device_ids) > 1:
raise AuthError(
@@ -591,7 +590,7 @@ class MSC3861DelegatedAuth(BaseAuth):
"Multiple device IDs in scope",
)
device_id = device_ids[0] if device_ids else None
device_id = next(iter(device_ids), None)
if device_id is not None:
# Sanity check the device_id
@@ -617,5 +616,4 @@ class MSC3861DelegatedAuth(BaseAuth):
user_id=user_id,
device_id=device_id,
scope=scope,
is_guest=(has_guest_scope and not has_user_scope),
)

View File

@@ -306,6 +306,12 @@ class EventContext(UnpersistedEventContextBase):
)
EventPersistencePair = Tuple[EventBase, EventContext]
"""
The combination of an event to be persisted and its context.
"""
@attr.s(slots=True, auto_attribs=True)
class UnpersistedEventContext(UnpersistedEventContextBase):
"""
@@ -363,7 +369,7 @@ class UnpersistedEventContext(UnpersistedEventContextBase):
room_id: str,
last_known_state_group: int,
datastore: "StateGroupDataStore",
) -> List[Tuple[EventBase, EventContext]]:
) -> List[EventPersistencePair]:
"""
Takes a list of events and their associated unpersisted contexts and persists
the unpersisted contexts, returning a list of events and persisted contexts.

View File

@@ -59,7 +59,7 @@ from synapse.api.errors import (
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersion
from synapse.crypto.event_signing import compute_event_signature
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.events.snapshot import EventPersistencePair
from synapse.federation.federation_base import (
FederationBase,
InvalidEventSignatureError,
@@ -914,7 +914,7 @@ class FederationServer(FederationBase):
async def _on_send_membership_event(
self, origin: str, content: JsonDict, membership_type: str, room_id: str
) -> Tuple[EventBase, EventContext]:
) -> EventPersistencePair:
"""Handle an on_send_{join,leave,knock} request
Does some preliminary validation before passing the request on to the

View File

@@ -18,7 +18,7 @@ from typing import TYPE_CHECKING, List, Optional, Set, Tuple
from twisted.internet.interfaces import IDelayedCall
from synapse.api.constants import EventTypes
from synapse.api.errors import ShadowBanError
from synapse.api.errors import ShadowBanError, SynapseError
from synapse.api.ratelimiting import Ratelimiter
from synapse.config.workers import MAIN_PROCESS_INSTANCE_NAME
from synapse.logging.opentracing import set_tag
@@ -45,6 +45,7 @@ from synapse.types import (
)
from synapse.util.events import generate_fake_event_id
from synapse.util.metrics import Measure
from synapse.util.sentinel import Sentinel
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -146,10 +147,37 @@ class DelayedEventsHandler:
)
async def _unsafe_process_new_event(self) -> None:
# We purposefully fetch the current max room stream ordering before
# doing anything else, as it could increment during processing of state
# deltas. We want to avoid updating `delayed_events_stream_pos` past
# the stream ordering of the state deltas we've processed. Otherwise
# we'll leave gaps in our processing.
room_max_stream_ordering = self._store.get_room_max_stream_ordering()
# Check that there are actually any delayed events to process. If not, bail early.
delayed_events_count = await self._store.get_count_of_delayed_events()
if delayed_events_count == 0:
# There are no delayed events to process. Update the
# `delayed_events_stream_pos` to the latest `events` stream pos and
# exit early.
self._event_pos = room_max_stream_ordering
logger.debug(
"No delayed events to process. Updating `delayed_events_stream_pos` to max stream ordering (%s)",
room_max_stream_ordering,
)
await self._store.update_delayed_events_stream_pos(room_max_stream_ordering)
event_processing_positions.labels(
name="delayed_events", **{SERVER_NAME_LABEL: self.server_name}
).set(room_max_stream_ordering)
return
# If self._event_pos is None, it means we haven't fetched it from the DB yet
if self._event_pos is None:
self._event_pos = await self._store.get_delayed_events_stream_pos()
room_max_stream_ordering = self._store.get_room_max_stream_ordering()
if self._event_pos > room_max_stream_ordering:
# apparently, we've processed more events than exist in the database!
# this can happen if events are removed with history purge or similar.
@@ -167,7 +195,7 @@ class DelayedEventsHandler:
self._clock, name="delayed_events_delta", server_name=self.server_name
):
room_max_stream_ordering = self._store.get_room_max_stream_ordering()
if self._event_pos == room_max_stream_ordering:
if self._event_pos >= room_max_stream_ordering:
return
logger.debug(
@@ -202,23 +230,81 @@ class DelayedEventsHandler:
Process current state deltas to cancel other users' pending delayed events
that target the same state.
"""
# Get the senders of each delta's state event (as sender information is
# not currently stored in the `current_state_deltas` table).
event_id_and_sender_dict = await self._store.get_senders_for_event_ids(
[delta.event_id for delta in deltas if delta.event_id is not None]
)
# Note: No need to batch as `get_current_state_deltas` will only ever
# return 100 rows at a time.
for delta in deltas:
logger.debug(
"Handling: %r %r, %s", delta.event_type, delta.state_key, delta.event_id
)
# `delta.event_id` and `delta.sender` can be `None` in a few valid
# cases (see the docstring of
# `get_current_state_delta_membership_changes_for_user` for details).
if delta.event_id is None:
logger.debug(
"Not handling delta for deleted state: %r %r",
delta.event_type,
delta.state_key,
)
# TODO: Differentiate between this being caused by a state reset
# which removed a user from a room, or the homeserver
# purposefully having left the room. We can do so by checking
# whether there are any local memberships still left in the
# room. If so, then this is the result of a state reset.
#
# If it is a state reset, we should avoid cancelling new,
# delayed state events due to old state resurfacing. So we
# should skip and log a warning in this case.
#
# If the homeserver has left the room, then we should cancel all
# delayed state events intended for this room, as there is no
# need to try and send a delayed event into a room we've left.
logger.warning(
"Skipping state delta (%r, %r) without corresponding event ID. "
"This can happen if the homeserver has left the room (in which "
"case this can be ignored), or if there has been a state reset "
"which has caused the sender to be kicked out of the room",
delta.event_type,
delta.state_key,
)
continue
logger.debug(
"Handling: %r %r, %s", delta.event_type, delta.state_key, delta.event_id
)
sender_str = event_id_and_sender_dict.get(
delta.event_id, Sentinel.UNSET_SENTINEL
)
if sender_str is None:
# An event exists, but the `sender` field was "null" and Synapse
# incorrectly accepted the event. This is not expected.
logger.error(
"Skipping state delta with event ID '%s' as 'sender' was None. "
"This is unexpected - please report it as a bug!",
delta.event_id,
)
continue
if sender_str is Sentinel.UNSET_SENTINEL:
# We have an event ID, but the event was not found in the
# datastore. This can happen if a room, or its history, is
# purged. State deltas related to the room are left behind, but
# the event no longer exists.
#
# As we cannot get the sender of this event, we can't calculate
# whether to cancel delayed events related to this one. So we skip.
logger.debug(
"Skipping state delta with event ID '%s' - the room, or its history, may have been purged",
delta.event_id,
)
continue
event = await self._store.get_event(
delta.event_id, check_room_id=delta.room_id
)
sender = UserID.from_string(event.sender)
try:
sender = UserID.from_string(sender_str)
except SynapseError as e:
logger.error(
"Skipping state delta with Matrix User ID '%s' that failed to parse: %s",
sender_str,
e,
)
continue
next_send_ts = await self._store.cancel_delayed_state_events(
room_id=delta.room_id,

View File

@@ -57,7 +57,6 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
ONE_TIME_KEY_UPLOAD = "one_time_key_upload_lock"
@@ -848,14 +847,22 @@ class E2eKeysHandler:
"""
time_now = self.clock.time_msec()
# TODO: Validate the JSON to make sure it has the right keys.
device_keys = keys.get("device_keys", None)
if device_keys:
log_kv(
{
"message": "Updating device_keys for user.",
"user_id": user_id,
"device_id": device_id,
}
)
await self.upload_device_keys_for_user(
user_id=user_id,
device_id=device_id,
keys={"device_keys": device_keys},
)
else:
log_kv({"message": "Did not update device_keys", "reason": "not a dict"})
one_time_keys = keys.get("one_time_keys", None)
if one_time_keys:
@@ -873,10 +880,9 @@ class E2eKeysHandler:
log_kv(
{"message": "Did not update one_time_keys", "reason": "no keys given"}
)
fallback_keys = keys.get("fallback_keys") or keys.get(
"org.matrix.msc2732.fallback_keys"
)
if fallback_keys and isinstance(fallback_keys, dict):
fallback_keys = keys.get("fallback_keys")
if fallback_keys:
log_kv(
{
"message": "Updating fallback_keys for device.",
@@ -885,8 +891,6 @@ class E2eKeysHandler:
}
)
await self.store.set_e2e_fallback_keys(user_id, device_id, fallback_keys)
elif fallback_keys:
log_kv({"message": "Did not update fallback_keys", "reason": "not a dict"})
else:
log_kv(
{"message": "Did not update fallback_keys", "reason": "no keys given"}

View File

@@ -66,7 +66,11 @@ from synapse.event_auth import (
validate_event_for_room_version,
)
from synapse.events import EventBase
from synapse.events.snapshot import EventContext, UnpersistedEventContextBase
from synapse.events.snapshot import (
EventContext,
EventPersistencePair,
UnpersistedEventContextBase,
)
from synapse.federation.federation_client import InvalidResponseError, PulledPduInfo
from synapse.logging.context import nested_logging_context
from synapse.logging.opentracing import (
@@ -341,7 +345,7 @@ class FederationEventHandler:
async def on_send_membership_event(
self, origin: str, event: EventBase
) -> Tuple[EventBase, EventContext]:
) -> EventPersistencePair:
"""
We have received a join/leave/knock event for a room via send_join/leave/knock.
@@ -1712,7 +1716,7 @@ class FederationEventHandler:
)
auth_map.update(persisted_events)
events_and_contexts_to_persist: List[Tuple[EventBase, EventContext]] = []
events_and_contexts_to_persist: List[EventPersistencePair] = []
async def prep(event: EventBase) -> None:
with nested_logging_context(suffix=event.event_id):
@@ -2225,7 +2229,7 @@ class FederationEventHandler:
async def persist_events_and_notify(
self,
room_id: str,
event_and_contexts: Sequence[Tuple[EventBase, EventContext]],
event_and_contexts: Sequence[EventPersistencePair],
backfilled: bool = False,
) -> int:
"""Persists events and tells the notifier/pushers about them, if

View File

@@ -57,6 +57,7 @@ from synapse.events import EventBase, relation_from_event
from synapse.events.builder import EventBuilder
from synapse.events.snapshot import (
EventContext,
EventPersistencePair,
UnpersistedEventContext,
UnpersistedEventContextBase,
)
@@ -1439,7 +1440,7 @@ class EventCreationHandler:
async def handle_new_client_event(
self,
requester: Requester,
events_and_context: List[Tuple[EventBase, EventContext]],
events_and_context: List[EventPersistencePair],
ratelimit: bool = True,
extra_users: Optional[List[UserID]] = None,
ignore_shadow_ban: bool = False,
@@ -1651,7 +1652,7 @@ class EventCreationHandler:
async def _persist_events(
self,
requester: Requester,
events_and_context: List[Tuple[EventBase, EventContext]],
events_and_context: List[EventPersistencePair],
ratelimit: bool = True,
extra_users: Optional[List[UserID]] = None,
) -> EventBase:
@@ -1737,7 +1738,7 @@ class EventCreationHandler:
raise
async def cache_joined_hosts_for_events(
self, events_and_context: List[Tuple[EventBase, EventContext]]
self, events_and_context: List[EventPersistencePair]
) -> None:
"""Precalculate the joined hosts at each of the given events, when using Redis, so that
external federation senders don't have to recalculate it themselves.
@@ -1843,7 +1844,7 @@ class EventCreationHandler:
async def persist_and_notify_client_events(
self,
requester: Requester,
events_and_context: List[Tuple[EventBase, EventContext]],
events_and_context: List[EventPersistencePair],
ratelimit: bool = True,
extra_users: Optional[List[UserID]] = None,
) -> EventBase:

View File

@@ -1540,7 +1540,7 @@ class PresenceHandler(BasePresenceHandler):
self.clock, name="presence_delta", server_name=self.server_name
):
room_max_stream_ordering = self.store.get_room_max_stream_ordering()
if self._event_pos == room_max_stream_ordering:
if self._event_pos >= room_max_stream_ordering:
return
logger.debug(

View File

@@ -13,7 +13,6 @@
#
import enum
import logging
from itertools import chain
from typing import (
@@ -75,6 +74,7 @@ from synapse.types.handlers.sliding_sync import (
)
from synapse.types.state import StateFilter
from synapse.util import MutableOverlayMapping
from synapse.util.sentinel import Sentinel
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -83,12 +83,6 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
class Sentinel(enum.Enum):
# defining a sentinel in this way allows mypy to correctly handle the
# type of a dictionary lookup and subsequent type narrowing.
UNSET_SENTINEL = object()
# Helper definition for the types that we might return. We do this to avoid
# copying data between types (which can be expensive for many rooms).
RoomsForUserType = Union[RoomsForUserStateReset, RoomsForUser, RoomsForUserSlidingSync]

View File

@@ -702,6 +702,10 @@ class _ByteProducer:
self._request: Optional[Request] = request
self._iterator = iterator
self._paused = False
self.tracing_scope = start_active_span(
"write_bytes_to_request",
)
self.tracing_scope.__enter__()
try:
self._request.registerProducer(self, True)
@@ -712,8 +716,8 @@ class _ByteProducer:
logger.info("Connection disconnected before response was written: %r", e)
# We drop our references to data we'll not use.
self._request = None
self._iterator = iter(())
self.tracing_scope.__exit__(type(e), None, e.__traceback__)
else:
# Start producing if `registerProducer` was successful
self.resumeProducing()
@@ -727,6 +731,9 @@ class _ByteProducer:
self._request.write(b"".join(data))
def pauseProducing(self) -> None:
opentracing_span = active_span()
if opentracing_span is not None:
opentracing_span.log_kv({"event": "producer_paused"})
self._paused = True
def resumeProducing(self) -> None:
@@ -737,6 +744,10 @@ class _ByteProducer:
self._paused = False
opentracing_span = active_span()
if opentracing_span is not None:
opentracing_span.log_kv({"event": "producer_resumed"})
# Write until there's backpressure telling us to stop.
while not self._paused:
# Get the next chunk and write it to the request.
@@ -771,6 +782,7 @@ class _ByteProducer:
def stopProducing(self) -> None:
# Clear a circular reference.
self._request = None
self.tracing_scope.__exit__(None, None, None)
def _encode_json_bytes(json_object: object) -> bytes:
@@ -913,8 +925,9 @@ def _write_bytes_to_request(request: Request, bytes_to_write: bytes) -> None:
# once (via `Request.write`) is that doing so starts the timeout for the
# next request to be received: so if it takes longer than 60s to stream back
# the response to the client, the client never gets it.
# c.f https://github.com/twisted/twisted/issues/12498
#
# The correct solution is to use a Producer; then the timeout is only
# One workaround is to use a `Producer`; then the timeout is only
# started once all of the content is sent over the TCP connection.
# To make sure we don't write all of the bytes at once we split it up into
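
Taken together, the producer's scope handling above follows a manual enter/try/exit pattern. A standalone sketch with a stand-in scope object (the real one comes from `start_active_span`):

from types import TracebackType
from typing import Optional, Type

class FakeScope:
    def __enter__(self) -> "FakeScope":
        print("span started")
        return self

    def __exit__(
        self,
        exc_type: Optional[Type[BaseException]],
        exc: Optional[BaseException],
        tb: Optional[TracebackType],
    ) -> None:
        print("span finished:", exc_type)

scope = FakeScope()
scope.__enter__()
try:
    raise ConnectionError("client went away")  # stand-in for registerProducer failing
except ConnectionError as e:
    # Error path: close the span with the exception info, as above.
    scope.__exit__(type(e), e, e.__traceback__)
# On success, stopProducing() would later call scope.__exit__(None, None, None).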

View File

@@ -56,7 +56,6 @@ from twisted.internet import defer, threads
from twisted.python.threadpool import ThreadPool
if TYPE_CHECKING:
from synapse.logging.scopecontextmanager import _LogContextScope
from synapse.types import ISynapseReactor
logger = logging.getLogger(__name__)
@@ -230,14 +229,13 @@ LoggingContextOrSentinel = Union["LoggingContext", "_Sentinel"]
class _Sentinel:
"""Sentinel to represent the root context"""
__slots__ = ["previous_context", "finished", "request", "scope", "tag"]
__slots__ = ["previous_context", "finished", "request", "tag"]
def __init__(self) -> None:
# Minimal set for compatibility with LoggingContext
self.previous_context = None
self.finished = False
self.request = None
self.scope = None
self.tag = None
def __str__(self) -> str:
@@ -290,7 +288,6 @@ class LoggingContext:
"finished",
"request",
"tag",
"scope",
]
def __init__(
@@ -311,7 +308,6 @@ class LoggingContext:
self.main_thread = get_thread_id()
self.request = None
self.tag = ""
self.scope: Optional["_LogContextScope"] = None
# keep track of whether we have hit the __exit__ block for this context
# (suggesting that the thing that created the context thinks it should
@@ -324,9 +320,6 @@ class LoggingContext:
# we track the current request_id
self.request = self.parent_context.request
# we also track the current scope:
self.scope = self.parent_context.scope
if request is not None:
# the request param overrides the request from the parent context
self.request = request

View File

@@ -251,18 +251,17 @@ class _DummyTagNames:
try:
import opentracing
import opentracing.tags
from opentracing.scope_managers.contextvars import ContextVarsScopeManager
tags = opentracing.tags
except ImportError:
opentracing = None # type: ignore[assignment]
tags = _DummyTagNames # type: ignore[assignment]
ContextVarsScopeManager = None # type: ignore
try:
from jaeger_client import Config as JaegerConfig
from synapse.logging.scopecontextmanager import LogContextScopeManager
except ImportError:
JaegerConfig = None # type: ignore
LogContextScopeManager = None # type: ignore
try:
@@ -484,7 +483,7 @@ def init_tracer(hs: "HomeServer") -> None:
config = JaegerConfig(
config=jaeger_config,
service_name=f"{hs.config.server.server_name} {instance_name_by_type}",
scope_manager=LogContextScopeManager(),
scope_manager=ContextVarsScopeManager(),
metrics_factory=PrometheusMetricsFactory(),
)
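
The switch works because contextvars survive awaits, so the active scope follows the coroutine rather than the log context. A hedged, self-contained illustration of that propagation:

import asyncio
import contextvars

active_span: contextvars.ContextVar[str] = contextvars.ContextVar(
    "active_span", default="<none>"
)

async def child() -> None:
    # The contextvar set in main() is still visible here.
    print("child sees:", active_span.get())

async def main() -> None:
    active_span.set("root-span")
    await asyncio.sleep(0)  # yield to the event loop...
    await child()           # ...the value survives the suspension

asyncio.run(main())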

View File

@@ -1,161 +0,0 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2019 The Matrix.org Foundation C.I.C.
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#
import logging
from typing import Optional
from opentracing import Scope, ScopeManager, Span
from synapse.logging.context import (
LoggingContext,
current_context,
nested_logging_context,
)
logger = logging.getLogger(__name__)
class LogContextScopeManager(ScopeManager):
"""
The LogContextScopeManager tracks the active scope in opentracing
by using the log contexts which are native to synapse. This is so
that the basic opentracing api can be used across Twisted deferreds.
It would be nice just to use opentracing's ContextVarsScopeManager,
but currently that doesn't work due to https://twistedmatrix.com/trac/ticket/10301.
"""
def __init__(self) -> None:
pass
@property
def active(self) -> Optional[Scope]:
"""
Returns the currently active Scope which can be used to access the
currently active Scope.span.
If there is a non-null Scope, its wrapped Span
becomes an implicit parent of any newly-created Span at
Tracer.start_active_span() time.
Returns:
The Scope that is active, or None if not available.
"""
ctx = current_context()
return ctx.scope
def activate(self, span: Span, finish_on_close: bool) -> Scope:
"""
Makes a Span active.
Args:
span: the span that should become active.
finish_on_close: whether Span should be automatically finished when
Scope.close() is called.
Returns:
Scope to control the end of the active period for
*span*. It is a programming error to neglect to call
Scope.close() on the returned instance.
"""
ctx = current_context()
if not ctx:
logger.error("Tried to activate scope outside of loggingcontext")
return Scope(None, span) # type: ignore[arg-type]
if ctx.scope is not None:
# start a new logging context as a child of the existing one.
# Doing so -- rather than updating the existing logcontext -- means that
# creating several concurrent spans under the same logcontext works
# correctly.
ctx = nested_logging_context("")
enter_logcontext = True
else:
# if there is no span currently associated with the current logcontext, we
# just store the scope in it.
#
# This feels a bit dubious, but it does hack around a problem where a
# span outlasts its parent logcontext (which would otherwise lead to
# "Re-starting finished log context" errors).
enter_logcontext = False
scope = _LogContextScope(self, span, ctx, enter_logcontext, finish_on_close)
ctx.scope = scope
if enter_logcontext:
ctx.__enter__()
return scope
class _LogContextScope(Scope):
"""
A custom opentracing scope, associated with a LogContext
* When the scope is closed, the logcontext's active scope is reset to None,
and - if enter_logcontext was set - the logcontext is finished too.
"""
def __init__(
self,
manager: LogContextScopeManager,
span: Span,
logcontext: LoggingContext,
enter_logcontext: bool,
finish_on_close: bool,
):
"""
Args:
manager:
the manager that is responsible for this scope.
span:
the opentracing span which this scope represents the local
lifetime for.
logcontext:
the log context to which this scope is attached.
enter_logcontext:
if True the log context will be exited when the scope is finished
finish_on_close:
if True finish the span when the scope is closed
"""
super().__init__(manager, span)
self.logcontext = logcontext
self._finish_on_close = finish_on_close
self._enter_logcontext = enter_logcontext
def __str__(self) -> str:
return f"Scope<{self.span}>"
def close(self) -> None:
active_scope = self.manager.active
if active_scope is not self:
logger.error(
"Closing scope %s which is not the currently-active one %s",
self,
active_scope,
)
if self._finish_on_close:
self.span.finish()
self.logcontext.scope = None
if self._enter_logcontext:
self.logcontext.__exit__(None, None, None)

View File

@@ -49,7 +49,7 @@ from synapse.api.constants import (
from synapse.api.room_versions import PushRuleRoomFlag
from synapse.event_auth import auth_types_for_event, get_user_power_level
from synapse.events import EventBase, relation_from_event
from synapse.events.snapshot import EventContext
from synapse.events.snapshot import EventContext, EventPersistencePair
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.metrics import SERVER_NAME_LABEL
from synapse.state import CREATE_KEY, POWER_KEY
@@ -352,7 +352,7 @@ class BulkPushRuleEvaluator:
return related_events
async def action_for_events_by_user(
self, events_and_context: List[Tuple[EventBase, EventContext]]
self, events_and_context: List[EventPersistencePair]
) -> None:
"""Given a list of events and their associated contexts, evaluate the push rules
for each event, check if the message should increment the unread count, and

View File

@@ -24,8 +24,8 @@ from typing import TYPE_CHECKING, List, Tuple
from twisted.web.server import Request
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersion
from synapse.events import EventBase, make_event_from_dict
from synapse.events.snapshot import EventContext
from synapse.events import make_event_from_dict
from synapse.events.snapshot import EventContext, EventPersistencePair
from synapse.http.server import HttpServer
from synapse.replication.http._base import ReplicationEndpoint
from synapse.types import JsonDict
@@ -86,7 +86,7 @@ class ReplicationFederationSendEventsRestServlet(ReplicationEndpoint):
async def _serialize_payload( # type: ignore[override]
store: "DataStore",
room_id: str,
event_and_contexts: List[Tuple[EventBase, EventContext]],
event_and_contexts: List[EventPersistencePair],
backfilled: bool,
) -> JsonDict:
"""

View File

@@ -25,8 +25,8 @@ from typing import TYPE_CHECKING, List, Tuple
from twisted.web.server import Request
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.events import EventBase, make_event_from_dict
from synapse.events.snapshot import EventContext
from synapse.events import make_event_from_dict
from synapse.events.snapshot import EventContext, EventPersistencePair
from synapse.http.server import HttpServer
from synapse.replication.http._base import ReplicationEndpoint
from synapse.types import JsonDict, Requester, UserID
@@ -85,7 +85,7 @@ class ReplicationSendEventsRestServlet(ReplicationEndpoint):
@staticmethod
async def _serialize_payload( # type: ignore[override]
events_and_context: List[Tuple[EventBase, EventContext]],
events_and_context: List[EventPersistencePair],
store: "DataStore",
requester: Requester,
ratelimit: bool,

View File

@@ -76,11 +76,17 @@ class AuthMetadataServlet(RestServlet):
Advertises the OAuth 2.0 server metadata for the homeserver.
"""
PATTERNS = client_patterns(
"/org.matrix.msc2965/auth_metadata$",
unstable=True,
releases=(),
)
PATTERNS = [
*client_patterns(
"/auth_metadata$",
releases=("v1",),
),
*client_patterns(
"/org.matrix.msc2965/auth_metadata$",
unstable=True,
releases=(),
),
]
def __init__(self, hs: "HomeServer"):
super().__init__()
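
With both pattern sets registered, a client can try the stable route and fall back to the unstable MSC2965 one. A hypothetical client-side probe (the paths are derived from the client_patterns calls above; base_url and the use of requests are assumptions for illustration):

import requests

def fetch_auth_metadata(base_url: str) -> dict:
    for path in (
        "/_matrix/client/v1/auth_metadata",
        "/_matrix/client/unstable/org.matrix.msc2965/auth_metadata",
    ):
        resp = requests.get(base_url + path, timeout=10)
        if resp.status_code == 200:
            return resp.json()
    resp.raise_for_status()
    return {}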

View File

@@ -23,10 +23,19 @@
import logging
import re
from collections import Counter
from typing import TYPE_CHECKING, Any, Dict, Optional, Tuple
from http import HTTPStatus
from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Optional, Tuple, Union
from typing_extensions import Self
from synapse._pydantic_compat import (
StrictBool,
StrictStr,
validator,
)
from synapse.api.auth.mas import MasDelegatedAuth
from synapse.api.errors import (
Codes,
InteractiveAuthIncompleteError,
InvalidAPICallError,
SynapseError,
@@ -37,11 +46,13 @@ from synapse.http.servlet import (
parse_integer,
parse_json_object_from_request,
parse_string,
validate_json_object,
)
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import log_kv, set_tag
from synapse.rest.client._base import client_patterns, interactive_auth_handler
from synapse.types import JsonDict, StreamToken
from synapse.types.rest import RequestBodyModel
from synapse.util.cancellation import cancellable
if TYPE_CHECKING:
@@ -59,7 +70,6 @@ class KeyUploadServlet(RestServlet):
"device_keys": {
"user_id": "<user_id>",
"device_id": "<device_id>",
"valid_until_ts": <millisecond_timestamp>,
"algorithms": [
"m.olm.curve25519-aes-sha2",
]
@@ -111,12 +121,123 @@ class KeyUploadServlet(RestServlet):
self._clock = hs.get_clock()
self._store = hs.get_datastores().main
class KeyUploadRequestBody(RequestBodyModel):
"""
The body of a `POST /_matrix/client/v3/keys/upload` request.
Based on https://spec.matrix.org/v1.16/client-server-api/#post_matrixclientv3keysupload.
"""
class DeviceKeys(RequestBodyModel):
algorithms: List[StrictStr]
"""The encryption algorithms supported by this device."""
device_id: StrictStr
"""The ID of the device these keys belong to. Must match the device ID used when logging in."""
keys: Mapping[StrictStr, StrictStr]
"""
Public identity keys. The names of the properties should be in the
format `<algorithm>:<device_id>`. The keys themselves should be encoded as
specified by the key algorithm.
"""
signatures: Mapping[StrictStr, Mapping[StrictStr, StrictStr]]
"""Signatures for the device key object. A map from user ID, to a map from "<algorithm>:<device_id>" to the signature."""
user_id: StrictStr
"""The ID of the user the device belongs to. Must match the user ID used when logging in."""
class KeyObject(RequestBodyModel):
key: StrictStr
"""The key, encoded using unpadded base64."""
fallback: Optional[StrictBool] = False
"""Whether this is a fallback key. Only used when handling fallback keys."""
signatures: Mapping[StrictStr, Mapping[StrictStr, StrictStr]]
"""Signature for the device. Mapped from user ID to another map of key signing identifier to the signature itself.
See the following for more detail: https://spec.matrix.org/v1.16/appendices/#signing-details
"""
device_keys: Optional[DeviceKeys] = None
"""Identity keys for the device. May be absent if no new identity keys are required."""
fallback_keys: Optional[Mapping[StrictStr, Union[StrictStr, KeyObject]]]
"""
The public key which should be used if the device's one-time keys are
exhausted. The fallback key is not deleted once used, but should be
replaced when additional one-time keys are being uploaded. The server
will notify the client of the fallback key being used through `/sync`.
There can only be at most one key per algorithm uploaded, and the server
will only persist one key per algorithm.
When uploading a signed key, an additional `fallback: true` property should be
included to denote that the key is a fallback key.
May be absent if a new fallback key is not required.
"""
@validator("fallback_keys", pre=True)
def validate_fallback_keys(cls: Self, v: Any) -> Any:
if v is None:
return v
if not isinstance(v, dict):
raise TypeError("fallback_keys must be a mapping")
for k in v.keys():
if len(k.split(":")) != 2:
raise SynapseError(
code=HTTPStatus.BAD_REQUEST,
errcode=Codes.BAD_JSON,
msg=f"Invalid fallback_keys key {k!r}. "
'Expected "<algorithm>:<device_id>".',
)
return v
one_time_keys: Optional[Mapping[StrictStr, Union[StrictStr, KeyObject]]] = None
"""
One-time public keys for "pre-key" messages. The names of the properties
should be in the format `<algorithm>:<key_id>`.
The format of the key is determined by the key algorithm, see:
https://spec.matrix.org/v1.16/client-server-api/#key-algorithms.
"""
@validator("one_time_keys", pre=True)
def validate_one_time_keys(cls: Self, v: Any) -> Any:
if v is None:
return v
if not isinstance(v, dict):
raise TypeError("one_time_keys must be a mapping")
for k in v.keys():
if len(k.split(":")) != 2:
raise SynapseError(
code=HTTPStatus.BAD_REQUEST,
errcode=Codes.BAD_JSON,
msg=f"Invalid one_time_keys key {k!r}. "
'Expected "<algorithm>:<key_id>".',
)
return v
async def on_POST(
self, request: SynapseRequest, device_id: Optional[str]
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request, allow_guest=True)
user_id = requester.user.to_string()
# Parse the request body. Validate separately, as the handler expects a
# plain dict, rather than any parsed object.
#
# Note: It would be nice to work with a parsed object, but the handler
# needs to encode portions of the request body as canonical JSON before
# storing the result in the DB. There's little point in converting to a
# parsed object and then back to a dict.
body = parse_json_object_from_request(request)
validate_json_object(body, self.KeyUploadRequestBody)
if device_id is not None:
# Providing the device_id should only be done for setting keys
@@ -149,8 +270,31 @@ class KeyUploadServlet(RestServlet):
400, "To upload keys, you must pass device_id when authenticating"
)
if "device_keys" in body and isinstance(body["device_keys"], dict):
# Validate the provided `user_id` and `device_id` fields in
# `device_keys` match that of the requesting user. We can't do
# this directly in the pydantic model as we don't have access
# to the requester yet.
#
# TODO: We could use ValidationInfo when we switch to Pydantic v2.
# https://docs.pydantic.dev/latest/concepts/validators/#validation-info
if body["device_keys"].get("user_id") != user_id:
raise SynapseError(
code=HTTPStatus.BAD_REQUEST,
errcode=Codes.BAD_JSON,
msg="Provided `user_id` in `device_keys` does not match that of the authenticated user",
)
if body["device_keys"].get("device_id") != device_id:
raise SynapseError(
code=HTTPStatus.BAD_REQUEST,
errcode=Codes.BAD_JSON,
msg="Provided `device_id` in `device_keys` does not match that of the authenticated user device",
)
result = await self.e2e_keys_handler.upload_keys_for_user(
user_id=user_id, device_id=device_id, keys=body
user_id=user_id,
device_id=device_id,
keys=body,
)
return 200, result
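
A trimmed sketch of the "validate, then keep the plain dict" pattern used above (pydantic v1-style validator, with the model cut down to one field; the names are illustrative):

from typing import Any, Mapping, Optional
from pydantic import BaseModel, StrictStr, validator

class MiniKeyUploadBody(BaseModel):
    one_time_keys: Optional[Mapping[StrictStr, Any]] = None

    @validator("one_time_keys", pre=True)
    def check_key_names(cls, v: Any) -> Any:
        if v is None:
            return v
        if not isinstance(v, dict):
            raise TypeError("one_time_keys must be a mapping")
        for k in v:
            if len(k.split(":")) != 2:
                raise ValueError(f"Invalid key {k!r}, expected '<algorithm>:<key_id>'")
        return v

body = {"one_time_keys": {"signed_curve25519:AAAAHg": {"key": "zKbL..."}}}
MiniKeyUploadBody(**body)  # raises on malformed input...
# ...while the handler keeps working with the untouched `body` dict.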

View File

@@ -363,9 +363,6 @@ class SyncRestServlet(RestServlet):
# https://github.com/matrix-org/matrix-doc/blob/54255851f642f84a4f1aaf7bc063eebe3d76752b/proposals/2732-olm-fallback-keys.md
# states that this field should always be included, as long as the server supports the feature.
response["org.matrix.msc2732.device_unused_fallback_key_types"] = (
sync_result.device_unused_fallback_key_types
)
response["device_unused_fallback_key_types"] = (
sync_result.device_unused_fallback_key_types
)

View File

@@ -51,7 +51,7 @@ from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.events.snapshot import EventContext, EventPersistencePair
from synapse.handlers.worker_lock import NEW_EVENT_DURING_PURGE_LOCK_NAME
from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable
from synapse.logging.opentracing import (
@@ -144,7 +144,7 @@ class _PersistEventsTask:
name: ClassVar[str] = "persist_event_batch" # used for opentracing
events_and_contexts: List[Tuple[EventBase, EventContext]]
events_and_contexts: List[EventPersistencePair]
backfilled: bool
def try_merge(self, task: "_EventPersistQueueTask") -> bool:
@@ -391,7 +391,7 @@ class EventsPersistenceStorageController:
@trace
async def persist_events(
self,
events_and_contexts: Iterable[Tuple[EventBase, EventContext]],
events_and_contexts: Iterable[EventPersistencePair],
backfilled: bool = False,
) -> Tuple[List[EventBase], RoomStreamToken]:
"""
@@ -414,7 +414,7 @@ class EventsPersistenceStorageController:
a room that has been un-partial stated.
"""
event_ids: List[str] = []
partitioned: Dict[str, List[Tuple[EventBase, EventContext]]] = {}
partitioned: Dict[str, List[EventPersistencePair]] = {}
for event, ctx in events_and_contexts:
partitioned.setdefault(event.room_id, []).append((event, ctx))
event_ids.append(event.event_id)
@@ -430,7 +430,7 @@ class EventsPersistenceStorageController:
set_tag(SynapseTags.FUNC_ARG_PREFIX + "backfilled", str(backfilled))
async def enqueue(
item: Tuple[str, List[Tuple[EventBase, EventContext]]],
item: Tuple[str, List[EventPersistencePair]],
) -> Dict[str, str]:
room_id, evs_ctxs = item
return await self._event_persist_queue.add_to_queue(
@@ -677,7 +677,7 @@ class EventsPersistenceStorageController:
return replaced_events
async def _calculate_new_forward_extremities_and_state_delta(
self, room_id: str, ev_ctx_rm: List[Tuple[EventBase, EventContext]]
self, room_id: str, ev_ctx_rm: List[EventPersistencePair]
) -> Tuple[Optional[Set[str]], Optional[DeltaState]]:
"""Calculates the new forward extremities and state delta for a room
given events to persist.
@@ -802,7 +802,7 @@ class EventsPersistenceStorageController:
async def _calculate_new_extremities(
self,
room_id: str,
event_contexts: List[Tuple[EventBase, EventContext]],
event_contexts: List[EventPersistencePair],
latest_event_ids: AbstractSet[str],
) -> Set[str]:
"""Calculates the new forward extremities for a room given events to
@@ -862,7 +862,7 @@ class EventsPersistenceStorageController:
async def _get_new_state_after_events(
self,
room_id: str,
events_context: List[Tuple[EventBase, EventContext]],
events_context: List[EventPersistencePair],
old_latest_event_ids: AbstractSet[str],
new_latest_event_ids: Set[str],
) -> Tuple[Optional[StateMap[str]], Optional[StateMap[str]], Set[str]]:
@@ -1039,7 +1039,7 @@ class EventsPersistenceStorageController:
new_latest_event_ids: Set[str],
resolved_state_group: int,
event_id_to_state_group: Dict[str, int],
events_context: List[Tuple[EventBase, EventContext]],
events_context: List[EventPersistencePair],
) -> Set[str]:
"""See if we can prune any of the extremities after calculating the
resolved state.
@@ -1176,7 +1176,7 @@ class EventsPersistenceStorageController:
async def _is_server_still_joined(
self,
room_id: str,
ev_ctx_rm: List[Tuple[EventBase, EventContext]],
ev_ctx_rm: List[EventPersistencePair],
delta: DeltaState,
) -> bool:
"""Check if the server will still be joined after the given events have

View File

@@ -682,6 +682,8 @@ class StateStorageController:
- the stream id which these results go up to
- list of current_state_delta_stream rows. If it is empty, we are
up to date.
A maximum of 100 rows will be returned.
"""
# FIXME(faster_joins): what do we do here?
# https://github.com/matrix-org/synapse/issues/13008

View File

@@ -182,6 +182,21 @@ class DelayedEventsStore(SQLBaseStore):
"restart_delayed_event", restart_delayed_event_txn
)
async def get_count_of_delayed_events(self) -> int:
"""Returns the number of pending delayed events in the DB."""
def _get_count_of_delayed_events(txn: LoggingTransaction) -> int:
sql = "SELECT count(*) FROM delayed_events"
txn.execute(sql)
resp = txn.fetchone()
return resp[0] if resp is not None else 0
return await self.db_pool.runInteraction(
"get_count_of_delayed_events",
_get_count_of_delayed_events,
)
async def get_all_delayed_events_for_user(
self,
user_localpart: str,

View File

@@ -57,7 +57,7 @@ from synapse.events import (
is_creator,
relation_from_event,
)
from synapse.events.snapshot import EventContext
from synapse.events.snapshot import EventPersistencePair
from synapse.events.utils import parse_stripped_state_event
from synapse.logging.opentracing import trace
from synapse.metrics import SERVER_NAME_LABEL
@@ -274,7 +274,7 @@ class PersistEventsStore:
async def _persist_events_and_state_updates(
self,
room_id: str,
events_and_contexts: List[Tuple[EventBase, EventContext]],
events_and_contexts: List[EventPersistencePair],
*,
state_delta_for_room: Optional[DeltaState],
new_forward_extremities: Optional[Set[str]],
@@ -532,7 +532,7 @@ class PersistEventsStore:
async def _calculate_sliding_sync_table_changes(
self,
room_id: str,
events_and_contexts: Sequence[Tuple[EventBase, EventContext]],
events_and_contexts: Sequence[EventPersistencePair],
delta_state: DeltaState,
) -> SlidingSyncTableChanges:
"""
@@ -1016,7 +1016,7 @@ class PersistEventsStore:
txn: LoggingTransaction,
*,
room_id: str,
events_and_contexts: List[Tuple[EventBase, EventContext]],
events_and_contexts: List[EventPersistencePair],
inhibit_local_membership_updates: bool,
state_delta_for_room: Optional[DeltaState],
new_forward_extremities: Optional[Set[str]],
@@ -1666,7 +1666,7 @@ class PersistEventsStore:
def _persist_transaction_ids_txn(
self,
txn: LoggingTransaction,
events_and_contexts: List[Tuple[EventBase, EventContext]],
events_and_contexts: List[EventPersistencePair],
) -> None:
"""Persist the mapping from transaction IDs to event IDs (if defined)."""
@@ -2316,7 +2316,7 @@ class PersistEventsStore:
self,
txn: LoggingTransaction,
room_id: str,
events_and_contexts: List[Tuple[EventBase, EventContext]],
events_and_contexts: List[EventPersistencePair],
) -> None:
"""
Update the latest `event_stream_ordering`/`bump_stamp` columns in the
@@ -2456,8 +2456,8 @@ class PersistEventsStore:
@classmethod
def _filter_events_and_contexts_for_duplicates(
cls, events_and_contexts: List[Tuple[EventBase, EventContext]]
) -> List[Tuple[EventBase, EventContext]]:
cls, events_and_contexts: List[EventPersistencePair]
) -> List[EventPersistencePair]:
"""Ensure that we don't have the same event twice.
Pick the earliest non-outlier if there is one, else the earliest one.
@@ -2468,9 +2468,7 @@ class PersistEventsStore:
Returns:
filtered list
"""
new_events_and_contexts: OrderedDict[str, Tuple[EventBase, EventContext]] = (
OrderedDict()
)
new_events_and_contexts: OrderedDict[str, EventPersistencePair] = OrderedDict()
for event, context in events_and_contexts:
prev_event_context = new_events_and_contexts.get(event.event_id)
if prev_event_context:
@@ -2488,7 +2486,7 @@ class PersistEventsStore:
self,
txn: LoggingTransaction,
room_id: str,
events_and_contexts: List[Tuple[EventBase, EventContext]],
events_and_contexts: List[EventPersistencePair],
) -> None:
"""Update min_depth for each room
@@ -2530,8 +2528,8 @@ class PersistEventsStore:
def _update_outliers_txn(
self,
txn: LoggingTransaction,
events_and_contexts: List[Tuple[EventBase, EventContext]],
) -> List[Tuple[EventBase, EventContext]]:
events_and_contexts: List[EventPersistencePair],
) -> List[EventPersistencePair]:
"""Update any outliers with new event info.
This turns outliers into ex-outliers (unless the new event was rejected), and
@@ -2638,7 +2636,7 @@ class PersistEventsStore:
def _store_event_txn(
self,
txn: LoggingTransaction,
events_and_contexts: Collection[Tuple[EventBase, EventContext]],
events_and_contexts: Collection[EventPersistencePair],
) -> None:
"""Insert new events into the event, event_json, redaction and
state_events tables.
@@ -2742,8 +2740,8 @@ class PersistEventsStore:
def _store_rejected_events_txn(
self,
txn: LoggingTransaction,
events_and_contexts: List[Tuple[EventBase, EventContext]],
) -> List[Tuple[EventBase, EventContext]]:
events_and_contexts: List[EventPersistencePair],
) -> List[EventPersistencePair]:
"""Add rows to the 'rejections' table for received events which were
rejected
@@ -2770,8 +2768,8 @@ class PersistEventsStore:
self,
txn: LoggingTransaction,
*,
events_and_contexts: List[Tuple[EventBase, EventContext]],
all_events_and_contexts: List[Tuple[EventBase, EventContext]],
events_and_contexts: List[EventPersistencePair],
all_events_and_contexts: List[EventPersistencePair],
inhibit_local_membership_updates: bool = False,
) -> None:
"""Update all the miscellaneous tables for new events
@@ -2865,7 +2863,7 @@ class PersistEventsStore:
def _add_to_cache(
self,
txn: LoggingTransaction,
events_and_contexts: List[Tuple[EventBase, EventContext]],
events_and_contexts: List[EventPersistencePair],
) -> None:
to_prefill: List[EventCacheEntry] = []
@@ -3338,8 +3336,8 @@ class PersistEventsStore:
def _set_push_actions_for_event_and_users_txn(
self,
txn: LoggingTransaction,
events_and_contexts: List[Tuple[EventBase, EventContext]],
all_events_and_contexts: List[Tuple[EventBase, EventContext]],
events_and_contexts: List[EventPersistencePair],
all_events_and_contexts: List[EventPersistencePair],
) -> None:
"""Handles moving push actions from staging table to main
event_push_actions table for all events in `events_and_contexts`.
@@ -3422,7 +3420,7 @@ class PersistEventsStore:
def _store_event_state_mappings_txn(
self,
txn: LoggingTransaction,
events_and_contexts: Collection[Tuple[EventBase, EventContext]],
events_and_contexts: Collection[EventPersistencePair],
) -> None:
"""
Raises:

View File

@@ -81,6 +81,7 @@ from synapse.storage.database import (
DatabasePool,
LoggingDatabaseConnection,
LoggingTransaction,
make_tuple_in_list_sql_clause,
)
from synapse.storage.types import Cursor
from synapse.storage.util.id_generators import (
@@ -1337,6 +1338,7 @@ class EventsWorkerStore(SQLBaseStore):
fetched_event_ids: Set[str] = set()
fetched_events: Dict[str, _EventRow] = {}
@trace
async def _fetch_event_ids_and_get_outstanding_redactions(
event_ids_to_fetch: Collection[str],
) -> Collection[str]:
@@ -1344,6 +1346,10 @@ class EventsWorkerStore(SQLBaseStore):
Fetch all of the given event_ids and return any associated redaction event_ids
that we still need to fetch in the next iteration.
"""
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids_to_fetch.length",
str(len(event_ids_to_fetch)),
)
row_map = await self._enqueue_events(event_ids_to_fetch)
# we need to recursively fetch any redactions of those events
@@ -1617,21 +1623,28 @@ class EventsWorkerStore(SQLBaseStore):
# likely that some of these events may be for the same room/user combo, in
# which case we don't need to do redundant queries
to_check_set = set(to_check)
for room_and_user in to_check_set:
room_redactions_sql = "SELECT redacting_event_id, redact_end_ordering FROM room_ban_redactions WHERE room_id = ? and user_id = ?"
txn.execute(room_redactions_sql, room_and_user)
res = txn.fetchone()
# we have a redaction for a room, user_id combo - apply it to matching events
if not res:
continue
room_redaction_sql = "SELECT room_id, user_id, redacting_event_id, redact_end_ordering FROM room_ban_redactions WHERE "
(
in_list_clause,
room_redaction_args,
) = make_tuple_in_list_sql_clause(
self.database_engine, ("room_id", "user_id"), to_check_set
)
txn.execute(room_redaction_sql + in_list_clause, room_redaction_args)
for (
returned_room_id,
returned_user_id,
redacting_event_id,
redact_end_ordering,
) in txn:
for e_row in events:
e_json = json.loads(e_row.json)
room_id = e_json.get("room_id")
user_id = e_json.get("sender")
room_and_user = (returned_room_id, returned_user_id)
# check if we have a redaction match for this room, user combination
if room_and_user != (room_id, user_id):
continue
redacting_event_id, redact_end_ordering = res
if redact_end_ordering:
# Avoid redacting any events arriving *after* the membership event which
# ends an active redaction - note that this will always redact
@@ -2122,6 +2135,39 @@ class EventsWorkerStore(SQLBaseStore):
return rows, to_token, True
async def get_senders_for_event_ids(
self, event_ids: Collection[str]
) -> Dict[str, Optional[str]]:
"""
Given a sequence of event IDs, return the sender associated with each.
Args:
event_ids: A collection of event IDs as strings.
Returns:
A dict of event ID -> sender of the event.
If a given event ID does not exist in the `events` table, then no entry
for that event ID will be returned.
"""
def _get_senders_for_event_ids(
txn: LoggingTransaction,
) -> Dict[str, Optional[str]]:
rows = self.db_pool.simple_select_many_txn(
txn=txn,
table="events",
column="event_id",
iterable=event_ids,
keyvalues={},
retcols=["event_id", "sender"],
)
return dict(rows)
return await self.db_pool.runInteraction(
"get_senders_for_event_ids", _get_senders_for_event_ids
)
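
For illustration, the rows come back as (event_id, sender) tuples, so dict(rows) produces the documented mapping (the values below are made up):

rows = [("$ev1", "@alice:example.org"), ("$ev2", None)]
senders = dict(rows)
assert senders["$ev1"] == "@alice:example.org"
assert senders["$ev2"] is None
assert "$purged" not in senders  # absent events simply have no entry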
@cached(max_entries=5000)
async def get_event_ordering(self, event_id: str, room_id: str) -> Tuple[int, int]:
res = await self.db_pool.simple_select_one(

View File

@@ -94,6 +94,8 @@ class StateDeltasStore(SQLBaseStore):
- the stream id which these results go up to
- list of current_state_delta_stream rows. If it is empty, we are
up to date.
A maximum of 100 rows will be returned.
"""
prev_stream_id = int(prev_stream_id)

View File

@@ -25,8 +25,7 @@ from typing import (
Tuple,
)
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.events.snapshot import EventPersistencePair
from synapse.storage.database import (
DatabasePool,
LoggingDatabaseConnection,
@@ -228,7 +227,7 @@ class StateDeletionDataStore:
@contextlib.asynccontextmanager
async def persisting_state_group_references(
self, event_and_contexts: Collection[Tuple[EventBase, EventContext]]
self, event_and_contexts: Collection[EventPersistencePair]
) -> AsyncIterator[None]:
"""Wraps the persistence of the given events and contexts, ensuring that
any state groups referenced still exist and that they don't get deleted

synapse/util/sentinel.py Normal file
View File

@@ -0,0 +1,21 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2025 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import enum
class Sentinel(enum.Enum):
# defining a sentinel in this way allows mypy to correctly handle the
# type of a dictionary lookup and subsequent type narrowing.
UNSET_SENTINEL = object()
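
A sketch of the narrowing that comment describes, using the Sentinel defined above: with the enum member as the dict.get default, mypy can tell "key absent" apart from "value is None":

from typing import Dict, Optional, Union

senders: Dict[str, Optional[str]] = {"$ev": None}

val: Union[Optional[str], Sentinel] = senders.get("$other", Sentinel.UNSET_SENTINEL)
if val is Sentinel.UNSET_SENTINEL:
    ...  # key missing entirely
elif val is None:
    ...  # key present, value explicitly None
else:
    ...  # mypy narrows val to str here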

View File

@@ -410,7 +410,6 @@ class E2eKeysHandlerTestCase(unittest.HomeserverTestCase):
device_id = "xyz"
fallback_key = {"alg1:k1": "fallback_key1"}
fallback_key2 = {"alg1:k2": "fallback_key2"}
fallback_key3 = {"alg1:k2": "fallback_key3"}
otk = {"alg1:k2": "key2"}
# we shouldn't have any unused fallback keys yet
@@ -531,28 +530,6 @@ class E2eKeysHandlerTestCase(unittest.HomeserverTestCase):
{"failures": {}, "one_time_keys": {local_user: {device_id: fallback_key2}}},
)
# using the unstable prefix should also set the fallback key
self.get_success(
self.handler.upload_keys_for_user(
local_user,
device_id,
{"org.matrix.msc2732.fallback_keys": fallback_key3},
)
)
claim_res = self.get_success(
self.handler.claim_one_time_keys(
{local_user: {device_id: {"alg1": 1}}},
self.requester,
timeout=None,
always_include_fallback_keys=False,
)
)
self.assertEqual(
claim_res,
{"failures": {}, "one_time_keys": {local_user: {device_id: fallback_key3}}},
)
def test_fallback_key_bulk(self) -> None:
"""Like test_fallback_key, but claims multiple keys in one handler call."""
alice = f"@alice:{self.hs.hostname}"

View File

@@ -25,11 +25,11 @@ import time
from http import HTTPStatus
from http.server import BaseHTTPRequestHandler, HTTPServer
from io import BytesIO
from typing import Any, Coroutine, Dict, Generator, Optional, TypeVar, Union
from typing import Any, ClassVar, Coroutine, Dict, Generator, Optional, TypeVar, Union
from unittest.mock import ANY, AsyncMock, Mock
from urllib.parse import parse_qs
from parameterized import parameterized_class
from parameterized.parameterized import parameterized_class
from signedjson.key import (
encode_verify_key_base64,
generate_signing_key,
@@ -46,7 +46,6 @@ from synapse.api.errors import (
Codes,
HttpResponseException,
InvalidClientTokenError,
OAuthInsufficientScopeError,
SynapseError,
)
from synapse.appservice import ApplicationService
@@ -78,11 +77,7 @@ JWKS_URI = ISSUER + ".well-known/jwks.json"
INTROSPECTION_ENDPOINT = ISSUER + "introspect"
SYNAPSE_ADMIN_SCOPE = "urn:synapse:admin:*"
MATRIX_USER_SCOPE = "urn:matrix:org.matrix.msc2967.client:api:*"
MATRIX_GUEST_SCOPE = "urn:matrix:org.matrix.msc2967.client:api:guest"
MATRIX_DEVICE_SCOPE_PREFIX = "urn:matrix:org.matrix.msc2967.client:device:"
DEVICE = "AABBCCDD"
MATRIX_DEVICE_SCOPE = MATRIX_DEVICE_SCOPE_PREFIX + DEVICE
SUBJECT = "abc-def-ghi"
USERNAME = "test-user"
USER_ID = "@" + USERNAME + ":" + SERVER_NAME
@@ -112,7 +107,24 @@ async def get_json(url: str) -> JsonDict:
@skip_unless(HAS_AUTHLIB, "requires authlib")
@parameterized_class(
("device_scope_prefix", "api_scope"),
[
("urn:matrix:client:device:", "urn:matrix:client:api:*"),
(
"urn:matrix:org.matrix.msc2967.client:device:",
"urn:matrix:org.matrix.msc2967.client:api:*",
),
],
)
class MSC3861OAuthDelegation(HomeserverTestCase):
device_scope_prefix: ClassVar[str]
api_scope: ClassVar[str]
@property
def device_scope(self) -> str:
return self.device_scope_prefix + DEVICE
servlets = [
account.register_servlets,
keys.register_servlets,
@@ -212,7 +224,7 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
"""The handler should return a 500 when no subject is present."""
self._set_introspection_returnvalue(
{"active": True, "scope": " ".join([MATRIX_USER_SCOPE])}
{"active": True, "scope": " ".join([self.api_scope])},
)
request = Mock(args={})
@@ -235,7 +247,7 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
{
"active": True,
"sub": SUBJECT,
"scope": " ".join([MATRIX_DEVICE_SCOPE]),
"scope": " ".join([self.device_scope]),
}
)
request = Mock(args={})
@@ -282,7 +294,7 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
{
"active": True,
"sub": SUBJECT,
"scope": " ".join([SYNAPSE_ADMIN_SCOPE, MATRIX_USER_SCOPE]),
"scope": " ".join([SYNAPSE_ADMIN_SCOPE, self.api_scope]),
"username": USERNAME,
}
)
@@ -312,9 +324,7 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
{
"active": True,
"sub": SUBJECT,
"scope": " ".join(
[SYNAPSE_ADMIN_SCOPE, MATRIX_USER_SCOPE, MATRIX_GUEST_SCOPE]
),
"scope": " ".join([SYNAPSE_ADMIN_SCOPE, self.api_scope]),
"username": USERNAME,
}
)
@@ -344,7 +354,7 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
{
"active": True,
"sub": SUBJECT,
"scope": " ".join([MATRIX_USER_SCOPE]),
"scope": " ".join([self.api_scope]),
"username": USERNAME,
}
)
@@ -374,7 +384,7 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
{
"active": True,
"sub": SUBJECT,
"scope": " ".join([MATRIX_USER_SCOPE, MATRIX_DEVICE_SCOPE]),
"scope": " ".join([self.api_scope, self.device_scope]),
"username": USERNAME,
}
)
@@ -404,7 +414,7 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
{
"active": True,
"sub": SUBJECT,
"scope": " ".join([MATRIX_USER_SCOPE]),
"scope": " ".join([self.api_scope]),
"device_id": DEVICE,
"username": USERNAME,
}
@@ -444,9 +454,9 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
"sub": SUBJECT,
"scope": " ".join(
[
MATRIX_USER_SCOPE,
f"{MATRIX_DEVICE_SCOPE_PREFIX}AABBCC",
f"{MATRIX_DEVICE_SCOPE_PREFIX}DDEEFF",
self.api_scope,
f"{self.device_scope_prefix}AABBCC",
f"{self.device_scope_prefix}DDEEFF",
]
),
"username": USERNAME,
@@ -457,68 +467,6 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
request.requestHeaders.getRawHeaders = mock_getRawHeaders()
self.get_failure(self.auth.get_user_by_req(request), AuthError)
def test_active_guest_not_allowed(self) -> None:
"""The handler should return an insufficient scope error."""
self._set_introspection_returnvalue(
{
"active": True,
"sub": SUBJECT,
"scope": " ".join([MATRIX_GUEST_SCOPE, MATRIX_DEVICE_SCOPE]),
"username": USERNAME,
}
)
request = Mock(args={})
request.args[b"access_token"] = [b"mockAccessToken"]
request.requestHeaders.getRawHeaders = mock_getRawHeaders()
error = self.get_failure(
self.auth.get_user_by_req(request), OAuthInsufficientScopeError
)
self.http_client.get_json.assert_called_once_with(WELL_KNOWN)
self._rust_client.post.assert_called_once_with(
url=INTROSPECTION_ENDPOINT,
response_limit=ANY,
request_body=ANY,
headers=ANY,
)
self._assertParams()
self.assertEqual(
getattr(error.value, "headers", {})["WWW-Authenticate"],
'Bearer error="insufficient_scope", scope="urn:matrix:org.matrix.msc2967.client:api:*"',
)
def test_active_guest_allowed(self) -> None:
"""The handler should return a requester with guest user rights and a device ID."""
self._set_introspection_returnvalue(
{
"active": True,
"sub": SUBJECT,
"scope": " ".join([MATRIX_GUEST_SCOPE, MATRIX_DEVICE_SCOPE]),
"username": USERNAME,
}
)
request = Mock(args={})
request.args[b"access_token"] = [b"mockAccessToken"]
request.requestHeaders.getRawHeaders = mock_getRawHeaders()
requester = self.get_success(
self.auth.get_user_by_req(request, allow_guest=True)
)
self.http_client.get_json.assert_called_once_with(WELL_KNOWN)
self._rust_client.post.assert_called_once_with(
url=INTROSPECTION_ENDPOINT,
response_limit=ANY,
request_body=ANY,
headers=ANY,
)
self._assertParams()
self.assertEqual(requester.user.to_string(), "@%s:%s" % (USERNAME, SERVER_NAME))
self.assertEqual(requester.is_guest, True)
self.assertEqual(
get_awaitable_result(self.auth.is_server_admin(requester)), False
)
self.assertEqual(requester.device_id, DEVICE)
def test_unavailable_introspection_endpoint(self) -> None:
"""The handler should return an internal server error."""
request = Mock(args={})
@@ -562,8 +510,8 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
"sub": SUBJECT,
"scope": " ".join(
[
MATRIX_USER_SCOPE,
f"{MATRIX_DEVICE_SCOPE_PREFIX}AABBCC",
self.api_scope,
f"{self.device_scope_prefix}AABBCC",
]
),
"username": USERNAME,
@@ -611,7 +559,7 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
{
"active": True,
"sub": SUBJECT,
"scope": " ".join([MATRIX_USER_SCOPE, MATRIX_DEVICE_SCOPE]),
"scope": " ".join([self.api_scope, self.device_scope]),
"username": USERNAME,
}
)
@@ -676,7 +624,7 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
return json.dumps(
{
"active": True,
"scope": MATRIX_USER_SCOPE,
"scope": self.api_scope,
"sub": SUBJECT,
"username": USERNAME,
},
@@ -842,8 +790,24 @@ class FakeMasServer(HTTPServer):
T = TypeVar("T")
@parameterized_class(
("device_scope_prefix", "api_scope"),
[
("urn:matrix:client:device:", "urn:matrix:client:api:*"),
(
"urn:matrix:org.matrix.msc2967.client:device:",
"urn:matrix:org.matrix.msc2967.client:api:*",
),
],
)
class MasAuthDelegation(HomeserverTestCase):
server: FakeMasServer
device_scope_prefix: ClassVar[str]
api_scope: ClassVar[str]
@property
def device_scope(self) -> str:
return self.device_scope_prefix + DEVICE
def till_deferred_has_result(
self,
@@ -914,12 +878,7 @@ class MasAuthDelegation(HomeserverTestCase):
self.server.introspection_response = {
"active": True,
"sub": SUBJECT,
"scope": " ".join(
[
MATRIX_USER_SCOPE,
f"{MATRIX_DEVICE_SCOPE_PREFIX}{DEVICE}",
]
),
"scope": " ".join([self.api_scope, self.device_scope]),
"username": USERNAME,
"expires_in": 60,
}
@@ -943,12 +902,7 @@ class MasAuthDelegation(HomeserverTestCase):
self.server.introspection_response = {
"active": True,
"sub": SUBJECT,
"scope": " ".join(
[
MATRIX_USER_SCOPE,
f"{MATRIX_DEVICE_SCOPE_PREFIX}{DEVICE}",
]
),
"scope": " ".join([self.api_scope, self.device_scope]),
"username": USERNAME,
}
@@ -971,12 +925,7 @@ class MasAuthDelegation(HomeserverTestCase):
self.server.introspection_response = {
"active": True,
"sub": SUBJECT,
"scope": " ".join(
[
MATRIX_USER_SCOPE,
f"{MATRIX_DEVICE_SCOPE_PREFIX}ABCDEF",
]
),
"scope": " ".join([self.api_scope, f"{self.device_scope_prefix}ABCDEF"]),
"username": USERNAME,
"expires_in": 60,
}
@@ -993,7 +942,7 @@ class MasAuthDelegation(HomeserverTestCase):
self.server.introspection_response = {
"active": True,
"sub": SUBJECT,
"scope": " ".join([MATRIX_USER_SCOPE]),
"scope": " ".join([self.api_scope]),
"username": "inexistent_user",
"expires_in": 60,
}
@@ -1039,7 +988,7 @@ class MasAuthDelegation(HomeserverTestCase):
self.server.introspection_response = {
"active": True,
"sub": SUBJECT,
"scope": MATRIX_USER_SCOPE,
"scope": self.api_scope,
"username": USERNAME,
"expires_in": 60,
"device_id": DEVICE,
@@ -1057,7 +1006,7 @@ class MasAuthDelegation(HomeserverTestCase):
self.server.introspection_response = {
"active": True,
"sub": SUBJECT,
"scope": " ".join([SYNAPSE_ADMIN_SCOPE, MATRIX_USER_SCOPE]),
"scope": " ".join([SYNAPSE_ADMIN_SCOPE, self.api_scope]),
"username": USERNAME,
"expires_in": 60,
}
@@ -1079,12 +1028,7 @@ class MasAuthDelegation(HomeserverTestCase):
self.server.introspection_response = {
"active": True,
"sub": SUBJECT,
"scope": " ".join(
[
MATRIX_USER_SCOPE,
f"{MATRIX_DEVICE_SCOPE_PREFIX}{DEVICE}",
]
),
"scope": " ".join([self.api_scope, self.device_scope]),
"username": USERNAME,
"expires_in": 60,
}

View File

@@ -19,7 +19,7 @@
#
#
from typing import Awaitable, cast
from typing import Awaitable, Dict, cast
from twisted.internet import defer
from twisted.internet.testing import MemoryReactorClock
@@ -38,9 +38,11 @@ from synapse.logging.opentracing import (
from synapse.util import Clock
try:
from synapse.logging.scopecontextmanager import LogContextScopeManager
import opentracing
from opentracing.scope_managers.contextvars import ContextVarsScopeManager
except ImportError:
LogContextScopeManager = None # type: ignore
opentracing = None # type: ignore
ContextVarsScopeManager = None # type: ignore
try:
import jaeger_client
@@ -54,9 +56,10 @@ from tests.unittest import TestCase
logger = logging.getLogger(__name__)
class LogContextScopeManagerTestCase(TestCase):
class TracingScopeTestCase(TestCase):
"""
Test logging contexts and active opentracing spans.
Test that our tracing machinery works well in a variety of situations (especially
with Twisted's runtime and deferreds).
There are casts throughout this file from generic opentracing objects (e.g.
opentracing.Span) to the ones specific to Jaeger since they have additional
@@ -64,7 +67,7 @@ class LogContextScopeManagerTestCase(TestCase):
opentracing backend is Jaeger.
"""
if LogContextScopeManager is None:
if opentracing is None:
skip = "Requires opentracing" # type: ignore[unreachable]
if jaeger_client is None:
skip = "Requires jaeger_client" # type: ignore[unreachable]
@@ -74,7 +77,7 @@ class LogContextScopeManagerTestCase(TestCase):
# global variables that power opentracing. We create our own tracer instance
# and test with it.
- scope_manager = LogContextScopeManager()
+ scope_manager = ContextVarsScopeManager()
config = jaeger_client.config.Config(
config={}, service_name="test", scope_manager=scope_manager
)
@@ -208,6 +211,135 @@ class LogContextScopeManagerTestCase(TestCase):
[scopes[1].span, scopes[2].span, scopes[0].span],
)
def test_run_in_background_active_scope_still_available(self) -> None:
"""
Test that tasks running via `run_in_background` still have access to the
active tracing scope.
This is a regression test for a previous Synapse issue where the tracing scope
would `__exit__` and close before the `run_in_background` task completed, and
our previous custom `_LogContextScope.close(...)` would clear
`LoggingContext.scope`, preventing further tracing spans from having the
correct parent.
"""
reactor = MemoryReactorClock()
clock = Clock(reactor)
scope_map: Dict[str, opentracing.Scope] = {}
async def async_task() -> None:
root_scope = scope_map["root"]
root_context = cast(jaeger_client.SpanContext, root_scope.span.context)
self.assertEqual(
self._tracer.active_span,
root_scope.span,
"expected to inherit the root tracing scope from where this was run",
)
# Return control back to the reactor thread and wait an arbitrary amount
await clock.sleep(4)
# This is a key part of what we're testing! In a previous version of
# Synapse, we would lose the active span at this point.
self.assertEqual(
self._tracer.active_span,
root_scope.span,
"expected to still have a root tracing scope/span active",
)
# For completeness' sake, let's also trace more sub-tasks here and assert
# that they have the correct span parent as well (the root span).
# Start tracing some other sub-task.
#
# This is a key part of what we're testing! In a previous version of
# Synapse, it would have the incorrect span parents.
scope = start_active_span(
"task1",
tracer=self._tracer,
)
scope_map["task1"] = scope
# Ensure the span parent is pointing to the root scope
context = cast(jaeger_client.SpanContext, scope.span.context)
self.assertEqual(
context.parent_id,
root_context.span_id,
"expected task1 parent to be the root span",
)
# Ensure that the active span is our new sub-task now
self.assertEqual(self._tracer.active_span, scope.span)
# Return control back to the reactor thread and wait an arbitrary amount
await clock.sleep(4)
# We should still see the active span as the scope wasn't closed yet
self.assertEqual(self._tracer.active_span, scope.span)
scope.close()
async def root() -> None:
with start_active_span(
"root span",
tracer=self._tracer,
# We will close this off later. We're basically just mimicking the
# pattern we use for requests: the span is passed off to the request,
# which finishes it.
finish_on_close=False,
) as root_scope:
scope_map["root"] = root_scope
self.assertEqual(self._tracer.active_span, root_scope.span)
# Fire-and-forget a task
#
# XXX: The root scope context manager will `__exit__` before this task
# completes.
run_in_background(async_task)
# Because we used `run_in_background`, the active span should still be
# the root.
self.assertEqual(self._tracer.active_span, root_scope.span)
# We shouldn't see any active spans outside of the scope
self.assertIsNone(self._tracer.active_span)
with LoggingContext("root context"):
# Start the test off
d_root = defer.ensureDeferred(root())
# Let the tasks complete
reactor.pump((2,) * 8)
self.successResultOf(d_root)
# After all of the tasks are done (just as a request does when it has
# `_finished_processing`), finish our root span.
scope_map["root"].span.finish()
# Sanity check again: we shouldn't see any active spans left over in this
# context.
self.assertIsNone(self._tracer.active_span)
# The spans should be reported in the order they finished: task1, then
# root.
#
# We use `assertIncludes` first as an easier way to see whether items are
# missing or added. We assert the exact order just below.
self.assertIncludes(
set(self._reporter.get_spans()),
{
scope_map["task1"].span,
scope_map["root"].span,
},
exact=True,
)
# This is where we actually assert the correct order
self.assertEqual(
self._reporter.get_spans(),
[
scope_map["task1"].span,
scope_map["root"].span,
],
)
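For readers unfamiliar with why `ContextVarsScopeManager` fixes this, here is a minimal, self-contained sketch of the underlying mechanism using plain `asyncio` and `contextvars` (an assumed analogy for illustration, not Synapse or opentracing code): each scheduled task gets a copy of the ambient context variables, so the scope that was active when the task was kicked off remains visible inside it even after the caller exits that scope.

# Minimal sketch: context variables are copied into a task when it is
# scheduled, so the "active scope" survives even after the scheduling
# code has exited that scope.
import asyncio
import contextvars

active_scope: contextvars.ContextVar[str] = contextvars.ContextVar(
    "active_scope", default="<none>"
)


async def background_task() -> None:
    # Sees the value captured when the task was created...
    assert active_scope.get() == "root"
    await asyncio.sleep(0)
    # ...and still sees it after yielding back to the event loop.
    assert active_scope.get() == "root"


async def main() -> None:
    token = active_scope.set("root")
    task = asyncio.create_task(background_task())  # context copied here
    active_scope.reset(token)  # the caller "exits" the scope
    assert active_scope.get() == "<none>"
    await task  # the task still observed "root" throughout


asyncio.run(main())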
def test_trace_decorator_sync(self) -> None:
"""
Test whether we can use `@trace_with_opname` (`@trace`) and `@tag_args`

View File

@@ -18,8 +18,11 @@
# [This file includes modifications made by New Vector Limited]
#
from http import HTTPStatus
from typing import ClassVar
from unittest.mock import AsyncMock
from parameterized import parameterized_class
from synapse.rest.client import auth_metadata
from tests.unittest import HomeserverTestCase, override_config, skip_unless
@@ -85,17 +88,22 @@ class AuthIssuerTestCase(HomeserverTestCase):
req_mock.assert_not_called()
@parameterized_class(
("endpoint",),
[
("/_matrix/client/unstable/org.matrix.msc2965/auth_metadata",),
("/_matrix/client/v1/auth_metadata",),
],
)
class AuthMetadataTestCase(HomeserverTestCase):
endpoint: ClassVar[str]
servlets = [
auth_metadata.register_servlets,
]
def test_returns_404_when_msc3861_disabled(self) -> None:
# Make an unauthenticated request for the discovery info.
- channel = self.make_request(
- "GET",
- "/_matrix/client/unstable/org.matrix.msc2965/auth_metadata",
- )
+ channel = self.make_request("GET", self.endpoint)
self.assertEqual(channel.code, HTTPStatus.NOT_FOUND)
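The `parameterized_class` decorator above runs every test in the class once per endpoint, which is how the same assertions cover both the unstable MSC2965 path and the stable `/v1` path. A minimal, self-contained illustration of the mechanism (the class and test names here are illustrative, not from the diff):

import unittest

from parameterized import parameterized_class


@parameterized_class(
    ("endpoint",),
    [
        ("/_matrix/client/unstable/org.matrix.msc2965/auth_metadata",),
        ("/_matrix/client/v1/auth_metadata",),
    ],
)
class EndpointExampleCase(unittest.TestCase):
    endpoint: str

    # This body runs twice, once per generated class, with `self.endpoint`
    # bound to the respective value.
    def test_endpoint_is_client_api(self) -> None:
        self.assertTrue(self.endpoint.startswith("/_matrix/client/"))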
@skip_unless(HAS_AUTHLIB, "requires authlib")
@@ -124,10 +132,7 @@ class AuthMetadataTestCase(HomeserverTestCase):
)
self.hs.get_proxied_http_client().get_json = req_mock # type: ignore[method-assign]
- channel = self.make_request(
- "GET",
- "/_matrix/client/unstable/org.matrix.msc2965/auth_metadata",
- )
+ channel = self.make_request("GET", self.endpoint)
self.assertEqual(channel.code, HTTPStatus.OK)
self.assertEqual(

View File

@@ -40,6 +40,147 @@ from tests.unittest import override_config
from tests.utils import HAS_AUTHLIB
class KeyUploadTestCase(unittest.HomeserverTestCase):
servlets = [
keys.register_servlets,
admin.register_servlets_for_client_rest_resource,
login.register_servlets,
]
def test_upload_keys_fails_on_invalid_structure(self) -> None:
"""Check that we validate the structure of keys upon upload.
Regression test for https://github.com/element-hq/synapse/pull/17097
"""
self.register_user("alice", "wonderland")
alice_token = self.login("alice", "wonderland")
channel = self.make_request(
"POST",
"/_matrix/client/v3/keys/upload",
{
# Error: device_keys must be a dict
"device_keys": ["some", "stuff", "weewoo"]
},
alice_token,
)
self.assertEqual(channel.code, HTTPStatus.BAD_REQUEST, channel.result)
self.assertEqual(
channel.json_body["errcode"],
Codes.BAD_JSON,
channel.result,
)
channel = self.make_request(
"POST",
"/_matrix/client/v3/keys/upload",
{
# Error: properties of fallback_keys must be of the form `<algorithm>:<key_id>`
"fallback_keys": {"invalid_key": "signature_base64"}
},
alice_token,
)
self.assertEqual(channel.code, HTTPStatus.BAD_REQUEST, channel.result)
self.assertEqual(
channel.json_body["errcode"],
Codes.BAD_JSON,
channel.result,
)
channel = self.make_request(
"POST",
"/_matrix/client/v3/keys/upload",
{
# Same as above, but for one_time_keys
"one_time_keys": {"invalid_key": "signature_base64"}
},
alice_token,
)
self.assertEqual(channel.code, HTTPStatus.BAD_REQUEST, channel.result)
self.assertEqual(
channel.json_body["errcode"],
Codes.BAD_JSON,
channel.result,
)
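The servlet-side validation these three requests exercise is not shown in this excerpt. As a rough sketch of the shape of the checks (hand-rolled here for illustration; the actual implementation landed in #17097 may use a different mechanism):

# Illustrative sketch only; not the actual Synapse servlet code.
from typing import Any, Mapping


def validate_key_upload_body(body: Mapping[str, Any]) -> None:
    device_keys = body.get("device_keys")
    if device_keys is not None and not isinstance(device_keys, dict):
        raise ValueError("device_keys must be an object")

    for field in ("one_time_keys", "fallback_keys"):
        keys = body.get(field)
        if keys is None:
            continue
        if not isinstance(keys, dict):
            raise ValueError(f"{field} must be an object")
        for key_id in keys:
            # Key IDs must look like "<algorithm>:<key_id>".
            algorithm, sep, rest = key_id.partition(":")
            if not algorithm or not sep or not rest:
                raise ValueError(f"{field} contains malformed key ID {key_id!r}")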
def test_upload_keys_fails_on_invalid_user_id_or_device_id(self) -> None:
"""
Validate that the requesting user is uploading their own keys and nobody
else's.
"""
device_id = "DEVICE_ID"
alice_user_id = self.register_user("alice", "wonderland")
alice_token = self.login("alice", "wonderland", device_id=device_id)
channel = self.make_request(
"POST",
"/_matrix/client/v3/keys/upload",
{
"device_keys": {
# Included `user_id` does not match requesting user.
"user_id": "@unknown_user:test",
"device_id": device_id,
"algorithms": ["m.olm.curve25519-aes-sha2"],
"keys": {
f"ed25519:{device_id}": "publickey",
},
"signatures": {},
}
},
alice_token,
)
self.assertEqual(channel.code, HTTPStatus.BAD_REQUEST, channel.result)
self.assertEqual(
channel.json_body["errcode"],
Codes.BAD_JSON,
channel.result,
)
channel = self.make_request(
"POST",
"/_matrix/client/v3/keys/upload",
{
"device_keys": {
"user_id": alice_user_id,
# Included `device_id` does not match requesting user's.
"device_id": "UNKNOWN_DEVICE_ID",
"algorithms": ["m.olm.curve25519-aes-sha2"],
"keys": {
f"ed25519:{device_id}": "publickey",
},
"signatures": {},
}
},
alice_token,
)
self.assertEqual(channel.code, HTTPStatus.BAD_REQUEST, channel.result)
self.assertEqual(
channel.json_body["errcode"],
Codes.BAD_JSON,
channel.result,
)
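The two requests above pin down the ownership check: the `user_id` and `device_id` embedded in `device_keys` must match the authenticated requester. A hedged sketch of that check (illustrative names, not the servlet's actual code):

# Illustrative only: the real check lives in the keys-upload handler.
from typing import Any, Mapping


def check_device_keys_owner(
    requester_user_id: str,
    requester_device_id: str,
    device_keys: Mapping[str, Any],
) -> None:
    if device_keys.get("user_id") != requester_user_id:
        raise ValueError("device_keys.user_id does not match the requester")
    if device_keys.get("device_id") != requester_device_id:
        raise ValueError("device_keys.device_id does not match the requester's device")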
def test_upload_keys_succeeds_when_fields_are_explicitly_set_to_null(self) -> None:
"""
This is a regression test for https://github.com/element-hq/synapse/pull/19023.
"""
device_id = "DEVICE_ID"
self.register_user("alice", "wonderland")
alice_token = self.login("alice", "wonderland", device_id=device_id)
channel = self.make_request(
"POST",
"/_matrix/client/v3/keys/upload",
{
"device_keys": None,
"one_time_keys": None,
"fallback_keys": None,
},
alice_token,
)
self.assertEqual(channel.code, HTTPStatus.OK, channel.result)
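The fix exercised here (#19023) amounts to treating an explicit `null` the same as an absent field rather than as a type error. In sketch form (an assumed shape of the fix, not the actual patch):

# Assumed shape of the fix: coalesce explicit nulls to "absent" before
# validating, so {"device_keys": null} is accepted like a missing field.
body = {"device_keys": None, "one_time_keys": None, "fallback_keys": None}
cleaned = {k: v for k, v in body.items() if v is not None}
assert cleaned == {}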
class KeyQueryTestCase(unittest.HomeserverTestCase):
servlets = [
keys.register_servlets,

View File

@@ -20,7 +20,7 @@
#
import logging
- from typing import List, Optional
+ from typing import Dict, List, Optional
from twisted.internet.testing import MemoryReactor
@@ -39,6 +39,77 @@ from tests.unittest import HomeserverTestCase
logger = logging.getLogger(__name__)
class EventsTestCase(HomeserverTestCase):
servlets = [
admin.register_servlets,
room.register_servlets,
login.register_servlets,
]
def prepare(
self, reactor: MemoryReactor, clock: Clock, homeserver: HomeServer
) -> None:
self._store = self.hs.get_datastores().main
def test_get_senders_for_event_ids(self) -> None:
"""Tests the `get_senders_for_event_ids` storage function."""
users_and_tokens: Dict[str, str] = {}
for localpart_suffix in range(10):
localpart = f"user_{localpart_suffix}"
user_id = self.register_user(localpart, "rabbit")
token = self.login(localpart, "rabbit")
users_and_tokens[user_id] = token
room_creator_user_id = self.register_user("room_creator", "rabbit")
room_creator_token = self.login("room_creator", "rabbit")
users_and_tokens[room_creator_user_id] = room_creator_token
# Create a room and invite some users.
room_id = self.helper.create_room_as(
room_creator_user_id, tok=room_creator_token
)
event_ids_to_senders: Dict[str, str] = {}
for user_id, token in users_and_tokens.items():
if user_id == room_creator_user_id:
continue
self.helper.invite(
room=room_id,
targ=user_id,
tok=room_creator_token,
)
# Have the user accept the invite and join the room.
self.helper.join(
room=room_id,
user=user_id,
tok=token,
)
# Have the user send an event.
response = self.helper.send_event(
room_id=room_id,
type="m.room.message",
content={
"msgtype": "m.text",
"body": f"hello, I'm {user_id}!",
},
tok=token,
)
# Record the event ID and sender.
event_id = response["event_id"]
event_ids_to_senders[event_id] = user_id
# Check that `get_senders_for_event_ids` returns the correct data.
response = self.get_success(
self._store.get_senders_for_event_ids(list(event_ids_to_senders.keys()))
)
self.assert_dict(event_ids_to_senders, response)
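`get_senders_for_event_ids` itself is not shown in this excerpt; conceptually it is a batched lookup of the `sender` column keyed by event ID. A plain-SQL sketch of that concept (illustrative only; Synapse's storage layer goes through its own database helpers):

# Illustrative sketch: batched sender lookup, keyed by event ID.
import sqlite3
from typing import Dict, List


def get_senders_for_event_ids(
    conn: sqlite3.Connection, event_ids: List[str]
) -> Dict[str, str]:
    if not event_ids:
        return {}
    placeholders = ", ".join("?" for _ in event_ids)
    rows = conn.execute(
        f"SELECT event_id, sender FROM events WHERE event_id IN ({placeholders})",
        event_ids,
    ).fetchall()
    return {event_id: sender for event_id, sender in rows}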
class ExtremPruneTestCase(HomeserverTestCase):
servlets = [
admin.register_servlets,