Mirror of https://github.com/element-hq/synapse.git (synced 2025-12-07 01:20:16 +00:00)

Compare commits: 18 commits, rei/fix_re ... rei/init_t
| Author | SHA1 | Date |
|---|---|---|
| | d5a029d62e | |
| | e007d640ab | |
| | 0c02022a07 | |
| | 79dc224000 | |
| | caa2012154 | |
| | 5064f35958 | |
| | c30157b3cb | |
| | fda1ffe5b8 | |
| | a4c476305e | |
| | 1803a62db4 | |
| | 8295de87a7 | |
| | 350e84a8a4 | |
| | 69aceef8f6 | |
| | b7946c29be | |
| | d7e238c8ee | |
| | 70f41c4541 | |
| | 26d9ce80c5 | |
| | aa4a7b75d7 | |
10 .ci/before_build_wheel.sh (Normal file)

@@ -0,0 +1,10 @@
#!/bin/sh
set -xeu

# On 32-bit Linux platforms, we need libatomic1 to use rustup
if command -v yum &> /dev/null; then
    yum install -y libatomic
fi

# Install a Rust toolchain
curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain 1.82.0 -y --profile minimal
2 .github/workflows/release-artifacts.yml (vendored)

@@ -139,7 +139,7 @@ jobs:
          python-version: "3.x"

      - name: Install cibuildwheel
        run: python -m pip install cibuildwheel==2.19.1
        run: python -m pip install cibuildwheel==2.23.0

      - name: Set up QEMU to emulate aarch64
        if: matrix.arch == 'aarch64'
76 CHANGES.md

@@ -1,10 +1,82 @@
# Synapse 1.126.0 (2025-03-11)

Administrators using the Debian/Ubuntu packages from `packages.matrix.org`, please check
[the relevant section in the upgrade notes](https://github.com/element-hq/synapse/blob/release-v1.126/docs/upgrade.md#change-of-signing-key-expiry-date-for-the-debianubuntu-package-repository)
as we have recently updated the expiry date on the repository's GPG signing key. The old version of the key will expire on `2025-03-15`.

No significant changes since 1.126.0rc3.


# Synapse 1.126.0rc3 (2025-03-07)

### Bugfixes

- Revert the background job to clear unreferenced state groups (that was introduced in v1.126.0rc1), due to [a suspected issue](https://github.com/element-hq/synapse/issues/18217) that causes increased disk usage. ([\#18222](https://github.com/element-hq/synapse/issues/18222))


# Synapse 1.126.0rc2 (2025-03-05)

### Internal Changes

- Fix wheel building configuration in CI by installing libatomic1. ([\#18212](https://github.com/element-hq/synapse/issues/18212), [\#18213](https://github.com/element-hq/synapse/issues/18213))


# Synapse 1.126.0rc1 (2025-03-04)

Synapse 1.126.0rc1 was not fully released due to an error in CI.

### Features

- Define ratelimit configuration for delayed event management. ([\#18019](https://github.com/element-hq/synapse/issues/18019))
- Add `form_secret_path` config option. ([\#18090](https://github.com/element-hq/synapse/issues/18090))
- Add the `--no-secrets-in-config` command line option. ([\#18092](https://github.com/element-hq/synapse/issues/18092))
- Add background job to clear unreferenced state groups. ([\#18154](https://github.com/element-hq/synapse/issues/18154))
- Add support for specifying/overriding `id_token_signing_alg_values_supported` for an OpenID identity provider. ([\#18177](https://github.com/element-hq/synapse/issues/18177))
- Add `worker_replication_secret_path` config option. ([\#18191](https://github.com/element-hq/synapse/issues/18191))
- Add support for specifying/overriding `redirect_uri` in the authorization and token requests against an OpenID identity provider. ([\#18197](https://github.com/element-hq/synapse/issues/18197))

### Bugfixes

- Make sure we advertise registration as disabled when [MSC3861](https://github.com/matrix-org/matrix-spec-proposals/pull/3861) is enabled. ([\#17661](https://github.com/element-hq/synapse/issues/17661))
- Prevent suspended users from sending encrypted messages. ([\#18157](https://github.com/element-hq/synapse/issues/18157))
- Cleanup deleted state group references. ([\#18165](https://github.com/element-hq/synapse/issues/18165))
- Fix [MSC4108 QR-code login](https://github.com/matrix-org/matrix-spec-proposals/pull/4108) not working with some reverse-proxy setups. ([\#18178](https://github.com/element-hq/synapse/issues/18178))
- Support device IDs that can't be represented in a scope when delegating auth to Matrix Authentication Service 0.15.0+. ([\#18174](https://github.com/element-hq/synapse/issues/18174))

### Updates to the Docker image

- Speed up the building of the Docker image. ([\#18038](https://github.com/element-hq/synapse/issues/18038))

### Improved Documentation

- Move incorrectly placed version indicator in User Event Redaction Admin API docs. ([\#18152](https://github.com/element-hq/synapse/issues/18152))
- Document suspension Admin API. ([\#18162](https://github.com/element-hq/synapse/issues/18162))

### Deprecations and Removals

- Disable room list publication by default. ([\#18175](https://github.com/element-hq/synapse/issues/18175))

### Updates to locked dependencies

* Bump anyhow from 1.0.95 to 1.0.96. ([\#18187](https://github.com/element-hq/synapse/issues/18187))
* Bump authlib from 1.4.0 to 1.4.1. ([\#18190](https://github.com/element-hq/synapse/issues/18190))
* Bump click from 8.1.7 to 8.1.8. ([\#18189](https://github.com/element-hq/synapse/issues/18189))
* Bump log from 0.4.25 to 0.4.26. ([\#18184](https://github.com/element-hq/synapse/issues/18184))
* Bump pyo3-log from 0.12.0 to 0.12.1. ([\#18046](https://github.com/element-hq/synapse/issues/18046))
* Bump serde from 1.0.217 to 1.0.218. ([\#18183](https://github.com/element-hq/synapse/issues/18183))
* Bump serde_json from 1.0.138 to 1.0.139. ([\#18186](https://github.com/element-hq/synapse/issues/18186))
* Bump sigstore/cosign-installer from 3.8.0 to 3.8.1. ([\#18185](https://github.com/element-hq/synapse/issues/18185))
* Bump types-psycopg2 from 2.9.21.20241019 to 2.9.21.20250121. ([\#18188](https://github.com/element-hq/synapse/issues/18188))


# Synapse 1.125.0 (2025-02-25)

No significant changes since 1.125.0rc1.


# Synapse 1.125.0rc1 (2025-02-18)

### Features
24 Cargo.lock (generated)

@@ -277,9 +277,9 @@ dependencies = [

[[package]]
name = "pyo3"
version = "0.23.4"
version = "0.23.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "57fe09249128b3173d092de9523eaa75136bf7ba85e0d69eca241c7939c933cc"
checksum = "7778bffd85cf38175ac1f545509665d0b9b92a198ca7941f131f85f7a4f9a872"
dependencies = [
 "anyhow",
 "cfg-if",

@@ -296,9 +296,9 @@ dependencies = [

[[package]]
name = "pyo3-build-config"
version = "0.23.4"
version = "0.23.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1cd3927b5a78757a0d71aa9dff669f903b1eb64b54142a9bd9f757f8fde65fd7"
checksum = "94f6cbe86ef3bf18998d9df6e0f3fc1050a8c5efa409bf712e661a4366e010fb"
dependencies = [
 "once_cell",
 "target-lexicon",

@@ -306,9 +306,9 @@ dependencies = [

[[package]]
name = "pyo3-ffi"
version = "0.23.4"
version = "0.23.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dab6bb2102bd8f991e7749f130a70d05dd557613e39ed2deeee8e9ca0c4d548d"
checksum = "e9f1b4c431c0bb1c8fb0a338709859eed0d030ff6daa34368d3b152a63dfdd8d"
dependencies = [
 "libc",
 "pyo3-build-config",

@@ -327,9 +327,9 @@ dependencies = [

[[package]]
name = "pyo3-macros"
version = "0.23.4"
version = "0.23.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "91871864b353fd5ffcb3f91f2f703a22a9797c91b9ab497b1acac7b07ae509c7"
checksum = "fbc2201328f63c4710f68abdf653c89d8dbc2858b88c5d88b0ff38a75288a9da"
dependencies = [
 "proc-macro2",
 "pyo3-macros-backend",

@@ -339,9 +339,9 @@ dependencies = [

[[package]]
name = "pyo3-macros-backend"
version = "0.23.4"
version = "0.23.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "43abc3b80bc20f3facd86cd3c60beed58c3e2aa26213f3cda368de39c60a27e4"
checksum = "fca6726ad0f3da9c9de093d6f116a93c1a38e417ed73bf138472cf4064f72028"
dependencies = [
 "heck",
 "proc-macro2",

@@ -457,9 +457,9 @@ dependencies = [

[[package]]
name = "serde_json"
version = "1.0.139"
version = "1.0.140"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "44f86c3acccc9c65b153fe1b85a3be07fe5515274ec9f0653b4a0875731c72a6"
checksum = "20068b6e96dc6c9bd23e01df8827e6c7e1f2fddd43c21810382803c136b99373"
dependencies = [
 "itoa",
 "memchr",
@@ -1 +0,0 @@
Make sure we advertise registration as disabled when MSC3861 is enabled.

@@ -1 +0,0 @@
Define ratelimit configuration for delayed event management.

@@ -1 +0,0 @@
Speed up the building of the Docker image.

@@ -1 +0,0 @@
Bump pyo3-log from 0.12.0 to 0.12.1.

@@ -1 +0,0 @@
Add `form_secret_path` config option.

@@ -1 +0,0 @@
Add the `--no-secrets-in-config` command line option.

@@ -1 +0,0 @@
Move incorrectly placed version indicator in User Event Redaction Admin API docs.

@@ -1 +0,0 @@
Add background job to clear unreferenced state groups.

@@ -1 +0,0 @@
Prevent suspended users from sending encrypted messages.

@@ -1 +0,0 @@
Document suspension Admin API.

@@ -1 +0,0 @@
Cleanup deleted state group references.

@@ -1 +0,0 @@
Support device IDs that can't be represented in a scope when delegating auth to Matrix Authentication Service 0.15.0+.

@@ -1 +0,0 @@
Disable room list publication by default.

@@ -1 +0,0 @@
Add support for specifying/overriding `id_token_signing_alg_values_supported` for an OpenID identity provider.

@@ -1 +0,0 @@
Fix MSC4108 QR-code login not working with some reverse-proxy setups.

@@ -1 +0,0 @@
Add `worker_replication_secret_path` config option.

@@ -1 +0,0 @@
Add support for specifying/overriding `redirect_uri` in the authorization and token requests against an OpenID identity provider.
1 changelog.d/18231.feature (Normal file)

@@ -0,0 +1 @@
Add an access token introspection cache to make Matrix Authentication Service integration (MSC3861) more efficient.
24 debian/changelog (vendored)

@@ -1,3 +1,27 @@
matrix-synapse-py3 (1.126.0) stable; urgency=medium

  * New Synapse release 1.126.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 11 Mar 2025 13:11:29 +0000

matrix-synapse-py3 (1.126.0~rc3) stable; urgency=medium

  * New Synapse release 1.126.0rc3.

 -- Synapse Packaging team <packages@matrix.org>  Fri, 07 Mar 2025 15:45:05 +0000

matrix-synapse-py3 (1.126.0~rc2) stable; urgency=medium

  * New Synapse release 1.126.0rc2.

 -- Synapse Packaging team <packages@matrix.org>  Wed, 05 Mar 2025 14:29:12 +0000

matrix-synapse-py3 (1.126.0~rc1) stable; urgency=medium

  * New Synapse release 1.126.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 04 Mar 2025 13:11:51 +0000

matrix-synapse-py3 (1.125.0) stable; urgency=medium

  * New Synapse release 1.125.0.
@@ -162,7 +162,7 @@ by a unique name, the current status (stored in JSON), and some dependency infor
* Whether the update requires a previous update to be complete.
* A rough ordering for which to complete updates.

A new background update needs to be added to the `background_updates` table:
A new background updates needs to be added to the `background_updates` table:

```sql
INSERT INTO background_updates (ordering, update_name, depends_on, progress_json) VALUES
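The row in `background_updates` is only half of the mechanism the docs describe: a Python handler registered under the same `update_name` is called repeatedly with the stored progress and a batch size until it reports that it is done. Later hunks in this comparison use Synapse's `register_background_update_handler`, `_background_update_progress` and `_end_background_update` for exactly this pattern. Below is a minimal, self-contained sketch of that loop; the names `demo_backfill_bg_update` and `run_pending_updates`, the in-memory "table", and the work items are made up for illustration and are not Synapse APIs.

```python
import asyncio
import json
from typing import Awaitable, Callable, Dict, List

# Toy stand-ins for the `background_updates` table and the handler registry.
background_updates: List[Dict[str, str]] = [
    {"update_name": "demo_backfill_bg_update", "progress_json": "{}"},
]
handlers: Dict[str, Callable[[dict, int], Awaitable[int]]] = {}

WORK_ITEMS = list(range(23))  # pretend rows that still need processing


async def demo_backfill(progress: dict, batch_size: int) -> int:
    """Process one batch, record progress, and remove the row once nothing is left."""
    position = progress.get("position", 0)
    batch = WORK_ITEMS[position : position + batch_size]
    if not batch:
        # Analogous to ending the background update: the row disappears, so the loop stops.
        background_updates.clear()
        return 0
    # Analogous to persisting progress: remember how far we got in progress_json.
    progress["position"] = position + len(batch)
    background_updates[0]["progress_json"] = json.dumps(progress)
    return len(batch)


handlers["demo_backfill_bg_update"] = demo_backfill


async def run_pending_updates(batch_size: int = 10) -> None:
    # Roughly what the background updater loop does: read the row, call its handler.
    while background_updates:
        row = background_updates[0]
        await handlers[row["update_name"]](json.loads(row["progress_json"]), batch_size)


asyncio.run(run_pending_updates())
print("processed", len(WORK_ITEMS), "items in batches of 10")
```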
@@ -137,6 +137,24 @@ room_list_publication_rules:

[`room_list_publication_rules`]: usage/configuration/config_documentation.md#room_list_publication_rules

## Change of signing key expiry date for the Debian/Ubuntu package repository

Administrators using the Debian/Ubuntu packages from `packages.matrix.org`,
please be aware that we have recently updated the expiry date on the repository's GPG signing key,
but this change must be imported into your keyring.

If you have the `matrix-org-archive-keyring` package installed and it updates before the current key expires, this should
happen automatically.

Otherwise, if you see an error similar to `The following signatures were invalid: EXPKEYSIG F473DD4473365DE1`, you
will need to get a fresh copy of the keys. You can do so with:

```sh
sudo wget -O /usr/share/keyrings/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
```

The old version of the key will expire on `2025-03-15`.

# Upgrading to v1.122.0

## Dropping support for PostgreSQL 11 and 12
@@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"

[tool.poetry]
name = "matrix-synapse"
version = "1.125.0"
version = "1.126.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"
@@ -390,7 +390,7 @@ skip = "cp36* cp37* cp38* pp37* pp38* *-musllinux_i686 pp*aarch64 *-musllinux_aa
#
# We temporarily pin Rust to 1.82.0 to work around
# https://github.com/element-hq/synapse/issues/17988
before-all = "curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain 1.82.0 -y --profile minimal"
before-all = "sh .ci/before_build_wheel.sh"
environment= { PATH = "$PATH:$HOME/.cargo/bin" }

# For some reason if we don't manually clean the build directory we
@@ -30,7 +30,7 @@ http = "1.1.0"
lazy_static = "1.4.0"
log = "0.4.17"
mime = "0.3.17"
pyo3 = { version = "0.23.2", features = [
pyo3 = { version = "0.23.5", features = [
    "macros",
    "anyhow",
    "abi3",
@@ -191,11 +186,6 @@ APPEND_ONLY_TABLES = [


IGNORED_TABLES = {
    # Porting the auto generated sequence in this table is non-trivial.
    # None of the entries in this list are mandatory for Synapse to keep working.
    # If state group disk space is an issue after the port, the
    # `delete_unreferenced_state_groups_bg_update` background task can be run again.
    "state_groups_pending_deletion",
    # We don't port these tables, as they're a faff and we can regenerate
    # them anyway.
    "user_directory",

@@ -221,15 +216,6 @@ IGNORED_TABLES = {
}


# These background updates will not be applied upon creation of the postgres database.
IGNORED_BACKGROUND_UPDATES = {
    # Reapplying this background update to the postgres database is unnecessary after
    # already having waited for the SQLite database to complete all running background
    # updates.
    "delete_unreferenced_state_groups_bg_update",
}


# Error returned by the run function. Used at the top-level part of the script to
# handle errors and return codes.
end_error: Optional[str] = None
@@ -701,20 +687,6 @@ class Porter:
        # 0 means off. 1 means full. 2 means incremental.
        return autovacuum_setting != 0

    async def remove_ignored_background_updates_from_database(self) -> None:
        def _remove_delete_unreferenced_state_groups_bg_updates(
            txn: LoggingTransaction,
        ) -> None:
            txn.execute(
                "DELETE FROM background_updates WHERE update_name = ANY(?)",
                (list(IGNORED_BACKGROUND_UPDATES),),
            )

        await self.postgres_store.db_pool.runInteraction(
            "remove_delete_unreferenced_state_groups_bg_updates",
            _remove_delete_unreferenced_state_groups_bg_updates,
        )

    async def run(self) -> None:
        """Ports the SQLite database to a PostgreSQL database.

@@ -760,8 +732,6 @@ class Porter:
            self.hs_config.database.get_single_database()
        )

        await self.remove_ignored_background_updates_from_database()

        await self.run_background_updates_on_postgres()

        self.progress.set_state("Creating port tables")
@@ -19,6 +19,7 @@
#
#
import logging
from dataclasses import dataclass
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional
from urllib.parse import urlencode

@@ -47,6 +48,7 @@ from synapse.logging.context import make_deferred_yieldable
from synapse.types import Requester, UserID, create_requester
from synapse.util import json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
from synapse.util.caches.response_cache import ResponseCache

if TYPE_CHECKING:
    from synapse.rest.admin.experimental_features import ExperimentalFeature
@@ -76,6 +78,61 @@ def scope_to_list(scope: str) -> List[str]:
    return scope.strip().split(" ")


@dataclass
class IntrospectionResult:
    _inner: IntrospectionToken

    # when we retrieved this token,
    # in milliseconds since the Unix epoch
    retrieved_at_ms: int

    def is_active(self, now_ms: int) -> bool:
        if not self._inner.get("active"):
            return False

        expires_in = self._inner.get("expires_in")
        if expires_in is None:
            return True
        if not isinstance(expires_in, int):
            raise InvalidClientTokenError("token `expires_in` is not an int")

        absolute_expiry_ms = expires_in * 1000 + self.retrieved_at_ms
        return now_ms < absolute_expiry_ms

    def get_scope_list(self) -> List[str]:
        value = self._inner.get("scope")
        if not isinstance(value, str):
            return []
        return scope_to_list(value)

    def get_sub(self) -> Optional[str]:
        value = self._inner.get("sub")
        if not isinstance(value, str):
            return None
        return value

    def get_username(self) -> Optional[str]:
        value = self._inner.get("username")
        if not isinstance(value, str):
            return None
        return value

    def get_name(self) -> Optional[str]:
        value = self._inner.get("name")
        if not isinstance(value, str):
            return None
        return value

    def get_device_id(self) -> Optional[str]:
        value = self._inner.get("device_id")
        if value is not None and not isinstance(value, str):
            raise AuthError(
                500,
                "Invalid device ID in introspection result",
            )
        return value


class PrivateKeyJWTWithKid(PrivateKeyJWT):  # type: ignore[misc]
    """An implementation of the private_key_jwt client auth method that includes a kid header.
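The freshness check in `is_active` above is plain arithmetic on the cached response: a token introspected at `retrieved_at_ms` with `expires_in` seconds of validity stops being accepted at `retrieved_at_ms + expires_in * 1000`. A tiny illustrative sketch of the same calculation (the function name and the sample timestamps are made up, not Synapse code):

```python
from typing import Optional


def token_still_active(expires_in: Optional[int], retrieved_at_ms: int, now_ms: int) -> bool:
    # No expires_in in the introspection response means the token never expires.
    if expires_in is None:
        return True
    return now_ms < retrieved_at_ms + expires_in * 1000


# Introspected at t=1_000_000 ms with a 60 second lifetime:
assert token_still_active(60, 1_000_000, 1_059_999)      # 59.999 s later: still active
assert not token_still_active(60, 1_000_000, 1_060_000)  # exactly 60 s later: rejected
```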
@@ -121,6 +178,31 @@ class MSC3861DelegatedAuth(BaseAuth):
        self._hostname = hs.hostname
        self._admin_token: Callable[[], Optional[str]] = self._config.admin_token

        # # Token Introspection Cache
        # This remembers what users/devices are represented by which access tokens,
        # in order to reduce overall system load:
        # - on Synapse (as requests are relatively expensive)
        # - on the network
        # - on MAS
        #
        # Since there is no invalidation mechanism currently,
        # the entries expire after 2 minutes.
        # This does mean tokens can be treated as valid by Synapse
        # for longer than reality.
        #
        # Ideally, tokens should logically be invalidated in the following circumstances:
        # - If a session logout happens.
        #   In this case, MAS will delete the device within Synapse
        #   anyway and this is good enough as an invalidation.
        # - If the client refreshes their token in MAS.
        #   In this case, the device still exists and it's not the end of the world for
        #   the old access token to continue working for a short time.
        self._introspection_cache: ResponseCache[str] = ResponseCache(
            self._clock,
            "token_introspection",
            timeout_ms=120_000,
        )

        self._issuer_metadata = RetryOnExceptionCachedCall[OpenIDProviderMetadata](
            self._load_metadata
        )
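To illustrate what the 2-minute cache buys here, the following is a minimal, self-contained sketch of the idea: repeated lookups of the same access token within the timeout reuse the remembered introspection result instead of contacting the authentication server again. The `TimedCache` class, the fake `introspect` function and the call counter are illustrative stand-ins, not Synapse's `ResponseCache` API (which additionally deduplicates concurrent in-flight requests, something this sketch does not attempt).

```python
import asyncio
import time
from typing import Any, Awaitable, Callable, Dict, Tuple


class TimedCache:
    """Tiny stand-in for a response cache: remember results for `timeout_s` seconds."""

    def __init__(self, timeout_s: float) -> None:
        self._timeout_s = timeout_s
        self._entries: Dict[str, Tuple[float, Any]] = {}

    async def wrap(self, key: str, fn: Callable[..., Awaitable[Any]], *args: Any) -> Any:
        now = time.monotonic()
        hit = self._entries.get(key)
        if hit is not None and now - hit[0] < self._timeout_s:
            return hit[1]  # cached introspection result: no network round-trip
        result = await fn(*args)
        self._entries[key] = (now, result)
        return result


calls = 0


async def introspect(token: str) -> Dict[str, Any]:
    """Pretend to hit the introspection endpoint, counting real requests."""
    global calls
    calls += 1
    return {"active": True, "sub": "user-for-" + token}


async def main() -> None:
    cache = TimedCache(timeout_s=120.0)
    await cache.wrap("tok", introspect, "tok")
    await cache.wrap("tok", introspect, "tok")  # second lookup is served from the cache
    print(calls)  # 1


asyncio.run(main())
```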
@@ -193,7 +275,7 @@ class MSC3861DelegatedAuth(BaseAuth):
        metadata = await self._issuer_metadata.get()
        return metadata.get("introspection_endpoint")

    async def _introspect_token(self, token: str) -> IntrospectionToken:
    async def _introspect_token(self, token: str) -> IntrospectionResult:
        """
        Send a token to the introspection endpoint and returns the introspection response

@@ -266,7 +348,9 @@ class MSC3861DelegatedAuth(BaseAuth):
                "The introspection endpoint returned an invalid JSON response."
            )

        return IntrospectionToken(**resp)
        return IntrospectionResult(
            IntrospectionToken(**resp), retrieved_at_ms=self._clock.time_msec()
        )

    async def is_server_admin(self, requester: Requester) -> bool:
        return "urn:synapse:admin:*" in requester.scope

@@ -344,7 +428,9 @@ class MSC3861DelegatedAuth(BaseAuth):
        )

        try:
            introspection_result = await self._introspect_token(token)
            introspection_result = await self._introspection_cache.wrap(
                token, self._introspect_token, token
            )
        except Exception:
            logger.exception("Failed to introspect token")
            raise SynapseError(503, "Unable to introspect the access token")

@@ -353,11 +439,11 @@ class MSC3861DelegatedAuth(BaseAuth):

        # TODO: introspection verification should be more extensive, especially:
        # - verify the audience
        if not introspection_result.get("active"):
        if not introspection_result.is_active(self._clock.time_msec()):
            raise InvalidClientTokenError("Token is not active")

        # Let's look at the scope
        scope: List[str] = scope_to_list(introspection_result.get("scope", ""))
        scope: List[str] = introspection_result.get_scope_list()

        # Determine type of user based on presence of particular scopes
        has_user_scope = SCOPE_MATRIX_API in scope

@@ -367,7 +453,7 @@ class MSC3861DelegatedAuth(BaseAuth):
            raise InvalidClientTokenError("No scope in token granting user rights")

        # Match via the sub claim
        sub: Optional[str] = introspection_result.get("sub")
        sub: Optional[str] = introspection_result.get_sub()
        if sub is None:
            raise InvalidClientTokenError(
                "Invalid sub claim in the introspection result"

@@ -381,7 +467,7 @@ class MSC3861DelegatedAuth(BaseAuth):
        # or the external_id was never recorded

        # TODO: claim mapping should be configurable
        username: Optional[str] = introspection_result.get("username")
        username: Optional[str] = introspection_result.get_username()
        if username is None or not isinstance(username, str):
            raise AuthError(
                500,

@@ -399,7 +485,7 @@ class MSC3861DelegatedAuth(BaseAuth):

        # TODO: claim mapping should be configurable
        # If present, use the name claim as the displayname
        name: Optional[str] = introspection_result.get("name")
        name: Optional[str] = introspection_result.get_name()

        await self.store.register_user(
            user_id=user_id.to_string(), create_profile_with_displayname=name

@@ -414,15 +500,8 @@ class MSC3861DelegatedAuth(BaseAuth):

        # MAS 0.15+ will give us the device ID as an explicit value for compatibility sessions
        # If present, we get it from here, if not we get it in the scope
        device_id = introspection_result.get("device_id")
        if device_id is not None:
            # We got the device ID explicitly, just sanity check that it's a string
            if not isinstance(device_id, str):
                raise AuthError(
                    500,
                    "Invalid device ID in introspection result",
                )
        else:
        device_id = introspection_result.get_device_id()
        if device_id is None:
            # Find device_ids in scope
            # We only allow a single device_id in the scope, so we find them all in the
            # scope list, and raise if there are more than one. The OIDC server should be
@@ -21,18 +21,11 @@

import itertools
import logging
from typing import (
    TYPE_CHECKING,
    Collection,
    Mapping,
    Set,
)
from typing import TYPE_CHECKING, Collection, Mapping, Set

from synapse.logging.context import nested_logging_context
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.storage.database import LoggingTransaction
from synapse.storage.databases import Databases
from synapse.types.storage import _BackgroundUpdates

if TYPE_CHECKING:
    from synapse.server import HomeServer

@@ -51,11 +44,6 @@ class PurgeEventsStorageController:
            self._delete_state_groups_loop, 60 * 1000
        )

        self.stores.state.db_pool.updates.register_background_update_handler(
            _BackgroundUpdates.DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE,
            self._background_delete_unrefereneced_state_groups,
        )

    async def purge_room(self, room_id: str) -> None:
        """Deletes all record of a room"""

@@ -92,6 +80,68 @@ class PurgeEventsStorageController:
                sg_to_delete
            )

    async def _find_unreferenced_groups(
        self, state_groups: Collection[int]
    ) -> Set[int]:
        """Used when purging history to figure out which state groups can be
        deleted.

        Args:
            state_groups: Set of state groups referenced by events
                that are going to be deleted.

        Returns:
            The set of state groups that can be deleted.
        """
        # Set of events that we have found to be referenced by events
        referenced_groups = set()

        # Set of state groups we've already seen
        state_groups_seen = set(state_groups)

        # Set of state groups to handle next.
        next_to_search = set(state_groups)
        while next_to_search:
            # We bound size of groups we're looking up at once, to stop the
            # SQL query getting too big
            if len(next_to_search) < 100:
                current_search = next_to_search
                next_to_search = set()
            else:
                current_search = set(itertools.islice(next_to_search, 100))
                next_to_search -= current_search

            referenced = await self.stores.main.get_referenced_state_groups(
                current_search
            )
            referenced_groups |= referenced

            # We don't continue iterating up the state group graphs for state
            # groups that are referenced.
            current_search -= referenced

            edges = await self.stores.state.get_previous_state_groups(current_search)

            prevs = set(edges.values())
            # We don't bother re-handling groups we've already seen
            prevs -= state_groups_seen
            next_to_search |= prevs
            state_groups_seen |= prevs

            # We also check to see if anything referencing the state groups are
            # also unreferenced. This helps ensure that we delete unreferenced
            # state groups, if we don't then we will de-delta them when we
            # delete the other state groups leading to increased DB usage.
            next_edges = await self.stores.state.get_next_state_groups(current_search)
            nexts = set(next_edges.keys())
            nexts -= state_groups_seen
            next_to_search |= nexts
            state_groups_seen |= nexts

        to_delete = state_groups_seen - referenced_groups

        return to_delete

    @wrap_as_background_process("_delete_state_groups_loop")
    async def _delete_state_groups_loop(self) -> None:
        """Background task that deletes any state groups that may be pending
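The traversal above walks the state-group graph in batches of at most 100: starting from the groups being purged, it follows previous/next edges outwards, stops at any group that is still referenced by an event, and finally deletes everything it saw minus the referenced set. A self-contained sketch of the same idea on plain dictionaries follows; the edge map, group IDs and function name are made up purely for illustration (and it only follows "previous" edges, whereas the code above also follows "next" edges).

```python
from typing import Dict, Iterable, Set

# Hypothetical state-group graph: child -> parent ("previous") edges.
prev_edges: Dict[int, int] = {2: 1, 3: 2, 4: 3}
# Groups still referenced by events, so they must be kept.
referenced_by_events: Set[int] = {1}


def find_unreferenced(start: Iterable[int]) -> Set[int]:
    """Walk outwards from `start`, collecting groups that no event references."""
    seen = set(start)
    to_visit = set(start)
    referenced: Set[int] = set()
    while to_visit:
        current = to_visit.pop()
        if current in referenced_by_events:
            referenced.add(current)
            continue  # don't walk past a group that has to stay
        parent = prev_edges.get(current)
        if parent is not None and parent not in seen:
            seen.add(parent)
            to_visit.add(parent)
    return seen - referenced


print(find_unreferenced({4}))  # the set {2, 3, 4}: group 1 survives because an event still uses it
```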
@@ -153,173 +203,3 @@ class PurgeEventsStorageController:
                room_id,
                groups_to_sequences,
            )

    async def _background_delete_unrefereneced_state_groups(
        self, progress: dict, batch_size: int
    ) -> int:
        """This background update will slowly delete any unreferenced state groups"""

        last_checked_state_group = progress.get("last_checked_state_group")
        max_state_group = progress.get("max_state_group")

        if last_checked_state_group is None or max_state_group is None:
            # This is the first run.
            last_checked_state_group = 0

            max_state_group = await self.stores.state.db_pool.simple_select_one_onecol(
                table="state_groups",
                keyvalues={},
                retcol="MAX(id)",
                allow_none=True,
                desc="get_max_state_group",
            )
            if max_state_group is None:
                # There are no state groups so the background process is finished.
                await self.stores.state.db_pool.updates._end_background_update(
                    _BackgroundUpdates.DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE
                )
                return batch_size

        (
            last_checked_state_group,
            final_batch,
        ) = await self._delete_unreferenced_state_groups_batch(
            last_checked_state_group, batch_size, max_state_group
        )

        if not final_batch:
            # There are more state groups to check.
            progress = {
                "last_checked_state_group": last_checked_state_group,
                "max_state_group": max_state_group,
            }
            await self.stores.state.db_pool.updates._background_update_progress(
                _BackgroundUpdates.DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE,
                progress,
            )
        else:
            # This background process is finished.
            await self.stores.state.db_pool.updates._end_background_update(
                _BackgroundUpdates.DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE
            )

        return batch_size

    async def _delete_unreferenced_state_groups_batch(
        self,
        last_checked_state_group: int,
        batch_size: int,
        max_state_group: int,
    ) -> tuple[int, bool]:
        """Looks for unreferenced state groups starting from the last state group
        checked, and any state groups which would become unreferenced if a state group
        was deleted, and marks them for deletion.

        Args:
            last_checked_state_group: The last state group that was checked.
            batch_size: How many state groups to process in this iteration.

        Returns:
            (last_checked_state_group, final_batch)
        """

        # Look for state groups that can be cleaned up.
        def get_next_state_groups_txn(txn: LoggingTransaction) -> Set[int]:
            state_group_sql = "SELECT id FROM state_groups WHERE ? < id AND id <= ? ORDER BY id LIMIT ?"
            txn.execute(
                state_group_sql, (last_checked_state_group, max_state_group, batch_size)
            )

            next_set = {row[0] for row in txn}

            return next_set

        next_set = await self.stores.state.db_pool.runInteraction(
            "get_next_state_groups", get_next_state_groups_txn
        )

        final_batch = False
        if len(next_set) < batch_size:
            final_batch = True
        else:
            last_checked_state_group = max(next_set)

        if len(next_set) == 0:
            return last_checked_state_group, final_batch

        # Find all state groups that can be deleted if the original set is deleted.
        # This set includes the original set, as well as any state groups that would
        # become unreferenced upon deleting the original set.
        to_delete = await self._find_unreferenced_groups(next_set)

        if len(to_delete) == 0:
            return last_checked_state_group, final_batch

        await self.stores.state_deletion.mark_state_groups_as_pending_deletion(
            to_delete
        )

        return last_checked_state_group, final_batch

    async def _find_unreferenced_groups(
        self,
        state_groups: Collection[int],
    ) -> Set[int]:
        """Used when purging history to figure out which state groups can be
        deleted.

        Args:
            state_groups: Set of state groups referenced by events
                that are going to be deleted.

        Returns:
            The set of state groups that can be deleted.
        """
        # Set of events that we have found to be referenced by events
        referenced_groups = set()

        # Set of state groups we've already seen
        state_groups_seen = set(state_groups)

        # Set of state groups to handle next.
        next_to_search = set(state_groups)
        while next_to_search:
            # We bound size of groups we're looking up at once, to stop the
            # SQL query getting too big
            if len(next_to_search) < 100:
                current_search = next_to_search
                next_to_search = set()
            else:
                current_search = set(itertools.islice(next_to_search, 100))
                next_to_search -= current_search

            referenced = await self.stores.main.get_referenced_state_groups(
                current_search
            )
            referenced_groups |= referenced

            # We don't continue iterating up the state group graphs for state
            # groups that are referenced.
            current_search -= referenced

            edges = await self.stores.state.get_previous_state_groups(current_search)

            prevs = set(edges.values())
            # We don't bother re-handling groups we've already seen
            prevs -= state_groups_seen
            next_to_search |= prevs
            state_groups_seen |= prevs

            # We also check to see if anything referencing the state groups are
            # also unreferenced. This helps ensure that we delete unreferenced
            # state groups, if we don't then we will de-delta them when we
            # delete the other state groups leading to increased DB usage.
            next_edges = await self.stores.state.get_next_state_groups(current_search)
            nexts = set(next_edges.keys())
            nexts -= state_groups_seen
            next_to_search |= nexts
            state_groups_seen |= nexts

        to_delete = state_groups_seen - referenced_groups

        return to_delete
@@ -20,15 +20,7 @@
#

import logging
from typing import (
    TYPE_CHECKING,
    Dict,
    List,
    Mapping,
    Optional,
    Tuple,
    Union,
)
from typing import TYPE_CHECKING, Dict, List, Mapping, Optional, Tuple, Union

from synapse.logging.opentracing import tag_args, trace
from synapse.storage._base import SQLBaseStore
@@ -321,42 +321,18 @@ class StateDeletionDataStore:
    async def mark_state_groups_as_pending_deletion(
        self, state_groups: Collection[int]
    ) -> None:
        """Mark the given state groups as pending deletion.

        If any of the state groups are already pending deletion, then those records are
        left as is.
        """

        await self.db_pool.runInteraction(
            "mark_state_groups_as_pending_deletion",
            self._mark_state_groups_as_pending_deletion_txn,
            state_groups,
        )

    def _mark_state_groups_as_pending_deletion_txn(
        self,
        txn: LoggingTransaction,
        state_groups: Collection[int],
    ) -> None:
        sql = """
            INSERT INTO state_groups_pending_deletion (state_group, insertion_ts)
            VALUES %s
            ON CONFLICT (state_group)
            DO NOTHING
        """
        """Mark the given state groups as pending deletion"""

        now = self._clock.time_msec()
        rows = [
            (
                state_group,
                now,
            )
            for state_group in state_groups
        ]
        if isinstance(txn.database_engine, PostgresEngine):
            txn.execute_values(sql % ("?",), rows, fetch=False)
        else:
            txn.execute_batch(sql % ("(?, ?)",), rows)

        await self.db_pool.simple_upsert_many(
            table="state_groups_pending_deletion",
            key_names=("state_group",),
            key_values=[(state_group,) for state_group in state_groups],
            value_names=("insertion_ts",),
            value_values=[(now,) for _ in state_groups],
            desc="mark_state_groups_as_pending_deletion",
        )

    async def mark_state_groups_as_used(self, state_groups: Collection[int]) -> None:
        """Mark the given state groups as now being referenced"""
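The raw-SQL variant above relies on `INSERT ... ON CONFLICT DO NOTHING`, so re-marking a state group that is already pending deletion leaves its original `insertion_ts` untouched. A small self-contained sketch of that conflict behaviour using Python's sqlite3 module (the table name is taken from the diff; the group IDs and timestamps are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE state_groups_pending_deletion "
    "(state_group INTEGER PRIMARY KEY, insertion_ts INTEGER)"
)


def mark_pending(state_groups, now_ms):
    # ON CONFLICT DO NOTHING: an already-pending group keeps its original timestamp.
    conn.executemany(
        "INSERT INTO state_groups_pending_deletion (state_group, insertion_ts) "
        "VALUES (?, ?) ON CONFLICT (state_group) DO NOTHING",
        [(sg, now_ms) for sg in state_groups],
    )


mark_pending([1, 2], now_ms=1000)
mark_pending([2, 3], now_ms=2000)  # group 2 is already pending; its timestamp stays 1000
print(conn.execute("SELECT * FROM state_groups_pending_deletion ORDER BY state_group").fetchall())
# [(1, 1000), (2, 1000), (3, 2000)]
```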
@@ -158,7 +158,6 @@ Changes in SCHEMA_VERSION = 88

Changes in SCHEMA_VERSION = 89
    - Add `state_groups_pending_deletion` and `state_groups_persisting` tables.
    - Add background update to delete unreferenced state groups.
"""
@@ -1,16 +0,0 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2025 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.

-- Add a background update to delete any unreferenced state groups
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
  (8902, 'delete_unreferenced_state_groups_bg_update', '{}');
@@ -48,7 +48,3 @@ class _BackgroundUpdates:
    SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_FIX_FORGOTTEN_COLUMN_BG_UPDATE = (
        "sliding_sync_membership_snapshots_fix_forgotten_column_bg_update"
    )

    DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE = (
        "delete_unreferenced_state_groups_bg_update"
    )
@@ -539,6 +539,44 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
        error = self.get_failure(self.auth.get_user_by_req(request), SynapseError)
        self.assertEqual(error.value.code, 503)

    def test_cached_expired_introspection(self) -> None:
        """The handler should raise an error if the introspection response gives
        an expiry time, the introspection response is cached and then the entry is
        re-requested after it has expired."""

        self.http_client.request = introspection_mock = AsyncMock(
            return_value=FakeResponse.json(
                code=200,
                payload={
                    "active": True,
                    "sub": SUBJECT,
                    "scope": " ".join(
                        [
                            MATRIX_USER_SCOPE,
                            f"{MATRIX_DEVICE_SCOPE_PREFIX}AABBCC",
                        ]
                    ),
                    "username": USERNAME,
                    "expires_in": 60,
                },
            )
        )
        request = Mock(args={})
        request.args[b"access_token"] = [b"mockAccessToken"]
        request.requestHeaders.getRawHeaders = mock_getRawHeaders()

        # The first CS-API request causes a successful introspection
        self.get_success(self.auth.get_user_by_req(request))
        self.assertEqual(introspection_mock.call_count, 1)

        # Sleep for 60 seconds so the token expires.
        self.reactor.advance(60.0)

        # Now the CS-API request fails because the token expired
        self.get_failure(self.auth.get_user_by_req(request), InvalidClientTokenError)
        # Ensure another introspection request was not sent
        self.assertEqual(introspection_mock.call_count, 1)

    def make_device_keys(self, user_id: str, device_id: str) -> JsonDict:
        # We only generate a master key to simplify the test.
        master_signing_key = generate_signing_key(device_id)
@@ -24,7 +24,6 @@ from synapse.api.errors import NotFoundError, SynapseError
from synapse.rest.client import room
from synapse.server import HomeServer
from synapse.types.state import StateFilter
from synapse.types.storage import _BackgroundUpdates
from synapse.util import Clock

from tests.unittest import HomeserverTestCase

@@ -304,99 +303,3 @@ class PurgeTests(HomeserverTestCase):
            )
        )
        self.assertEqual(len(state_groups), 1)

    def test_clear_unreferenced_state_groups(self) -> None:
        """Test that any unreferenced state groups are automatically cleaned up."""

        self.helper.send(self.room_id, body="test1")
        state1 = self.helper.send_state(
            self.room_id, "org.matrix.test", body={"number": 2}
        )
        # Create enough state events to require multiple batches of
        # delete_unreferenced_state_groups_bg_update to be run.
        for i in range(200):
            self.helper.send_state(self.room_id, "org.matrix.test", body={"number": i})
        state2 = self.helper.send_state(
            self.room_id, "org.matrix.test", body={"number": 3}
        )
        self.helper.send(self.room_id, body="test4")
        last = self.helper.send(self.room_id, body="test5")

        # Create an unreferenced state group that has a prev group of one of the
        # to-be-purged events.
        prev_group = self.get_success(
            self.store._get_state_group_for_event(state1["event_id"])
        )
        unreferenced_state_group = self.get_success(
            self.state_store.store_state_group(
                event_id=last["event_id"],
                room_id=self.room_id,
                prev_group=prev_group,
                delta_ids={("org.matrix.test", ""): state2["event_id"]},
                current_state_ids=None,
            )
        )

        another_unreferenced_state_group = self.get_success(
            self.state_store.store_state_group(
                event_id=last["event_id"],
                room_id=self.room_id,
                prev_group=unreferenced_state_group,
                delta_ids={("org.matrix.test", ""): state2["event_id"]},
                current_state_ids=None,
            )
        )

        # Insert and run the background update.
        self.get_success(
            self.store.db_pool.simple_insert(
                "background_updates",
                {
                    "update_name": _BackgroundUpdates.DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE,
                    "progress_json": "{}",
                },
            )
        )
        self.store.db_pool.updates._all_done = False
        self.wait_for_background_updates()

        # Advance so that the background job to delete the state groups runs
        self.reactor.advance(
            1 + self.state_deletion_store.DELAY_BEFORE_DELETION_MS / 1000
        )

        # We expect that the unreferenced state group has been deleted.
        row = self.get_success(
            self.state_store.db_pool.simple_select_one_onecol(
                table="state_groups",
                keyvalues={"id": unreferenced_state_group},
                retcol="id",
                allow_none=True,
                desc="test_purge_unreferenced_state_group",
            )
        )
        self.assertIsNone(row)

        # We expect that the other unreferenced state group has also been deleted.
        row = self.get_success(
            self.state_store.db_pool.simple_select_one_onecol(
                table="state_groups",
                keyvalues={"id": another_unreferenced_state_group},
                retcol="id",
                allow_none=True,
                desc="test_purge_unreferenced_state_group",
            )
        )
        self.assertIsNone(row)

        # We expect there to now only be one state group for the room, which is
        # the state group of the last event (as the only outlier).
        state_groups = self.get_success(
            self.state_store.db_pool.simple_select_onecol(
                table="state_groups",
                keyvalues={"room_id": self.room_id},
                retcol="id",
                desc="test_purge_unreferenced_state_group",
            )
        )
        self.assertEqual(len(state_groups), 207)