Compare commits

9 Commits

- `2b2e37e580` Add tests (Eric Eastwood, 2024-09-17 15:52:26 -05:00)
- `7d5484ea0d` Update comments (Eric Eastwood, 2024-09-17 15:29:15 -05:00)
- `65dc3aa5b8` Take to_token into account (Eric Eastwood, 2024-09-17 15:25:57 -05:00)
- `48ab85f276` Get rid of multiple maps (Eric Eastwood, 2024-09-17 15:20:19 -05:00)
- `68c5cd8c0b` Restore connection tracking (Eric Eastwood, 2024-09-17 15:12:25 -05:00)
- `aae9e912de` Merge branch 'develop' into erikj/account_data (Eric Eastwood, 2024-09-17 15:02:15 -05:00); conflicts: `synapse/handlers/sliding_sync/extensions.py`
- `1c1eaf7b5f` Newsfile (Erik Johnston, 2024-09-11 13:06:46 +01:00)
- `20be70dae4` Correctly handle room account data with gappy lists (Erik Johnston, 2024-09-11 12:55:39 +01:00)
- `119b7527fb` Support room account data in per connection state (Erik Johnston, 2024-09-11 12:00:03 +01:00)
66 changed files with 536 additions and 4722 deletions


@@ -1,82 +1,3 @@
# Synapse 1.116.0rc2 (2024-09-26)
### Features
- Add implementation of restricting who can overwrite a state event as proposed by [MSC3757](https://github.com/matrix-org/matrix-spec-proposals/pull/3757). ([\#17513](https://github.com/element-hq/synapse/issues/17513))
# Synapse 1.116.0rc1 (2024-09-25)
### Features
- Add initial implementation of delayed events as proposed by [MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140). ([\#17326](https://github.com/element-hq/synapse/issues/17326))
- Add an asynchronous Admin API endpoint [to redact all a user's events](https://element-hq.github.io/synapse/v1.116/admin_api/user_admin_api.html#redact-all-the-events-of-a-user),
and [an endpoint to check on the status of that redaction task](https://element-hq.github.io/synapse/v1.116/admin_api/user_admin_api.html#check-the-status-of-a-redaction-process). ([\#17506](https://github.com/element-hq/synapse/issues/17506))
- Add support for the `tags` and `not_tags` filters for [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync. ([\#17662](https://github.com/element-hq/synapse/issues/17662))
- Guests can use the new media endpoints to download media, as described by [MSC4189](https://github.com/matrix-org/matrix-spec-proposals/pull/4189). ([\#17675](https://github.com/element-hq/synapse/issues/17675))
- Add config option `turn_shared_secret_path`. ([\#17690](https://github.com/element-hq/synapse/issues/17690))
- Return room tags in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync account data extension. ([\#17707](https://github.com/element-hq/synapse/issues/17707))
### Bugfixes
- Make sure we get up-to-date state information when using the new [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync tables to derive room membership. ([\#17692](https://github.com/element-hq/synapse/issues/17692))
- Fix bug where room account data would not correctly be sent down [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync for old rooms. ([\#17695](https://github.com/element-hq/synapse/issues/17695))
- Fix a bug in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync which could prevent /sync from working for certain user accounts. ([\#17727](https://github.com/element-hq/synapse/issues/17727), [\#17733](https://github.com/element-hq/synapse/issues/17733))
- Ignore invites from ignored users in Sliding Sync. ([\#17729](https://github.com/element-hq/synapse/issues/17729))
- Fix bug in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync where the server would incorrectly return a negative bump stamp, which caused Element X apps to stop syncing. ([\#17748](https://github.com/element-hq/synapse/issues/17748))
### Internal Changes
- Import pydantic objects from the `_pydantic_compat` module.
This allows `check_pydantic_models.py` to mock those pydantic objects
only in the synapse module, and not interfere with pydantic objects in
external dependencies. ([\#17667](https://github.com/element-hq/synapse/issues/17667))
- Use [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync tables as a bulk shortcut for getting the max `event_stream_ordering` of rooms. ([\#17693](https://github.com/element-hq/synapse/issues/17693))
- Speed up [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) sliding sync requests a bit where there are many room changes. ([\#17696](https://github.com/element-hq/synapse/issues/17696))
- Refactor [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) sliding sync filter unit tests so the sliding sync API has better test coverage. ([\#17703](https://github.com/element-hq/synapse/issues/17703))
- Fetch `bump_stamp`s more efficiently in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync. ([\#17723](https://github.com/element-hq/synapse/issues/17723))
- Shortcut for checking if certain background updates have completed (utilized in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync). ([\#17724](https://github.com/element-hq/synapse/issues/17724))
- More efficiently fetch rooms for [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync. ([\#17725](https://github.com/element-hq/synapse/issues/17725))
- Fix `_bulk_get_max_event_pos` being inefficient. ([\#17728](https://github.com/element-hq/synapse/issues/17728))
- Add cache to `get_tags_for_room(...)`. ([\#17730](https://github.com/element-hq/synapse/issues/17730))
- Small performance improvement in speeding up [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync. ([\#17731](https://github.com/element-hq/synapse/issues/17731))
- Minor speed up of initial [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) sliding sync requests. ([\#17734](https://github.com/element-hq/synapse/issues/17734))
- Remove usage of the deprecated `cgi` module, deprecated in Python 3.11 and removed in Python 3.13. ([\#17741](https://github.com/element-hq/synapse/issues/17741))
- Fix typing of a variable that is not `Unknown` anymore after updating `treq`. ([\#17744](https://github.com/element-hq/synapse/issues/17744))
### Updates to locked dependencies
* Bump anyhow from 1.0.86 to 1.0.89. ([\#17685](https://github.com/element-hq/synapse/issues/17685), [\#17716](https://github.com/element-hq/synapse/issues/17716))
* Bump bytes from 1.7.1 to 1.7.2. ([\#17743](https://github.com/element-hq/synapse/issues/17743))
* Bump cryptography from 43.0.0 to 43.0.1. ([\#17689](https://github.com/element-hq/synapse/issues/17689))
* Bump idna from 3.8 to 3.10. ([\#17758](https://github.com/element-hq/synapse/issues/17758))
* Bump msgpack from 1.0.8 to 1.1.0. ([\#17759](https://github.com/element-hq/synapse/issues/17759))
* Bump phonenumbers from 8.13.44 to 8.13.45. ([\#17762](https://github.com/element-hq/synapse/issues/17762))
* Bump prometheus-client from 0.20.0 to 0.21.0. ([\#17746](https://github.com/element-hq/synapse/issues/17746))
* Bump pyasn1 from 0.6.0 to 0.6.1. ([\#17714](https://github.com/element-hq/synapse/issues/17714))
* Bump pyasn1-modules from 0.4.0 to 0.4.1. ([\#17747](https://github.com/element-hq/synapse/issues/17747))
* Bump pydantic from 2.8.2 to 2.9.2. ([\#17756](https://github.com/element-hq/synapse/issues/17756))
* Bump python-multipart from 0.0.9 to 0.0.10. ([\#17745](https://github.com/element-hq/synapse/issues/17745))
* Bump ruff from 0.6.4 to 0.6.7. ([\#17715](https://github.com/element-hq/synapse/issues/17715), [\#17760](https://github.com/element-hq/synapse/issues/17760))
* Bump sentry-sdk from 2.13.0 to 2.14.0. ([\#17712](https://github.com/element-hq/synapse/issues/17712))
* Bump serde from 1.0.209 to 1.0.210. ([\#17686](https://github.com/element-hq/synapse/issues/17686))
* Bump serde_json from 1.0.127 to 1.0.128. ([\#17687](https://github.com/element-hq/synapse/issues/17687))
* Bump treq from 23.11.0 to 24.9.1. ([\#17744](https://github.com/element-hq/synapse/issues/17744))
* Bump types-pyyaml from 6.0.12.20240808 to 6.0.12.20240917. ([\#17755](https://github.com/element-hq/synapse/issues/17755))
* Bump types-requests from 2.32.0.20240712 to 2.32.0.20240914. ([\#17713](https://github.com/element-hq/synapse/issues/17713))
* Bump types-setuptools from 74.1.0.20240907 to 75.1.0.20240917. ([\#17757](https://github.com/element-hq/synapse/issues/17757))
# Synapse 1.115.0 (2024-09-17)
No significant changes since 1.115.0rc2.
# Synapse 1.115.0rc2 (2024-09-12)
### Internal Changes

Cargo.lock (generated)

@@ -67,9 +67,9 @@ checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c"
[[package]]
name = "bytes"
version = "1.7.2"
version = "1.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "428d9aa8fbc0670b7b8d6030a7fadd0f86151cae55e4dbbece15f3780a3dfaf3"
checksum = "8318a53db07bb3f8dca91a600466bdb3f2eaadeedfdbcf02e1accbad9271ba50"
[[package]]
name = "cfg-if"


@@ -0,0 +1 @@
Add support for the `tags` and `not_tags` filters for simplified sliding sync.

changelog.d/17667.misc

@@ -0,0 +1,5 @@
Import pydantic objects from the `_pydantic_compat` module.
This allows `check_pydantic_models.py` to mock those pydantic objects
only in the synapse module, and not interfere with pydantic objects in
external dependencies.


@@ -0,0 +1 @@
Guests can use the new media endpoints to download media, as described by [MSC4189](https://github.com/matrix-org/matrix-spec-proposals/pull/4189).


@@ -0,0 +1 @@
Add config option `turn_shared_secret_path`.

changelog.d/17692.bugfix

@@ -0,0 +1 @@
Make sure we get up-to-date state information when using the new Sliding Sync tables to derive room membership.

changelog.d/17693.misc

@@ -0,0 +1 @@
Use Sliding Sync tables as a bulk shortcut for getting the max `event_stream_ordering` of rooms.

changelog.d/17695.bugfix

@@ -0,0 +1 @@
Fix bug where room account data would not correctly be sent down sliding sync for old rooms.

changelog.d/17696.misc

@@ -0,0 +1 @@
Speed up sliding sync requests a bit where there are many room changes.

changelog.d/17703.misc

@@ -0,0 +1 @@
Refactor sliding sync filter unit tests so the sliding sync API has better test coverage.


@@ -0,0 +1 @@
Return room tags in Sliding Sync account data extension.


@@ -1 +0,0 @@
Remove spurious "TODO UPDATE ALL THIS" note in the Debian installation docs.


@@ -20,8 +20,8 @@
#
import argparse
import cgi
import datetime
import html
import json
import urllib.request
from typing import List
@@ -85,7 +85,7 @@ def make_graph(pdus: List[dict], filename_prefix: str) -> None:
"name": name,
"type": pdu.get("pdu_type"),
"state_key": pdu.get("state_key"),
"content": html.escape(json.dumps(pdu.get("content")), quote=True),
"content": cgi.escape(json.dumps(pdu.get("content")), quote=True),
"time": t,
"depth": pdu.get("depth"),
}
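
For context on the `cgi`/`html` hunk above: `html.escape(s, quote=True)` is the standard-library replacement for the old `cgi.escape(s, quote=True)` call, since the `cgi` module was deprecated in Python 3.11 and removed in 3.13. A minimal standalone sketch (the sample content value is made up, not taken from the diff):

```python
# Sketch of the cgi -> html migration shown above: escape JSON before
# embedding it in generated markup, as the graph script does.
import html
import json

pdu_content = {"body": "<b>hello</b> & goodbye"}  # made-up sample value
escaped = html.escape(json.dumps(pdu_content), quote=True)
print(escaped)
# {&quot;body&quot;: &quot;&lt;b&gt;hello&lt;/b&gt; &amp; goodbye&quot;}
```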

debian/changelog (vendored)

@@ -1,21 +1,3 @@
matrix-synapse-py3 (1.116.0~rc2) stable; urgency=medium
* New synapse release 1.116.0rc2.
-- Synapse Packaging team <packages@matrix.org> Thu, 26 Sep 2024 13:28:43 +0000
matrix-synapse-py3 (1.116.0~rc1) stable; urgency=medium
* New synapse release 1.116.0rc1.
-- Synapse Packaging team <packages@matrix.org> Wed, 25 Sep 2024 09:34:07 +0000
matrix-synapse-py3 (1.115.0) stable; urgency=medium
* New Synapse release 1.115.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 17 Sep 2024 14:32:10 +0100
matrix-synapse-py3 (1.115.0~rc2) stable; urgency=medium
* New Synapse release 1.115.0rc2.


@@ -111,9 +111,6 @@ server_notices:
system_mxid_avatar_url: ""
room_name: "Server Alert"
# Enable delayed events (msc4140)
max_event_delay_duration: 24h
# Disable sync cache so that initial `/sync` requests are up-to-date.
caches:


@@ -24,15 +24,6 @@
# nginx and supervisord configs depending on the workers requested.
#
# The environment variables it reads are:
# * SYNAPSE_CONFIG_PATH: The path where the generated `homeserver.yaml` will
# be stored.
# * SYNAPSE_CONFIG_DIR: The directory where generated config will be stored.
# If `SYNAPSE_CONFIG_PATH` is not set, it will default to
# SYNAPSE_CONFIG_DIR/homeserver.yaml.
# * SYNAPSE_DATA_DIR: Where the generated config will put persistent data
# such as the database and media store.
# * SYNAPSE_CONFIG_TEMPLATE_DIR: The directory containing jinja2 templates for
# configuration that this script will generate config from. Defaults to '/conf'.
# * SYNAPSE_SERVER_NAME: The desired server_name of the homeserver.
# * SYNAPSE_REPORT_STATS: Whether to report stats.
# * SYNAPSE_WORKER_TYPES: A comma separated list of worker names as specified in WORKERS_CONFIG
@@ -44,8 +35,6 @@
# SYNAPSE_WORKER_TYPES='event_persister, federation_sender, client_reader'
# SYNAPSE_WORKER_TYPES='event_persister:2, federation_sender:2, client_reader'
# SYNAPSE_WORKER_TYPES='stream_writers=account_data+presence+typing'
# * SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK: Whether worker logs should be written to disk,
# in addition to stdout.
# * SYNAPSE_AS_REGISTRATION_DIR: If specified, a directory in which .yaml and .yml files
# will be treated as Application Service registration files.
# * SYNAPSE_TLS_CERT: Path to a TLS certificate in PEM format.
@@ -59,9 +48,7 @@
# * SYNAPSE_LOG_SENSITIVE: If unset, SQL and SQL values won't be logged,
# regardless of the SYNAPSE_LOG_LEVEL setting.
# * SYNAPSE_LOG_TESTING: if set, Synapse will log additional information useful
# for testing.
# * SYNAPSE_USE_UNIX_SOCKET: if set, workers will communicate via unix socket
# rather than TCP.
# for testing.
#
# NOTE: According to Complement's ENTRYPOINT expectations for a homeserver image (as defined
# in the project's README), this script may be run multiple times, and functionality should
@@ -617,9 +604,7 @@ def generate_base_homeserver_config() -> None:
# start.py already does this for us, so just call that.
# note that this script is copied in in the official, monolith dockerfile
os.environ["SYNAPSE_HTTP_PORT"] = str(MAIN_PROCESS_HTTP_LISTENER_PORT)
# This script makes use of the `SYNAPSE_CONFIG_DIR` environment variable to
# determine where to place the generated homeserver config.
subprocess.run(["/usr/local/bin/python", "/start.py", "migrate_config"], check=True)
def parse_worker_types(
@@ -748,10 +733,8 @@ def parse_worker_types(
def generate_worker_files(
environ: Mapping[str, str],
config_dir: str,
config_path: str,
data_dir: str,
template_dir: str,
requested_worker_types: Dict[str, Set[str]],
) -> None:
"""Read the desired workers(if any) that is passed in and generate shared
@@ -759,13 +742,9 @@ def generate_worker_files(
Args:
environ: os.environ instance.
config_dir: The location of the configuration directory, where generated
worker config files are written to.
config_path: The location of the base Synapse homeserver config file.
data_dir: The location of the synapse data directory. Where logs will be
stored (if `SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK` is set).
template_dir: The location of the template directory. Where jinja2
templates for config files live.
config_path: The location of the generated Synapse main worker config file.
data_dir: The location of the synapse data directory. Where log and
user-facing config files live.
requested_worker_types: A Dict containing requested workers in the format of
{'worker_name1': {'worker_type', ...}}
"""
@@ -828,8 +807,7 @@ def generate_worker_files(
nginx_locations: Dict[str, str] = {}
# Create the worker configuration directory if it doesn't already exist
workers_config_dir = os.path.join(config_dir, "workers")
os.makedirs(workers_config_dir, exist_ok=True)
os.makedirs("/conf/workers", exist_ok=True)
# Start worker ports from this arbitrary port
worker_port = 18009
@@ -876,7 +854,7 @@ def generate_worker_files(
worker_config = insert_worker_name_for_worker_config(worker_config, worker_name)
worker_config.update(
{"name": worker_name, "port": str(worker_port)}
{"name": worker_name, "port": str(worker_port), "config_path": config_path}
)
# Update the shared config with any worker_type specific options. The first of a
@@ -899,14 +877,12 @@ def generate_worker_files(
worker_descriptors.append(worker_config)
# Write out the worker's logging config file
log_config_filepath = generate_worker_log_config(
environ, worker_name, template_dir, workers_config_dir, data_dir
)
log_config_filepath = generate_worker_log_config(environ, worker_name, data_dir)
# Then a worker config file
convert(
os.path.join(template_dir, "worker.yaml.j2"),
os.path.join(workers_config_dir, f"{worker_name}.yaml"),
"/conf/worker.yaml.j2",
f"/conf/workers/{worker_name}.yaml",
**worker_config,
worker_log_config_filepath=log_config_filepath,
using_unix_sockets=using_unix_sockets,
@@ -947,9 +923,7 @@ def generate_worker_files(
# Finally, we'll write out the config files.
# log config for the master process
master_log_config = generate_worker_log_config(
environ, "master", template_dir, workers_config_dir, data_dir
)
master_log_config = generate_worker_log_config(environ, "master", data_dir)
shared_config["log_config"] = master_log_config
# Find application service registrations
@@ -980,8 +954,8 @@ def generate_worker_files(
# Shared homeserver config
convert(
os.path.join(template_dir, "shared.yaml.j2"),
os.path.join(workers_config_dir, "shared.yaml"),
"/conf/shared.yaml.j2",
"/conf/workers/shared.yaml",
shared_worker_config=yaml.dump(shared_config),
appservice_registrations=appservice_registrations,
enable_redis=workers_in_use,
@@ -991,7 +965,7 @@ def generate_worker_files(
# Nginx config
convert(
os.path.join(template_dir, "nginx.conf.j2"),
"/conf/nginx.conf.j2",
"/etc/nginx/conf.d/matrix-synapse.conf",
worker_locations=nginx_location_config,
upstream_directives=nginx_upstream_config,
@@ -1003,7 +977,7 @@ def generate_worker_files(
# Supervisord config
os.makedirs("/etc/supervisor", exist_ok=True)
convert(
os.path.join(template_dir, "supervisord.conf.j2"),
"/conf/supervisord.conf.j2",
"/etc/supervisor/supervisord.conf",
main_config_path=config_path,
enable_redis=workers_in_use,
@@ -1011,7 +985,7 @@ def generate_worker_files(
)
convert(
os.path.join(template_dir, "synapse.supervisord.conf.j2"),
"/conf/synapse.supervisord.conf.j2",
"/etc/supervisor/conf.d/synapse.conf",
workers=worker_descriptors,
main_config_path=config_path,
@@ -1020,7 +994,7 @@ def generate_worker_files(
# healthcheck config
convert(
os.path.join(template_dir, "healthcheck.sh.j2"),
"/conf/healthcheck.sh.j2",
"/healthcheck.sh",
healthcheck_urls=healthcheck_urls,
)
@@ -1032,24 +1006,10 @@ def generate_worker_files(
def generate_worker_log_config(
environ: Mapping[str, str],
worker_name: str,
workers_config_dir: str,
template_dir: str,
data_dir: str,
environ: Mapping[str, str], worker_name: str, data_dir: str
) -> str:
"""Generate a log.config file for the given worker.
Args:
environ: A mapping representing the environment variables that this script
is running with.
worker_name: The name of the worker. Used in generated file paths.
workers_config_dir: The location of the worker configuration directory,
where the generated worker log config will be saved.
template_dir: The directory containing jinja2 template files.
data_dir: The directory where log files will be written (if
`SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK` is set).
Returns: the path to the generated file
"""
# Check whether we should write worker logs to disk, in addition to the console
@@ -1064,9 +1024,9 @@ def generate_worker_log_config(
extra_log_template_args["SYNAPSE_LOG_TESTING"] = environ.get("SYNAPSE_LOG_TESTING")
# Render and write the file
log_config_filepath = os.path.join(workers_config_dir, f"{worker_name}.log.config")
log_config_filepath = f"/conf/workers/{worker_name}.log.config"
convert(
os.path.join(template_dir, "log.config"),
"/conf/log.config",
log_config_filepath,
worker_name=worker_name,
**extra_log_template_args,
@@ -1089,7 +1049,6 @@ def main(args: List[str], environ: MutableMapping[str, str]) -> None:
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml")
data_dir = environ.get("SYNAPSE_DATA_DIR", "/data")
template_dir = environ.get("SYNAPSE_CONFIG_TEMPLATE_DIR", "/conf")
# override SYNAPSE_NO_TLS, we don't support TLS in worker mode,
# this needs to be handled by a frontend proxy
@@ -1101,10 +1060,9 @@ def main(args: List[str], environ: MutableMapping[str, str]) -> None:
generate_base_homeserver_config()
else:
log("Base homeserver config exists—not regenerating")
# This script may be run multiple times (mostly by Complement, see note at top of
# file). Don't re-configure workers in this instance.
mark_filepath = os.path.join(config_dir, "workers_have_been_configured")
mark_filepath = "/conf/workers_have_been_configured"
if not os.path.exists(mark_filepath):
# Collect and validate worker_type requests
# Read the desired worker configuration from the environment
@@ -1121,9 +1079,7 @@ def main(args: List[str], environ: MutableMapping[str, str]) -> None:
# Always regenerate all other config files
log("Generating worker config files")
generate_worker_files(
environ, config_dir, config_path, data_dir, template_dir, requested_worker_types
)
generate_worker_files(environ, config_path, data_dir, requested_worker_types)
# Mark workers as being configured
with open(mark_filepath, "w") as f:


@@ -42,8 +42,6 @@ def convert(src: str, dst: str, environ: Mapping[str, object]) -> None:
def generate_config_from_template(
data_dir: str,
template_dir: str,
config_dir: str,
config_path: str,
os_environ: Mapping[str, str],
@@ -52,9 +50,6 @@ def generate_config_from_template(
"""Generate a homeserver.yaml from environment variables
Args:
data_dir: where persistent data is stored
template_dir: The location of the template directory. Where jinja2
templates for config files live.
config_dir: where to put generated config files
config_path: where to put the main config file
os_environ: environment mapping
@@ -75,10 +70,9 @@ def generate_config_from_template(
"macaroon": "SYNAPSE_MACAROON_SECRET_KEY",
}
synapse_server_name = environ["SYNAPSE_SERVER_NAME"]
for name, secret in secrets.items():
if secret not in environ:
filename = os.path.join(data_dir, f"{synapse_server_name}.{name}.key")
filename = "/data/%s.%s.key" % (environ["SYNAPSE_SERVER_NAME"], name)
# if the file already exists, load in the existing value; otherwise,
# generate a new secret and write it to a file
@@ -94,7 +88,7 @@ def generate_config_from_template(
handle.write(value)
environ[secret] = value
environ["SYNAPSE_APPSERVICES"] = glob.glob(os.path.join(data_dir, "appservices", "*.yaml"))
environ["SYNAPSE_APPSERVICES"] = glob.glob("/data/appservices/*.yaml")
if not os.path.exists(config_dir):
os.mkdir(config_dir)
@@ -117,12 +111,12 @@ def generate_config_from_template(
environ["SYNAPSE_LOG_CONFIG"] = config_dir + "/log.config"
log("Generating synapse config file " + config_path)
convert(os.path.join(template_dir, "homeserver.yaml"), config_path, environ)
convert("/conf/homeserver.yaml", config_path, environ)
log_config_file = environ["SYNAPSE_LOG_CONFIG"]
log("Generating log config file " + log_config_file)
convert(
os.path.join(template_dir, "log.config"),
"/conf/log.config",
log_config_file,
{**environ, "include_worker_name_in_log_line": False},
)
@@ -134,15 +128,15 @@ def generate_config_from_template(
"synapse.app.homeserver",
"--config-path",
config_path,
# tell synapse to put generated keys in the data directory rather than /compiled
# tell synapse to put generated keys in /data rather than /compiled
"--keys-directory",
config_dir,
"--generate-keys",
]
if ownership is not None:
log(f"Setting ownership on the data dir to {ownership}")
subprocess.run(["chown", "-R", ownership, data_dir], check=True)
log(f"Setting ownership on /data to {ownership}")
subprocess.run(["chown", "-R", ownership, "/data"], check=True)
args = ["gosu", ownership] + args
subprocess.run(args, check=True)
@@ -165,13 +159,12 @@ def run_generate_config(environ: Mapping[str, str], ownership: Optional[str]) ->
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml")
data_dir = environ.get("SYNAPSE_DATA_DIR", "/data")
template_dir = environ.get("SYNAPSE_CONFIG_TEMPLATE_DIR", "/conf")
# create a suitable log config from our template
log_config_file = "%s/%s.log.config" % (config_dir, server_name)
if not os.path.exists(log_config_file):
log("Creating log config %s" % (log_config_file,))
convert(os.path.join(template_dir, "log.config"), log_config_file, environ)
convert("/conf/log.config", log_config_file, environ)
# generate the main config file, and a signing key.
args = [
@@ -223,14 +216,12 @@ def main(args: List[str], environ: MutableMapping[str, str]) -> None:
if mode == "migrate_config":
# generate a config based on environment vars.
data_dir = environ.get("SYNAPSE_DATA_DIR", "/data")
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
config_path = environ.get(
"SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml"
)
template_dir = environ.get("SYNAPSE_CONFIG_TEMPLATE_DIR", "/conf")
return generate_config_from_template(
data_dir, template_dir, config_dir, config_path, environ, ownership
config_dir, config_path, environ, ownership
)
if mode != "run":


@@ -1361,83 +1361,3 @@ Returns a `404` HTTP status code if no user was found, with a response body like
```
_Added in Synapse 1.72.0._
## Redact all the events of a user
The API is
```
POST /_synapse/admin/v1/user/$user_id/redact
{
"rooms": ["!roomid1", "!roomid2"]
}
```
If an empty list is provided as the value for `rooms`, all events in all the rooms the user is a member of will be redacted; otherwise, all the events in the rooms provided in the request will be redacted.
The API starts the redaction process running and returns immediately with a JSON body containing a redact id, which can be used to query the status of the redaction process:
```json
{
"redact_id": "<opaque id>"
}
```
**Parameters**
The following parameters should be set in the URL:
- `user_id` - The fully qualified MXID of the user: for example, `@user:server.com`.
The following JSON body parameter must be provided:
- `rooms` - A list of rooms to redact the user's events in. If an empty list is provided, all events in all rooms the user is a member of will be redacted.
_Added in Synapse 1.116.0._
The following JSON body parameters are optional:
- `reason` - Reason the redaction is being requested, e.g. "spam" or "abuse". This will be included in each redaction event and will be visible to users.
- `limit` - A limit on the number of the user's events to search through in each room for ones that can be redacted (events are redacted newest to oldest). Defaults to 1000 if not provided.
## Check the status of a redaction process
It is possible to query the status of the background task for redacting a user's events.
The status can be queried up to 24 hours after completion of the task,
or until Synapse is restarted (whichever happens first).
The API is:
```
GET /_synapse/admin/v1/user/redact_status/$redact_id
```
A response body like the following is returned:
```json
{
    "status": "active",
    "failed_redactions": []
}
```
**Parameters**
The following parameters should be set in the URL:
* `redact_id` - string - The ID for this redaction process, provided when the redaction was requested.
**Response**
The following fields are returned in the JSON response body:
- `status` - string - one of `scheduled`, `active`, `completed` or `failed`, indicating the status of the redaction job
- `failed_redactions` - dictionary - the keys are the IDs of any events the process was unable to redact, and the values are the corresponding errors that caused those redactions to fail
_Added in Synapse 1.116.0._
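
As a hedged illustration (not part of the diff), the two endpoints documented above could be driven from the Python standard library as sketched below; the homeserver URL, access token, and user ID are placeholders:

```python
# Hypothetical usage sketch for the redaction Admin API documented above.
# SERVER, ADMIN_TOKEN and the user ID are placeholder values.
import json
import urllib.request

SERVER = "https://synapse.example.com"
ADMIN_TOKEN = "syt_example_admin_token"

def admin_request(method, path, body=None):
    """Send an authenticated Admin API request and decode the JSON response."""
    req = urllib.request.Request(
        SERVER + path,
        data=json.dumps(body).encode() if body is not None else None,
        headers={
            "Authorization": "Bearer " + ADMIN_TOKEN,
            "Content-Type": "application/json",
        },
        method=method,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Start redacting the user's events; an empty `rooms` list means
# "all rooms the user is a member of", per the docs above.
result = admin_request(
    "POST", "/_synapse/admin/v1/user/@user:server.com/redact", {"rooms": []}
)

# Poll the status endpoint with the opaque redact_id returned above.
status = admin_request(
    "GET", "/_synapse/admin/v1/user/redact_status/" + result["redact_id"]
)
print(status["status"], status["failed_redactions"])
```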


@@ -52,6 +52,8 @@ architecture via <https://packages.matrix.org/debian/>.
To install the latest release:
TODO UPDATE ALL THIS
```sh
sudo apt install -y lsb-release wget apt-transport-https
sudo wget -O /usr/share/keyrings/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
@@ -314,7 +316,7 @@ sudo dnf group install "Development Tools"
*Note: The term "RHEL" below refers to both Red Hat Enterprise Linux and Rocky Linux. The distributions are 1:1 binary compatible.*
It's recommended to use the latest Python versions.
RHEL 8 in particular ships with Python 3.6 by default, which is EOL and therefore no longer supported by Synapse. RHEL 9 ships with Python 3.9, which is still supported by the Python core team as of this writing. However, newer Python versions provide significant performance improvements and are available in the official distributions' repositories, so it's recommended to use them.
@@ -344,7 +346,7 @@ dnf install python3.12 python3.12-devel
```
Finally, install common prerequisites
```bash
dnf install libicu libicu-devel libpq5 libpq5-devel lz4 pkgconf
dnf group install "Development Tools"
```
###### Using venv module instead of virtualenv command
@@ -353,7 +355,7 @@ It's recommended to use Python venv module directly rather than the virtualenv c
* On RHEL 9, virtualenv is only available on [EPEL](https://docs.fedoraproject.org/en-US/epel/).
* On RHEL 8, virtualenv is based on Python 3.6. It does not support creating 3.11/3.12 virtual environments.
Here's an example of creating a Python 3.12 virtual environment and installing Synapse from PyPI.
```bash
mkdir -p ~/synapse


@@ -761,19 +761,6 @@ email:
password_reset: "[%(server_name)s] Password reset"
email_validation: "[%(server_name)s] Validate your email"
```
---
### `max_event_delay_duration`
The maximum allowed duration by which sent events can be delayed, as per
[MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140).
Must be a positive value if set.
Defaults to no duration (`null`), which disallows sending delayed events.
Example configuration:
```yaml
max_event_delay_duration: 24h
```
## Homeserver blocking
Useful options for Synapse admins.


@@ -290,7 +290,6 @@ information.
Additionally, the following REST endpoints can be handled for GET requests:
^/_matrix/client/(api/v1|r0|v3|unstable)/pushrules/
^/_matrix/client/unstable/org.matrix.msc4140/delayed_events
Pagination requests can also be handled, but all requests for a given
room must be routed to the same instance. Additionally, care must be taken to

poetry.lock (generated)

@@ -2,13 +2,13 @@
[[package]]
name = "annotated-types"
version = "0.7.0"
version = "0.5.0"
description = "Reusable constraint types to use with typing.Annotated"
optional = false
python-versions = ">=3.8"
python-versions = ">=3.7"
files = [
{file = "annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53"},
{file = "annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89"},
{file = "annotated_types-0.5.0-py3-none-any.whl", hash = "sha256:58da39888f92c276ad970249761ebea80ba544b77acddaa1a4d6cf78287d45fd"},
{file = "annotated_types-0.5.0.tar.gz", hash = "sha256:47cdc3490d9ac1506ce92c7aaa76c579dc3509ff11e098fc867e5130ab7be802"},
]
[package.dependencies]
@@ -608,18 +608,15 @@ idna = ">=2.5"
[[package]]
name = "idna"
version = "3.10"
version = "3.8"
description = "Internationalized Domain Names in Applications (IDNA)"
optional = false
python-versions = ">=3.6"
files = [
{file = "idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3"},
{file = "idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9"},
{file = "idna-3.8-py3-none-any.whl", hash = "sha256:050b4e5baadcd44d760cedbd2b8e639f2ff89bbc7a5730fcc662954303377aac"},
{file = "idna-3.8.tar.gz", hash = "sha256:d838c2c0ed6fced7693d5e8ab8e734d5f8fda53a039c0164afb0b82e771e3603"},
]
[package.extras]
all = ["flake8 (>=7.1.1)", "mypy (>=1.11.2)", "pytest (>=8.3.2)", "ruff (>=0.6.2)"]
[[package]]
name = "ijson"
version = "3.3.0"
@@ -1246,75 +1243,67 @@ files = [
[[package]]
name = "msgpack"
version = "1.1.0"
version = "1.0.8"
description = "MessagePack serializer"
optional = false
python-versions = ">=3.8"
files = [
{file = "msgpack-1.1.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:7ad442d527a7e358a469faf43fda45aaf4ac3249c8310a82f0ccff9164e5dccd"},
{file = "msgpack-1.1.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:74bed8f63f8f14d75eec75cf3d04ad581da6b914001b474a5d3cd3372c8cc27d"},
{file = "msgpack-1.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:914571a2a5b4e7606997e169f64ce53a8b1e06f2cf2c3a7273aa106236d43dd5"},
{file = "msgpack-1.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c921af52214dcbb75e6bdf6a661b23c3e6417f00c603dd2070bccb5c3ef499f5"},
{file = "msgpack-1.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d8ce0b22b890be5d252de90d0e0d119f363012027cf256185fc3d474c44b1b9e"},
{file = "msgpack-1.1.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:73322a6cc57fcee3c0c57c4463d828e9428275fb85a27aa2aa1a92fdc42afd7b"},
{file = "msgpack-1.1.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:e1f3c3d21f7cf67bcf2da8e494d30a75e4cf60041d98b3f79875afb5b96f3a3f"},
{file = "msgpack-1.1.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:64fc9068d701233effd61b19efb1485587560b66fe57b3e50d29c5d78e7fef68"},
{file = "msgpack-1.1.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:42f754515e0f683f9c79210a5d1cad631ec3d06cea5172214d2176a42e67e19b"},
{file = "msgpack-1.1.0-cp310-cp310-win32.whl", hash = "sha256:3df7e6b05571b3814361e8464f9304c42d2196808e0119f55d0d3e62cd5ea044"},
{file = "msgpack-1.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:685ec345eefc757a7c8af44a3032734a739f8c45d1b0ac45efc5d8977aa4720f"},
{file = "msgpack-1.1.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:3d364a55082fb2a7416f6c63ae383fbd903adb5a6cf78c5b96cc6316dc1cedc7"},
{file = "msgpack-1.1.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:79ec007767b9b56860e0372085f8504db5d06bd6a327a335449508bbee9648fa"},
{file = "msgpack-1.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:6ad622bf7756d5a497d5b6836e7fc3752e2dd6f4c648e24b1803f6048596f701"},
{file = "msgpack-1.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e59bca908d9ca0de3dc8684f21ebf9a690fe47b6be93236eb40b99af28b6ea6"},
{file = "msgpack-1.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5e1da8f11a3dd397f0a32c76165cf0c4eb95b31013a94f6ecc0b280c05c91b59"},
{file = "msgpack-1.1.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:452aff037287acb1d70a804ffd022b21fa2bb7c46bee884dbc864cc9024128a0"},
{file = "msgpack-1.1.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8da4bf6d54ceed70e8861f833f83ce0814a2b72102e890cbdfe4b34764cdd66e"},
{file = "msgpack-1.1.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:41c991beebf175faf352fb940bf2af9ad1fb77fd25f38d9142053914947cdbf6"},
{file = "msgpack-1.1.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:a52a1f3a5af7ba1c9ace055b659189f6c669cf3657095b50f9602af3a3ba0fe5"},
{file = "msgpack-1.1.0-cp311-cp311-win32.whl", hash = "sha256:58638690ebd0a06427c5fe1a227bb6b8b9fdc2bd07701bec13c2335c82131a88"},
{file = "msgpack-1.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:fd2906780f25c8ed5d7b323379f6138524ba793428db5d0e9d226d3fa6aa1788"},
{file = "msgpack-1.1.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:d46cf9e3705ea9485687aa4001a76e44748b609d260af21c4ceea7f2212a501d"},
{file = "msgpack-1.1.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:5dbad74103df937e1325cc4bfeaf57713be0b4f15e1c2da43ccdd836393e2ea2"},
{file = "msgpack-1.1.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:58dfc47f8b102da61e8949708b3eafc3504509a5728f8b4ddef84bd9e16ad420"},
{file = "msgpack-1.1.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4676e5be1b472909b2ee6356ff425ebedf5142427842aa06b4dfd5117d1ca8a2"},
{file = "msgpack-1.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:17fb65dd0bec285907f68b15734a993ad3fc94332b5bb21b0435846228de1f39"},
{file = "msgpack-1.1.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a51abd48c6d8ac89e0cfd4fe177c61481aca2d5e7ba42044fd218cfd8ea9899f"},
{file = "msgpack-1.1.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:2137773500afa5494a61b1208619e3871f75f27b03bcfca7b3a7023284140247"},
{file = "msgpack-1.1.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:398b713459fea610861c8a7b62a6fec1882759f308ae0795b5413ff6a160cf3c"},
{file = "msgpack-1.1.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:06f5fd2f6bb2a7914922d935d3b8bb4a7fff3a9a91cfce6d06c13bc42bec975b"},
{file = "msgpack-1.1.0-cp312-cp312-win32.whl", hash = "sha256:ad33e8400e4ec17ba782f7b9cf868977d867ed784a1f5f2ab46e7ba53b6e1e1b"},
{file = "msgpack-1.1.0-cp312-cp312-win_amd64.whl", hash = "sha256:115a7af8ee9e8cddc10f87636767857e7e3717b7a2e97379dc2054712693e90f"},
{file = "msgpack-1.1.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:071603e2f0771c45ad9bc65719291c568d4edf120b44eb36324dcb02a13bfddf"},
{file = "msgpack-1.1.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:0f92a83b84e7c0749e3f12821949d79485971f087604178026085f60ce109330"},
{file = "msgpack-1.1.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:4a1964df7b81285d00a84da4e70cb1383f2e665e0f1f2a7027e683956d04b734"},
{file = "msgpack-1.1.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:59caf6a4ed0d164055ccff8fe31eddc0ebc07cf7326a2aaa0dbf7a4001cd823e"},
{file = "msgpack-1.1.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0907e1a7119b337971a689153665764adc34e89175f9a34793307d9def08e6ca"},
{file = "msgpack-1.1.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:65553c9b6da8166e819a6aa90ad15288599b340f91d18f60b2061f402b9a4915"},
{file = "msgpack-1.1.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:7a946a8992941fea80ed4beae6bff74ffd7ee129a90b4dd5cf9c476a30e9708d"},
{file = "msgpack-1.1.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:4b51405e36e075193bc051315dbf29168d6141ae2500ba8cd80a522964e31434"},
{file = "msgpack-1.1.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:b4c01941fd2ff87c2a934ee6055bda4ed353a7846b8d4f341c428109e9fcde8c"},
{file = "msgpack-1.1.0-cp313-cp313-win32.whl", hash = "sha256:7c9a35ce2c2573bada929e0b7b3576de647b0defbd25f5139dcdaba0ae35a4cc"},
{file = "msgpack-1.1.0-cp313-cp313-win_amd64.whl", hash = "sha256:bce7d9e614a04d0883af0b3d4d501171fbfca038f12c77fa838d9f198147a23f"},
{file = "msgpack-1.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c40ffa9a15d74e05ba1fe2681ea33b9caffd886675412612d93ab17b58ea2fec"},
{file = "msgpack-1.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f1ba6136e650898082d9d5a5217d5906d1e138024f836ff48691784bbe1adf96"},
{file = "msgpack-1.1.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e0856a2b7e8dcb874be44fea031d22e5b3a19121be92a1e098f46068a11b0870"},
{file = "msgpack-1.1.0-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:471e27a5787a2e3f974ba023f9e265a8c7cfd373632247deb225617e3100a3c7"},
{file = "msgpack-1.1.0-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:646afc8102935a388ffc3914b336d22d1c2d6209c773f3eb5dd4d6d3b6f8c1cb"},
{file = "msgpack-1.1.0-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:13599f8829cfbe0158f6456374e9eea9f44eee08076291771d8ae93eda56607f"},
{file = "msgpack-1.1.0-cp38-cp38-win32.whl", hash = "sha256:8a84efb768fb968381e525eeeb3d92857e4985aacc39f3c47ffd00eb4509315b"},
{file = "msgpack-1.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:879a7b7b0ad82481c52d3c7eb99bf6f0645dbdec5134a4bddbd16f3506947feb"},
{file = "msgpack-1.1.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:53258eeb7a80fc46f62fd59c876957a2d0e15e6449a9e71842b6d24419d88ca1"},
{file = "msgpack-1.1.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:7e7b853bbc44fb03fbdba34feb4bd414322180135e2cb5164f20ce1c9795ee48"},
{file = "msgpack-1.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f3e9b4936df53b970513eac1758f3882c88658a220b58dcc1e39606dccaaf01c"},
{file = "msgpack-1.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:46c34e99110762a76e3911fc923222472c9d681f1094096ac4102c18319e6468"},
{file = "msgpack-1.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8a706d1e74dd3dea05cb54580d9bd8b2880e9264856ce5068027eed09680aa74"},
{file = "msgpack-1.1.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:534480ee5690ab3cbed89d4c8971a5c631b69a8c0883ecfea96c19118510c846"},
{file = "msgpack-1.1.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:8cf9e8c3a2153934a23ac160cc4cba0ec035f6867c8013cc6077a79823370346"},
{file = "msgpack-1.1.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:3180065ec2abbe13a4ad37688b61b99d7f9e012a535b930e0e683ad6bc30155b"},
{file = "msgpack-1.1.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:c5a91481a3cc573ac8c0d9aace09345d989dc4a0202b7fcb312c88c26d4e71a8"},
{file = "msgpack-1.1.0-cp39-cp39-win32.whl", hash = "sha256:f80bc7d47f76089633763f952e67f8214cb7b3ee6bfa489b3cb6a84cfac114cd"},
{file = "msgpack-1.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:4d1b7ff2d6146e16e8bd665ac726a89c74163ef8cd39fa8c1087d4e52d3a2325"},
{file = "msgpack-1.1.0.tar.gz", hash = "sha256:dd432ccc2c72b914e4cb77afce64aab761c1137cc698be3984eee260bcb2896e"},
{file = "msgpack-1.0.8-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:505fe3d03856ac7d215dbe005414bc28505d26f0c128906037e66d98c4e95868"},
{file = "msgpack-1.0.8-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e6b7842518a63a9f17107eb176320960ec095a8ee3b4420b5f688e24bf50c53c"},
{file = "msgpack-1.0.8-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:376081f471a2ef24828b83a641a02c575d6103a3ad7fd7dade5486cad10ea659"},
{file = "msgpack-1.0.8-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e390971d082dba073c05dbd56322427d3280b7cc8b53484c9377adfbae67dc2"},
{file = "msgpack-1.0.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:00e073efcba9ea99db5acef3959efa45b52bc67b61b00823d2a1a6944bf45982"},
{file = "msgpack-1.0.8-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:82d92c773fbc6942a7a8b520d22c11cfc8fd83bba86116bfcf962c2f5c2ecdaa"},
{file = "msgpack-1.0.8-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:9ee32dcb8e531adae1f1ca568822e9b3a738369b3b686d1477cbc643c4a9c128"},
{file = "msgpack-1.0.8-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e3aa7e51d738e0ec0afbed661261513b38b3014754c9459508399baf14ae0c9d"},
{file = "msgpack-1.0.8-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:69284049d07fce531c17404fcba2bb1df472bc2dcdac642ae71a2d079d950653"},
{file = "msgpack-1.0.8-cp310-cp310-win32.whl", hash = "sha256:13577ec9e247f8741c84d06b9ece5f654920d8365a4b636ce0e44f15e07ec693"},
{file = "msgpack-1.0.8-cp310-cp310-win_amd64.whl", hash = "sha256:e532dbd6ddfe13946de050d7474e3f5fb6ec774fbb1a188aaf469b08cf04189a"},
{file = "msgpack-1.0.8-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9517004e21664f2b5a5fd6333b0731b9cf0817403a941b393d89a2f1dc2bd836"},
{file = "msgpack-1.0.8-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d16a786905034e7e34098634b184a7d81f91d4c3d246edc6bd7aefb2fd8ea6ad"},
{file = "msgpack-1.0.8-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e2872993e209f7ed04d963e4b4fbae72d034844ec66bc4ca403329db2074377b"},
{file = "msgpack-1.0.8-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c330eace3dd100bdb54b5653b966de7f51c26ec4a7d4e87132d9b4f738220ba"},
{file = "msgpack-1.0.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:83b5c044f3eff2a6534768ccfd50425939e7a8b5cf9a7261c385de1e20dcfc85"},
{file = "msgpack-1.0.8-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1876b0b653a808fcd50123b953af170c535027bf1d053b59790eebb0aeb38950"},
{file = "msgpack-1.0.8-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:dfe1f0f0ed5785c187144c46a292b8c34c1295c01da12e10ccddfc16def4448a"},
{file = "msgpack-1.0.8-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:3528807cbbb7f315bb81959d5961855e7ba52aa60a3097151cb21956fbc7502b"},
{file = "msgpack-1.0.8-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e2f879ab92ce502a1e65fce390eab619774dda6a6ff719718069ac94084098ce"},
{file = "msgpack-1.0.8-cp311-cp311-win32.whl", hash = "sha256:26ee97a8261e6e35885c2ecd2fd4a6d38252246f94a2aec23665a4e66d066305"},
{file = "msgpack-1.0.8-cp311-cp311-win_amd64.whl", hash = "sha256:eadb9f826c138e6cf3c49d6f8de88225a3c0ab181a9b4ba792e006e5292d150e"},
{file = "msgpack-1.0.8-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:114be227f5213ef8b215c22dde19532f5da9652e56e8ce969bf0a26d7c419fee"},
{file = "msgpack-1.0.8-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:d661dc4785affa9d0edfdd1e59ec056a58b3dbb9f196fa43587f3ddac654ac7b"},
{file = "msgpack-1.0.8-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:d56fd9f1f1cdc8227d7b7918f55091349741904d9520c65f0139a9755952c9e8"},
{file = "msgpack-1.0.8-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0726c282d188e204281ebd8de31724b7d749adebc086873a59efb8cf7ae27df3"},
{file = "msgpack-1.0.8-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8db8e423192303ed77cff4dce3a4b88dbfaf43979d280181558af5e2c3c71afc"},
{file = "msgpack-1.0.8-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:99881222f4a8c2f641f25703963a5cefb076adffd959e0558dc9f803a52d6a58"},
{file = "msgpack-1.0.8-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:b5505774ea2a73a86ea176e8a9a4a7c8bf5d521050f0f6f8426afe798689243f"},
{file = "msgpack-1.0.8-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:ef254a06bcea461e65ff0373d8a0dd1ed3aa004af48839f002a0c994a6f72d04"},
{file = "msgpack-1.0.8-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:e1dd7839443592d00e96db831eddb4111a2a81a46b028f0facd60a09ebbdd543"},
{file = "msgpack-1.0.8-cp312-cp312-win32.whl", hash = "sha256:64d0fcd436c5683fdd7c907eeae5e2cbb5eb872fafbc03a43609d7941840995c"},
{file = "msgpack-1.0.8-cp312-cp312-win_amd64.whl", hash = "sha256:74398a4cf19de42e1498368c36eed45d9528f5fd0155241e82c4082b7e16cffd"},
{file = "msgpack-1.0.8-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:0ceea77719d45c839fd73abcb190b8390412a890df2f83fb8cf49b2a4b5c2f40"},
{file = "msgpack-1.0.8-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1ab0bbcd4d1f7b6991ee7c753655b481c50084294218de69365f8f1970d4c151"},
{file = "msgpack-1.0.8-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:1cce488457370ffd1f953846f82323cb6b2ad2190987cd4d70b2713e17268d24"},
{file = "msgpack-1.0.8-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3923a1778f7e5ef31865893fdca12a8d7dc03a44b33e2a5f3295416314c09f5d"},
{file = "msgpack-1.0.8-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a22e47578b30a3e199ab067a4d43d790249b3c0587d9a771921f86250c8435db"},
{file = "msgpack-1.0.8-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bd739c9251d01e0279ce729e37b39d49a08c0420d3fee7f2a4968c0576678f77"},
{file = "msgpack-1.0.8-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:d3420522057ebab1728b21ad473aa950026d07cb09da41103f8e597dfbfaeb13"},
{file = "msgpack-1.0.8-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5845fdf5e5d5b78a49b826fcdc0eb2e2aa7191980e3d2cfd2a30303a74f212e2"},
{file = "msgpack-1.0.8-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6a0e76621f6e1f908ae52860bdcb58e1ca85231a9b0545e64509c931dd34275a"},
{file = "msgpack-1.0.8-cp38-cp38-win32.whl", hash = "sha256:374a8e88ddab84b9ada695d255679fb99c53513c0a51778796fcf0944d6c789c"},
{file = "msgpack-1.0.8-cp38-cp38-win_amd64.whl", hash = "sha256:f3709997b228685fe53e8c433e2df9f0cdb5f4542bd5114ed17ac3c0129b0480"},
{file = "msgpack-1.0.8-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f51bab98d52739c50c56658cc303f190785f9a2cd97b823357e7aeae54c8f68a"},
{file = "msgpack-1.0.8-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:73ee792784d48aa338bba28063e19a27e8d989344f34aad14ea6e1b9bd83f596"},
{file = "msgpack-1.0.8-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f9904e24646570539a8950400602d66d2b2c492b9010ea7e965025cb71d0c86d"},
{file = "msgpack-1.0.8-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e75753aeda0ddc4c28dce4c32ba2f6ec30b1b02f6c0b14e547841ba5b24f753f"},
{file = "msgpack-1.0.8-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5dbf059fb4b7c240c873c1245ee112505be27497e90f7c6591261c7d3c3a8228"},
{file = "msgpack-1.0.8-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4916727e31c28be8beaf11cf117d6f6f188dcc36daae4e851fee88646f5b6b18"},
{file = "msgpack-1.0.8-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:7938111ed1358f536daf311be244f34df7bf3cdedb3ed883787aca97778b28d8"},
{file = "msgpack-1.0.8-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:493c5c5e44b06d6c9268ce21b302c9ca055c1fd3484c25ba41d34476c76ee746"},
{file = "msgpack-1.0.8-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5fbb160554e319f7b22ecf530a80a3ff496d38e8e07ae763b9e82fadfe96f273"},
{file = "msgpack-1.0.8-cp39-cp39-win32.whl", hash = "sha256:f9af38a89b6a5c04b7d18c492c8ccf2aee7048aff1ce8437c4683bb5a1df893d"},
{file = "msgpack-1.0.8-cp39-cp39-win_amd64.whl", hash = "sha256:ed59dd52075f8fc91da6053b12e8c89e37aa043f8986efd89e61fae69dc1b011"},
{file = "msgpack-1.0.8.tar.gz", hash = "sha256:95c02b0e27e706e48d0e5426d1710ca78e0f0628d6e89d5b5a5b91a5f12274f3"},
]
[[package]]
@@ -1447,13 +1436,13 @@ dev = ["jinja2"]
[[package]]
name = "phonenumbers"
version = "8.13.45"
version = "8.13.44"
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
optional = false
python-versions = "*"
files = [
{file = "phonenumbers-8.13.45-py2.py3-none-any.whl", hash = "sha256:bf05ec20fcd13f0d53e43a34ed7bd1c8be26a72b88fce4b8c64fca5b4641987a"},
{file = "phonenumbers-8.13.45.tar.gz", hash = "sha256:53679a95b6060fd5e15467759252c87933d8566d6a5be00995a579eb0e02435b"},
{file = "phonenumbers-8.13.44-py2.py3-none-any.whl", hash = "sha256:52cd02865dab1428ca9e89d442629b61d407c7dc687cfb80a3e8d068a584513c"},
{file = "phonenumbers-8.13.44.tar.gz", hash = "sha256:2175021e84ee4e41b43c890f2d0af51f18c6ca9ad525886d6d6e4ea882e46fac"},
]
[[package]]
@@ -1580,13 +1569,13 @@ files = [
[[package]]
name = "prometheus-client"
version = "0.21.0"
version = "0.20.0"
description = "Python client for the Prometheus monitoring system."
optional = false
python-versions = ">=3.8"
files = [
{file = "prometheus_client-0.21.0-py3-none-any.whl", hash = "sha256:4fa6b4dd0ac16d58bb587c04b1caae65b8c5043e85f778f42f5f632f6af2e166"},
{file = "prometheus_client-0.21.0.tar.gz", hash = "sha256:96c83c606b71ff2b0a433c98889d275f51ffec6c5e267de37c7a2b5c9aa9233e"},
{file = "prometheus_client-0.20.0-py3-none-any.whl", hash = "sha256:cde524a85bce83ca359cc837f28b8c0db5cac7aa653a588fd7e84ba061c329e7"},
{file = "prometheus_client-0.20.0.tar.gz", hash = "sha256:287629d00b147a32dcb2be0b9df905da599b2d82f80377083ec8463309a4bb89"},
]
[package.extras]
@@ -1654,13 +1643,13 @@ files = [
[[package]]
name = "pyasn1-modules"
version = "0.4.1"
version = "0.4.0"
description = "A collection of ASN.1-based protocols modules"
optional = false
python-versions = ">=3.8"
files = [
{file = "pyasn1_modules-0.4.1-py3-none-any.whl", hash = "sha256:49bfa96b45a292b711e986f222502c1c9a5e1f4e568fc30e2574a6c7d07838fd"},
{file = "pyasn1_modules-0.4.1.tar.gz", hash = "sha256:c28e2dbf9c06ad61c71a075c7e0f9fd0f1b0bb2d2ad4377f240d33ac2ab60a7c"},
{file = "pyasn1_modules-0.4.0-py3-none-any.whl", hash = "sha256:be04f15b66c206eed667e0bb5ab27e2b1855ea54a842e5037738099e8ca4ae0b"},
{file = "pyasn1_modules-0.4.0.tar.gz", hash = "sha256:831dbcea1b177b28c9baddf4c6d1013c24c3accd14a1873fffaa6a2e905f17b6"},
]
[package.dependencies]
@@ -1679,18 +1668,18 @@ files = [
[[package]]
name = "pydantic"
version = "2.9.2"
version = "2.8.2"
description = "Data validation using Python type hints"
optional = false
python-versions = ">=3.8"
files = [
{file = "pydantic-2.9.2-py3-none-any.whl", hash = "sha256:f048cec7b26778210e28a0459867920654d48e5e62db0958433636cde4254f12"},
{file = "pydantic-2.9.2.tar.gz", hash = "sha256:d155cef71265d1e9807ed1c32b4c8deec042a44a50a4188b25ac67ecd81a9c0f"},
{file = "pydantic-2.8.2-py3-none-any.whl", hash = "sha256:73ee9fddd406dc318b885c7a2eab8a6472b68b8fb5ba8150949fc3db939f23c8"},
{file = "pydantic-2.8.2.tar.gz", hash = "sha256:6f62c13d067b0755ad1c21a34bdd06c0c12625a22b0fc09c6b149816604f7c2a"},
]
[package.dependencies]
annotated-types = ">=0.6.0"
pydantic-core = "2.23.4"
annotated-types = ">=0.4.0"
pydantic-core = "2.20.1"
typing-extensions = [
{version = ">=4.12.2", markers = "python_version >= \"3.13\""},
{version = ">=4.6.1", markers = "python_version < \"3.13\""},
@@ -1698,104 +1687,103 @@ typing-extensions = [
[package.extras]
email = ["email-validator (>=2.0.0)"]
timezone = ["tzdata"]
[[package]]
name = "pydantic-core"
version = "2.23.4"
version = "2.20.1"
description = "Core functionality for Pydantic validation and serialization"
optional = false
python-versions = ">=3.8"
files = [
{file = "pydantic_core-2.23.4-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:b10bd51f823d891193d4717448fab065733958bdb6a6b351967bd349d48d5c9b"},
{file = "pydantic_core-2.23.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:4fc714bdbfb534f94034efaa6eadd74e5b93c8fa6315565a222f7b6f42ca1166"},
{file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:63e46b3169866bd62849936de036f901a9356e36376079b05efa83caeaa02ceb"},
{file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ed1a53de42fbe34853ba90513cea21673481cd81ed1be739f7f2efb931b24916"},
{file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cfdd16ab5e59fc31b5e906d1a3f666571abc367598e3e02c83403acabc092e07"},
{file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:255a8ef062cbf6674450e668482456abac99a5583bbafb73f9ad469540a3a232"},
{file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4a7cd62e831afe623fbb7aabbb4fe583212115b3ef38a9f6b71869ba644624a2"},
{file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f09e2ff1f17c2b51f2bc76d1cc33da96298f0a036a137f5440ab3ec5360b624f"},
{file = "pydantic_core-2.23.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:e38e63e6f3d1cec5a27e0afe90a085af8b6806ee208b33030e65b6516353f1a3"},
{file = "pydantic_core-2.23.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:0dbd8dbed2085ed23b5c04afa29d8fd2771674223135dc9bc937f3c09284d071"},
{file = "pydantic_core-2.23.4-cp310-none-win32.whl", hash = "sha256:6531b7ca5f951d663c339002e91aaebda765ec7d61b7d1e3991051906ddde119"},
{file = "pydantic_core-2.23.4-cp310-none-win_amd64.whl", hash = "sha256:7c9129eb40958b3d4500fa2467e6a83356b3b61bfff1b414c7361d9220f9ae8f"},
{file = "pydantic_core-2.23.4-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:77733e3892bb0a7fa797826361ce8a9184d25c8dffaec60b7ffe928153680ba8"},
{file = "pydantic_core-2.23.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1b84d168f6c48fabd1f2027a3d1bdfe62f92cade1fb273a5d68e621da0e44e6d"},
{file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:df49e7a0861a8c36d089c1ed57d308623d60416dab2647a4a17fe050ba85de0e"},
{file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ff02b6d461a6de369f07ec15e465a88895f3223eb75073ffea56b84d9331f607"},
{file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:996a38a83508c54c78a5f41456b0103c30508fed9abcad0a59b876d7398f25fd"},
{file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d97683ddee4723ae8c95d1eddac7c192e8c552da0c73a925a89fa8649bf13eea"},
{file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:216f9b2d7713eb98cb83c80b9c794de1f6b7e3145eef40400c62e86cee5f4e1e"},
{file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6f783e0ec4803c787bcea93e13e9932edab72068f68ecffdf86a99fd5918878b"},
{file = "pydantic_core-2.23.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:d0776dea117cf5272382634bd2a5c1b6eb16767c223c6a5317cd3e2a757c61a0"},
{file = "pydantic_core-2.23.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d5f7a395a8cf1621939692dba2a6b6a830efa6b3cee787d82c7de1ad2930de64"},
{file = "pydantic_core-2.23.4-cp311-none-win32.whl", hash = "sha256:74b9127ffea03643e998e0c5ad9bd3811d3dac8c676e47db17b0ee7c3c3bf35f"},
{file = "pydantic_core-2.23.4-cp311-none-win_amd64.whl", hash = "sha256:98d134c954828488b153d88ba1f34e14259284f256180ce659e8d83e9c05eaa3"},
{file = "pydantic_core-2.23.4-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:f3e0da4ebaef65158d4dfd7d3678aad692f7666877df0002b8a522cdf088f231"},
{file = "pydantic_core-2.23.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f69a8e0b033b747bb3e36a44e7732f0c99f7edd5cea723d45bc0d6e95377ffee"},
{file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:723314c1d51722ab28bfcd5240d858512ffd3116449c557a1336cbe3919beb87"},
{file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:bb2802e667b7051a1bebbfe93684841cc9351004e2badbd6411bf357ab8d5ac8"},
{file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d18ca8148bebe1b0a382a27a8ee60350091a6ddaf475fa05ef50dc35b5df6327"},
{file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33e3d65a85a2a4a0dc3b092b938a4062b1a05f3a9abde65ea93b233bca0e03f2"},
{file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:128585782e5bfa515c590ccee4b727fb76925dd04a98864182b22e89a4e6ed36"},
{file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:68665f4c17edcceecc112dfed5dbe6f92261fb9d6054b47d01bf6371a6196126"},
{file = "pydantic_core-2.23.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:20152074317d9bed6b7a95ade3b7d6054845d70584216160860425f4fbd5ee9e"},
{file = "pydantic_core-2.23.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:9261d3ce84fa1d38ed649c3638feefeae23d32ba9182963e465d58d62203bd24"},
{file = "pydantic_core-2.23.4-cp312-none-win32.whl", hash = "sha256:4ba762ed58e8d68657fc1281e9bb72e1c3e79cc5d464be146e260c541ec12d84"},
{file = "pydantic_core-2.23.4-cp312-none-win_amd64.whl", hash = "sha256:97df63000f4fea395b2824da80e169731088656d1818a11b95f3b173747b6cd9"},
{file = "pydantic_core-2.23.4-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:7530e201d10d7d14abce4fb54cfe5b94a0aefc87da539d0346a484ead376c3cc"},
{file = "pydantic_core-2.23.4-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:df933278128ea1cd77772673c73954e53a1c95a4fdf41eef97c2b779271bd0bd"},
{file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0cb3da3fd1b6a5d0279a01877713dbda118a2a4fc6f0d821a57da2e464793f05"},
{file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:42c6dcb030aefb668a2b7009c85b27f90e51e6a3b4d5c9bc4c57631292015b0d"},
{file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:696dd8d674d6ce621ab9d45b205df149399e4bb9aa34102c970b721554828510"},
{file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2971bb5ffe72cc0f555c13e19b23c85b654dd2a8f7ab493c262071377bfce9f6"},
{file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8394d940e5d400d04cad4f75c0598665cbb81aecefaca82ca85bd28264af7f9b"},
{file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0dff76e0602ca7d4cdaacc1ac4c005e0ce0dcfe095d5b5259163a80d3a10d327"},
{file = "pydantic_core-2.23.4-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:7d32706badfe136888bdea71c0def994644e09fff0bfe47441deaed8e96fdbc6"},
{file = "pydantic_core-2.23.4-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:ed541d70698978a20eb63d8c5d72f2cc6d7079d9d90f6b50bad07826f1320f5f"},
{file = "pydantic_core-2.23.4-cp313-none-win32.whl", hash = "sha256:3d5639516376dce1940ea36edf408c554475369f5da2abd45d44621cb616f769"},
{file = "pydantic_core-2.23.4-cp313-none-win_amd64.whl", hash = "sha256:5a1504ad17ba4210df3a045132a7baeeba5a200e930f57512ee02909fc5c4cb5"},
{file = "pydantic_core-2.23.4-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:d4488a93b071c04dc20f5cecc3631fc78b9789dd72483ba15d423b5b3689b555"},
{file = "pydantic_core-2.23.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:81965a16b675b35e1d09dd14df53f190f9129c0202356ed44ab2728b1c905658"},
{file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4ffa2ebd4c8530079140dd2d7f794a9d9a73cbb8e9d59ffe24c63436efa8f271"},
{file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:61817945f2fe7d166e75fbfb28004034b48e44878177fc54d81688e7b85a3665"},
{file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:29d2c342c4bc01b88402d60189f3df065fb0dda3654744d5a165a5288a657368"},
{file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5e11661ce0fd30a6790e8bcdf263b9ec5988e95e63cf901972107efc49218b13"},
{file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9d18368b137c6295db49ce7218b1a9ba15c5bc254c96d7c9f9e924a9bc7825ad"},
{file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ec4e55f79b1c4ffb2eecd8a0cfba9955a2588497d96851f4c8f99aa4a1d39b12"},
{file = "pydantic_core-2.23.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:374a5e5049eda9e0a44c696c7ade3ff355f06b1fe0bb945ea3cac2bc336478a2"},
{file = "pydantic_core-2.23.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:5c364564d17da23db1106787675fc7af45f2f7b58b4173bfdd105564e132e6fb"},
{file = "pydantic_core-2.23.4-cp38-none-win32.whl", hash = "sha256:d7a80d21d613eec45e3d41eb22f8f94ddc758a6c4720842dc74c0581f54993d6"},
{file = "pydantic_core-2.23.4-cp38-none-win_amd64.whl", hash = "sha256:5f5ff8d839f4566a474a969508fe1c5e59c31c80d9e140566f9a37bba7b8d556"},
{file = "pydantic_core-2.23.4-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:a4fa4fc04dff799089689f4fd502ce7d59de529fc2f40a2c8836886c03e0175a"},
{file = "pydantic_core-2.23.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0a7df63886be5e270da67e0966cf4afbae86069501d35c8c1b3b6c168f42cb36"},
{file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dcedcd19a557e182628afa1d553c3895a9f825b936415d0dbd3cd0bbcfd29b4b"},
{file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5f54b118ce5de9ac21c363d9b3caa6c800341e8c47a508787e5868c6b79c9323"},
{file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:86d2f57d3e1379a9525c5ab067b27dbb8a0642fb5d454e17a9ac434f9ce523e3"},
{file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:de6d1d1b9e5101508cb37ab0d972357cac5235f5c6533d1071964c47139257df"},
{file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1278e0d324f6908e872730c9102b0112477a7f7cf88b308e4fc36ce1bdb6d58c"},
{file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:9a6b5099eeec78827553827f4c6b8615978bb4b6a88e5d9b93eddf8bb6790f55"},
{file = "pydantic_core-2.23.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:e55541f756f9b3ee346b840103f32779c695a19826a4c442b7954550a0972040"},
{file = "pydantic_core-2.23.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a5c7ba8ffb6d6f8f2ab08743be203654bb1aaa8c9dcb09f82ddd34eadb695605"},
{file = "pydantic_core-2.23.4-cp39-none-win32.whl", hash = "sha256:37b0fe330e4a58d3c58b24d91d1eb102aeec675a3db4c292ec3928ecd892a9a6"},
{file = "pydantic_core-2.23.4-cp39-none-win_amd64.whl", hash = "sha256:1498bec4c05c9c787bde9125cfdcc63a41004ff167f495063191b863399b1a29"},
{file = "pydantic_core-2.23.4-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:f455ee30a9d61d3e1a15abd5068827773d6e4dc513e795f380cdd59932c782d5"},
{file = "pydantic_core-2.23.4-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:1e90d2e3bd2c3863d48525d297cd143fe541be8bbf6f579504b9712cb6b643ec"},
{file = "pydantic_core-2.23.4-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2e203fdf807ac7e12ab59ca2bfcabb38c7cf0b33c41efeb00f8e5da1d86af480"},
{file = "pydantic_core-2.23.4-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e08277a400de01bc72436a0ccd02bdf596631411f592ad985dcee21445bd0068"},
{file = "pydantic_core-2.23.4-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f220b0eea5965dec25480b6333c788fb72ce5f9129e8759ef876a1d805d00801"},
{file = "pydantic_core-2.23.4-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:d06b0c8da4f16d1d1e352134427cb194a0a6e19ad5db9161bf32b2113409e728"},
{file = "pydantic_core-2.23.4-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:ba1a0996f6c2773bd83e63f18914c1de3c9dd26d55f4ac302a7efe93fb8e7433"},
{file = "pydantic_core-2.23.4-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:9a5bce9d23aac8f0cf0836ecfc033896aa8443b501c58d0602dbfd5bd5b37753"},
{file = "pydantic_core-2.23.4-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:78ddaaa81421a29574a682b3179d4cf9e6d405a09b99d93ddcf7e5239c742e21"},
{file = "pydantic_core-2.23.4-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:883a91b5dd7d26492ff2f04f40fbb652de40fcc0afe07e8129e8ae779c2110eb"},
{file = "pydantic_core-2.23.4-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:88ad334a15b32a791ea935af224b9de1bf99bcd62fabf745d5f3442199d86d59"},
{file = "pydantic_core-2.23.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:233710f069d251feb12a56da21e14cca67994eab08362207785cf8c598e74577"},
{file = "pydantic_core-2.23.4-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:19442362866a753485ba5e4be408964644dd6a09123d9416c54cd49171f50744"},
{file = "pydantic_core-2.23.4-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:624e278a7d29b6445e4e813af92af37820fafb6dcc55c012c834f9e26f9aaaef"},
{file = "pydantic_core-2.23.4-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:f5ef8f42bec47f21d07668a043f077d507e5bf4e668d5c6dfe6aaba89de1a5b8"},
{file = "pydantic_core-2.23.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:aea443fffa9fbe3af1a9ba721a87f926fe548d32cab71d188a6ede77d0ff244e"},
{file = "pydantic_core-2.23.4.tar.gz", hash = "sha256:2584f7cf844ac4d970fba483a717dbe10c1c1c96a969bf65d61ffe94df1b2863"},
{file = "pydantic_core-2.20.1-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:3acae97ffd19bf091c72df4d726d552c473f3576409b2a7ca36b2f535ffff4a3"},
{file = "pydantic_core-2.20.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:41f4c96227a67a013e7de5ff8f20fb496ce573893b7f4f2707d065907bffdbd6"},
{file = "pydantic_core-2.20.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5f239eb799a2081495ea659d8d4a43a8f42cd1fe9ff2e7e436295c38a10c286a"},
{file = "pydantic_core-2.20.1-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:53e431da3fc53360db73eedf6f7124d1076e1b4ee4276b36fb25514544ceb4a3"},
{file = "pydantic_core-2.20.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f1f62b2413c3a0e846c3b838b2ecd6c7a19ec6793b2a522745b0869e37ab5bc1"},
{file = "pydantic_core-2.20.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5d41e6daee2813ecceea8eda38062d69e280b39df793f5a942fa515b8ed67953"},
{file = "pydantic_core-2.20.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d482efec8b7dc6bfaedc0f166b2ce349df0011f5d2f1f25537ced4cfc34fd98"},
{file = "pydantic_core-2.20.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e93e1a4b4b33daed65d781a57a522ff153dcf748dee70b40c7258c5861e1768a"},
{file = "pydantic_core-2.20.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:e7c4ea22b6739b162c9ecaaa41d718dfad48a244909fe7ef4b54c0b530effc5a"},
{file = "pydantic_core-2.20.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:4f2790949cf385d985a31984907fecb3896999329103df4e4983a4a41e13e840"},
{file = "pydantic_core-2.20.1-cp310-none-win32.whl", hash = "sha256:5e999ba8dd90e93d57410c5e67ebb67ffcaadcea0ad973240fdfd3a135506250"},
{file = "pydantic_core-2.20.1-cp310-none-win_amd64.whl", hash = "sha256:512ecfbefef6dac7bc5eaaf46177b2de58cdf7acac8793fe033b24ece0b9566c"},
{file = "pydantic_core-2.20.1-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:d2a8fa9d6d6f891f3deec72f5cc668e6f66b188ab14bb1ab52422fe8e644f312"},
{file = "pydantic_core-2.20.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:175873691124f3d0da55aeea1d90660a6ea7a3cfea137c38afa0a5ffabe37b88"},
{file = "pydantic_core-2.20.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:37eee5b638f0e0dcd18d21f59b679686bbd18917b87db0193ae36f9c23c355fc"},
{file = "pydantic_core-2.20.1-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:25e9185e2d06c16ee438ed39bf62935ec436474a6ac4f9358524220f1b236e43"},
{file = "pydantic_core-2.20.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:150906b40ff188a3260cbee25380e7494ee85048584998c1e66df0c7a11c17a6"},
{file = "pydantic_core-2.20.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8ad4aeb3e9a97286573c03df758fc7627aecdd02f1da04516a86dc159bf70121"},
{file = "pydantic_core-2.20.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3f3ed29cd9f978c604708511a1f9c2fdcb6c38b9aae36a51905b8811ee5cbf1"},
{file = "pydantic_core-2.20.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b0dae11d8f5ded51699c74d9548dcc5938e0804cc8298ec0aa0da95c21fff57b"},
{file = "pydantic_core-2.20.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:faa6b09ee09433b87992fb5a2859efd1c264ddc37280d2dd5db502126d0e7f27"},
{file = "pydantic_core-2.20.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:9dc1b507c12eb0481d071f3c1808f0529ad41dc415d0ca11f7ebfc666e66a18b"},
{file = "pydantic_core-2.20.1-cp311-none-win32.whl", hash = "sha256:fa2fddcb7107e0d1808086ca306dcade7df60a13a6c347a7acf1ec139aa6789a"},
{file = "pydantic_core-2.20.1-cp311-none-win_amd64.whl", hash = "sha256:40a783fb7ee353c50bd3853e626f15677ea527ae556429453685ae32280c19c2"},
{file = "pydantic_core-2.20.1-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:595ba5be69b35777474fa07f80fc260ea71255656191adb22a8c53aba4479231"},
{file = "pydantic_core-2.20.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:a4f55095ad087474999ee28d3398bae183a66be4823f753cd7d67dd0153427c9"},
{file = "pydantic_core-2.20.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f9aa05d09ecf4c75157197f27cdc9cfaeb7c5f15021c6373932bf3e124af029f"},
{file = "pydantic_core-2.20.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e97fdf088d4b31ff4ba35db26d9cc472ac7ef4a2ff2badeabf8d727b3377fc52"},
{file = "pydantic_core-2.20.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:bc633a9fe1eb87e250b5c57d389cf28998e4292336926b0b6cdaee353f89a237"},
{file = "pydantic_core-2.20.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d573faf8eb7e6b1cbbcb4f5b247c60ca8be39fe2c674495df0eb4318303137fe"},
{file = "pydantic_core-2.20.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:26dc97754b57d2fd00ac2b24dfa341abffc380b823211994c4efac7f13b9e90e"},
{file = "pydantic_core-2.20.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:33499e85e739a4b60c9dac710c20a08dc73cb3240c9a0e22325e671b27b70d24"},
{file = "pydantic_core-2.20.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:bebb4d6715c814597f85297c332297c6ce81e29436125ca59d1159b07f423eb1"},
{file = "pydantic_core-2.20.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:516d9227919612425c8ef1c9b869bbbee249bc91912c8aaffb66116c0b447ebd"},
{file = "pydantic_core-2.20.1-cp312-none-win32.whl", hash = "sha256:469f29f9093c9d834432034d33f5fe45699e664f12a13bf38c04967ce233d688"},
{file = "pydantic_core-2.20.1-cp312-none-win_amd64.whl", hash = "sha256:035ede2e16da7281041f0e626459bcae33ed998cca6a0a007a5ebb73414ac72d"},
{file = "pydantic_core-2.20.1-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:0827505a5c87e8aa285dc31e9ec7f4a17c81a813d45f70b1d9164e03a813a686"},
{file = "pydantic_core-2.20.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:19c0fa39fa154e7e0b7f82f88ef85faa2a4c23cc65aae2f5aea625e3c13c735a"},
{file = "pydantic_core-2.20.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4aa223cd1e36b642092c326d694d8bf59b71ddddc94cdb752bbbb1c5c91d833b"},
{file = "pydantic_core-2.20.1-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c336a6d235522a62fef872c6295a42ecb0c4e1d0f1a3e500fe949415761b8a19"},
{file = "pydantic_core-2.20.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7eb6a0587eded33aeefea9f916899d42b1799b7b14b8f8ff2753c0ac1741edac"},
{file = "pydantic_core-2.20.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:70c8daf4faca8da5a6d655f9af86faf6ec2e1768f4b8b9d0226c02f3d6209703"},
{file = "pydantic_core-2.20.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e9fa4c9bf273ca41f940bceb86922a7667cd5bf90e95dbb157cbb8441008482c"},
{file = "pydantic_core-2.20.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:11b71d67b4725e7e2a9f6e9c0ac1239bbc0c48cce3dc59f98635efc57d6dac83"},
{file = "pydantic_core-2.20.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:270755f15174fb983890c49881e93f8f1b80f0b5e3a3cc1394a255706cabd203"},
{file = "pydantic_core-2.20.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:c81131869240e3e568916ef4c307f8b99583efaa60a8112ef27a366eefba8ef0"},
{file = "pydantic_core-2.20.1-cp313-none-win32.whl", hash = "sha256:b91ced227c41aa29c672814f50dbb05ec93536abf8f43cd14ec9521ea09afe4e"},
{file = "pydantic_core-2.20.1-cp313-none-win_amd64.whl", hash = "sha256:65db0f2eefcaad1a3950f498aabb4875c8890438bc80b19362cf633b87a8ab20"},
{file = "pydantic_core-2.20.1-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:4745f4ac52cc6686390c40eaa01d48b18997cb130833154801a442323cc78f91"},
{file = "pydantic_core-2.20.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:a8ad4c766d3f33ba8fd692f9aa297c9058970530a32c728a2c4bfd2616d3358b"},
{file = "pydantic_core-2.20.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41e81317dd6a0127cabce83c0c9c3fbecceae981c8391e6f1dec88a77c8a569a"},
{file = "pydantic_core-2.20.1-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:04024d270cf63f586ad41fff13fde4311c4fc13ea74676962c876d9577bcc78f"},
{file = "pydantic_core-2.20.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:eaad4ff2de1c3823fddf82f41121bdf453d922e9a238642b1dedb33c4e4f98ad"},
{file = "pydantic_core-2.20.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:26ab812fa0c845df815e506be30337e2df27e88399b985d0bb4e3ecfe72df31c"},
{file = "pydantic_core-2.20.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3c5ebac750d9d5f2706654c638c041635c385596caf68f81342011ddfa1e5598"},
{file = "pydantic_core-2.20.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2aafc5a503855ea5885559eae883978c9b6d8c8993d67766ee73d82e841300dd"},
{file = "pydantic_core-2.20.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:4868f6bd7c9d98904b748a2653031fc9c2f85b6237009d475b1008bfaeb0a5aa"},
{file = "pydantic_core-2.20.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:aa2f457b4af386254372dfa78a2eda2563680d982422641a85f271c859df1987"},
{file = "pydantic_core-2.20.1-cp38-none-win32.whl", hash = "sha256:225b67a1f6d602de0ce7f6c1c3ae89a4aa25d3de9be857999e9124f15dab486a"},
{file = "pydantic_core-2.20.1-cp38-none-win_amd64.whl", hash = "sha256:6b507132dcfc0dea440cce23ee2182c0ce7aba7054576efc65634f080dbe9434"},
{file = "pydantic_core-2.20.1-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:b03f7941783b4c4a26051846dea594628b38f6940a2fdc0df00b221aed39314c"},
{file = "pydantic_core-2.20.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1eedfeb6089ed3fad42e81a67755846ad4dcc14d73698c120a82e4ccf0f1f9f6"},
{file = "pydantic_core-2.20.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:635fee4e041ab9c479e31edda27fcf966ea9614fff1317e280d99eb3e5ab6fe2"},
{file = "pydantic_core-2.20.1-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:77bf3ac639c1ff567ae3b47f8d4cc3dc20f9966a2a6dd2311dcc055d3d04fb8a"},
{file = "pydantic_core-2.20.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7ed1b0132f24beeec5a78b67d9388656d03e6a7c837394f99257e2d55b461611"},
{file = "pydantic_core-2.20.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c6514f963b023aeee506678a1cf821fe31159b925c4b76fe2afa94cc70b3222b"},
{file = "pydantic_core-2.20.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10d4204d8ca33146e761c79f83cc861df20e7ae9f6487ca290a97702daf56006"},
{file = "pydantic_core-2.20.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2d036c7187b9422ae5b262badb87a20a49eb6c5238b2004e96d4da1231badef1"},
{file = "pydantic_core-2.20.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9ebfef07dbe1d93efb94b4700f2d278494e9162565a54f124c404a5656d7ff09"},
{file = "pydantic_core-2.20.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:6b9d9bb600328a1ce523ab4f454859e9d439150abb0906c5a1983c146580ebab"},
{file = "pydantic_core-2.20.1-cp39-none-win32.whl", hash = "sha256:784c1214cb6dd1e3b15dd8b91b9a53852aed16671cc3fbe4786f4f1db07089e2"},
{file = "pydantic_core-2.20.1-cp39-none-win_amd64.whl", hash = "sha256:d2fe69c5434391727efa54b47a1e7986bb0186e72a41b203df8f5b0a19a4f669"},
{file = "pydantic_core-2.20.1-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:a45f84b09ac9c3d35dfcf6a27fd0634d30d183205230a0ebe8373a0e8cfa0906"},
{file = "pydantic_core-2.20.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:d02a72df14dfdbaf228424573a07af10637bd490f0901cee872c4f434a735b94"},
{file = "pydantic_core-2.20.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d2b27e6af28f07e2f195552b37d7d66b150adbaa39a6d327766ffd695799780f"},
{file = "pydantic_core-2.20.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:084659fac3c83fd674596612aeff6041a18402f1e1bc19ca39e417d554468482"},
{file = "pydantic_core-2.20.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:242b8feb3c493ab78be289c034a1f659e8826e2233786e36f2893a950a719bb6"},
{file = "pydantic_core-2.20.1-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:38cf1c40a921d05c5edc61a785c0ddb4bed67827069f535d794ce6bcded919fc"},
{file = "pydantic_core-2.20.1-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:e0bbdd76ce9aa5d4209d65f2b27fc6e5ef1312ae6c5333c26db3f5ade53a1e99"},
{file = "pydantic_core-2.20.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:254ec27fdb5b1ee60684f91683be95e5133c994cc54e86a0b0963afa25c8f8a6"},
{file = "pydantic_core-2.20.1-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:407653af5617f0757261ae249d3fba09504d7a71ab36ac057c938572d1bc9331"},
{file = "pydantic_core-2.20.1-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:c693e916709c2465b02ca0ad7b387c4f8423d1db7b4649c551f27a529181c5ad"},
{file = "pydantic_core-2.20.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5b5ff4911aea936a47d9376fd3ab17e970cc543d1b68921886e7f64bd28308d1"},
{file = "pydantic_core-2.20.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:177f55a886d74f1808763976ac4efd29b7ed15c69f4d838bbd74d9d09cf6fa86"},
{file = "pydantic_core-2.20.1-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:964faa8a861d2664f0c7ab0c181af0bea66098b1919439815ca8803ef136fc4e"},
{file = "pydantic_core-2.20.1-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:4dd484681c15e6b9a977c785a345d3e378d72678fd5f1f3c0509608da24f2ac0"},
{file = "pydantic_core-2.20.1-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:f6d6cff3538391e8486a431569b77921adfcdef14eb18fbf19b7c0a5294d4e6a"},
{file = "pydantic_core-2.20.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:a6d511cc297ff0883bc3708b465ff82d7560193169a8b93260f74ecb0a5e08a7"},
{file = "pydantic_core-2.20.1.tar.gz", hash = "sha256:26ca695eeee5f9f1aeeb211ffc12f10bcb6f71e2989988fda61dabd65db878d4"},
]
[package.dependencies]
@@ -1974,15 +1962,18 @@ six = ">=1.5"
[[package]]
name = "python-multipart"
version = "0.0.10"
version = "0.0.9"
description = "A streaming multipart parser for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "python_multipart-0.0.10-py3-none-any.whl", hash = "sha256:2b06ad9e8d50c7a8db80e3b56dab590137b323410605af2be20d62a5f1ba1dc8"},
{file = "python_multipart-0.0.10.tar.gz", hash = "sha256:46eb3c6ce6fdda5fb1a03c7e11d490e407c6930a2703fe7aef4da71c374688fa"},
{file = "python_multipart-0.0.9-py3-none-any.whl", hash = "sha256:97ca7b8ea7b05f977dc3849c3ba99d51689822fab725c3703af7c866a0c2b215"},
{file = "python_multipart-0.0.9.tar.gz", hash = "sha256:03f54688c663f1b7977105f021043b0793151e4cb1c1a9d4a11fc13d622c4026"},
]
[package.extras]
dev = ["atomicwrites (==1.4.1)", "attrs (==23.2.0)", "coverage (==7.4.1)", "hatch", "invoke (==2.2.0)", "more-itertools (==10.2.0)", "pbr (==6.0.0)", "pluggy (==1.4.0)", "py (==1.11.0)", "pytest (==8.0.0)", "pytest-cov (==4.1.0)", "pytest-timeout (==2.2.0)", "pyyaml (==6.0.1)", "ruff (==0.2.1)"]
[[package]]
name = "pytz"
version = "2022.7.1"
@@ -2277,29 +2268,29 @@ files = [
[[package]]
name = "ruff"
version = "0.6.7"
version = "0.6.5"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
files = [
{file = "ruff-0.6.7-py3-none-linux_armv6l.whl", hash = "sha256:08277b217534bfdcc2e1377f7f933e1c7957453e8a79764d004e44c40db923f2"},
{file = "ruff-0.6.7-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:c6707a32e03b791f4448dc0dce24b636cbcdee4dd5607adc24e5ee73fd86c00a"},
{file = "ruff-0.6.7-py3-none-macosx_11_0_arm64.whl", hash = "sha256:533d66b7774ef224e7cf91506a7dafcc9e8ec7c059263ec46629e54e7b1f90ab"},
{file = "ruff-0.6.7-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:17a86aac6f915932d259f7bec79173e356165518859f94649d8c50b81ff087e9"},
{file = "ruff-0.6.7-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:b3f8822defd260ae2460ea3832b24d37d203c3577f48b055590a426a722d50ef"},
{file = "ruff-0.6.7-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9ba4efe5c6dbbb58be58dd83feedb83b5e95c00091bf09987b4baf510fee5c99"},
{file = "ruff-0.6.7-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:525201b77f94d2b54868f0cbe5edc018e64c22563da6c5c2e5c107a4e85c1c0d"},
{file = "ruff-0.6.7-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8854450839f339e1049fdbe15d875384242b8e85d5c6947bb2faad33c651020b"},
{file = "ruff-0.6.7-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2f0b62056246234d59cbf2ea66e84812dc9ec4540518e37553513392c171cb18"},
{file = "ruff-0.6.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6b1462fa56c832dc0cea5b4041cfc9c97813505d11cce74ebc6d1aae068de36b"},
{file = "ruff-0.6.7-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:02b083770e4cdb1495ed313f5694c62808e71764ec6ee5db84eedd82fd32d8f5"},
{file = "ruff-0.6.7-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:0c05fd37013de36dfa883a3854fae57b3113aaa8abf5dea79202675991d48624"},
{file = "ruff-0.6.7-py3-none-musllinux_1_2_i686.whl", hash = "sha256:f49c9caa28d9bbfac4a637ae10327b3db00f47d038f3fbb2195c4d682e925b14"},
{file = "ruff-0.6.7-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:a0e1655868164e114ba43a908fd2d64a271a23660195017c17691fb6355d59bb"},
{file = "ruff-0.6.7-py3-none-win32.whl", hash = "sha256:a939ca435b49f6966a7dd64b765c9df16f1faed0ca3b6f16acdf7731969deb35"},
{file = "ruff-0.6.7-py3-none-win_amd64.whl", hash = "sha256:590445eec5653f36248584579c06252ad2e110a5d1f32db5420de35fb0e1c977"},
{file = "ruff-0.6.7-py3-none-win_arm64.whl", hash = "sha256:b28f0d5e2f771c1fe3c7a45d3f53916fc74a480698c4b5731f0bea61e52137c8"},
{file = "ruff-0.6.7.tar.gz", hash = "sha256:44e52129d82266fa59b587e2cd74def5637b730a69c4542525dfdecfaae38bd5"},
{file = "ruff-0.6.5-py3-none-linux_armv6l.whl", hash = "sha256:7e4e308f16e07c95fc7753fc1aaac690a323b2bb9f4ec5e844a97bb7fbebd748"},
{file = "ruff-0.6.5-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:932cd69eefe4daf8c7d92bd6689f7e8182571cb934ea720af218929da7bd7d69"},
{file = "ruff-0.6.5-py3-none-macosx_11_0_arm64.whl", hash = "sha256:3a8d42d11fff8d3143ff4da41742a98f8f233bf8890e9fe23077826818f8d680"},
{file = "ruff-0.6.5-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a50af6e828ee692fb10ff2dfe53f05caecf077f4210fae9677e06a808275754f"},
{file = "ruff-0.6.5-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:794ada3400a0d0b89e3015f1a7e01f4c97320ac665b7bc3ade24b50b54cb2972"},
{file = "ruff-0.6.5-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:381413ec47f71ce1d1c614f7779d88886f406f1fd53d289c77e4e533dc6ea200"},
{file = "ruff-0.6.5-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:52e75a82bbc9b42e63c08d22ad0ac525117e72aee9729a069d7c4f235fc4d276"},
{file = "ruff-0.6.5-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:09c72a833fd3551135ceddcba5ebdb68ff89225d30758027280968c9acdc7810"},
{file = "ruff-0.6.5-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:800c50371bdcb99b3c1551d5691e14d16d6f07063a518770254227f7f6e8c178"},
{file = "ruff-0.6.5-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8e25ddd9cd63ba1f3bd51c1f09903904a6adf8429df34f17d728a8fa11174253"},
{file = "ruff-0.6.5-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:7291e64d7129f24d1b0c947ec3ec4c0076e958d1475c61202497c6aced35dd19"},
{file = "ruff-0.6.5-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:9ad7dfbd138d09d9a7e6931e6a7e797651ce29becd688be8a0d4d5f8177b4b0c"},
{file = "ruff-0.6.5-py3-none-musllinux_1_2_i686.whl", hash = "sha256:005256d977021790cc52aa23d78f06bb5090dc0bfbd42de46d49c201533982ae"},
{file = "ruff-0.6.5-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:482c1e6bfeb615eafc5899127b805d28e387bd87db38b2c0c41d271f5e58d8cc"},
{file = "ruff-0.6.5-py3-none-win32.whl", hash = "sha256:cf4d3fa53644137f6a4a27a2b397381d16454a1566ae5335855c187fbf67e4f5"},
{file = "ruff-0.6.5-py3-none-win_amd64.whl", hash = "sha256:3e42a57b58e3612051a636bc1ac4e6b838679530235520e8f095f7c44f706ff9"},
{file = "ruff-0.6.5-py3-none-win_arm64.whl", hash = "sha256:51935067740773afdf97493ba9b8231279e9beef0f2a8079188c4776c25688e0"},
{file = "ruff-0.6.5.tar.gz", hash = "sha256:4d32d87fab433c0cf285c3683dd4dae63be05fd7a1d65b3f5bf7cdd05a6b96fb"},
]
[[package]]
@@ -2587,13 +2578,13 @@ dev = ["furo (>=2024.05.06)", "nox", "packaging", "sphinx (>=5)", "twisted"]
[[package]]
name = "treq"
version = "24.9.1"
version = "23.11.0"
description = "High-level Twisted HTTP Client API"
optional = false
python-versions = ">=3.7"
python-versions = ">=3.6"
files = [
{file = "treq-24.9.1-py3-none-any.whl", hash = "sha256:eee4756fd9a857c77f180fd5202b52c518f2d3e2826dce28b89066c03bfc45d0"},
{file = "treq-24.9.1.tar.gz", hash = "sha256:15da7fc404f3e4ed59d0abe5f8eef4966fabbe618039a2a23bc7c15305cefea8"},
{file = "treq-23.11.0-py3-none-any.whl", hash = "sha256:f494c2218d61cab2cabbee37cd6606d3eea9d16cf14190323095c95d22c467e9"},
{file = "treq-23.11.0.tar.gz", hash = "sha256:0914ff929fd1632ce16797235260f8bc19d20ff7c459c1deabd65b8c68cbeac5"},
]
[package.dependencies]
@@ -2602,7 +2593,6 @@ hyperlink = ">=21.0.0"
incremental = "*"
requests = ">=2.1.0"
Twisted = {version = ">=22.10.0", extras = ["tls"]}
typing-extensions = ">=3.10.0"
[package.extras]
dev = ["httpbin (==0.7.0)", "pep8", "pyflakes", "werkzeug (==2.0.3)"]
@@ -2808,13 +2798,13 @@ types-cffi = "*"
[[package]]
name = "types-pyyaml"
version = "6.0.12.20240917"
version = "6.0.12.20240808"
description = "Typing stubs for PyYAML"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-PyYAML-6.0.12.20240917.tar.gz", hash = "sha256:d1405a86f9576682234ef83bcb4e6fff7c9305c8b1fbad5e0bcd4f7dbdc9c587"},
{file = "types_PyYAML-6.0.12.20240917-py3-none-any.whl", hash = "sha256:392b267f1c0fe6022952462bf5d6523f31e37f6cea49b14cee7ad634b6301570"},
{file = "types-PyYAML-6.0.12.20240808.tar.gz", hash = "sha256:b8f76ddbd7f65440a8bda5526a9607e4c7a322dc2f8e1a8c405644f9a6f4b9af"},
{file = "types_PyYAML-6.0.12.20240808-py3-none-any.whl", hash = "sha256:deda34c5c655265fc517b546c902aa6eed2ef8d3e921e4765fe606fe2afe8d35"},
]
[[package]]
@@ -2833,13 +2823,13 @@ urllib3 = ">=2"
[[package]]
name = "types-setuptools"
version = "75.1.0.20240917"
version = "74.1.0.20240907"
description = "Typing stubs for setuptools"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-setuptools-75.1.0.20240917.tar.gz", hash = "sha256:12f12a165e7ed383f31def705e5c0fa1c26215dd466b0af34bd042f7d5331f55"},
{file = "types_setuptools-75.1.0.20240917-py3-none-any.whl", hash = "sha256:06f78307e68d1bbde6938072c57b81cf8a99bc84bd6dc7e4c5014730b097dc0c"},
{file = "types-setuptools-74.1.0.20240907.tar.gz", hash = "sha256:0abdb082552ca966c1e5fc244e4853adc62971f6cd724fb1d8a3713b580e5a65"},
{file = "types_setuptools-74.1.0.20240907-py3-none-any.whl", hash = "sha256:15b38c8e63ca34f42f6063ff4b1dd662ea20086166d5ad6a102e670a52574120"},
]
[[package]]
@@ -3114,4 +3104,4 @@ user-search = ["pyicu"]
[metadata]
lock-version = "2.0"
python-versions = "^3.8.0"
content-hash = "93c267fac3428b764f954e6faa17937b9c97b1ed2bdafc41dd8f6cb5d2ce085b"
content-hash = "0c833ab57d2082e1ebe2627aef122ce4f93c1abe1f9d8739d5ea3fe52c79fa3f"


@@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.116.0rc2"
version = "1.115.0rc2"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"
@@ -320,7 +320,7 @@ all = [
# failing on new releases. Keeping lower bounds loose here means that dependabot
# can bump versions without having to update the content-hash in the lockfile.
# This helps prevent merge conflicts when running a batch of dependabot updates.
ruff = "0.6.7"
ruff = "0.6.5"
# Type checking only works with the pydantic.v1 compat module from pydantic v2
pydantic = "^2"


@@ -220,11 +220,9 @@ test_packages=(
./tests/msc3874
./tests/msc3890
./tests/msc3391
./tests/msc3757
./tests/msc3930
./tests/msc3902
./tests/msc3967
./tests/msc4140
)
# Enable dirty runs, so tests will reuse the same container where possible.


@@ -107,8 +107,6 @@ class RoomVersion:
# support the flag. Unknown flags are ignored by the evaluator, making conditions
# fail if used.
msc3931_push_features: Tuple[str, ...] # values from PushRuleRoomFlag
# MSC3757: Restricting who can overwrite a state event
msc3757_enabled: bool
class RoomVersions:
@@ -130,7 +128,6 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V2 = RoomVersion(
"2",
@@ -150,7 +147,6 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V3 = RoomVersion(
"3",
@@ -170,7 +166,6 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V4 = RoomVersion(
"4",
@@ -190,7 +185,6 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V5 = RoomVersion(
"5",
@@ -210,7 +204,6 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V6 = RoomVersion(
"6",
@@ -230,7 +223,6 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V7 = RoomVersion(
"7",
@@ -250,7 +242,6 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V8 = RoomVersion(
"8",
@@ -270,7 +261,6 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V9 = RoomVersion(
"9",
@@ -290,7 +280,6 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V10 = RoomVersion(
"10",
@@ -310,7 +299,6 @@ class RoomVersions:
knock_restricted_join_rule=True,
enforce_int_power_levels=True,
msc3931_push_features=(),
msc3757_enabled=False,
)
MSC1767v10 = RoomVersion(
# MSC1767 (Extensible Events) based on room version "10"
@@ -331,28 +319,6 @@ class RoomVersions:
knock_restricted_join_rule=True,
enforce_int_power_levels=True,
msc3931_push_features=(PushRuleRoomFlag.EXTENSIBLE_EVENTS,),
msc3757_enabled=False,
)
MSC3757v10 = RoomVersion(
# MSC3757 (Restricting who can overwrite a state event) based on room version "10"
"org.matrix.msc3757.10",
RoomDisposition.UNSTABLE,
EventFormatVersions.ROOM_V4_PLUS,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
implicit_room_creator=False,
updated_redaction_rules=False,
restricted_join_rule=True,
restricted_join_rule_fix=True,
knock_join_rule=True,
msc3389_relation_redactions=False,
knock_restricted_join_rule=True,
enforce_int_power_levels=True,
msc3931_push_features=(),
msc3757_enabled=True,
)
V11 = RoomVersion(
"11",
@@ -372,28 +338,6 @@ class RoomVersions:
knock_restricted_join_rule=True,
enforce_int_power_levels=True,
msc3931_push_features=(),
msc3757_enabled=False,
)
MSC3757v11 = RoomVersion(
# MSC3757 (Restricting who can overwrite a state event) based on room version "11"
"org.matrix.msc3757.11",
RoomDisposition.UNSTABLE,
EventFormatVersions.ROOM_V4_PLUS,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
implicit_room_creator=True, # Used by MSC3820
updated_redaction_rules=True, # Used by MSC3820
restricted_join_rule=True,
restricted_join_rule_fix=True,
knock_join_rule=True,
msc3389_relation_redactions=False,
knock_restricted_join_rule=True,
enforce_int_power_levels=True,
msc3931_push_features=(),
msc3757_enabled=True,
)
@@ -411,8 +355,6 @@ KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
RoomVersions.V9,
RoomVersions.V10,
RoomVersions.V11,
RoomVersions.MSC3757v10,
RoomVersions.MSC3757v11,
)
}
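For orientation: each `RoomVersion` above is a bundle of capability flags, and `KNOWN_ROOM_VERSIONS` maps version identifiers to those bundles, so removing the two `MSC3757v*` entries also makes their identifiers unresolvable. A minimal sketch of how such a flag is typically consulted (assuming a checkout where `msc3757_enabled` still exists; the helper name is illustrative):

```python
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS

def supports_msc3757(room_version_id: str) -> bool:
    """True if this room version allows owner-scoped state overwrites."""
    room_version = KNOWN_ROOM_VERSIONS.get(room_version_id)
    # Unknown room versions advertise no optional capabilities.
    return room_version is not None and room_version.msc3757_enabled

# e.g. supports_msc3757("org.matrix.msc3757.10") is True on such a checkout
```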


@@ -65,7 +65,6 @@ from synapse.storage.databases.main.appservice import (
)
from synapse.storage.databases.main.censor_events import CensorEventsStore
from synapse.storage.databases.main.client_ips import ClientIpWorkerStore
from synapse.storage.databases.main.delayed_events import DelayedEventsStore
from synapse.storage.databases.main.deviceinbox import DeviceInboxWorkerStore
from synapse.storage.databases.main.devices import DeviceWorkerStore
from synapse.storage.databases.main.directory import DirectoryWorkerStore
@@ -162,7 +161,6 @@ class GenericWorkerStore(
TaskSchedulerWorkerStore,
ExperimentalFeaturesStore,
SlidingSyncStore,
DelayedEventsStore,
):
# Properties that multiple storage classes define. Tell mypy what the
# expected type is.


@@ -2,7 +2,7 @@
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2014-2021 The Matrix.org Foundation C.I.C.
# Copyright (C) 2023-2024 New Vector, Ltd
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
@@ -780,17 +780,6 @@ class ServerConfig(Config):
else:
self.delete_stale_devices_after = None
# The maximum allowed delay duration for delayed events (MSC4140).
max_event_delay_duration = config.get("max_event_delay_duration")
if max_event_delay_duration is not None:
self.max_event_delay_ms: Optional[int] = self.parse_duration(
max_event_delay_duration
)
if self.max_event_delay_ms <= 0:
raise ConfigError("max_event_delay_duration must be a positive value")
else:
self.max_event_delay_ms = None
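The removed block accepts either a bare integer (taken as milliseconds) or a unit-suffixed string via `parse_duration`, and rejects non-positive values. A rough standalone model of that conversion, as a sketch only (the real parser supports more unit forms than listed here):

```python
UNITS = {"ms": 1, "s": 1_000, "m": 60_000, "h": 3_600_000, "d": 86_400_000}

def parse_duration_ms(value) -> int:
    """Convert e.g. 30000, "30s" or "24h" into milliseconds."""
    if isinstance(value, int):
        return value  # bare integers are already milliseconds
    for suffix, factor in UNITS.items():
        if value.endswith(suffix):
            return int(value[: -len(suffix)]) * factor
    return int(value)  # plain numeric string

assert parse_duration_ms("24h") == 86_400_000
```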
def has_tls_listener(self) -> bool:
return any(listener.is_tls() for listener in self.listeners)


@@ -388,7 +388,6 @@ LENIENT_EVENT_BYTE_LIMITS_ROOM_VERSIONS = {
RoomVersions.V9,
RoomVersions.V10,
RoomVersions.MSC1767v10,
RoomVersions.MSC3757v10,
}
@@ -791,10 +790,9 @@ def get_send_level(
def _can_send_event(event: "EventBase", auth_events: StateMap["EventBase"]) -> bool:
state_key = event.get_state_key()
power_levels_event = get_power_level_event(auth_events)
send_level = get_send_level(event.type, state_key, power_levels_event)
send_level = get_send_level(event.type, event.get("state_key"), power_levels_event)
user_level = get_user_power_level(event.user_id, auth_events)
if user_level < send_level:
@@ -805,34 +803,11 @@ def _can_send_event(event: "EventBase", auth_events: StateMap["EventBase"]) -> b
errcode=Codes.INSUFFICIENT_POWER,
)
if (
state_key is not None
and state_key.startswith("@")
and state_key != event.user_id
):
if event.room_version.msc3757_enabled:
try:
colon_idx = state_key.index(":", 1)
suffix_idx = state_key.find("_", colon_idx + 1)
state_key_user_id = (
state_key[:suffix_idx] if suffix_idx != -1 else state_key
)
if not UserID.is_valid(state_key_user_id):
raise ValueError
except ValueError:
raise SynapseError(
400,
"State key neither equals a valid user ID, nor starts with one plus an underscore",
errcode=Codes.BAD_JSON,
)
if (
# sender is owner of the state key
state_key_user_id == event.user_id
# sender has higher PL than the owner of the state key
or user_level > get_user_power_level(state_key_user_id, auth_events)
):
return True
raise AuthError(403, "You are not allowed to set others state")
# Check state_key
if hasattr(event, "state_key"):
if event.state_key.startswith("@"):
if event.state_key != event.user_id:
raise AuthError(403, "You are not allowed to set others state")
return True
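The deleted branch is the heart of MSC3757: a state key may be a user ID, optionally followed by an underscore and an arbitrary suffix (often a device ID), and only that user, or someone outranking them, may overwrite the event. A standalone sketch of the owner extraction it performed (the function name is made up, and the `UserID.is_valid` check is omitted):

```python
def state_key_owner(state_key: str) -> str:
    """Extract "@alice:example.com" from "@alice:example.com_lightbulb"."""
    # The first colon after the "@" ends the localpart; any underscore
    # after it starts the optional suffix.
    colon_idx = state_key.index(":", 1)
    suffix_idx = state_key.find("_", colon_idx + 1)
    return state_key[:suffix_idx] if suffix_idx != -1 else state_key

assert state_key_owner("@alice:example.com_lightbulb") == "@alice:example.com"
assert state_key_owner("@alice:example.com") == "@alice:example.com"
```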


@@ -33,7 +33,7 @@ from synapse.replication.http.account_data import (
ReplicationRemoveUserAccountDataRestServlet,
)
from synapse.streams import EventSource
from synapse.types import JsonDict, JsonMapping, StrCollection, StreamKeyType, UserID
from synapse.types import JsonDict, StrCollection, StreamKeyType, UserID
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -253,7 +253,7 @@ class AccountDataHandler:
return response["max_stream_id"]
async def add_tag_to_room(
self, user_id: str, room_id: str, tag: str, content: JsonMapping
self, user_id: str, room_id: str, tag: str, content: JsonDict
) -> int:
"""Add a tag to a room for a user.


@@ -21,34 +21,13 @@
import abc
import logging
from typing import (
TYPE_CHECKING,
Any,
Dict,
List,
Mapping,
Optional,
Sequence,
Set,
Tuple,
)
from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Optional, Sequence, Set
import attr
from synapse.api.constants import Direction, EventTypes, Membership
from synapse.api.errors import SynapseError
from synapse.api.constants import Direction, Membership
from synapse.events import EventBase
from synapse.types import (
JsonMapping,
Requester,
RoomStreamToken,
ScheduledTask,
StateMap,
TaskStatus,
UserID,
UserInfo,
create_requester,
)
from synapse.types import JsonMapping, RoomStreamToken, StateMap, UserID, UserInfo
from synapse.visibility import filter_events_for_client
if TYPE_CHECKING:
@@ -56,8 +35,6 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
REDACT_ALL_EVENTS_ACTION_NAME = "redact_all_events"
class AdminHandler:
def __init__(self, hs: "HomeServer"):
@@ -66,20 +43,6 @@ class AdminHandler:
self._storage_controllers = hs.get_storage_controllers()
self._state_storage_controller = self._storage_controllers.state
self._msc3866_enabled = hs.config.experimental.msc3866.enabled
self.event_creation_handler = hs.get_event_creation_handler()
self._task_scheduler = hs.get_task_scheduler()
self._task_scheduler.register_action(
self._redact_all_events, REDACT_ALL_EVENTS_ACTION_NAME
)
async def get_redact_task(self, redact_id: str) -> Optional[ScheduledTask]:
"""Get the current status of an active redaction process
Args:
redact_id: redact_id returned by start_redact_events.
"""
return await self._task_scheduler.get_task(redact_id)
async def get_whois(self, user: UserID) -> JsonMapping:
connections = []
@@ -350,153 +313,6 @@ class AdminHandler:
return writer.finished()
async def start_redact_events(
self,
user_id: str,
rooms: list,
requester: JsonMapping,
reason: Optional[str],
limit: Optional[int],
) -> str:
"""
Start a task redacting the events of the given user in the given rooms
Args:
user_id: the user ID of the user whose events should be redacted
rooms: the rooms in which to redact the user's events
requester: the user requesting the redaction
reason: the reason for requesting the redaction, e.g. spam
limit: limit on the number of events in each room to redact
Returns:
a unique ID which can be used to query the status of the task
"""
active_tasks = await self._task_scheduler.get_tasks(
actions=[REDACT_ALL_EVENTS_ACTION_NAME],
resource_id=user_id,
statuses=[TaskStatus.ACTIVE],
)
if len(active_tasks) > 0:
raise SynapseError(
400, "Redact already in progress for user %s" % (user_id,)
)
if not limit:
limit = 1000
redact_id = await self._task_scheduler.schedule_task(
REDACT_ALL_EVENTS_ACTION_NAME,
resource_id=user_id,
params={
"rooms": rooms,
"requester": requester,
"user_id": user_id,
"reason": reason,
"limit": limit,
},
)
logger.info(
"starting redact events with redact_id %s",
redact_id,
)
return redact_id
async def _redact_all_events(
self, task: ScheduledTask
) -> Tuple[TaskStatus, Optional[Mapping[str, Any]], Optional[str]]:
"""
Task to redact all of a user's events in the given rooms, tracking which
events, if any, failed to be redacted
"""
assert task.params is not None
rooms = task.params.get("rooms")
assert rooms is not None
r = task.params.get("requester")
assert r is not None
admin = Requester.deserialize(self._store, r)
user_id = task.params.get("user_id")
assert user_id is not None
requester = create_requester(
user_id, authenticated_entity=admin.user.to_string()
)
reason = task.params.get("reason")
limit = task.params.get("limit")
assert limit is not None
result: Mapping[str, Any] = (
task.result if task.result else {"failed_redactions": {}}
)
for room in rooms:
room_version = await self._store.get_room_version(room)
event_ids = await self._store.get_events_sent_by_user_in_room(
user_id,
room,
limit,
["m.room.member", "m.room.message"],
)
if not event_ids:
# there's nothing to redact in this room; move on to the next one
# (an early `return` here would wrongly skip the remaining rooms)
continue
events = await self._store.get_events_as_list(event_ids)
for event in events:
# we care about join events but not other membership events
if event.type == "m.room.member":
content = event.content
if content and content.get("membership") != Membership.JOIN:
continue
relations = await self._store.get_relations_for_event(
room, event.event_id, event, event_type=EventTypes.Redaction
)
# if we've already successfully redacted this event then skip processing it
if relations[0]:
continue
event_dict = {
"type": EventTypes.Redaction,
"content": {"reason": reason} if reason else {},
"room_id": room,
"sender": user_id,
}
if room_version.updated_redaction_rules:
event_dict["content"]["redacts"] = event.event_id
else:
event_dict["redacts"] = event.event_id
try:
# set the prev event to the offending message to allow for redactions
# to be processed in the case where the user has been kicked/banned before
# redactions are requested
(
redaction,
_,
) = await self.event_creation_handler.create_and_send_nonmember_event(
requester,
event_dict,
prev_event_ids=[event.event_id],
ratelimit=False,
)
except Exception as ex:
logger.info(
f"Redaction of event {event.event_id} failed due to: {ex}"
)
result["failed_redactions"][event.event_id] = str(ex)
await self._task_scheduler.update_task(task.id, result=result)
return TaskStatus.COMPLETE, result, None
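The deleted handler is a textbook use of Synapse's `TaskScheduler`: register an action once at startup, deduplicate on `resource_id`, schedule with JSON-serialisable params, persist partial results via `update_task`, and let callers poll with `get_task`. A condensed sketch of the scheduling half (not a drop-in replacement; error handling trimmed):

```python
from synapse.api.errors import SynapseError
from synapse.types import TaskStatus

REDACT_ALL_EVENTS_ACTION_NAME = "redact_all_events"

async def queue_redaction(hs, user_id: str, rooms: list) -> str:
    scheduler = hs.get_task_scheduler()
    # resource_id keys the dedup check: one active redaction per user.
    active = await scheduler.get_tasks(
        actions=[REDACT_ALL_EVENTS_ACTION_NAME],
        resource_id=user_id,
        statuses=[TaskStatus.ACTIVE],
    )
    if active:
        raise SynapseError(400, "Redact already in progress for %s" % (user_id,))
    # Params must be JSON-serialisable; the scheduler persists them.
    return await scheduler.schedule_task(
        REDACT_ALL_EVENTS_ACTION_NAME,
        resource_id=user_id,
        params={"rooms": rooms, "user_id": user_id, "limit": 1000},
    )
```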
class ExfiltrationWriter(metaclass=abc.ABCMeta):
"""Interface used to specify how to write exported data."""


@@ -1,484 +0,0 @@
import logging
from typing import TYPE_CHECKING, List, Optional, Set, Tuple
from twisted.internet.interfaces import IDelayedCall
from synapse.api.constants import EventTypes
from synapse.api.errors import ShadowBanError
from synapse.config.workers import MAIN_PROCESS_INSTANCE_NAME
from synapse.logging.opentracing import set_tag
from synapse.metrics import event_processing_positions
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.delayed_events import (
ReplicationAddedDelayedEventRestServlet,
)
from synapse.storage.databases.main.delayed_events import (
DelayedEventDetails,
DelayID,
EventType,
StateKey,
Timestamp,
UserLocalpart,
)
from synapse.storage.databases.main.state_deltas import StateDelta
from synapse.types import (
JsonDict,
Requester,
RoomID,
UserID,
create_requester,
)
from synapse.util.events import generate_fake_event_id
from synapse.util.metrics import Measure
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class DelayedEventsHandler:
def __init__(self, hs: "HomeServer"):
self._store = hs.get_datastores().main
self._storage_controllers = hs.get_storage_controllers()
self._config = hs.config
self._clock = hs.get_clock()
self._request_ratelimiter = hs.get_request_ratelimiter()
self._event_creation_handler = hs.get_event_creation_handler()
self._room_member_handler = hs.get_room_member_handler()
self._next_delayed_event_call: Optional[IDelayedCall] = None
# The current position in the current_state_delta stream
self._event_pos: Optional[int] = None
# Guard to ensure we only process event deltas one at a time
self._event_processing = False
if hs.config.worker.worker_app is None:
self._repl_client = None
async def _schedule_db_events() -> None:
# We kick this off to pick up outstanding work from before the last restart.
# Block until we're up to date.
await self._unsafe_process_new_event()
hs.get_notifier().add_replication_callback(self.notify_new_event)
# Kick off again (without blocking) to catch any missed notifications
# that may have fired before the callback was added.
self._clock.call_later(0, self.notify_new_event)
# Delayed events that are already marked as processed on startup might not have been
# sent properly on the last run of the server, so unmark them to send them again.
# Caveat: this will double-send delayed events that successfully persisted, but failed
# to be removed from the DB table of delayed events.
# TODO: To avoid double-sending, scan the timeline to find which of these events were
# already sent. To do so, must store delay_ids in sent events to retrieve them later.
await self._store.unprocess_delayed_events()
events, next_send_ts = await self._store.process_timeout_delayed_events(
self._get_current_ts()
)
if next_send_ts:
self._schedule_next_at(next_send_ts)
# The events can be sent in the background now that marking them as processed has completed
run_as_background_process(
"_send_events",
self._send_events,
events,
)
self._initialized_from_db = run_as_background_process(
"_schedule_db_events", _schedule_db_events
)
else:
self._repl_client = ReplicationAddedDelayedEventRestServlet.make_client(hs)
@property
def _is_master(self) -> bool:
return self._repl_client is None
def notify_new_event(self) -> None:
"""
Called when there may be more state event deltas to process,
which should cancel pending delayed events for the same state.
"""
if self._event_processing:
return
self._event_processing = True
async def process() -> None:
try:
await self._unsafe_process_new_event()
finally:
self._event_processing = False
run_as_background_process("delayed_events.notify_new_event", process)
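The `_event_processing` flag above is a single-flight guard: wake-ups that arrive while a run is in progress are coalesced, on the assumption that the running loop will re-check for new work before exiting. The same shape in plain asyncio, as a sketch independent of Synapse's helpers:

```python
import asyncio
from typing import Awaitable, Callable

class SingleFlight:
    """Coalesce overlapping wake-ups into one running worker."""

    def __init__(self, work: Callable[[], Awaitable[None]]) -> None:
        self._work = work
        self._running = False

    def poke(self) -> None:
        # Must be called from within a running event loop.
        if self._running:
            return  # the in-flight run will observe the new data
        self._running = True

        async def _run() -> None:
            try:
                await self._work()
            finally:
                self._running = False

        asyncio.create_task(_run())
```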
async def _unsafe_process_new_event(self) -> None:
# If self._event_pos is None, it means we haven't fetched it from the DB yet
if self._event_pos is None:
self._event_pos = await self._store.get_delayed_events_stream_pos()
room_max_stream_ordering = self._store.get_room_max_stream_ordering()
if self._event_pos > room_max_stream_ordering:
# apparently, we've processed more events than exist in the database!
# this can happen if events are removed with history purge or similar.
logger.warning(
"Event stream ordering appears to have gone backwards (%i -> %i): "
"rewinding delayed events processor",
self._event_pos,
room_max_stream_ordering,
)
self._event_pos = room_max_stream_ordering
# Loop round handling deltas until we're up to date
while True:
with Measure(self._clock, "delayed_events_delta"):
room_max_stream_ordering = self._store.get_room_max_stream_ordering()
if self._event_pos == room_max_stream_ordering:
return
logger.debug(
"Processing delayed events %s->%s",
self._event_pos,
room_max_stream_ordering,
)
(
max_pos,
deltas,
) = await self._storage_controllers.state.get_current_state_deltas(
self._event_pos, room_max_stream_ordering
)
logger.debug(
"Handling %d state deltas for delayed events processing",
len(deltas),
)
await self._handle_state_deltas(deltas)
self._event_pos = max_pos
# Expose current event processing position to prometheus
event_processing_positions.labels("delayed_events").set(max_pos)
await self._store.update_delayed_events_stream_pos(max_pos)
async def _handle_state_deltas(self, deltas: List[StateDelta]) -> None:
"""
Process current state deltas to cancel pending delayed events
that target the same state.
"""
for delta in deltas:
logger.debug(
"Handling: %r %r, %s", delta.event_type, delta.state_key, delta.event_id
)
next_send_ts = await self._store.cancel_delayed_state_events(
room_id=delta.room_id,
event_type=delta.event_type,
state_key=delta.state_key,
)
if self._next_send_ts_changed(next_send_ts):
self._schedule_next_at_or_none(next_send_ts)
async def add(
self,
requester: Requester,
*,
room_id: str,
event_type: str,
state_key: Optional[str],
origin_server_ts: Optional[int],
content: JsonDict,
delay: int,
) -> str:
"""
Creates a new delayed event and schedules its delivery.
Args:
requester: The requester of the delayed event, who will be its owner.
room_id: The ID of the room to send the event to.
event_type: The type of event to be sent.
state_key: The state key of the event to be sent, or None if it is not a state event.
origin_server_ts: The custom timestamp to send the event with.
If None, the timestamp will be the actual time when the event is sent.
content: The content of the event to be sent.
delay: How long (in milliseconds) to wait before automatically sending the event.
Returns: The ID of the added delayed event.
Raises:
SynapseError: if the delayed event fails validation checks.
"""
await self._request_ratelimiter.ratelimit(requester)
self._event_creation_handler.validator.validate_builder(
self._event_creation_handler.event_builder_factory.for_room_version(
await self._store.get_room_version(room_id),
{
"type": event_type,
"content": content,
"room_id": room_id,
"sender": str(requester.user),
**({"state_key": state_key} if state_key is not None else {}),
},
)
)
creation_ts = self._get_current_ts()
delay_id, next_send_ts = await self._store.add_delayed_event(
user_localpart=requester.user.localpart,
device_id=requester.device_id,
creation_ts=creation_ts,
room_id=room_id,
event_type=event_type,
state_key=state_key,
origin_server_ts=origin_server_ts,
content=content,
delay=delay,
)
if self._repl_client is not None:
# NOTE: If this throws, the delayed event will remain in the DB and
# will be picked up once the main worker gets another delayed event.
await self._repl_client(
instance_name=MAIN_PROCESS_INSTANCE_NAME,
next_send_ts=next_send_ts,
)
elif self._next_send_ts_changed(next_send_ts):
self._schedule_next_at(next_send_ts)
return delay_id
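# Hedged usage sketch (not part of the diff): scheduling a delayed state event
# through the handler above. `hs` and `requester` are assumed to come from the
# surrounding Synapse machinery; the room ID, content and delay are made up.
async def schedule_topic_change(hs, requester) -> str:
    handler = hs.get_delayed_events_handler()
    delay_id = await handler.add(
        requester,
        room_id="!example:example.org",
        event_type="m.room.topic",
        state_key="",
        origin_server_ts=None,  # stamp the event with the actual send time
        content={"topic": "Set in one minute"},
        delay=60_000,  # milliseconds
    )
    # The returned ID can later be passed to cancel(), restart() or send().
    return delay_id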
def on_added(self, next_send_ts: int) -> None:
next_send_ts = Timestamp(next_send_ts)
if self._next_send_ts_changed(next_send_ts):
self._schedule_next_at(next_send_ts)
async def cancel(self, requester: Requester, delay_id: str) -> None:
"""
Cancels the scheduled delivery of the matching delayed event.
Args:
requester: The owner of the delayed event to act on.
delay_id: The ID of the delayed event to act on.
Raises:
NotFoundError: if no matching delayed event could be found.
"""
assert self._is_master
await self._request_ratelimiter.ratelimit(requester)
await self._initialized_from_db
next_send_ts = await self._store.cancel_delayed_event(
delay_id=delay_id,
user_localpart=requester.user.localpart,
)
if self._next_send_ts_changed(next_send_ts):
self._schedule_next_at_or_none(next_send_ts)
async def restart(self, requester: Requester, delay_id: str) -> None:
"""
Restarts the scheduled delivery of the matching delayed event.
Args:
requester: The owner of the delayed event to act on.
delay_id: The ID of the delayed event to act on.
Raises:
NotFoundError: if no matching delayed event could be found.
"""
assert self._is_master
await self._request_ratelimiter.ratelimit(requester)
await self._initialized_from_db
next_send_ts = await self._store.restart_delayed_event(
delay_id=delay_id,
user_localpart=requester.user.localpart,
current_ts=self._get_current_ts(),
)
if self._next_send_ts_changed(next_send_ts):
self._schedule_next_at(next_send_ts)
async def send(self, requester: Requester, delay_id: str) -> None:
"""
Immediately sends the matching delayed event, instead of waiting for its scheduled delivery.
Args:
requester: The owner of the delayed event to act on.
delay_id: The ID of the delayed event to act on.
Raises:
NotFoundError: if no matching delayed event could be found.
"""
assert self._is_master
await self._request_ratelimiter.ratelimit(requester)
await self._initialized_from_db
event, next_send_ts = await self._store.process_target_delayed_event(
delay_id=delay_id,
user_localpart=requester.user.localpart,
)
if self._next_send_ts_changed(next_send_ts):
self._schedule_next_at_or_none(next_send_ts)
await self._send_event(
DelayedEventDetails(
delay_id=DelayID(delay_id),
user_localpart=UserLocalpart(requester.user.localpart),
room_id=event.room_id,
type=event.type,
state_key=event.state_key,
origin_server_ts=event.origin_server_ts,
content=event.content,
device_id=event.device_id,
)
)
async def _send_on_timeout(self) -> None:
self._next_delayed_event_call = None
events, next_send_ts = await self._store.process_timeout_delayed_events(
self._get_current_ts()
)
if next_send_ts:
self._schedule_next_at(next_send_ts)
await self._send_events(events)
async def _send_events(self, events: List[DelayedEventDetails]) -> None:
sent_state: Set[Tuple[RoomID, EventType, StateKey]] = set()
for event in events:
if event.state_key is not None:
state_info = (event.room_id, event.type, event.state_key)
if state_info in sent_state:
continue
else:
state_info = None
try:
# TODO: send in background if message event or non-conflicting state event
await self._send_event(event)
if state_info is not None:
sent_state.add(state_info)
except Exception:
logger.exception("Failed to send delayed event")
for room_id, event_type, state_key in sent_state:
await self._store.delete_processed_delayed_state_events(
room_id=str(room_id),
event_type=event_type,
state_key=state_key,
)
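# Sketch (not part of the diff) of the de-duplication above: when several
# delayed state events target the same (room_id, event_type, state_key), only
# the first in send order is sent; the rest are skipped, and all processed
# rows for that piece of state are deleted afterwards.
events = [
    ("!r:hs", "m.room.topic", ""),  # sent
    ("!r:hs", "m.room.topic", ""),  # skipped: same state already sent
    ("!r:hs", "m.room.name", ""),   # sent: different piece of state
]
sent_state = set()
for state_info in events:
    if state_info in sent_state:
        continue
    sent_state.add(state_info)
print(sorted(sent_state))  # [('!r:hs', 'm.room.name', ''), ('!r:hs', 'm.room.topic', '')]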
def _schedule_next_at_or_none(self, next_send_ts: Optional[Timestamp]) -> None:
if next_send_ts is not None:
self._schedule_next_at(next_send_ts)
elif self._next_delayed_event_call is not None:
self._next_delayed_event_call.cancel()
self._next_delayed_event_call = None
def _schedule_next_at(self, next_send_ts: Timestamp) -> None:
delay = next_send_ts - self._get_current_ts()
delay_sec = delay / 1000 if delay > 0 else 0
if self._next_delayed_event_call is None:
self._next_delayed_event_call = self._clock.call_later(
delay_sec,
run_as_background_process,
"_send_on_timeout",
self._send_on_timeout,
)
else:
self._next_delayed_event_call.reset(delay_sec)
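# Sketch (not part of the diff): `_schedule_next_at` keeps a single timer and
# moves it rather than stacking new ones. With Twisted this relies on
# IDelayedCall.reset(), which reschedules relative to "now"; reactor.run() is
# omitted here, so nothing actually fires.
from twisted.internet import reactor

def fire() -> None:
    print("sending due delayed events")

call = reactor.callLater(30, fire)  # next event was due in 30s
call.reset(5)                       # an earlier event arrived: fire in 5s
# call.getTime() returns the absolute time (in seconds) at which the timer
# will fire; _next_send_ts_changed multiplies it by 1000 to compare against
# millisecond timestamps.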
async def get_all_for_user(self, requester: Requester) -> List[JsonDict]:
"""Return all pending delayed events requested by the given user."""
await self._request_ratelimiter.ratelimit(requester)
return await self._store.get_all_delayed_events_for_user(
requester.user.localpart
)
async def _send_event(
self,
event: DelayedEventDetails,
txn_id: Optional[str] = None,
) -> None:
user_id = UserID(event.user_localpart, self._config.server.server_name)
user_id_str = user_id.to_string()
# Create a new requester from what data is currently available
requester = create_requester(
user_id,
is_guest=await self._store.is_guest(user_id_str),
device_id=event.device_id,
)
try:
if event.state_key is not None and event.type == EventTypes.Member:
membership = event.content.get("membership")
assert membership is not None
event_id, _ = await self._room_member_handler.update_membership(
requester,
target=UserID.from_string(event.state_key),
room_id=event.room_id.to_string(),
action=membership,
content=event.content,
origin_server_ts=event.origin_server_ts,
)
else:
event_dict: JsonDict = {
"type": event.type,
"content": event.content,
"room_id": event.room_id.to_string(),
"sender": user_id_str,
}
if event.origin_server_ts is not None:
event_dict["origin_server_ts"] = event.origin_server_ts
if event.state_key is not None:
event_dict["state_key"] = event.state_key
(
sent_event,
_,
) = await self._event_creation_handler.create_and_send_nonmember_event(
requester,
event_dict,
txn_id=txn_id,
)
event_id = sent_event.event_id
except ShadowBanError:
event_id = generate_fake_event_id()
finally:
# TODO: If this is a temporary error, retry. Otherwise, consider notifying clients of the failure
try:
await self._store.delete_processed_delayed_event(
event.delay_id, event.user_localpart
)
except Exception:
logger.exception("Failed to delete processed delayed event")
set_tag("event_id", event_id)
def _get_current_ts(self) -> Timestamp:
return Timestamp(self._clock.time_msec())
def _next_send_ts_changed(self, next_send_ts: Optional[Timestamp]) -> bool:
# The DB alone knows whether the next send time changed after adding/modifying
# a delayed event, but if we ever missed updating our delayed call's firing
# time, we could go on to miss further updates. So keep track of changes to
# the next send time here instead of in the DB.
cached_next_send_ts = (
int(self._next_delayed_event_call.getTime() * 1000)
if self._next_delayed_event_call is not None
else None
)
return next_send_ts != cached_next_send_ts

View File

@@ -495,24 +495,6 @@ class SlidingSyncHandler:
room_sync_config.timeline_limit,
)
# Handle state resets. For example, if we see
# `room_membership_for_user_at_to_token.event_id=None and
# room_membership_for_user_at_to_token.membership is not None`, we should
# indicate to the client that a state reset happened. Perhaps we should indicate
# this by setting `initial: True` and empty `required_state: []`.
state_reset_out_of_room = False
if (
room_membership_for_user_at_to_token.event_id is None
and room_membership_for_user_at_to_token.membership is not None
):
# We only expect the `event_id` to be `None` if you've been state reset out
# of the room (meaning you're no longer in the room). We could put this as
# part of the if-statement above but we want to handle every case where
# `event_id` is `None`.
assert room_membership_for_user_at_to_token.membership is Membership.LEAVE
state_reset_out_of_room = True
# Determine whether we should limit the timeline to the token range.
#
# We should return historical messages (before token range) in the
@@ -545,7 +527,7 @@ class SlidingSyncHandler:
from_bound = None
initial = True
ignore_timeline_bound = False
if from_token and not newly_joined and not state_reset_out_of_room:
if from_token and not newly_joined:
room_status = previous_connection_state.rooms.have_sent_room(room_id)
if room_status.status == HaveSentRoomFlag.LIVE:
from_bound = from_token.stream_token.room_key
@@ -750,6 +732,12 @@ class SlidingSyncHandler:
stripped_state.append(strip_event(invite_or_knock_event))
# TODO: Handle state resets. For example, if we see
# `room_membership_for_user_at_to_token.event_id=None and
# room_membership_for_user_at_to_token.membership is not None`, we should
# indicate to the client that a state reset happened. Perhaps we should indicate
# this by setting `initial: True` and empty `required_state`.
# Get the changes to current state in the token range from the
# `current_state_delta_stream` table.
#
@@ -1063,22 +1051,6 @@ class SlidingSyncHandler:
if new_bump_stamp is not None:
bump_stamp = new_bump_stamp
if bump_stamp < 0:
# We never want to send down negative stream orderings, as you can't
# sensibly compare positive and negative stream orderings (they have
# different meanings).
#
# A negative bump stamp here can only happen if the stream ordering
# of the membership event is negative (and there are no further bump
# stamps), which can happen if the server leaves and deletes a room,
# and then rejoins it.
#
# To deal with this, we just set the bump stamp to zero, which will
# shove this room to the bottom of the list. This is OK as the
# moment a new message happens in the room it will get put into a
# sensible order again.
bump_stamp = 0
unstable_expanded_timeline = False
prev_room_sync_config = previous_connection_state.room_configs.get(room_id)
# Record the `room_sync_config` if we're `ignore_timeline_bound` (which means
@@ -1199,8 +1171,8 @@ class SlidingSyncHandler:
# `SCHEMA_COMPAT_VERSION` and run the foreground update for
# `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots`
# (tracked by https://github.com/element-hq/synapse/issues/17623)
latest_room_bump_stamp is None
and await self.store.have_finished_sliding_sync_background_jobs()
await self.store.have_finished_sliding_sync_background_jobs()
and latest_room_bump_stamp is None
):
return None

View File

@@ -528,9 +528,10 @@ class SlidingSyncExtensionHandler:
immutable_tag_map = await self.store.get_tags_for_room(
user_id, room_id
)
room_account_data[AccountDataTypes.TAG] = {
"tags": immutable_tag_map
}
if immutable_tag_map:
room_account_data[AccountDataTypes.TAG] = {
"tags": immutable_tag_map
}
# Only add an entry if there were any updates.
if room_account_data:

View File

@@ -40,7 +40,6 @@ from synapse.api.constants import (
EventTypes,
Membership,
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.events import StrippedStateEvent
from synapse.events.utils import parse_stripped_state_event
from synapse.logging.opentracing import start_active_span, trace
@@ -56,6 +55,7 @@ from synapse.storage.roommember import (
)
from synapse.types import (
MutableStateMap,
PersistedEventPosition,
RoomStreamToken,
StateMap,
StrCollection,
@@ -80,12 +80,6 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
class Sentinel(enum.Enum):
# defining a sentinel in this way allows mypy to correctly handle the
# type of a dictionary lookup and subsequent type narrowing.
UNSET_SENTINEL = object()
# Helper definition for the types that we might return. We do this to avoid
# copying data between types (which can be expensive for many rooms).
RoomsForUserType = Union[RoomsForUserStateReset, RoomsForUser, RoomsForUserSlidingSync]
@@ -124,6 +118,12 @@ class SlidingSyncInterestedRooms:
dm_room_ids: AbstractSet[str]
class Sentinel(enum.Enum):
# defining a sentinel in this way allows mypy to correctly handle the
# type of a dictionary lookup and subsequent type narrowing.
UNSET_SENTINEL = object()
def filter_membership_for_sync(
*,
user_id: str,
@@ -220,38 +220,19 @@ class SlidingSyncRoomLists:
# include rooms that are outside the list ranges.
all_rooms: Set[str] = set()
# Note: this won't include rooms the user has left themselves. We add back
# `newly_left` rooms below. This is more efficient than fetching all rooms and
# then filtering out the old left rooms.
room_membership_for_user_map = await self.store.get_sliding_sync_rooms_for_user(
user_id
)
# Remove invites from ignored users
ignored_users = await self.store.ignored_users(user_id)
if ignored_users:
# TODO: It would be nice to avoid these copies
room_membership_for_user_map = dict(room_membership_for_user_map)
# Make a copy so we don't run into an error: `dictionary changed size during
# iteration`, when we remove items
for room_id in list(room_membership_for_user_map.keys()):
room_for_user_sliding_sync = room_membership_for_user_map[room_id]
if (
room_for_user_sliding_sync.membership == Membership.INVITE
and room_for_user_sliding_sync.sender in ignored_users
):
room_membership_for_user_map.pop(room_id, None)
changes = await self._get_rewind_changes_to_current_membership_to_token(
sync_config.user, room_membership_for_user_map, to_token=to_token
)
if changes:
# TODO: It would be nice to avoid these copies
room_membership_for_user_map = dict(room_membership_for_user_map)
for room_id, change in changes.items():
if change is None:
# Remove rooms that the user joined after the `to_token`
room_membership_for_user_map.pop(room_id, None)
room_membership_for_user_map.pop(room_id)
continue
existing_room = room_membership_for_user_map.get(room_id)
@@ -264,11 +245,36 @@ class SlidingSyncRoomLists:
event_id=change.event_id,
event_pos=change.event_pos,
room_version_id=change.room_version_id,
# We keep the state of the room though
# We keep the current state of the room though
has_known_state=existing_room.has_known_state,
room_type=existing_room.room_type,
is_encrypted=existing_room.is_encrypted,
)
else:
# This can happen if we get "state reset" out of the room
# after the `to_token`. In other words, there is no membership
# for the room after the `to_token` but we see membership in
# the token range.
# Get the state at the time. Note that room type never changes,
# so we can just get current room type
room_type = await self.store.get_room_type(room_id)
is_encrypted = await self.get_is_encrypted_for_room_at_token(
room_id, to_token.room_key
)
# Add back rooms that the user was state-reset out of after `to_token`
room_membership_for_user_map[room_id] = RoomsForUserSlidingSync(
room_id=room_id,
sender=change.sender,
membership=change.membership,
event_id=change.event_id,
event_pos=change.event_pos,
room_version_id=change.room_version_id,
has_known_state=True,
room_type=room_type,
is_encrypted=is_encrypted,
)
(
newly_joined_room_ids,
@@ -278,88 +284,44 @@ class SlidingSyncRoomLists:
)
dm_room_ids = await self._get_dm_rooms_for_user(user_id)
# Add back `newly_left` rooms (rooms left in the from -> to token range).
#
# We do this because `get_sliding_sync_rooms_for_user(...)` doesn't include
# rooms that the user left themselves as it's more efficient to add them back
# here than to fetch all rooms and then filter out the old left rooms. The user
# only leaves a room once in a blue moon so this barely needs to run.
#
missing_newly_left_rooms = (
# Handle state resets in the from -> to token range.
state_reset_rooms = (
newly_left_room_map.keys() - room_membership_for_user_map.keys()
)
if missing_newly_left_rooms:
# TODO: It would be nice to avoid these copies
if state_reset_rooms:
room_membership_for_user_map = dict(room_membership_for_user_map)
for room_id in missing_newly_left_rooms:
newly_left_room_for_user = newly_left_room_map[room_id]
# This should be a given
assert newly_left_room_for_user.membership == Membership.LEAVE
# Add back `newly_left` rooms
#
# Check for membership and state in the Sliding Sync tables as it's just
# another membership
newly_left_room_for_user_sliding_sync = (
await self.store.get_sliding_sync_room_for_user(user_id, room_id)
for room_id in (
newly_left_room_map.keys() - room_membership_for_user_map.keys()
):
# Get the state at the time. Note that room type never changes,
# so we can just get current room type
room_type = await self.store.get_room_type(room_id)
is_encrypted = await self.get_is_encrypted_for_room_at_token(
room_id, newly_left_room_map[room_id].to_room_stream_token()
)
# If the membership exists, it's just the normal case of the user having
# left the room on their own
if newly_left_room_for_user_sliding_sync is not None:
room_membership_for_user_map[room_id] = (
newly_left_room_for_user_sliding_sync
)
change = changes.get(room_id)
if change is not None:
# Update room membership events to the point in time of the `to_token`
room_membership_for_user_map[room_id] = RoomsForUserSlidingSync(
room_id=room_id,
sender=change.sender,
membership=change.membership,
event_id=change.event_id,
event_pos=change.event_pos,
room_version_id=change.room_version_id,
# We keep the state of the room though
has_known_state=newly_left_room_for_user_sliding_sync.has_known_state,
room_type=newly_left_room_for_user_sliding_sync.room_type,
is_encrypted=newly_left_room_for_user_sliding_sync.is_encrypted,
)
# If we are `newly_left` from the room but can't find any membership,
# then we have been "state reset" out of the room
else:
# Get the state at the time. We can't read from the Sliding Sync
# tables because the user has no membership in the room according to
# the state (thanks to the state reset).
#
# Note: `room_type` never changes, so we can just get current room
# type
room_type = await self.store.get_room_type(room_id)
has_known_state = room_type is not ROOM_UNKNOWN_SENTINEL
if isinstance(room_type, StateSentinel):
room_type = None
# Get the encryption status at the time of the token
is_encrypted = await self.get_is_encrypted_for_room_at_token(
room_id,
newly_left_room_for_user.event_pos.to_room_stream_token(),
)
room_membership_for_user_map[room_id] = RoomsForUserSlidingSync(
room_id=room_id,
sender=newly_left_room_for_user.sender,
membership=newly_left_room_for_user.membership,
event_id=newly_left_room_for_user.event_id,
event_pos=newly_left_room_for_user.event_pos,
room_version_id=newly_left_room_for_user.room_version_id,
has_known_state=has_known_state,
room_type=room_type,
is_encrypted=is_encrypted,
)
room_membership_for_user_map[room_id] = RoomsForUserSlidingSync(
room_id=room_id,
sender=None,
membership=Membership.LEAVE,
event_id=None,
event_pos=newly_left_room_map[room_id],
room_version_id=await self.store.get_room_version_id(room_id),
has_known_state=True,
room_type=room_type,
is_encrypted=is_encrypted,
)
if sync_config.lists:
sync_room_map = room_membership_for_user_map
sync_room_map = {
room_id: room_membership_for_user
for room_id, room_membership_for_user in room_membership_for_user_map.items()
if filter_membership_for_sync(
user_id=user_id,
room_membership_for_user=room_membership_for_user,
newly_left=room_id in newly_left_room_map,
)
}
with start_active_span("assemble_sliding_window_lists"):
for list_key, list_config in sync_config.lists.items():
# Apply filters
@@ -368,7 +330,6 @@ class SlidingSyncRoomLists:
filtered_sync_room_map = await self.filter_rooms_using_tables(
user_id,
sync_room_map,
previous_connection_state,
list_config.filters,
to_token,
dm_room_ids,
@@ -396,18 +357,8 @@ class SlidingSyncRoomLists:
ops: List[SlidingSyncResult.SlidingWindowList.Operation] = []
if list_config.ranges:
# Optimization: If we are asking for the full range, we don't
# need to sort the list.
if (
# We're looking for a single range that covers the entire list
len(list_config.ranges) == 1
# Range starts at 0
and list_config.ranges[0][0] == 0
# And the range extends to the end of the list or more. Each
# side is inclusive.
and list_config.ranges[0][1]
>= len(filtered_sync_room_map) - 1
):
if list_config.ranges == [(0, len(filtered_sync_room_map) - 1)]:
# If we are asking for the full range, we don't need to sort the list.
sorted_room_info: List[RoomsForUserType] = list(
filtered_sync_room_map.values()
)
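# Sketch (not part of the diff): the equality check on this side of the hunk
# is stricter than the three-part condition on the other side. A requested
# range that extends past the end of the list still covers the whole list,
# but only the looser form detects that and skips the sort.
def covers_full_list_old(ranges, n):
    return len(ranges) == 1 and ranges[0][0] == 0 and ranges[0][1] >= n - 1

def covers_full_list_new(ranges, n):
    return ranges == [(0, n - 1)]

print(covers_full_list_old([(0, 99)], 50))  # True: fast path taken
print(covers_full_list_new([(0, 99)], 50))  # False: falls through to the sort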
@@ -420,10 +371,6 @@ class SlidingSyncRoomLists:
Dict[str, RoomsForUserType], filtered_sync_room_map
),
to_token,
# We only need to sort the rooms up to the end
# of the largest range. Both sides of the range are
# inclusive, so we `+ 1`.
limit=max(range[1] + 1 for range in list_config.ranges),
)
for range in list_config.ranges:
@@ -466,15 +413,12 @@ class SlidingSyncRoomLists:
)
lists[list_key] = SlidingSyncResult.SlidingWindowList(
count=len(filtered_sync_room_map),
count=len(sorted_room_info),
ops=ops,
)
if sync_config.room_subscriptions:
with start_active_span("assemble_room_subscriptions"):
# TODO: It would be nice to avoid these copies
room_membership_for_user_map = dict(room_membership_for_user_map)
# Find which rooms are partially stated and may need to be filtered out
# depending on the `required_state` requested (see below).
partial_state_rooms = await self.store.get_partial_rooms()
@@ -483,20 +427,10 @@ class SlidingSyncRoomLists:
room_id,
room_subscription,
) in sync_config.room_subscriptions.items():
# Check if we have a membership for the room, but didn't pull it out
# above. This could be e.g. a leave that we don't pull out by
# default.
current_room_entry = (
await self.store.get_sliding_sync_room_for_user(
user_id, room_id
)
)
if not current_room_entry:
if room_id not in room_membership_for_user_map:
# TODO: Handle rooms the user isn't in.
continue
room_membership_for_user_map[room_id] = current_room_entry
all_rooms.add(room_id)
# Take the superset of the `RoomSyncConfig` for each room.
@@ -510,6 +444,8 @@ class SlidingSyncRoomLists:
if room_id in partial_state_rooms:
continue
all_rooms.add(room_id)
# Update our `relevant_room_map` with the room we're going to display
# and need to fetch more info about.
existing_room_sync_config = relevant_room_map.get(room_id)
@@ -524,7 +460,7 @@ class SlidingSyncRoomLists:
# Filtered subset of `relevant_room_map` for rooms that may have updates
# (in the event stream)
relevant_rooms_to_send_map = await self._filter_relevant_rooms_to_send(
relevant_rooms_to_send_map = await self._filter_relevant_room_to_send(
previous_connection_state, from_token, relevant_room_map
)
@@ -581,7 +517,6 @@ class SlidingSyncRoomLists:
filtered_sync_room_map = await self.filter_rooms(
sync_config.user,
sync_room_map,
previous_connection_state,
list_config.filters,
to_token,
dm_room_ids,
@@ -712,7 +647,7 @@ class SlidingSyncRoomLists:
# Filtered subset of `relevant_room_map` for rooms that may have updates
# (in the event stream)
relevant_rooms_to_send_map = await self._filter_relevant_rooms_to_send(
relevant_rooms_to_send_map = await self._filter_relevant_room_to_send(
previous_connection_state, from_token, relevant_room_map
)
@@ -727,7 +662,7 @@ class SlidingSyncRoomLists:
dm_room_ids=dm_room_ids,
)
async def _filter_relevant_rooms_to_send(
async def _filter_relevant_room_to_send(
self,
previous_connection_state: PerConnectionState,
from_token: Optional[StreamToken],
@@ -991,38 +926,8 @@ class SlidingSyncRoomLists:
excluded_rooms=self.rooms_to_exclude_globally,
)
# We filter out unknown room versions before we try and load any
# metadata about the room. They shouldn't go down sync anyway, and their
# metadata may be in a broken state.
room_for_user_list = [
room_for_user
for room_for_user in room_for_user_list
if room_for_user.room_version_id in KNOWN_ROOM_VERSIONS
]
# Remove invites from ignored users
ignored_users = await self.store.ignored_users(user_id)
if ignored_users:
room_for_user_list = [
room_for_user
for room_for_user in room_for_user_list
if not (
room_for_user.membership == Membership.INVITE
and room_for_user.sender in ignored_users
)
]
(
newly_joined_room_ids,
newly_left_room_map,
) = await self._get_newly_joined_and_left_rooms(
user_id, to_token=to_token, from_token=from_token
)
# If the user has never joined any rooms before, we can just return an empty
# list. We also have to check the `newly_left_room_map` in case someone was
# state reset out of all of the rooms they were in.
if not room_for_user_list and not newly_left_room_map:
# If the user has never joined any rooms before, we can just return an empty list
if not room_for_user_list:
return {}, set(), set()
# Since we fetched the users room list at some point in time after the
@@ -1040,22 +945,30 @@ class SlidingSyncRoomLists:
else:
rooms_for_user[room_id] = change_room_for_user
(
newly_joined_room_ids,
newly_left_room_ids,
) = await self._get_newly_joined_and_left_rooms(
user_id, to_token=to_token, from_token=from_token
)
# Ensure we have entries for rooms that the user has been "state reset"
# out of. These are rooms that appear in the `newly_left_rooms` map but
# aren't in the `rooms_for_user` map.
for room_id, newly_left_room_for_user in newly_left_room_map.items():
# If we already know about the room, it's not a state reset
for room_id, left_event_pos in newly_left_room_ids.items():
if room_id in rooms_for_user:
continue
# This should be true if it's a state reset
assert newly_left_room_for_user.membership is Membership.LEAVE
assert newly_left_room_for_user.event_id is None
assert newly_left_room_for_user.sender is None
rooms_for_user[room_id] = RoomsForUserStateReset(
room_id=room_id,
event_id=None,
event_pos=left_event_pos,
membership=Membership.LEAVE,
sender=None,
room_version_id=await self.store.get_room_version_id(room_id),
)
rooms_for_user[room_id] = newly_left_room_for_user
return rooms_for_user, newly_joined_room_ids, set(newly_left_room_map)
return rooms_for_user, newly_joined_room_ids, set(newly_left_room_ids)
@trace
async def _get_newly_joined_and_left_rooms(
@@ -1063,7 +976,7 @@ class SlidingSyncRoomLists:
user_id: str,
to_token: StreamToken,
from_token: Optional[StreamToken],
) -> Tuple[AbstractSet[str], Mapping[str, RoomsForUserStateReset]]:
) -> Tuple[AbstractSet[str], Mapping[str, PersistedEventPosition]]:
"""Fetch the sets of rooms that the user newly joined or left in the
given token range.
@@ -1072,18 +985,11 @@ class SlidingSyncRoomLists:
"current memberships" of the user.
Returns:
A 2-tuple of newly joined room IDs and a map of newly_left room
IDs to the `RoomsForUserStateReset` entry.
We're using `RoomsForUserStateReset` but that doesn't necessarily mean the
user was state reset out of the rooms. It's just that the `event_id`/`sender`
are optional, and we can't tell the difference between the server having left
the room (because the user was the last local participant and left) and the
user being state reset out of the room. To actually check for a state reset,
you need to check if a membership still exists in the room.
A 2-tuple of newly joined room IDs and a map of newly left room
IDs to the event position the leave happened at.
"""
newly_joined_room_ids: Set[str] = set()
newly_left_room_map: Dict[str, RoomsForUserStateReset] = {}
newly_left_room_map: Dict[str, PersistedEventPosition] = {}
# We need to figure out the
#
@@ -1154,13 +1060,8 @@ class SlidingSyncRoomLists:
# 1) Figure out newly_left rooms (> `from_token` and <= `to_token`).
if last_membership_change_in_from_to_range.membership == Membership.LEAVE:
# 1) Mark this room as `newly_left`
newly_left_room_map[room_id] = RoomsForUserStateReset(
room_id=room_id,
sender=last_membership_change_in_from_to_range.sender,
membership=Membership.LEAVE,
event_id=last_membership_change_in_from_to_range.event_id,
event_pos=last_membership_change_in_from_to_range.event_pos,
room_version_id=await self.store.get_room_version_id(room_id),
newly_left_room_map[room_id] = (
last_membership_change_in_from_to_range.event_pos
)
# 2) Figure out `newly_joined`
@@ -1604,7 +1505,6 @@ class SlidingSyncRoomLists:
self,
user: UserID,
sync_room_map: Dict[str, RoomsForUserType],
previous_connection_state: PerConnectionState,
filters: SlidingSyncConfig.SlidingSyncList.Filters,
to_token: StreamToken,
dm_room_ids: AbstractSet[str],
@@ -1790,33 +1690,14 @@ class SlidingSyncRoomLists:
)
}
# Keep rooms that the user has been state reset out of but that we previously
# sent down the connection. We want to make sure that we send these down to
# the client regardless of filters so they find out about the state reset.
#
# We don't always have access to the state in a room after being state reset
# if no one else locally on the server is participating in the room, so we
# patch these back in manually.
state_reset_out_of_room_id_set = {
room_id
for room_id in sync_room_map.keys()
if sync_room_map[room_id].event_id is None
and previous_connection_state.rooms.have_sent_room(room_id).status
!= HaveSentRoomFlag.NEVER
}
# Assemble a new sync room map but only with the `filtered_room_id_set`
return {
room_id: sync_room_map[room_id]
for room_id in filtered_room_id_set | state_reset_out_of_room_id_set
}
return {room_id: sync_room_map[room_id] for room_id in filtered_room_id_set}
@trace
async def filter_rooms_using_tables(
self,
user_id: str,
sync_room_map: Mapping[str, RoomsForUserSlidingSync],
previous_connection_state: PerConnectionState,
filters: SlidingSyncConfig.SlidingSyncList.Filters,
to_token: StreamToken,
dm_room_ids: AbstractSet[str],
@@ -1958,47 +1839,23 @@ class SlidingSyncRoomLists:
)
}
# Keep rooms that the user has been state reset out of but that we previously
# sent down the connection. We want to make sure that we send these down to
# the client regardless of filters so they find out about the state reset.
#
# We don't always have access to the state in a room after being state reset
# if no one else locally on the server is participating in the room, so we
# patch these back in manually.
state_reset_out_of_room_id_set = {
room_id
for room_id in sync_room_map.keys()
if sync_room_map[room_id].event_id is None
and previous_connection_state.rooms.have_sent_room(room_id).status
!= HaveSentRoomFlag.NEVER
}
# Assemble a new sync room map but only with the `filtered_room_id_set`
return {
room_id: sync_room_map[room_id]
for room_id in filtered_room_id_set | state_reset_out_of_room_id_set
}
return {room_id: sync_room_map[room_id] for room_id in filtered_room_id_set}
@trace
async def sort_rooms(
self,
sync_room_map: Dict[str, RoomsForUserType],
to_token: StreamToken,
limit: Optional[int] = None,
) -> List[RoomsForUserType]:
"""
Sort by `stream_ordering` of the last event that the user should see in the
room. `stream_ordering` is unique so we get a stable sort.
If `limit` is specified then sort may return fewer entries, but will
always return at least the top N rooms. This is useful as we don't always
need to sort the full list, but are just interested in the top N.
Args:
sync_room_map: Dictionary of room IDs to sort along with membership
information in the room at the time of `to_token`.
to_token: We sort based on the events in the room at this token (<= `to_token`)
limit: The number of rooms that we need to return from the top of the list.
Returns:
A sorted list of room IDs by `stream_ordering` along with membership information.
@@ -2008,23 +1865,8 @@ class SlidingSyncRoomLists:
# user should see in the room (<= `to_token`)
last_activity_in_room_map: Dict[str, int] = {}
# Same as above, except for positions that we know are in the event
# stream cache.
cached_positions: Dict[str, int] = {}
earliest_cache_position = (
self.store._events_stream_cache.get_earliest_known_position()
)
for room_id, room_for_user in sync_room_map.items():
if room_for_user.membership == Membership.JOIN:
# For joined rooms check the stream change cache.
cached_position = (
self.store._events_stream_cache.get_max_pos_of_last_change(room_id)
)
if cached_position is not None:
cached_positions[room_id] = cached_position
else:
if room_for_user.membership != Membership.JOIN:
# If the user has left/been invited/knocked/been banned from a
# room, they shouldn't see anything past that point.
#
@@ -2034,48 +1876,6 @@ class SlidingSyncRoomLists:
# https://github.com/matrix-org/matrix-spec-proposals/pull/3575#discussion_r1653045932
last_activity_in_room_map[room_id] = room_for_user.event_pos.stream
# If the stream position is in range of the stream change cache
# we can include it.
if room_for_user.event_pos.stream > earliest_cache_position:
cached_positions[room_id] = room_for_user.event_pos.stream
# If we are only asked for the top N rooms, and we have enough from
# looking in the stream change cache, then we can return early. This
# is because the cache must include all entries above
# `.get_earliest_known_position()`.
if limit is not None and len(cached_positions) >= limit:
# ... but first we need to handle the case where the cached max
# position is greater than the to_token, in which case we do
# actually query the DB. This should happen rarely, so we can do it in
# a loop.
for room_id, position in list(cached_positions.items()):
if position > to_token.room_key.stream:
result = await self.store.get_last_event_pos_in_room_before_stream_ordering(
room_id, to_token.room_key
)
if (
result is not None
and result[1].stream > earliest_cache_position
):
# We have a stream position in the cached range.
cached_positions[room_id] = result[1].stream
else:
# No position in the range, so we remove the entry.
cached_positions.pop(room_id)
if limit is not None and len(cached_positions) >= limit:
return sorted(
(
room
for room in sync_room_map.values()
if room.room_id in cached_positions
),
# Sort by the last activity (stream_ordering) in the room
key=lambda room_info: cached_positions[room_info.room_id],
# We want descending order
reverse=True,
)
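# Simplified sketch (not part of the diff) of the removed fast path: if the
# stream change cache can bound "latest activity" for at least `limit` rooms,
# every room with a cached position can be returned sorted, and the caller
# only needs the top `limit` of them, without computing the last event
# position for every room in the map.
def top_rooms_from_cache(cached_positions: dict, limit: int):
    if len(cached_positions) < limit:
        return None  # not enough cached data; fall back to the full sort
    return sorted(cached_positions, key=cached_positions.get, reverse=True)

print(top_rooms_from_cache({"!a": 10, "!b": 42, "!c": 7}, 2))  # ['!b', '!a', '!c']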
# For fully-joined rooms, we find the latest activity at/before the
# `to_token`.
joined_room_positions = (

View File

@@ -19,6 +19,7 @@
#
#
import abc
import cgi
import codecs
import logging
import random
@@ -791,7 +792,7 @@ class MatrixFederationHttpClient:
url_str,
_flatten_response_never_received(e),
)
body = b""
body = None
exc = HttpResponseException(
response.code, response_phrase, body
@@ -1812,9 +1813,8 @@ def check_content_type_is(headers: Headers, expected_content_type: str) -> None:
)
c_type = content_type_headers[0].decode("ascii") # only the first header
# Extract the 'essence' of the mimetype, removing any parameter
c_type_parsed = c_type.split(";", 1)[0].strip()
if c_type_parsed != expected_content_type:
val, options = cgi.parse_header(c_type)
if val != expected_content_type:
raise RequestSendFailed(
RuntimeError(
f"Remote server sent Content-Type header of '{c_type}', not '{expected_content_type}'",

View File

@@ -23,7 +23,6 @@ from typing import TYPE_CHECKING
from synapse.http.server import JsonResource
from synapse.replication.http import (
account_data,
delayed_events,
devices,
federation,
login,
@@ -65,4 +64,3 @@ class ReplicationRestResource(JsonResource):
login.register_servlets(hs, self)
register.register_servlets(hs, self)
devices.register_servlets(hs, self)
delayed_events.register_servlets(hs, self)

View File

@@ -1,48 +0,0 @@
import logging
from typing import TYPE_CHECKING, Dict, Optional, Tuple
from twisted.web.server import Request
from synapse.http.server import HttpServer
from synapse.replication.http._base import ReplicationEndpoint
from synapse.types import JsonDict, JsonMapping
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class ReplicationAddedDelayedEventRestServlet(ReplicationEndpoint):
"""Handle a delayed event being added by another worker.
Request format:
POST /_synapse/replication/delayed_event_added/
{}
"""
NAME = "added_delayed_event"
PATH_ARGS = ()
CACHE = False
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.handler = hs.get_delayed_events_handler()
@staticmethod
async def _serialize_payload(next_send_ts: int) -> JsonDict: # type: ignore[override]
return {"next_send_ts": next_send_ts}
async def _handle_request( # type: ignore[override]
self, request: Request, content: JsonDict
) -> Tuple[int, Dict[str, Optional[JsonMapping]]]:
self.handler.on_added(int(content["next_send_ts"]))
return 200, {}
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
ReplicationAddedDelayedEventRestServlet(hs).register(http_server)
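# Hedged sketch (not part of the diff): the round trip a ReplicationEndpoint
# performs. The payload mirrors _serialize_payload above and the handler side
# mirrors _handle_request; the transport here is faked with plain JSON, not
# Synapse's actual HTTP replication machinery.
import asyncio, json

async def serialize_payload(next_send_ts: int) -> dict:
    return {"next_send_ts": next_send_ts}

async def handle_request(content: dict, handler) -> tuple:
    handler.on_added(int(content["next_send_ts"]))
    return 200, {}

class FakeHandler:
    def on_added(self, next_send_ts: int) -> None:
        print("main process reschedules its timer for", next_send_ts)

async def main() -> None:
    wire = json.dumps(await serialize_payload(1_700_000_000_000))  # POST body
    code, _ = await handle_request(json.loads(wire), FakeHandler())
    assert code == 200

asyncio.run(main())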

View File

@@ -2,7 +2,7 @@
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2014-2016 OpenMarket Ltd
# Copyright (C) 2023-2024 New Vector, Ltd
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
@@ -31,7 +31,6 @@ from synapse.rest.client import (
auth,
auth_issuer,
capabilities,
delayed_events,
devices,
directory,
events,
@@ -82,7 +81,6 @@ CLIENT_SERVLET_FUNCTIONS: Tuple[RegisterServletsFunc, ...] = (
room.register_deprecated_servlets,
events.register_servlets,
room.register_servlets,
delayed_events.register_servlets,
login.register_servlets,
profile.register_servlets,
presence.register_servlets,

View File

@@ -98,8 +98,6 @@ from synapse.rest.admin.users import (
DeactivateAccountRestServlet,
PushersRestServlet,
RateLimitRestServlet,
RedactUser,
RedactUserStatus,
ResetPasswordRestServlet,
SearchUsersRestServlet,
ShadowBanRestServlet,
@@ -321,8 +319,6 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
UserReplaceMasterCrossSigningKeyRestServlet(hs).register(http_server)
UserByExternalId(hs).register(http_server)
UserByThreePid(hs).register(http_server)
RedactUser(hs).register(http_server)
RedactUserStatus(hs).register(http_server)
DeviceRestServlet(hs).register(http_server)
DevicesRestServlet(hs).register(http_server)

View File

@@ -50,7 +50,7 @@ from synapse.rest.admin._base import (
from synapse.rest.client._base import client_patterns
from synapse.storage.databases.main.registration import ExternalIDReuseException
from synapse.storage.databases.main.stats import UserSortOrder
from synapse.types import JsonDict, JsonMapping, TaskStatus, UserID
from synapse.types import JsonDict, JsonMapping, UserID
from synapse.types.rest import RequestBodyModel
if TYPE_CHECKING:
@@ -1405,100 +1405,3 @@ class UserByThreePid(RestServlet):
raise NotFoundError("User not found")
return HTTPStatus.OK, {"user_id": user_id}
class RedactUser(RestServlet):
"""
Redact all the events of a given user in the given rooms, or, if an empty list
of rooms is provided, then all events in all rooms the user is a member of.
Kicks off a background process and returns an ID that can be used to check on
the progress of the redaction.
"""
PATTERNS = admin_patterns("/user/(?P<user_id>[^/]*)/redact")
def __init__(self, hs: "HomeServer"):
self._auth = hs.get_auth()
self._store = hs.get_datastores().main
self.admin_handler = hs.get_admin_handler()
async def on_POST(
self, request: SynapseRequest, user_id: str
) -> Tuple[int, JsonDict]:
requester = await self._auth.get_user_by_req(request)
await assert_user_is_admin(self._auth, requester)
body = parse_json_object_from_request(request, allow_empty_body=True)
rooms = body.get("rooms")
if rooms is None:
raise SynapseError(
HTTPStatus.BAD_REQUEST, "Must provide a value for rooms."
)
reason = body.get("reason")
if reason:
if not isinstance(reason, str):
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"If a reason is provided it must be a string.",
)
limit = body.get("limit")
if limit:
if not isinstance(limit, int) or limit <= 0:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"If limit is provided it must be a non-negative integer greater than 0.",
)
if not rooms:
rooms = await self._store.get_rooms_for_user(user_id)
redact_id = await self.admin_handler.start_redact_events(
user_id, list(rooms), requester.serialize(), reason, limit
)
return HTTPStatus.OK, {"redact_id": redact_id}
class RedactUserStatus(RestServlet):
"""
Check on the progress of the redaction request represented by the provided ID, returning
the status of the process and a dict of events that were unable to be redacted, if any
"""
PATTERNS = admin_patterns("/user/redact_status/(?P<redact_id>[^/]*)$")
def __init__(self, hs: "HomeServer"):
self._auth = hs.get_auth()
self.admin_handler = hs.get_admin_handler()
async def on_GET(
self, request: SynapseRequest, redact_id: str
) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self._auth, request)
task = await self.admin_handler.get_redact_task(redact_id)
if task:
if task.status == TaskStatus.ACTIVE:
return HTTPStatus.OK, {"status": TaskStatus.ACTIVE}
elif task.status == TaskStatus.COMPLETE:
assert task.result is not None
failed_redactions = task.result.get("failed_redactions")
return HTTPStatus.OK, {
"status": TaskStatus.COMPLETE,
"failed_redactions": failed_redactions if failed_redactions else {},
}
elif task.status == TaskStatus.SCHEDULED:
return HTTPStatus.OK, {"status": TaskStatus.SCHEDULED}
else:
return HTTPStatus.OK, {
"status": TaskStatus.FAILED,
"error": (
task.error
if task.error
else "Unknown error, please check the logs for more information."
),
}
else:
raise NotFoundError("redact id '%s' not found" % redact_id)

View File

@@ -1,97 +0,0 @@
# This module contains REST servlets to do with delayed events: /delayed_events/<paths>
import logging
from enum import Enum
from http import HTTPStatus
from typing import TYPE_CHECKING, Tuple
from synapse.api.errors import Codes, SynapseError
from synapse.http.server import HttpServer
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseRequest
from synapse.rest.client._base import client_patterns
from synapse.types import JsonDict
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class _UpdateDelayedEventAction(Enum):
CANCEL = "cancel"
RESTART = "restart"
SEND = "send"
class UpdateDelayedEventServlet(RestServlet):
PATTERNS = client_patterns(
r"/org\.matrix\.msc4140/delayed_events/(?P<delay_id>[^/]+)$",
releases=(),
)
CATEGORY = "Delayed event management requests"
def __init__(self, hs: "HomeServer"):
super().__init__()
self.auth = hs.get_auth()
self.delayed_events_handler = hs.get_delayed_events_handler()
async def on_POST(
self, request: SynapseRequest, delay_id: str
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
body = parse_json_object_from_request(request)
try:
action = str(body["action"])
except KeyError:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"'action' is missing",
Codes.MISSING_PARAM,
)
try:
enum_action = _UpdateDelayedEventAction(action)
except ValueError:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"'action' is not one of "
+ ", ".join(f"'{m.value}'" for m in _UpdateDelayedEventAction),
Codes.INVALID_PARAM,
)
if enum_action == _UpdateDelayedEventAction.CANCEL:
await self.delayed_events_handler.cancel(requester, delay_id)
elif enum_action == _UpdateDelayedEventAction.RESTART:
await self.delayed_events_handler.restart(requester, delay_id)
elif enum_action == _UpdateDelayedEventAction.SEND:
await self.delayed_events_handler.send(requester, delay_id)
return 200, {}
class DelayedEventsServlet(RestServlet):
PATTERNS = client_patterns(
r"/org\.matrix\.msc4140/delayed_events$",
releases=(),
)
CATEGORY = "Delayed event management requests"
def __init__(self, hs: "HomeServer"):
super().__init__()
self.auth = hs.get_auth()
self.delayed_events_handler = hs.get_delayed_events_handler()
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
# TODO: Support Pagination stream API ("from" query parameter)
delayed_events = await self.delayed_events_handler.get_all_for_user(requester)
ret = {"delayed_events": delayed_events}
return 200, ret
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
# The following can't currently be instantiated on workers.
if hs.config.worker.worker_app is None:
UpdateDelayedEventServlet(hs).register(http_server)
DelayedEventsServlet(hs).register(http_server)
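# Hedged usage sketch (not part of the diff): the client-side flow for the
# endpoints above, per MSC4140. The third-party `requests` library, the
# homeserver URL, the access token and all IDs are illustrative assumptions.
import requests

HS = "https://example.org"
HEADERS = {"Authorization": "Bearer <access_token>"}

# Schedule: the delay rides on the normal send/state endpoints as a query
# parameter (see the _parse_request_delay helper later in this diff).
resp = requests.put(
    f"{HS}/_matrix/client/v3/rooms/!room:example.org/state/m.room.topic",
    params={"org.matrix.msc4140.delay": 60_000},
    headers=HEADERS,
    json={"topic": "Later"},
)
delay_id = resp.json()["delay_id"]

# Manage: cancel, restart, or send the event early.
requests.post(
    f"{HS}/_matrix/client/unstable/org.matrix.msc4140/delayed_events/{delay_id}",
    headers=HEADERS,
    json={"action": "send"},
)

# List all pending delayed events for this user.
pending = requests.get(
    f"{HS}/_matrix/client/unstable/org.matrix.msc4140/delayed_events",
    headers=HEADERS,
).json()["delayed_events"]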

View File

@@ -2,7 +2,7 @@
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2014-2016 OpenMarket Ltd
# Copyright (C) 2023-2024 New Vector, Ltd
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
@@ -195,9 +195,7 @@ class RoomStateEventRestServlet(RestServlet):
self.event_creation_handler = hs.get_event_creation_handler()
self.room_member_handler = hs.get_room_member_handler()
self.message_handler = hs.get_message_handler()
self.delayed_events_handler = hs.get_delayed_events_handler()
self.auth = hs.get_auth()
self._max_event_delay_ms = hs.config.server.max_event_delay_ms
def register(self, http_server: HttpServer) -> None:
# /rooms/$roomid/state/$eventtype
@@ -293,22 +291,6 @@ class RoomStateEventRestServlet(RestServlet):
if requester.app_service:
origin_server_ts = parse_integer(request, "ts")
delay = _parse_request_delay(request, self._max_event_delay_ms)
if delay is not None:
delay_id = await self.delayed_events_handler.add(
requester,
room_id=room_id,
event_type=event_type,
state_key=state_key,
origin_server_ts=origin_server_ts,
content=content,
delay=delay,
)
set_tag("delay_id", delay_id)
ret = {"delay_id": delay_id}
return 200, ret
try:
if event_type == EventTypes.Member:
membership = content.get("membership", None)
@@ -359,9 +341,7 @@ class RoomSendEventRestServlet(TransactionRestServlet):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.event_creation_handler = hs.get_event_creation_handler()
self.delayed_events_handler = hs.get_delayed_events_handler()
self.auth = hs.get_auth()
self._max_event_delay_ms = hs.config.server.max_event_delay_ms
def register(self, http_server: HttpServer) -> None:
# /rooms/$roomid/send/$event_type[/$txn_id]
@@ -378,26 +358,6 @@ class RoomSendEventRestServlet(TransactionRestServlet):
) -> Tuple[int, JsonDict]:
content = parse_json_object_from_request(request)
origin_server_ts = None
if requester.app_service:
origin_server_ts = parse_integer(request, "ts")
delay = _parse_request_delay(request, self._max_event_delay_ms)
if delay is not None:
delay_id = await self.delayed_events_handler.add(
requester,
room_id=room_id,
event_type=event_type,
state_key=None,
origin_server_ts=origin_server_ts,
content=content,
delay=delay,
)
set_tag("delay_id", delay_id)
ret = {"delay_id": delay_id}
return 200, ret
event_dict: JsonDict = {
"type": event_type,
"content": content,
@@ -405,8 +365,10 @@ class RoomSendEventRestServlet(TransactionRestServlet):
"sender": requester.user.to_string(),
}
if origin_server_ts is not None:
event_dict["origin_server_ts"] = origin_server_ts
if requester.app_service:
origin_server_ts = parse_integer(request, "ts")
if origin_server_ts is not None:
event_dict["origin_server_ts"] = origin_server_ts
try:
(
@@ -449,49 +411,6 @@ class RoomSendEventRestServlet(TransactionRestServlet):
)
def _parse_request_delay(
request: SynapseRequest,
max_delay: Optional[int],
) -> Optional[int]:
"""Parses from the request string the delay parameter for
delayed event requests, and checks it for correctness.
Args:
request: the twisted HTTP request.
max_delay: the maximum allowed value of the delay parameter,
or None if no delay parameter is allowed.
Returns:
The value of the requested delay, or None if it was absent.
Raises:
SynapseError: if the delay parameter is present and forbidden,
or if it exceeds the maximum allowed value.
"""
delay = parse_integer(request, "org.matrix.msc4140.delay")
if delay is None:
return None
if max_delay is None:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"Delayed events are not supported on this server",
Codes.UNKNOWN,
{
"org.matrix.msc4140.errcode": "M_MAX_DELAY_UNSUPPORTED",
},
)
if delay > max_delay:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"The requested delay exceeds the allowed maximum.",
Codes.UNKNOWN,
{
"org.matrix.msc4140.errcode": "M_MAX_DELAY_EXCEEDED",
"org.matrix.msc4140.max_delay": max_delay,
},
)
return delay
# TODO: Needs unit testing for room ID + alias joins
class JoinRoomAliasServlet(ResolveRoomIdMixin, TransactionRestServlet):
CATEGORY = "Event sending requests"

View File

@@ -4,7 +4,7 @@
# Copyright 2019 The Matrix.org Foundation C.I.C.
# Copyright 2017 Vector Creations Ltd
# Copyright 2016 OpenMarket Ltd
# Copyright (C) 2023-2024 New Vector, Ltd
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
@@ -171,8 +171,6 @@ class VersionsRestServlet(RestServlet):
is not None
)
),
# MSC4140: Delayed events
"org.matrix.msc4140": True,
# MSC4151: Report room API (Client-Server API)
"org.matrix.msc4151": self.config.experimental.msc4151_enabled,
# Simplified sliding sync

View File

@@ -2,7 +2,7 @@
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2021 The Matrix.org Foundation C.I.C.
# Copyright (C) 2023-2024 New Vector, Ltd
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
@@ -68,7 +68,6 @@ from synapse.handlers.appservice import ApplicationServicesHandler
from synapse.handlers.auth import AuthHandler, PasswordAuthProvider
from synapse.handlers.cas import CasHandler
from synapse.handlers.deactivate_account import DeactivateAccountHandler
from synapse.handlers.delayed_events import DelayedEventsHandler
from synapse.handlers.device import DeviceHandler, DeviceWorkerHandler
from synapse.handlers.devicemessage import DeviceMessageHandler
from synapse.handlers.directory import DirectoryHandler
@@ -252,7 +251,6 @@ class HomeServer(metaclass=abc.ABCMeta):
"account_validity",
"auth",
"deactivate_account",
"delayed_events",
"message",
"pagination",
"profile",
@@ -966,7 +964,3 @@ class HomeServer(metaclass=abc.ABCMeta):
register_threadpool("media", media_threadpool)
return media_threadpool
@cache_in_self
def get_delayed_events_handler(self) -> DelayedEventsHandler:
return DelayedEventsHandler(self)

View File

@@ -490,12 +490,6 @@ class BackgroundUpdater:
if self._all_done:
return True
# We now check if we have completed all pending background updates. We
# do this as once this returns True then it will set `self._all_done`
# and we can skip checking the database in future.
if await self.has_completed_background_updates():
return True
rows = await self.db_pool.simple_select_many_batch(
table="background_updates",
column="update_name",

View File

@@ -3,7 +3,7 @@
#
# Copyright 2019-2021 The Matrix.org Foundation C.I.C.
# Copyright 2014-2016 OpenMarket Ltd
# Copyright (C) 2023-2024 New Vector, Ltd
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
@@ -44,7 +44,6 @@ from .appservice import ApplicationServiceStore, ApplicationServiceTransactionSt
from .cache import CacheInvalidationWorkerStore
from .censor_events import CensorEventsStore
from .client_ips import ClientIpWorkerStore
from .delayed_events import DelayedEventsStore
from .deviceinbox import DeviceInboxStore
from .devices import DeviceStore
from .directory import DirectoryStore
@@ -159,7 +158,6 @@ class DataStore(
SessionStore,
TaskSchedulerWorkerStore,
SlidingSyncStore,
DelayedEventsStore,
):
def __init__(
self,

View File

@@ -471,7 +471,6 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self._attempt_to_invalidate_cache("get_account_data_for_room", None)
self._attempt_to_invalidate_cache("get_account_data_for_room_and_type", None)
self._attempt_to_invalidate_cache("get_tags_for_room", None)
self._attempt_to_invalidate_cache("get_aliases_for_room", (room_id,))
self._attempt_to_invalidate_cache("get_latest_event_ids_in_room", (room_id,))
self._attempt_to_invalidate_cache("_get_forward_extremeties_for_room", None)

View File

@@ -1,523 +0,0 @@
import logging
from typing import List, NewType, Optional, Tuple
import attr
from synapse.api.errors import NotFoundError
from synapse.storage._base import SQLBaseStore, db_to_json
from synapse.storage.database import LoggingTransaction, StoreError
from synapse.storage.engines import PostgresEngine
from synapse.types import JsonDict, RoomID
from synapse.util import json_encoder, stringutils as stringutils
logger = logging.getLogger(__name__)
DelayID = NewType("DelayID", str)
UserLocalpart = NewType("UserLocalpart", str)
DeviceID = NewType("DeviceID", str)
EventType = NewType("EventType", str)
StateKey = NewType("StateKey", str)
Delay = NewType("Delay", int)
Timestamp = NewType("Timestamp", int)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class EventDetails:
room_id: RoomID
type: EventType
state_key: Optional[StateKey]
origin_server_ts: Optional[Timestamp]
content: JsonDict
device_id: Optional[DeviceID]
@attr.s(slots=True, frozen=True, auto_attribs=True)
class DelayedEventDetails(EventDetails):
delay_id: DelayID
user_localpart: UserLocalpart
class DelayedEventsStore(SQLBaseStore):
async def get_delayed_events_stream_pos(self) -> int:
"""
Gets the stream position used by the background process that watches for
state events targeting the same piece of state as any pending delayed events.
"""
return await self.db_pool.simple_select_one_onecol(
table="delayed_events_stream_pos",
keyvalues={},
retcol="stream_id",
desc="get_delayed_events_stream_pos",
)
async def update_delayed_events_stream_pos(self, stream_id: Optional[int]) -> None:
"""
Updates the stream position used by the background process that watches for
state events targeting the same piece of state as any pending delayed events.
Must only be used by the worker running the background process.
"""
await self.db_pool.simple_update_one(
table="delayed_events_stream_pos",
keyvalues={},
updatevalues={"stream_id": stream_id},
desc="update_delayed_events_stream_pos",
)
async def add_delayed_event(
self,
*,
user_localpart: str,
device_id: Optional[str],
creation_ts: Timestamp,
room_id: str,
event_type: str,
state_key: Optional[str],
origin_server_ts: Optional[int],
content: JsonDict,
delay: int,
) -> Tuple[DelayID, Timestamp]:
"""
Inserts a new delayed event in the DB.
Returns: The generated ID assigned to the added delayed event,
and the send time of the next delayed event to be sent,
which is either the event just added or one added earlier.
"""
delay_id = _generate_delay_id()
send_ts = Timestamp(creation_ts + delay)
def add_delayed_event_txn(txn: LoggingTransaction) -> Timestamp:
self.db_pool.simple_insert_txn(
txn,
table="delayed_events",
values={
"delay_id": delay_id,
"user_localpart": user_localpart,
"device_id": device_id,
"delay": delay,
"send_ts": send_ts,
"room_id": room_id,
"event_type": event_type,
"state_key": state_key,
"origin_server_ts": origin_server_ts,
"content": json_encoder.encode(content),
},
)
next_send_ts = self._get_next_delayed_event_send_ts_txn(txn)
assert next_send_ts is not None
return next_send_ts
next_send_ts = await self.db_pool.runInteraction(
"add_delayed_event", add_delayed_event_txn
)
return delay_id, next_send_ts
async def restart_delayed_event(
self,
*,
delay_id: str,
user_localpart: str,
current_ts: Timestamp,
) -> Timestamp:
"""
Restarts the send time of the matching delayed event,
as long as it hasn't already been marked for processing.
Args:
delay_id: The ID of the delayed event to restart.
user_localpart: The localpart of the delayed event's owner.
current_ts: The current time, which will be used to calculate the new send time.
Returns: The send time of the next delayed event to be sent,
which is either the event just restarted, or another one
with an earlier send time than the restarted one's new send time.
Raises:
NotFoundError: if there is no matching delayed event.
"""
def restart_delayed_event_txn(
txn: LoggingTransaction,
) -> Timestamp:
txn.execute(
"""
UPDATE delayed_events
SET send_ts = ? + delay
WHERE delay_id = ? AND user_localpart = ?
AND NOT is_processed
""",
(
current_ts,
delay_id,
user_localpart,
),
)
if txn.rowcount == 0:
raise NotFoundError("Delayed event not found")
next_send_ts = self._get_next_delayed_event_send_ts_txn(txn)
assert next_send_ts is not None
return next_send_ts
return await self.db_pool.runInteraction(
"restart_delayed_event", restart_delayed_event_txn
)
async def get_all_delayed_events_for_user(
self,
user_localpart: str,
) -> List[JsonDict]:
"""Returns all pending delayed events owned by the given user."""
# TODO: Support Pagination stream API ("next_batch" field)
rows = await self.db_pool.execute(
"get_all_delayed_events_for_user",
"""
SELECT
delay_id,
room_id,
event_type,
state_key,
delay,
send_ts,
content
FROM delayed_events
WHERE user_localpart = ? AND NOT is_processed
ORDER BY send_ts
""",
user_localpart,
)
return [
{
"delay_id": DelayID(row[0]),
"room_id": str(RoomID.from_string(row[1])),
"type": EventType(row[2]),
**({"state_key": StateKey(row[3])} if row[3] is not None else {}),
"delay": Delay(row[4]),
"running_since": Timestamp(row[5] - row[4]),
"content": db_to_json(row[6]),
}
for row in rows
]
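Note that `running_since` above is derived rather than stored: it is `send_ts - delay`, i.e. the moment the countdown (re)started. A worked example with illustrative values:
delay = 60_000  # one minute, in milliseconds
send_ts = 1_726_600_060_000  # when the event is due to fire
running_since = send_ts - delay  # when the delay (re)started counting
assert running_since == 1_726_600_000_000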
async def process_timeout_delayed_events(
self, current_ts: Timestamp
) -> Tuple[
List[DelayedEventDetails],
Optional[Timestamp],
]:
"""
Marks for processing all delayed events whose send time is at or before the provided time
and that haven't already been marked as such.
Returns: The details of all newly-processed delayed events,
and the send time of the next delayed event to be sent, if any.
"""
def process_timeout_delayed_events_txn(
txn: LoggingTransaction,
) -> Tuple[
List[DelayedEventDetails],
Optional[Timestamp],
]:
sql_cols = ", ".join(
(
"delay_id",
"user_localpart",
"room_id",
"event_type",
"state_key",
"origin_server_ts",
"send_ts",
"content",
"device_id",
)
)
sql_update = "UPDATE delayed_events SET is_processed = TRUE"
sql_where = "WHERE send_ts <= ? AND NOT is_processed"
sql_args = (current_ts,)
sql_order = "ORDER BY send_ts"
if isinstance(self.database_engine, PostgresEngine):
# Do this only in Postgres because:
# - SQLite's RETURNING emits rows in an arbitrary order
# - https://www.sqlite.org/lang_returning.html#limitations_and_caveats
# - SQLite does not support data-modifying statements in a WITH clause
# - https://www.sqlite.org/lang_with.html
# - https://www.postgresql.org/docs/current/queries-with.html#QUERIES-WITH-MODIFYING
txn.execute(
f"""
WITH events_to_send AS (
{sql_update} {sql_where} RETURNING *
) SELECT {sql_cols} FROM events_to_send {sql_order}
""",
sql_args,
)
rows = txn.fetchall()
else:
txn.execute(
f"SELECT {sql_cols} FROM delayed_events {sql_where} {sql_order}",
sql_args,
)
rows = txn.fetchall()
txn.execute(f"{sql_update} {sql_where}", sql_args)
assert txn.rowcount == len(rows)
events = [
DelayedEventDetails(
RoomID.from_string(row[2]),
EventType(row[3]),
StateKey(row[4]) if row[4] is not None else None,
# If no origin_server_ts is set, use send_ts as the event's timestamp
Timestamp(row[5] if row[5] is not None else row[6]),
db_to_json(row[7]),
DeviceID(row[8]) if row[8] is not None else None,
DelayID(row[0]),
UserLocalpart(row[1]),
)
for row in rows
]
next_send_ts = self._get_next_delayed_event_send_ts_txn(txn)
return events, next_send_ts
return await self.db_pool.runInteraction(
"process_timeout_delayed_events", process_timeout_delayed_events_txn
)
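For readability, the effective statements the two branches above end up running look roughly like this (hand-expanded; illustrative only, not generated output):
# PostgreSQL: mark and return in one statement; the outer SELECT imposes
# the ordering that RETURNING alone does not guarantee.
POSTGRES_SQL = """
    WITH events_to_send AS (
        UPDATE delayed_events SET is_processed = TRUE
        WHERE send_ts <= ? AND NOT is_processed
        RETURNING *
    )
    SELECT delay_id, user_localpart, room_id, event_type, state_key,
           origin_server_ts, send_ts, content, device_id
    FROM events_to_send ORDER BY send_ts
"""

# SQLite: SELECT first so ORDER BY applies, then UPDATE with the same
# predicate; both run in one transaction, so the row sets match (hence
# the `assert txn.rowcount == len(rows)` above).
SQLITE_SELECT = """
    SELECT delay_id, user_localpart, room_id, event_type, state_key,
           origin_server_ts, send_ts, content, device_id
    FROM delayed_events
    WHERE send_ts <= ? AND NOT is_processed ORDER BY send_ts
"""
SQLITE_UPDATE = """
    UPDATE delayed_events SET is_processed = TRUE
    WHERE send_ts <= ? AND NOT is_processed
"""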
async def process_target_delayed_event(
self,
*,
delay_id: str,
user_localpart: str,
) -> Tuple[
EventDetails,
Optional[Timestamp],
]:
"""
Marks for processing the matching delayed event, regardless of its timeout time,
as long as it has not already been marked as such.
Args:
delay_id: The ID of the delayed event to process.
user_localpart: The localpart of the delayed event's owner.
Returns: The details of the matching delayed event,
and the send time of the next delayed event to be sent, if any.
Raises:
NotFoundError: if there is no matching delayed event.
"""
def process_target_delayed_event_txn(
txn: LoggingTransaction,
) -> Tuple[
EventDetails,
Optional[Timestamp],
]:
sql_cols = ", ".join(
(
"room_id",
"event_type",
"state_key",
"origin_server_ts",
"content",
"device_id",
)
)
sql_update = "UPDATE delayed_events SET is_processed = TRUE"
sql_where = "WHERE delay_id = ? AND user_localpart = ? AND NOT is_processed"
sql_args = (delay_id, user_localpart)
txn.execute(
(
f"{sql_update} {sql_where} RETURNING {sql_cols}"
if self.database_engine.supports_returning
else f"SELECT {sql_cols} FROM delayed_events {sql_where}"
),
sql_args,
)
row = txn.fetchone()
if row is None:
raise NotFoundError("Delayed event not found")
elif not self.database_engine.supports_returning:
txn.execute(f"{sql_update} {sql_where}", sql_args)
assert txn.rowcount == 1
event = EventDetails(
RoomID.from_string(row[0]),
EventType(row[1]),
StateKey(row[2]) if row[2] is not None else None,
Timestamp(row[3]) if row[3] is not None else None,
db_to_json(row[4]),
DeviceID(row[5]) if row[5] is not None else None,
)
return event, self._get_next_delayed_event_send_ts_txn(txn)
return await self.db_pool.runInteraction(
"process_target_delayed_event", process_target_delayed_event_txn
)
async def cancel_delayed_event(
self,
*,
delay_id: str,
user_localpart: str,
) -> Optional[Timestamp]:
"""
Cancels the matching delayed event, i.e. removes it, provided it hasn't been processed.
Args:
delay_id: The ID of the delayed event to cancel.
user_localpart: The localpart of the delayed event's owner.
Returns: The send time of the next delayed event to be sent, if any.
Raises:
NotFoundError: if there is no matching delayed event.
"""
def cancel_delayed_event_txn(
txn: LoggingTransaction,
) -> Optional[Timestamp]:
try:
self.db_pool.simple_delete_one_txn(
txn,
table="delayed_events",
keyvalues={
"delay_id": delay_id,
"user_localpart": user_localpart,
"is_processed": False,
},
)
except StoreError:
# simple_delete_one_txn raises StoreError both when no rows match and
# when more than one row matches; only the former means the delayed
# event is missing (or was already processed).
if txn.rowcount == 0:
raise NotFoundError("Delayed event not found")
else:
raise
return self._get_next_delayed_event_send_ts_txn(txn)
return await self.db_pool.runInteraction(
"cancel_delayed_event", cancel_delayed_event_txn
)
async def cancel_delayed_state_events(
self,
*,
room_id: str,
event_type: str,
state_key: str,
) -> Optional[Timestamp]:
"""
Cancels all matching delayed state events, i.e. removes them, provided they haven't been processed.
Returns: The send time of the next delayed event to be sent, if any.
"""
def cancel_delayed_state_events_txn(
txn: LoggingTransaction,
) -> Optional[Timestamp]:
self.db_pool.simple_delete_txn(
txn,
table="delayed_events",
keyvalues={
"room_id": room_id,
"event_type": event_type,
"state_key": state_key,
"is_processed": False,
},
)
return self._get_next_delayed_event_send_ts_txn(txn)
return await self.db_pool.runInteraction(
"cancel_delayed_state_events", cancel_delayed_state_events_txn
)
async def delete_processed_delayed_event(
self,
delay_id: DelayID,
user_localpart: UserLocalpart,
) -> None:
"""
Delete the matching delayed event, as long as it has been marked as processed.
Raises:
StoreError: if there is no matching delayed event, or if it has not yet been processed.
"""
return await self.db_pool.simple_delete_one(
table="delayed_events",
keyvalues={
"delay_id": delay_id,
"user_localpart": user_localpart,
"is_processed": True,
},
desc="delete_processed_delayed_event",
)
async def delete_processed_delayed_state_events(
self,
*,
room_id: str,
event_type: str,
state_key: str,
) -> None:
"""
Delete the matching delayed state events that have been marked as processed.
"""
await self.db_pool.simple_delete(
table="delayed_events",
keyvalues={
"room_id": room_id,
"event_type": event_type,
"state_key": state_key,
"is_processed": True,
},
desc="delete_processed_delayed_state_events",
)
async def unprocess_delayed_events(self) -> None:
"""
Unmarks all delayed events as processed, making them eligible for processing again.
"""
await self.db_pool.simple_update(
table="delayed_events",
keyvalues={"is_processed": True},
updatevalues={"is_processed": False},
desc="unprocess_delayed_events",
)
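Together with process_timeout_delayed_events, this enables a simple crash-recovery shape on worker startup; sketched below with a hypothetical `send_due_events` callable (rows marked as processed but never deleted indicate a crash mid-send):
async def recover_on_startup(store, send_due_events) -> None:
    # Rewind anything that was marked for processing but never sent and
    # deleted, so the normal timeout path picks it up again.
    await store.unprocess_delayed_events()
    await send_due_events()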
async def get_next_delayed_event_send_ts(self) -> Optional[Timestamp]:
"""
Returns the send time of the next delayed event to be sent, if any.
"""
return await self.db_pool.runInteraction(
"get_next_delayed_event_send_ts",
self._get_next_delayed_event_send_ts_txn,
db_autocommit=True,
)
def _get_next_delayed_event_send_ts_txn(
self, txn: LoggingTransaction
) -> Optional[Timestamp]:
result = self.db_pool.simple_select_one_onecol_txn(
txn,
table="delayed_events",
keyvalues={"is_processed": False},
retcol="MIN(send_ts)",
allow_none=True,
)
return Timestamp(result) if result is not None else None
def _generate_delay_id() -> DelayID:
"""Generates an opaque string, for use as a delay ID"""
# We use the following format for delay IDs:
# syd_<random string>
# They are scoped to user localparts, so it is possible for
# the same ID to exist for multiple users.
return DelayID(f"syd_{stringutils.random_string(20)}")
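Putting the pieces together, a minimal send loop over due events might look like the following sketch. `send_event` stands in for Synapse's real event-sending path, and the `DelayedEventDetails` attribute names (`delay_id`, `user_localpart`) are assumed here:
from typing import Optional

async def send_due_events(store, send_event, now_ms: int) -> Optional[int]:
    events, next_send_ts = await store.process_timeout_delayed_events(now_ms)
    for details in events:
        await send_event(details)
        # Rows stay in the table (marked is_processed) until the send
        # succeeds, so a crash here can be recovered by calling
        # unprocess_delayed_events() on startup.
        await store.delete_processed_delayed_event(
            details.delay_id, details.user_localpart
        )
    # The caller should re-arm its wakeup timer for next_send_ts, if any.
    return next_send_ts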


@@ -2467,76 +2467,6 @@ class EventsWorkerStore(SQLBaseStore):
self.invalidate_get_event_cache_after_txn(txn, event_id)
async def get_events_sent_by_user_in_room(
self, user_id: str, room_id: str, limit: int, filter: Optional[List[str]] = None
) -> Optional[List[str]]:
"""
Get a list of event IDs of events sent by the user in the specified room.
Args:
user_id: user ID to search against
room_id: room ID of the room to search for events in
limit: maximum number of event IDs to return
filter: type of events to filter for
"""
def _get_events_by_user_in_room_txn(
txn: LoggingTransaction,
user_id: str,
room_id: str,
filter: Optional[List[str]],
batch_size: int,
offset: int,
) -> Tuple[Optional[List[str]], int]:
if filter:
base_clause, args = make_in_list_sql_clause(
txn.database_engine, "type", filter
)
clause = f"AND {base_clause}"
parameters = (user_id, room_id, *args, batch_size, offset)
else:
clause = ""
parameters = (user_id, room_id, batch_size, offset)
sql = f"""
SELECT event_id FROM events
WHERE sender = ? AND room_id = ?
{clause}
ORDER BY received_ts DESC
LIMIT ?
OFFSET ?
"""
txn.execute(sql, parameters)
res = txn.fetchall()
if res:
events = [row[0] for row in res]
else:
events = None
return events, offset + batch_size
offset = 0
batch_size = 100
if batch_size > limit:
batch_size = limit
selected_ids: List[str] = []
while offset < limit:
res, offset = await self.db_pool.runInteraction(
"get_events_by_user",
_get_events_by_user_in_room_txn,
user_id,
room_id,
filter,
batch_size,
offset,
)
if res:
selected_ids = selected_ids + res
else:
break
return selected_ids
async def have_finished_sliding_sync_background_jobs(self) -> bool:
"""Return if it's safe to use the sliding sync membership tables."""


@@ -41,7 +41,6 @@ import attr
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.logging.opentracing import trace
from synapse.metrics import LaterGauge
from synapse.metrics.background_process_metrics import wrap_as_background_process
@@ -1404,7 +1403,7 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
) -> Mapping[str, RoomsForUserSlidingSync]:
"""Get all the rooms for a user to handle a sliding sync request.
Ignores forgotten rooms and rooms that the user has left themselves.
Ignores forgotten rooms and rooms that the user has been kicked from.
Returns:
Map from room ID to membership info
@@ -1429,7 +1428,6 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
LEFT JOIN sliding_sync_joined_rooms AS j ON (j.room_id = m.room_id AND m.membership = 'join')
WHERE user_id = ?
AND m.forgotten = 0
AND (m.membership != 'leave' OR m.user_id != m.sender)
"""
txn.execute(sql, (user_id,))
return {
@@ -1445,10 +1443,6 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
is_encrypted=bool(row[9]),
)
for row in txn
# We filter out unknown room versions proactively. They
# shouldn't go down sync and their metadata may be in a broken
# state (causing errors).
if row[4] in KNOWN_ROOM_VERSIONS
}
return await self.db_pool.runInteraction(
@@ -1456,49 +1450,6 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
get_sliding_sync_rooms_for_user_txn,
)
async def get_sliding_sync_room_for_user(
self, user_id: str, room_id: str
) -> Optional[RoomsForUserSlidingSync]:
"""Get the sliding sync room entry for the given user and room."""
def get_sliding_sync_room_for_user_txn(
txn: LoggingTransaction,
) -> Optional[RoomsForUserSlidingSync]:
sql = """
SELECT m.room_id, m.sender, m.membership, m.membership_event_id,
r.room_version,
m.event_instance_name, m.event_stream_ordering,
m.has_known_state,
COALESCE(j.room_type, m.room_type),
COALESCE(j.is_encrypted, m.is_encrypted)
FROM sliding_sync_membership_snapshots AS m
INNER JOIN rooms AS r USING (room_id)
LEFT JOIN sliding_sync_joined_rooms AS j ON (j.room_id = m.room_id AND m.membership = 'join')
WHERE user_id = ?
AND m.forgotten = 0
AND m.room_id = ?
"""
txn.execute(sql, (user_id, room_id))
row = txn.fetchone()
if not row:
return None
return RoomsForUserSlidingSync(
room_id=row[0],
sender=row[1],
membership=row[2],
event_id=row[3],
room_version_id=row[4],
event_pos=PersistedEventPosition(row[5], row[6]),
has_known_state=bool(row[7]),
room_type=row[8],
is_encrypted=row[9],
)
return await self.db_pool.runInteraction(
"get_sliding_sync_room_for_user", get_sliding_sync_room_for_user_txn
)
class RoomMemberBackgroundUpdateStore(SQLBaseStore):
def __init__(


@@ -308,24 +308,8 @@ class StateGroupWorkerStore(EventsWorkerStore, SQLBaseStore):
return create_event
@cached(max_entries=10000)
async def get_room_type(self, room_id: str) -> Union[Optional[str], Sentinel]:
"""Fetch room type for given room.
Since this function is cached, any missing values would be cached as
`None`. In order to distinguish between an unencrypted room that has
`None` encryption and a room that is unknown to the server where we
might want to omit the value (which would make it cached as `None`),
instead we use the sentinel value `ROOM_UNKNOWN_SENTINEL`.
"""
try:
create_event = await self.get_create_event_for_room(room_id)
return create_event.content.get(EventContentFields.ROOM_TYPE)
except NotFoundError:
# We use the sentinel value to distinguish between `None` which is a
# valid room type and a room that is unknown to the server so the value
# is just unset.
return ROOM_UNKNOWN_SENTINEL
async def get_room_type(self, room_id: str) -> Optional[str]:
raise NotImplementedError()
@cachedList(cached_method_name="get_room_type", list_name="room_ids")
async def bulk_get_room_type(


@@ -941,12 +941,6 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
Returns:
All membership changes to the current state in the token range. Events are
sorted by `stream_ordering` ascending.
`event_id`/`sender` can be `None` when the server leaves a room (meaning
everyone locally left) or a state reset which removed the person from the
room. We can't tell the difference between the two cases with what's
available in the `current_state_delta_stream` table. To actually check for a
state reset, you need to check if a membership still exists in the room.
"""
# Start by ruling out cases where a DB query is not necessary.
if from_key == to_key:
@@ -1058,7 +1052,6 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
membership=(
membership if membership is not None else Membership.LEAVE
),
# This will also be null for the same reasons if `s.event_id = null`
sender=sender,
# Prev event
prev_event_id=prev_event_id,
@@ -1476,10 +1469,6 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
recheck_rooms: Set[str] = set()
min_token = end_token.stream
for room_id, stream in uncapped_results.items():
if stream is None:
# Despite the function not directly setting None, the cache can!
# See: https://github.com/element-hq/synapse/issues/17726
continue
if stream <= min_token:
results[room_id] = stream
else:
@@ -1506,7 +1495,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
@cachedList(cached_method_name="_get_max_event_pos", list_name="room_ids")
async def _bulk_get_max_event_pos(
self, room_ids: StrCollection
) -> Mapping[str, Optional[int]]:
) -> Mapping[str, int]:
"""Fetch the max position of a persisted event in the room."""
# We need to be careful not to return positions ahead of the current
@@ -1591,7 +1580,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
)
for room_id, stream_ordering in batch_results.items():
if stream_ordering <= now_token.stream:
results[room_id] = stream_ordering
results.update(batch_results)
else:
recheck_rooms.add(room_id)


@@ -204,10 +204,9 @@ class TagsWorkerStore(AccountDataWorkerStore):
and last_change_position_for_room <= to_stream_id
)
@cached(num_args=2, tree=True)
async def get_tags_for_room(
self, user_id: str, room_id: str
) -> Mapping[str, JsonMapping]:
) -> Dict[str, JsonDict]:
"""Get all the tags for the given room
Args:
@@ -229,7 +228,7 @@ class TagsWorkerStore(AccountDataWorkerStore):
return {tag: db_to_json(content) for tag, content in rows}
async def add_tag_to_room(
self, user_id: str, room_id: str, tag: str, content: JsonMapping
self, user_id: str, room_id: str, tag: str, content: JsonDict
) -> int:
"""Add a tag to a room for a user.
@@ -260,7 +259,6 @@ class TagsWorkerStore(AccountDataWorkerStore):
await self.db_pool.runInteraction("add_tag", add_tag_txn, next_id)
self.get_tags_for_user.invalidate((user_id,))
self.get_tags_for_room.invalidate((user_id, room_id))
return self._account_data_id_gen.get_current_token()
@@ -285,7 +283,6 @@ class TagsWorkerStore(AccountDataWorkerStore):
await self.db_pool.runInteraction("remove_tag", remove_tag_txn, next_id)
self.get_tags_for_user.invalidate((user_id,))
self.get_tags_for_room.invalidate((user_id, room_id))
return self._account_data_id_gen.get_current_token()
@@ -339,19 +336,9 @@ class TagsWorkerStore(AccountDataWorkerStore):
rows: Iterable[Any],
) -> None:
if stream_name == AccountDataStream.NAME:
# Cast is safe because the `AccountDataStream` should only be giving us
# `AccountDataStreamRow`
account_data_stream_rows: List[AccountDataStream.AccountDataStreamRow] = (
cast(List[AccountDataStream.AccountDataStreamRow], rows)
)
for row in account_data_stream_rows:
for row in rows:
if row.data_type == AccountDataTypes.TAG:
self.get_tags_for_user.invalidate((row.user_id,))
if row.room_id:
self.get_tags_for_room.invalidate((row.user_id, row.room_id))
else:
self.get_tags_for_room.invalidate((row.user_id,))
self._account_data_stream_cache.entity_has_changed(
row.user_id, token
)


@@ -2,7 +2,7 @@
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2021 The Matrix.org Foundation C.I.C.
# Copyright (C) 2023-2024 New Vector, Ltd
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
@@ -19,7 +19,7 @@
#
#
SCHEMA_VERSION = 88 # remember to update the list below when updating
SCHEMA_VERSION = 87 # remember to update the list below when updating
"""Represents the expectations made by the codebase about the database schema
This should be incremented whenever the codebase changes its requirements on the
@@ -149,10 +149,6 @@ Changes in SCHEMA_VERSION = 87
- Add tables for storing the per-connection state for sliding sync requests:
sliding_sync_connections, sliding_sync_connection_positions, sliding_sync_connection_required_state,
sliding_sync_connection_room_configs, sliding_sync_connection_streams
Changes in SCHEMA_VERSION = 88
- MSC4140: Add `delayed_events` table that keeps track of events that are to
be posted in response to a resettable timeout or an on-demand action.
"""


@@ -1,30 +0,0 @@
CREATE TABLE delayed_events (
delay_id TEXT NOT NULL,
user_localpart TEXT NOT NULL,
device_id TEXT,
delay BIGINT NOT NULL,
send_ts BIGINT NOT NULL,
room_id TEXT NOT NULL,
event_type TEXT NOT NULL,
state_key TEXT,
origin_server_ts BIGINT,
content bytea NOT NULL,
is_processed BOOLEAN NOT NULL DEFAULT FALSE,
PRIMARY KEY (user_localpart, delay_id)
);
CREATE INDEX delayed_events_send_ts ON delayed_events (send_ts);
CREATE INDEX delayed_events_is_processed ON delayed_events (is_processed);
CREATE INDEX delayed_events_room_state_event_idx ON delayed_events (room_id, event_type, state_key) WHERE state_key IS NOT NULL;
CREATE TABLE delayed_events_stream_pos (
Lock CHAR(1) NOT NULL DEFAULT 'X' UNIQUE, -- Makes sure this table only has one row.
stream_id BIGINT NOT NULL,
CHECK (Lock='X')
);
-- Start processing events from the point this migration was run, rather
-- than the beginning of time.
INSERT INTO delayed_events_stream_pos (
stream_id
) SELECT COALESCE(MAX(stream_ordering), 0) from events;
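The `Lock CHAR(1) ... CHECK (Lock='X')` pattern above pins the table to exactly one row, so updates need no WHERE clause. A self-contained illustration using sqlite3 (the schema itself targets Postgres, e.g. `bytea`, so this is purely demonstrative):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE delayed_events_stream_pos (
        Lock CHAR(1) NOT NULL DEFAULT 'X' UNIQUE,
        stream_id BIGINT NOT NULL,
        CHECK (Lock='X')
    );
    INSERT INTO delayed_events_stream_pos (stream_id) VALUES (0);
    """
)
# A second INSERT would violate the UNIQUE constraint on Lock, and any
# non-'X' value would violate the CHECK, so exactly one row can exist.
conn.execute("UPDATE delayed_events_stream_pos SET stream_id = ?", (42,))
(pos,) = conn.execute(
    "SELECT stream_id FROM delayed_events_stream_pos"
).fetchone()
assert pos == 42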


@@ -21,11 +21,9 @@
import hashlib
import hmac
import json
import os
import urllib.parse
from binascii import unhexlify
from http import HTTPStatus
from typing import Dict, List, Optional
from unittest.mock import AsyncMock, Mock, patch
@@ -35,7 +33,7 @@ from twisted.test.proto_helpers import MemoryReactor
from twisted.web.resource import Resource
import synapse.rest.admin
from synapse.api.constants import ApprovalNoticeMedium, EventTypes, LoginType, UserTypes
from synapse.api.constants import ApprovalNoticeMedium, LoginType, UserTypes
from synapse.api.errors import Codes, HttpResponseException, ResourceLimitError
from synapse.api.room_versions import RoomVersions
from synapse.media.filepath import MediaFilePaths
@@ -5091,271 +5089,3 @@ class UserSuspensionTestCase(unittest.HomeserverTestCase):
res5 = self.get_success(self.store.get_user_suspended_status(self.bad_user))
self.assertEqual(True, res5)
class UserRedactionTestCase(unittest.HomeserverTestCase):
servlets = [
synapse.rest.admin.register_servlets,
login.register_servlets,
admin.register_servlets,
room.register_servlets,
sync.register_servlets,
]
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.admin = self.register_user("thomas", "pass", True)
self.admin_tok = self.login("thomas", "pass")
self.bad_user = self.register_user("teresa", "pass")
self.bad_user_tok = self.login("teresa", "pass")
self.store = hs.get_datastores().main
self.spam_checker = hs.get_module_api_callbacks().spam_checker
# create rooms - room versions 11+ store the `redacts` key in content while
# earlier ones don't so we use a mix of room versions
self.rm1 = self.helper.create_room_as(
self.admin, tok=self.admin_tok, room_version="7"
)
self.rm2 = self.helper.create_room_as(self.admin, tok=self.admin_tok)
self.rm3 = self.helper.create_room_as(
self.admin, tok=self.admin_tok, room_version="11"
)
def test_redact_messages_all_rooms(self) -> None:
"""
Test that request to redact events in all rooms user is member of is successful
"""
# join rooms, send some messages
originals = []
for rm in [self.rm1, self.rm2, self.rm3]:
join = self.helper.join(rm, self.bad_user, tok=self.bad_user_tok)
originals.append(join["event_id"])
for i in range(15):
event = {"body": f"hello{i}", "msgtype": "m.text"}
res = self.helper.send_event(
rm, "m.room.message", event, tok=self.bad_user_tok, expect_code=200
)
originals.append(res["event_id"])
# redact all events in all rooms
channel = self.make_request(
"POST",
f"/_synapse/admin/v1/user/{self.bad_user}/redact",
content={"rooms": []},
access_token=self.admin_tok,
)
self.assertEqual(channel.code, 200)
matched = []
for rm in [self.rm1, self.rm2, self.rm3]:
filter = json.dumps({"types": [EventTypes.Redaction]})
channel = self.make_request(
"GET",
f"rooms/{rm}/messages?filter={filter}&limit=50",
access_token=self.admin_tok,
)
self.assertEqual(channel.code, 200)
for event in channel.json_body["chunk"]:
for event_id in originals:
if (
event["type"] == "m.room.redaction"
and event["redacts"] == event_id
):
matched.append(event_id)
self.assertEqual(len(matched), len(originals))
def test_redact_messages_specific_rooms(self) -> None:
"""
Test that request to redact events in specified rooms user is member of is successful
"""
originals = []
for rm in [self.rm1, self.rm2, self.rm3]:
join = self.helper.join(rm, self.bad_user, tok=self.bad_user_tok)
originals.append(join["event_id"])
for i in range(15):
event = {"body": f"hello{i}", "msgtype": "m.text"}
res = self.helper.send_event(
rm, "m.room.message", event, tok=self.bad_user_tok
)
originals.append(res["event_id"])
# redact messages in rooms 1 and 3
channel = self.make_request(
"POST",
f"/_synapse/admin/v1/user/{self.bad_user}/redact",
content={"rooms": [self.rm1, self.rm3]},
access_token=self.admin_tok,
)
self.assertEqual(channel.code, 200)
# messages in requested rooms are redacted
for rm in [self.rm1, self.rm3]:
filter = json.dumps({"types": [EventTypes.Redaction]})
channel = self.make_request(
"GET",
f"rooms/{rm}/messages?filter={filter}&limit=50",
access_token=self.admin_tok,
)
self.assertEqual(channel.code, 200)
matches = []
for event in channel.json_body["chunk"]:
for event_id in originals:
if (
event["type"] == "m.room.redaction"
and event["redacts"] == event_id
):
matches.append((event_id, event))
# we redacted 16 messages
self.assertEqual(len(matches), 16)
channel = self.make_request(
"GET", f"rooms/{self.rm2}/messages?limit=50", access_token=self.admin_tok
)
self.assertEqual(channel.code, 200)
# messages in remaining room are not
for event in channel.json_body["chunk"]:
if event["type"] == "m.room.redaction":
self.fail("found redaction in room 2")
def test_redact_status(self) -> None:
rm2_originals = []
for rm in [self.rm1, self.rm2, self.rm3]:
join = self.helper.join(rm, self.bad_user, tok=self.bad_user_tok)
if rm == self.rm2:
rm2_originals.append(join["event_id"])
for i in range(5):
event = {"body": f"hello{i}", "msgtype": "m.text"}
res = self.helper.send_event(
rm, "m.room.message", event, tok=self.bad_user_tok
)
if rm == self.rm2:
rm2_originals.append(res["event_id"])
# redact messages in rooms 1 and 3
channel = self.make_request(
"POST",
f"/_synapse/admin/v1/user/{self.bad_user}/redact",
content={"rooms": [self.rm1, self.rm3]},
access_token=self.admin_tok,
)
self.assertEqual(channel.code, 200)
id = channel.json_body.get("redact_id")
channel2 = self.make_request(
"GET",
f"/_synapse/admin/v1/user/redact_status/{id}",
access_token=self.admin_tok,
)
self.assertEqual(channel2.code, 200)
self.assertEqual(channel2.json_body.get("status"), "complete")
self.assertEqual(channel2.json_body.get("failed_redactions"), {})
# mock that will cause persisting the redaction events to fail
async def check_event_for_spam(event: str) -> str:
return "spam"
self.spam_checker.check_event_for_spam = check_event_for_spam # type: ignore
channel3 = self.make_request(
"POST",
f"/_synapse/admin/v1/user/{self.bad_user}/redact",
content={"rooms": [self.rm2]},
access_token=self.admin_tok,
)
self.assertEqual(channel3.code, 200)
id = channel3.json_body.get("redact_id")
channel4 = self.make_request(
"GET",
f"/_synapse/admin/v1/user/redact_status/{id}",
access_token=self.admin_tok,
)
self.assertEqual(channel4.code, 200)
self.assertEqual(channel4.json_body.get("status"), "complete")
failed_redactions = channel4.json_body.get("failed_redactions")
assert failed_redactions is not None
matched = []
for original in rm2_originals:
if failed_redactions.get(original) is not None:
matched.append(original)
self.assertEqual(len(matched), len(rm2_originals))
def test_admin_redact_works_if_user_kicked_or_banned(self) -> None:
originals = []
for rm in [self.rm1, self.rm2, self.rm3]:
join = self.helper.join(rm, self.bad_user, tok=self.bad_user_tok)
originals.append(join["event_id"])
for i in range(5):
event = {"body": f"hello{i}", "msgtype": "m.text"}
res = self.helper.send_event(
rm, "m.room.message", event, tok=self.bad_user_tok
)
originals.append(res["event_id"])
# kick user from rooms 1 and 3
for r in [self.rm1, self.rm3]:
channel = self.make_request(
"POST",
f"/_matrix/client/r0/rooms/{r}/kick",
content={"reason": "being a bummer", "user_id": self.bad_user},
access_token=self.admin_tok,
)
self.assertEqual(channel.code, HTTPStatus.OK, channel.result)
# redact messages in room 1 and 3
channel1 = self.make_request(
"POST",
f"/_synapse/admin/v1/user/{self.bad_user}/redact",
content={"rooms": [self.rm1, self.rm3]},
access_token=self.admin_tok,
)
self.assertEqual(channel1.code, 200)
id = channel1.json_body.get("redact_id")
# check that there were no failed redactions in room 1 and 3
channel2 = self.make_request(
"GET",
f"/_synapse/admin/v1/user/redact_status/{id}",
access_token=self.admin_tok,
)
self.assertEqual(channel2.code, 200)
self.assertEqual(channel2.json_body.get("status"), "complete")
failed_redactions = channel2.json_body.get("failed_redactions")
self.assertEqual(failed_redactions, {})
# ban user
channel3 = self.make_request(
"POST",
f"/_matrix/client/r0/rooms/{self.rm2}/ban",
content={"reason": "being a bummer", "user_id": self.bad_user},
access_token=self.admin_tok,
)
self.assertEqual(channel3.code, HTTPStatus.OK, channel3.result)
# redact messages in room 2
channel4 = self.make_request(
"POST",
f"/_synapse/admin/v1/user/{self.bad_user}/redact",
content={"rooms": [self.rm2]},
access_token=self.admin_tok,
)
self.assertEqual(channel4.code, 200)
id2 = channel4.json_body.get("redact_id")
# check that there were no failed redactions in room 2
channel5 = self.make_request(
"GET",
f"/_synapse/admin/v1/user/redact_status/{id2}",
access_token=self.admin_tok,
)
self.assertEqual(channel5.code, 200)
self.assertEqual(channel5.json_body.get("status"), "complete")
failed_redactions = channel5.json_body.get("failed_redactions")
self.assertEqual(failed_redactions, {})


@@ -11,11 +11,9 @@
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import enum
import logging
from parameterized import parameterized, parameterized_class
from typing_extensions import assert_never
from parameterized import parameterized_class
from twisted.test.proto_helpers import MemoryReactor
@@ -32,11 +30,6 @@ from tests.server import TimedOutException
logger = logging.getLogger(__name__)
class TagAction(enum.Enum):
ADD = enum.auto()
REMOVE = enum.auto()
# FIXME: This can be removed once we bump `SCHEMA_COMPAT_VERSION` and run the
# foreground update for
# `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` (tracked by
@@ -357,15 +350,7 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
account_data_map[AccountDataTypes.TAG], {"tags": {"m.favourite": {}}}
)
@parameterized.expand(
[
("add tags", TagAction.ADD),
("remove tags", TagAction.REMOVE),
]
)
def test_room_account_data_incremental_sync(
self, test_description: str, tag_action: TagAction
) -> None:
def test_room_account_data_incremental_sync(self) -> None:
"""
On incremental sync, we return all account data for a given room but only for
rooms that we request and are being returned in the Sliding Sync response.
@@ -449,42 +434,23 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
content={"roo": "rar"},
)
)
if tag_action == TagAction.ADD:
# Add another room tag
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id1,
tag="m.server_notice",
content={},
)
# Add another room tag
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id1,
tag="m.server_notice",
content={},
)
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id2,
tag="m.server_notice",
content={},
)
)
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id2,
tag="m.server_notice",
content={},
)
elif tag_action == TagAction.REMOVE:
# Remove the room tag
self.get_success(
self.account_data_handler.remove_tag_from_room(
user_id=user1_id,
room_id=room_id1,
tag="m.favourite",
)
)
self.get_success(
self.account_data_handler.remove_tag_from_room(
user_id=user1_id,
room_id=room_id2,
tag="m.favourite",
)
)
else:
assert_never(tag_action)
)
# Make an incremental Sliding Sync request with the account_data extension enabled
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
@@ -511,32 +477,14 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
exact=True,
)
self.assertEqual(account_data_map["org.matrix.roorarraz2"], {"roo": "rar"})
if tag_action == TagAction.ADD:
self.assertEqual(
account_data_map[AccountDataTypes.TAG],
{"tags": {"m.favourite": {}, "m.server_notice": {}}},
)
elif tag_action == TagAction.REMOVE:
# If we previously showed the client that the room has tags, when it no
# longer has tags, we need to show them an empty map.
self.assertEqual(
account_data_map[AccountDataTypes.TAG],
{"tags": {}},
)
else:
assert_never(tag_action)
self.assertEqual(
account_data_map[AccountDataTypes.TAG],
{"tags": {"m.favourite": {}, "m.server_notice": {}}},
)
@parameterized.expand(
[
("add tags", TagAction.ADD),
("remove tags", TagAction.REMOVE),
]
)
def test_room_account_data_incremental_sync_out_of_range_never(
self, test_description: str, tag_action: TagAction
) -> None:
"""Tests that we don't return account data for rooms that are out of
range, but then do send all account data once they're in range.
def test_room_account_data_incremental_sync_out_of_range_never(self) -> None:
"""Tests that we don't return account data for rooms that fall out of
range, but then do send all account data once they're back in range.
(initial/HaveSentRoomFlag.NEVER)
"""
@@ -629,42 +577,23 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
content={"roo": "rar"},
)
)
if tag_action == TagAction.ADD:
# Add another room tag
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id1,
tag="m.server_notice",
content={},
)
# Add another room tag
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id1,
tag="m.server_notice",
content={},
)
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id2,
tag="m.server_notice",
content={},
)
)
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id2,
tag="m.server_notice",
content={},
)
elif tag_action == TagAction.REMOVE:
# Remove the room tag
self.get_success(
self.account_data_handler.remove_tag_from_room(
user_id=user1_id,
room_id=room_id1,
tag="m.favourite",
)
)
self.get_success(
self.account_data_handler.remove_tag_from_room(
user_id=user1_id,
room_id=room_id2,
tag="m.favourite",
)
)
else:
assert_never(tag_action)
)
# Move room2 into range.
self.helper.send(room_id2, body="new event", tok=user1_tok)
@@ -688,43 +617,19 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
.get("rooms")
.get(room_id2)
}
expected_account_data_keys = {
"org.matrix.roorarraz",
"org.matrix.roorarraz2",
}
if tag_action == TagAction.ADD:
expected_account_data_keys.add(AccountDataTypes.TAG)
self.assertIncludes(
account_data_map.keys(),
expected_account_data_keys,
{"org.matrix.roorarraz", "org.matrix.roorarraz2", AccountDataTypes.TAG},
exact=True,
)
self.assertEqual(account_data_map["org.matrix.roorarraz"], {"roo": "rar"})
self.assertEqual(account_data_map["org.matrix.roorarraz2"], {"roo": "rar"})
if tag_action == TagAction.ADD:
self.assertEqual(
account_data_map[AccountDataTypes.TAG],
{"tags": {"m.favourite": {}, "m.server_notice": {}}},
)
elif tag_action == TagAction.REMOVE:
# Since we never told the client about the room tags, we don't need to say
# anything if there are no tags now (the client doesn't need an update).
self.assertIsNone(
account_data_map.get(AccountDataTypes.TAG),
account_data_map,
)
else:
assert_never(tag_action)
self.assertEqual(
account_data_map[AccountDataTypes.TAG],
{"tags": {"m.favourite": {}, "m.server_notice": {}}},
)
@parameterized.expand(
[
("add tags", TagAction.ADD),
("remove tags", TagAction.REMOVE),
]
)
def test_room_account_data_incremental_sync_out_of_range_previously(
self, test_description: str, tag_action: TagAction
) -> None:
def test_room_account_data_incremental_sync_out_of_range_previously(self) -> None:
"""Tests that we don't return account data for rooms that fall out of
range, but then do send all account data that has changed once they're back in range.
@@ -820,42 +725,23 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
content={"roo": "rar"},
)
)
if tag_action == TagAction.ADD:
# Add another room tag
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id1,
tag="m.server_notice",
content={},
)
# Add another room tag
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id1,
tag="m.server_notice",
content={},
)
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id2,
tag="m.server_notice",
content={},
)
)
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id2,
tag="m.server_notice",
content={},
)
elif tag_action == TagAction.REMOVE:
# Remove the room tag
self.get_success(
self.account_data_handler.remove_tag_from_room(
user_id=user1_id,
room_id=room_id1,
tag="m.favourite",
)
)
self.get_success(
self.account_data_handler.remove_tag_from_room(
user_id=user1_id,
room_id=room_id2,
tag="m.favourite",
)
)
else:
assert_never(tag_action)
)
# Make an incremental Sliding Sync request for just room1
response_body, from_token = self.do_sync(
@@ -922,20 +808,10 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
exact=True,
)
self.assertEqual(account_data_map["org.matrix.roorarraz2"], {"roo": "rar"})
if tag_action == TagAction.ADD:
self.assertEqual(
account_data_map[AccountDataTypes.TAG],
{"tags": {"m.favourite": {}, "m.server_notice": {}}},
)
elif tag_action == TagAction.REMOVE:
# If we previously showed the client that the room has tags, when it no
# longer has tags, we need to show them an empty map.
self.assertEqual(
account_data_map[AccountDataTypes.TAG],
{"tags": {}},
)
else:
assert_never(tag_action)
self.assertEqual(
account_data_map[AccountDataTypes.TAG],
{"tags": {"m.favourite": {}, "m.server_notice": {}}},
)
def test_wait_for_new_data(self) -> None:
"""


@@ -230,21 +230,32 @@ class SlidingSyncFiltersTestCase(SlidingSyncBase):
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
# Make sure the response has the lists we requested
self.assertIncludes(
self.assertListEqual(
list(response_body["lists"].keys()),
["all-list", "foo-list"],
response_body["lists"].keys(),
{"all-list", "foo-list"},
)
# Make sure the lists have the correct rooms
self.assertIncludes(
set(response_body["lists"]["all-list"]["ops"][0]["room_ids"]),
{space_room_id, room_id},
exact=True,
self.assertListEqual(
list(response_body["lists"]["all-list"]["ops"]),
[
{
"op": "SYNC",
"range": [0, 99],
"room_ids": [space_room_id, room_id],
}
],
)
self.assertIncludes(
set(response_body["lists"]["foo-list"]["ops"][0]["room_ids"]),
{space_room_id},
exact=True,
self.assertListEqual(
list(response_body["lists"]["foo-list"]["ops"]),
[
{
"op": "SYNC",
"range": [0, 99],
"room_ids": [space_room_id],
}
],
)
# Everyone leaves the encrypted space room
@@ -273,23 +284,26 @@ class SlidingSyncFiltersTestCase(SlidingSyncBase):
}
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
# Make sure the response has the lists we requested
self.assertIncludes(
response_body["lists"].keys(),
{"all-list", "foo-list"},
exact=True,
)
# Make sure the lists have the correct rooms even though we `newly_left`
self.assertIncludes(
set(response_body["lists"]["all-list"]["ops"][0]["room_ids"]),
{space_room_id, room_id},
exact=True,
self.assertListEqual(
list(response_body["lists"]["all-list"]["ops"]),
[
{
"op": "SYNC",
"range": [0, 99],
"room_ids": [space_room_id, room_id],
}
],
)
self.assertIncludes(
set(response_body["lists"]["foo-list"]["ops"][0]["room_ids"]),
{space_room_id},
exact=True,
self.assertListEqual(
list(response_body["lists"]["foo-list"]["ops"]),
[
{
"op": "SYNC",
"range": [0, 99],
"room_ids": [space_room_id],
}
],
)
def test_filters_is_dm(self) -> None:


@@ -935,8 +935,7 @@ class SlidingSyncRoomsMetaTestCase(SlidingSyncBase):
op = response_body["lists"]["foo-list"]["ops"][0]
self.assertEqual(op["op"], "SYNC")
self.assertEqual(op["range"], [0, 1])
# Note that we don't sort the rooms when the range includes all of the rooms, so
# we just assert that the rooms are included
# Note that we don't order the ops anymore, so we need to compare sets.
self.assertIncludes(set(op["room_ids"]), {room_id1, room_id2}, exact=True)
# The `bump_stamp` for room1 should point at the latest message (not the
@@ -1198,55 +1197,3 @@ class SlidingSyncRoomsMetaTestCase(SlidingSyncBase):
joined_dm_room_id: True,
},
)
def test_old_room_with_unknown_room_version(self) -> None:
"""Test that an old room with unknown room version does not break
sync."""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
# We first create a standard room, then we'll change the room version in
# the DB.
room_id = self.helper.create_room_as(
user1_id,
tok=user1_tok,
)
# Poke the database and update the room version to an unknown one.
self.get_success(
self.hs.get_datastores().main.db_pool.simple_update(
"rooms",
keyvalues={"room_id": room_id},
updatevalues={"room_version": "unknown-room-version"},
desc="updated-room-version",
)
)
# Invalidate method so that it returns the currently updated version
# instead of the cached version.
self.hs.get_datastores().main.get_room_version_id.invalidate((room_id,))
# For old unknown room versions we won't have an entry in this table
# (due to us skipping unknown room versions in the background update).
self.get_success(
self.store.db_pool.simple_delete(
table="sliding_sync_joined_rooms",
keyvalues={"room_id": room_id},
desc="delete_sliding_room",
)
)
# Also invalidate some caches to ensure we pull things from the DB.
self.store._events_stream_cache._entity_to_key.pop(room_id)
self.store._get_max_event_pos.invalidate((room_id,))
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [],
"timeline_limit": 5,
}
}
}
response_body, _ = self.do_sync(sync_body, tok=user1_tok)

View File

@@ -15,7 +15,7 @@ import logging
from typing import Any, Dict, Iterable, List, Literal, Optional, Tuple
from unittest.mock import AsyncMock
from parameterized import parameterized, parameterized_class
from parameterized import parameterized_class
from typing_extensions import assert_never
from twisted.test.proto_helpers import MemoryReactor
@@ -23,17 +23,13 @@ from twisted.test.proto_helpers import MemoryReactor
import synapse.rest.admin
from synapse.api.constants import (
AccountDataTypes,
EventContentFields,
EventTypes,
JoinRules,
Membership,
RoomTypes,
)
from synapse.api.room_versions import RoomVersions
from synapse.events import EventBase, StrippedStateEvent, make_event_from_dict
from synapse.events.snapshot import EventContext
from synapse.handlers.sliding_sync import StateValues
from synapse.rest.client import account_data, devices, login, receipts, room, sync
from synapse.rest.client import devices, login, receipts, room, sync
from synapse.server import HomeServer
from synapse.types import (
JsonDict,
@@ -47,7 +43,6 @@ from synapse.util.stringutils import random_string
from tests import unittest
from tests.server import TimedOutException
from tests.test_utils.event_injection import create_event
logger = logging.getLogger(__name__)
@@ -418,7 +413,6 @@ class SlidingSyncTestCase(SlidingSyncBase):
sync.register_servlets,
devices.register_servlets,
receipts.register_servlets,
account_data.register_servlets,
]
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
@@ -426,9 +420,6 @@ class SlidingSyncTestCase(SlidingSyncBase):
self.event_sources = hs.get_event_sources()
self.storage_controllers = hs.get_storage_controllers()
self.account_data_handler = hs.get_account_data_handler()
persistence = self.hs.get_storage_controllers().persistence
assert persistence is not None
self.persistence = persistence
super().prepare(reactor, clock, hs)
@@ -679,116 +670,6 @@ class SlidingSyncTestCase(SlidingSyncBase):
exact=True,
)
def test_ignored_user_invites_initial_sync(self) -> None:
"""
Make sure we ignore invites if they are from a user on the `m.ignored_user_list` on
initial sync.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")
# Create a room that user1 is already in
room_id1 = self.helper.create_room_as(user1_id, tok=user1_tok)
# Create a room that user2 is already in
room_id2 = self.helper.create_room_as(user2_id, tok=user2_tok)
# User1 is invited to room_id2
self.helper.invite(room_id2, src=user2_id, targ=user1_id, tok=user2_tok)
# Sync once before we ignore to make sure the rooms can show up
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 99]],
"required_state": [],
"timeline_limit": 0,
},
}
}
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
# room_id2 shows up because we haven't ignored the user yet
self.assertIncludes(
set(response_body["lists"]["foo-list"]["ops"][0]["room_ids"]),
{room_id1, room_id2},
exact=True,
)
# User1 ignores user2
channel = self.make_request(
"PUT",
f"/_matrix/client/v3/user/{user1_id}/account_data/{AccountDataTypes.IGNORED_USER_LIST}",
content={"ignored_users": {user2_id: {}}},
access_token=user1_tok,
)
self.assertEqual(channel.code, 200, channel.result)
# Sync again (initial sync)
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
# The invite for room_id2 should no longer show up because user2 is ignored
self.assertIncludes(
set(response_body["lists"]["foo-list"]["ops"][0]["room_ids"]),
{room_id1},
exact=True,
)
def test_ignored_user_invites_incremental_sync(self) -> None:
"""
Make sure we ignore invites if they are from a user on the `m.ignored_user_list` on
incremental sync.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")
# Create a room that user1 is already in
room_id1 = self.helper.create_room_as(user1_id, tok=user1_tok)
# Create a room that user2 is already in
room_id2 = self.helper.create_room_as(user2_id, tok=user2_tok)
# User1 ignores user2
channel = self.make_request(
"PUT",
f"/_matrix/client/v3/user/{user1_id}/account_data/{AccountDataTypes.IGNORED_USER_LIST}",
content={"ignored_users": {user2_id: {}}},
access_token=user1_tok,
)
self.assertEqual(channel.code, 200, channel.result)
# Initial sync
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 99]],
"required_state": [],
"timeline_limit": 0,
},
}
}
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
# User1 only has membership in room_id1 at this point
self.assertIncludes(
set(response_body["lists"]["foo-list"]["ops"][0]["room_ids"]),
{room_id1},
exact=True,
)
# User1 is invited to room_id2 after the initial sync
self.helper.invite(room_id2, src=user2_id, targ=user1_id, tok=user2_tok)
# Sync again (incremental sync)
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
# The invite for room_id2 doesn't show up because user2 is ignored
self.assertIncludes(
set(response_body["lists"]["foo-list"]["ops"][0]["room_ids"]),
{room_id1},
exact=True,
)
def test_sort_list(self) -> None:
"""
Test that the `lists` are sorted by `stream_ordering`
@@ -805,38 +686,7 @@ class SlidingSyncTestCase(SlidingSyncBase):
self.helper.send(room_id1, "activity in room1", tok=user1_tok)
self.helper.send(room_id2, "activity in room2", tok=user1_tok)
# Make the Sliding Sync request where the range includes *some* of the rooms
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [],
"timeline_limit": 1,
}
}
}
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
# Make sure it has the foo-list we requested
self.assertIncludes(
response_body["lists"].keys(),
{"foo-list"},
)
# Make sure the list is sorted in the way we expect (we only sort when the range
# doesn't include all of the rooms)
self.assertListEqual(
list(response_body["lists"]["foo-list"]["ops"]),
[
{
"op": "SYNC",
"range": [0, 1],
"room_ids": [room_id2, room_id1],
}
],
response_body["lists"]["foo-list"],
)
# Make the Sliding Sync request where the range includes *all* of the rooms
# Make the Sliding Sync request
sync_body = {
"lists": {
"foo-list": {
@@ -849,24 +699,24 @@ class SlidingSyncTestCase(SlidingSyncBase):
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
# Make sure it has the foo-list we requested
self.assertIncludes(
self.assertListEqual(
list(response_body["lists"].keys()),
["foo-list"],
response_body["lists"].keys(),
{"foo-list"},
)
# Since the range includes all of the rooms, we don't sort the list
self.assertEqual(
len(response_body["lists"]["foo-list"]["ops"]),
1,
# Make sure the list is sorted in the way we expect
self.assertListEqual(
list(response_body["lists"]["foo-list"]["ops"]),
[
{
"op": "SYNC",
"range": [0, 99],
"room_ids": [room_id2, room_id1, room_id3],
}
],
response_body["lists"]["foo-list"],
)
op = response_body["lists"]["foo-list"]["ops"][0]
self.assertEqual(op["op"], "SYNC")
self.assertEqual(op["range"], [0, 99])
# Note that we don't sort the rooms when the range includes all of the rooms, so
# we just assert that the rooms are included
self.assertIncludes(
set(op["room_ids"]), {room_id1, room_id2, room_id3}, exact=True
)
def test_sliced_windows(self) -> None:
"""
@@ -996,472 +846,3 @@ class SlidingSyncTestCase(SlidingSyncBase):
# Make the Sliding Sync request
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
self.assertEqual(response_body["rooms"][room_id1]["initial"], True)
def test_state_reset_room_comes_down_incremental_sync(self) -> None:
"""Test that a room that we were state reset out of comes down
incremental sync"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")
room_id1 = self.helper.create_room_as(
user2_id,
is_public=True,
tok=user2_tok,
extra_content={
"name": "my super room",
},
)
# Create an event for us to point back to for the state reset
event_response = self.helper.send(room_id1, "test", tok=user2_tok)
event_id = event_response["event_id"]
self.helper.join(room_id1, user1_id, tok=user1_tok)
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
# Request all state just to see what we get back when we are
# state reset out of the room
[StateValues.WILDCARD, StateValues.WILDCARD]
],
"timeline_limit": 1,
}
}
}
# Make the Sliding Sync request
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
# Make sure we see room1
self.assertIncludes(set(response_body["rooms"].keys()), {room_id1}, exact=True)
self.assertEqual(response_body["rooms"][room_id1]["initial"], True)
# Trigger a state reset
join_rule_event, join_rule_context = self.get_success(
create_event(
self.hs,
prev_event_ids=[event_id],
type=EventTypes.JoinRules,
state_key="",
content={"join_rule": JoinRules.INVITE},
sender=user2_id,
room_id=room_id1,
room_version=self.get_success(self.store.get_room_version_id(room_id1)),
)
)
_, join_rule_event_pos, _ = self.get_success(
self.persistence.persist_event(join_rule_event, join_rule_context)
)
# FIXME: We're manually busting the cache since
# https://github.com/element-hq/synapse/issues/17368 is not solved yet
self.store._membership_stream_cache.entity_has_changed(
user1_id, join_rule_event_pos.stream
)
# Ensure that the state reset worked and only user2 is in the room now
users_in_room = self.get_success(self.store.get_users_in_room(room_id1))
self.assertIncludes(set(users_in_room), {user2_id}, exact=True)
state_map_at_reset = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)
# Update the state after user1 was state reset out of the room
self.helper.send_state(
room_id1,
EventTypes.Name,
{EventContentFields.ROOM_NAME: "my super duper room"},
tok=user2_tok,
)
# Make another Sliding Sync request (incremental)
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
# Expect to see room1 because it is `newly_left` thanks to being state reset out
# of it since the last time we synced. We need to let the client know that
# something happened and that they are no longer in the room.
self.assertIncludes(set(response_body["rooms"].keys()), {room_id1}, exact=True)
# We set `initial=True` to indicate that the client should reset the state they
# have about the room
self.assertEqual(response_body["rooms"][room_id1]["initial"], True)
# They shouldn't see anything past the state reset
self._assertRequiredStateIncludes(
response_body["rooms"][room_id1]["required_state"],
# We should see all the state events in the room
state_map_at_reset.values(),
exact=True,
)
# The position where the state reset happened
self.assertEqual(
response_body["rooms"][room_id1]["bump_stamp"],
join_rule_event_pos.stream,
response_body["rooms"][room_id1],
)
# Other non-important things. We just want to check what these are so we know
# what happens in a state reset scenario.
#
# Room name was set at the time of the state reset so we should still be able to
# see it.
self.assertEqual(response_body["rooms"][room_id1]["name"], "my super room")
# Could be set but there is no avatar for this room
self.assertIsNone(
response_body["rooms"][room_id1].get("avatar"),
response_body["rooms"][room_id1],
)
# Could be set but this room isn't marked as a DM
self.assertIsNone(
response_body["rooms"][room_id1].get("is_dm"),
response_body["rooms"][room_id1],
)
# Empty timeline because we are not in the room at all (they are all being
# filtered out)
self.assertIsNone(
response_body["rooms"][room_id1].get("timeline"),
response_body["rooms"][room_id1],
)
# `limited` since we're not providing any timeline events but there are some in
# the room.
self.assertEqual(response_body["rooms"][room_id1]["limited"], True)
# User is no longer in the room so they can't see this info
self.assertIsNone(
response_body["rooms"][room_id1].get("joined_count"),
response_body["rooms"][room_id1],
)
self.assertIsNone(
response_body["rooms"][room_id1].get("invited_count"),
response_body["rooms"][room_id1],
)
def test_state_reset_previously_room_comes_down_incremental_sync_with_filters(
self,
) -> None:
"""
Test that a room that we were state reset out of should always be sent down
regardless of the filters if it has been sent down the connection before.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")
# Create a space room
space_room_id = self.helper.create_room_as(
user2_id,
tok=user2_tok,
extra_content={
"creation_content": {EventContentFields.ROOM_TYPE: RoomTypes.SPACE},
"name": "my super space",
},
)
# Create an event for us to point back to for the state reset
event_response = self.helper.send(space_room_id, "test", tok=user2_tok)
event_id = event_response["event_id"]
self.helper.join(space_room_id, user1_id, tok=user1_tok)
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
# Request all state just to see what we get back when we are
# state reset out of the room
[StateValues.WILDCARD, StateValues.WILDCARD]
],
"timeline_limit": 1,
"filters": {
"room_types": [RoomTypes.SPACE],
},
}
}
}
# Make the Sliding Sync request
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
# Make sure we see room1
self.assertIncludes(
set(response_body["rooms"].keys()), {space_room_id}, exact=True
)
self.assertEqual(response_body["rooms"][space_room_id]["initial"], True)
# Trigger a state reset
join_rule_event, join_rule_context = self.get_success(
create_event(
self.hs,
prev_event_ids=[event_id],
type=EventTypes.JoinRules,
state_key="",
content={"join_rule": JoinRules.INVITE},
sender=user2_id,
room_id=space_room_id,
room_version=self.get_success(
self.store.get_room_version_id(space_room_id)
),
)
)
_, join_rule_event_pos, _ = self.get_success(
self.persistence.persist_event(join_rule_event, join_rule_context)
)
# FIXME: We're manually busting the cache since
# https://github.com/element-hq/synapse/issues/17368 is not solved yet
self.store._membership_stream_cache.entity_has_changed(
user1_id, join_rule_event_pos.stream
)
# Ensure that the state reset worked and only user2 is in the room now
users_in_room = self.get_success(self.store.get_users_in_room(space_room_id))
self.assertIncludes(set(users_in_room), {user2_id}, exact=True)
state_map_at_reset = self.get_success(
self.storage_controllers.state.get_current_state(space_room_id)
)
# Update the state after user1 was state reset out of the room
self.helper.send_state(
space_room_id,
EventTypes.Name,
{EventContentFields.ROOM_NAME: "my super duper space"},
tok=user2_tok,
)
# User2 also leaves the room so the server is no longer participating in the room
# and we don't have access to current state
self.helper.leave(space_room_id, user2_id, tok=user2_tok)
# Make another Sliding Sync request (incremental)
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
# Expect to see room1 because it is `newly_left` thanks to being state reset out
# of it since the last time we synced. We need to let the client know that
# something happened and that they are no longer in the room.
self.assertIncludes(
set(response_body["rooms"].keys()), {space_room_id}, exact=True
)
# We set `initial=True` to indicate that the client should reset the state they
# have about the room
self.assertEqual(response_body["rooms"][space_room_id]["initial"], True)
# They shouldn't see anything past the state reset
self._assertRequiredStateIncludes(
response_body["rooms"][space_room_id]["required_state"],
            # We should see all of the room's state as of the state reset
state_map_at_reset.values(),
exact=True,
)
# The position where the state reset happened
self.assertEqual(
response_body["rooms"][space_room_id]["bump_stamp"],
join_rule_event_pos.stream,
response_body["rooms"][space_room_id],
)
        # Other, less important fields. We just want to check what these are so we
        # know what happens in a state-reset scenario.
#
# Room name was set at the time of the state reset so we should still be able to
# see it.
self.assertEqual(
response_body["rooms"][space_room_id]["name"], "my super space"
)
# Could be set but there is no avatar for this room
self.assertIsNone(
response_body["rooms"][space_room_id].get("avatar"),
response_body["rooms"][space_room_id],
)
# Could be set but this room isn't marked as a DM
self.assertIsNone(
response_body["rooms"][space_room_id].get("is_dm"),
response_body["rooms"][space_room_id],
)
        # No timeline because we are not in the room at all (its events are all
        # being filtered out)
self.assertIsNone(
response_body["rooms"][space_room_id].get("timeline"),
response_body["rooms"][space_room_id],
)
# `limited` since we're not providing any timeline events but there are some in
# the room.
self.assertEqual(response_body["rooms"][space_room_id]["limited"], True)
# User is no longer in the room so they can't see this info
self.assertIsNone(
response_body["rooms"][space_room_id].get("joined_count"),
response_body["rooms"][space_room_id],
)
self.assertIsNone(
response_body["rooms"][space_room_id].get("invited_count"),
response_body["rooms"][space_room_id],
)
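    # For reference, a rough sketch (not asserted verbatim anywhere; field names
    # follow the assertions above) of the response shape for a state-reset room:
    #
    #   response_body["rooms"][space_room_id] == {
    #       "initial": True,          # client should reset its state for the room
    #       "required_state": [...],  # the room state as of the state reset
    #       "bump_stamp": ...,        # stream position of the state reset
    #       "name": "my super space",
    #       "limited": True,
    #       # "avatar", "is_dm", "timeline", "joined_count" and "invited_count"
    #       # are all absent
    #   }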
@parameterized.expand(
[
("server_leaves_room", True),
("server_participating_in_room", False),
]
)
def test_state_reset_never_room_incremental_sync_with_filters(
self, test_description: str, server_leaves_room: bool
) -> None:
"""
Test that a room that we were state reset out of should be sent down if we can
figure out the state or if it was sent down the connection before.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")
# Create a space room
space_room_id = self.helper.create_room_as(
user2_id,
tok=user2_tok,
extra_content={
"creation_content": {EventContentFields.ROOM_TYPE: RoomTypes.SPACE},
"name": "my super space",
},
)
# Create another space room
space_room_id2 = self.helper.create_room_as(
user2_id,
tok=user2_tok,
extra_content={
"creation_content": {EventContentFields.ROOM_TYPE: RoomTypes.SPACE},
},
)
# Create an event for us to point back to for the state reset
event_response = self.helper.send(space_room_id, "test", tok=user2_tok)
event_id = event_response["event_id"]
# User1 joins the rooms
#
self.helper.join(space_room_id, user1_id, tok=user1_tok)
# Join space_room_id2 so that it is at the top of the list
self.helper.join(space_room_id2, user1_id, tok=user1_tok)
# Make a SS request for only the top room.
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 0]],
"required_state": [
# Request all state just to see what we get back when we are
# state reset out of the room
[StateValues.WILDCARD, StateValues.WILDCARD]
],
"timeline_limit": 1,
"filters": {
"room_types": [RoomTypes.SPACE],
},
}
}
}
# Make the Sliding Sync request
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
# Make sure we only see space_room_id2
self.assertIncludes(
set(response_body["rooms"].keys()), {space_room_id2}, exact=True
)
self.assertEqual(response_body["rooms"][space_room_id2]["initial"], True)
        # Just create some activity in space_room_id2 so that it appears when we
        # do an incremental sync again
self.helper.send(space_room_id2, "test", tok=user2_tok)
# Trigger a state reset
join_rule_event, join_rule_context = self.get_success(
create_event(
self.hs,
prev_event_ids=[event_id],
type=EventTypes.JoinRules,
state_key="",
content={"join_rule": JoinRules.INVITE},
sender=user2_id,
room_id=space_room_id,
room_version=self.get_success(
self.store.get_room_version_id(space_room_id)
),
)
)
_, join_rule_event_pos, _ = self.get_success(
self.persistence.persist_event(join_rule_event, join_rule_context)
)
# FIXME: We're manually busting the cache since
# https://github.com/element-hq/synapse/issues/17368 is not solved yet
self.store._membership_stream_cache.entity_has_changed(
user1_id, join_rule_event_pos.stream
)
# Ensure that the state reset worked and only user2 is in the room now
users_in_room = self.get_success(self.store.get_users_in_room(space_room_id))
self.assertIncludes(set(users_in_room), {user2_id}, exact=True)
# Update the state after user1 was state reset out of the room.
# This will also bump it to the top of the list.
self.helper.send_state(
space_room_id,
EventTypes.Name,
{EventContentFields.ROOM_NAME: "my super duper space"},
tok=user2_tok,
)
if server_leaves_room:
# User2 also leaves the room so the server is no longer participating in the room
# and we don't have access to current state
self.helper.leave(space_room_id, user2_id, tok=user2_tok)
# Make another Sliding Sync request (incremental)
sync_body = {
"lists": {
"foo-list": {
# Expand the range to include all rooms
"ranges": [[0, 1]],
"required_state": [
# Request all state just to see what we get back when we are
# state reset out of the room
[StateValues.WILDCARD, StateValues.WILDCARD]
],
"timeline_limit": 1,
"filters": {
"room_types": [RoomTypes.SPACE],
},
}
}
}
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
if self.use_new_tables:
if server_leaves_room:
                # We still only expect to see space_room_id2: even though we were
                # state reset out of space_room_id, it was never sent down the
                # connection before, so we don't need to bother the client with it.
self.assertIncludes(
set(response_body["rooms"].keys()), {space_room_id2}, exact=True
)
else:
# Both rooms show up because we can figure out the state for the
# `filters.room_types` if someone is still in the room (we look at the
# current state because `room_type` never changes).
self.assertIncludes(
set(response_body["rooms"].keys()),
{space_room_id, space_room_id2},
exact=True,
)
else:
# Both rooms show up because we can actually take the time to figure out the
# state for the `filters.room_types` in the fallback path (we look at
# historical state for `LEAVE` membership).
self.assertIncludes(
set(response_body["rooms"].keys()),
{space_room_id, space_room_id2},
exact=True,
)
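    # Summary of the branching above (a sketch of the behaviour this test pins
    # down, not code from the handler itself):
    #
    #   new tables + server left the room   -> room omitted (never sent before)
    #   new tables + server still in room   -> room included (current state is
    #                                          enough to evaluate the filters)
    #   fallback path (old tables)          -> room included (we consult
    #                                          historical state for LEAVE)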


@@ -1,346 +0,0 @@
"""Tests REST events for /delayed_events paths."""
from http import HTTPStatus
from typing import List
from parameterized import parameterized
from twisted.test.proto_helpers import MemoryReactor
from synapse.api.errors import Codes
from synapse.rest.client import delayed_events, room
from synapse.server import HomeServer
from synapse.types import JsonDict
from synapse.util import Clock
from tests.unittest import HomeserverTestCase
PATH_PREFIX = "/_matrix/client/unstable/org.matrix.msc4140/delayed_events"
_HS_NAME = "red"
_EVENT_TYPE = "com.example.test"
class DelayedEventsTestCase(HomeserverTestCase):
"""Tests getting and managing delayed events."""
servlets = [delayed_events.register_servlets, room.register_servlets]
user_id = f"@sid1:{_HS_NAME}"
def default_config(self) -> JsonDict:
config = super().default_config()
config["server_name"] = _HS_NAME
config["max_event_delay_duration"] = "24h"
return config
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.room_id = self.helper.create_room_as(
self.user_id,
extra_content={
"preset": "trusted_private_chat",
},
)
def test_delayed_events_empty_on_startup(self) -> None:
self.assertListEqual([], self._get_delayed_events())
def test_delayed_state_events_are_sent_on_timeout(self) -> None:
state_key = "to_send_on_timeout"
setter_key = "setter"
setter_expected = "on_timeout"
channel = self.make_request(
"PUT",
_get_path_for_delayed_state(self.room_id, _EVENT_TYPE, state_key, 900),
{
setter_key: setter_expected,
},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
events = self._get_delayed_events()
self.assertEqual(1, len(events), events)
content = self._get_delayed_event_content(events[0])
self.assertEqual(setter_expected, content.get(setter_key), content)
self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
self.reactor.advance(1)
self.assertListEqual([], self._get_delayed_events())
content = self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
)
self.assertEqual(setter_expected, content.get(setter_key), content)
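        # Timing note (inferred from the calls above): the delay was 900 ms and
        # self.reactor.advance(1) advanced the fake clock by one second, which is
        # why the timeout fired and the state event was sent.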
def test_update_delayed_event_without_id(self) -> None:
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/",
)
self.assertEqual(HTTPStatus.NOT_FOUND, channel.code, channel.result)
def test_update_delayed_event_without_body(self) -> None:
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/abc",
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertEqual(
Codes.NOT_JSON,
channel.json_body["errcode"],
)
def test_update_delayed_event_without_action(self) -> None:
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/abc",
{},
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertEqual(
Codes.MISSING_PARAM,
channel.json_body["errcode"],
)
def test_update_delayed_event_with_invalid_action(self) -> None:
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/abc",
{"action": "oops"},
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertEqual(
Codes.INVALID_PARAM,
channel.json_body["errcode"],
)
@parameterized.expand(["cancel", "restart", "send"])
def test_update_delayed_event_without_match(self, action: str) -> None:
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/abc",
{"action": action},
)
self.assertEqual(HTTPStatus.NOT_FOUND, channel.code, channel.result)
def test_cancel_delayed_state_event(self) -> None:
state_key = "to_never_send"
setter_key = "setter"
setter_expected = "none"
channel = self.make_request(
"PUT",
_get_path_for_delayed_state(self.room_id, _EVENT_TYPE, state_key, 1500),
{
setter_key: setter_expected,
},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
delay_id = channel.json_body.get("delay_id")
self.assertIsNotNone(delay_id)
self.reactor.advance(1)
events = self._get_delayed_events()
self.assertEqual(1, len(events), events)
content = self._get_delayed_event_content(events[0])
self.assertEqual(setter_expected, content.get(setter_key), content)
self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/{delay_id}",
{"action": "cancel"},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
self.assertListEqual([], self._get_delayed_events())
self.reactor.advance(1)
content = self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
def test_send_delayed_state_event(self) -> None:
state_key = "to_send_on_request"
setter_key = "setter"
setter_expected = "on_send"
channel = self.make_request(
"PUT",
_get_path_for_delayed_state(self.room_id, _EVENT_TYPE, state_key, 100000),
{
setter_key: setter_expected,
},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
delay_id = channel.json_body.get("delay_id")
self.assertIsNotNone(delay_id)
self.reactor.advance(1)
events = self._get_delayed_events()
self.assertEqual(1, len(events), events)
content = self._get_delayed_event_content(events[0])
self.assertEqual(setter_expected, content.get(setter_key), content)
self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/{delay_id}",
{"action": "send"},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
self.assertListEqual([], self._get_delayed_events())
content = self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
)
self.assertEqual(setter_expected, content.get(setter_key), content)
def test_restart_delayed_state_event(self) -> None:
state_key = "to_send_on_restarted_timeout"
setter_key = "setter"
setter_expected = "on_timeout"
channel = self.make_request(
"PUT",
_get_path_for_delayed_state(self.room_id, _EVENT_TYPE, state_key, 1500),
{
setter_key: setter_expected,
},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
delay_id = channel.json_body.get("delay_id")
self.assertIsNotNone(delay_id)
self.reactor.advance(1)
events = self._get_delayed_events()
self.assertEqual(1, len(events), events)
content = self._get_delayed_event_content(events[0])
self.assertEqual(setter_expected, content.get(setter_key), content)
self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/{delay_id}",
{"action": "restart"},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
self.reactor.advance(1)
events = self._get_delayed_events()
self.assertEqual(1, len(events), events)
content = self._get_delayed_event_content(events[0])
self.assertEqual(setter_expected, content.get(setter_key), content)
self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
self.reactor.advance(1)
self.assertListEqual([], self._get_delayed_events())
content = self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
)
self.assertEqual(setter_expected, content.get(setter_key), content)
def test_delayed_state_events_are_cancelled_by_more_recent_state(self) -> None:
state_key = "to_be_cancelled"
setter_key = "setter"
channel = self.make_request(
"PUT",
_get_path_for_delayed_state(self.room_id, _EVENT_TYPE, state_key, 900),
{
setter_key: "on_timeout",
},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
events = self._get_delayed_events()
self.assertEqual(1, len(events), events)
setter_expected = "manual"
self.helper.send_state(
self.room_id,
_EVENT_TYPE,
{
setter_key: setter_expected,
},
None,
state_key=state_key,
)
self.assertListEqual([], self._get_delayed_events())
self.reactor.advance(1)
content = self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
)
self.assertEqual(setter_expected, content.get(setter_key), content)
def _get_delayed_events(self) -> List[JsonDict]:
channel = self.make_request(
"GET",
PATH_PREFIX,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
key = "delayed_events"
self.assertIn(key, channel.json_body)
events = channel.json_body[key]
self.assertIsInstance(events, list)
return events
def _get_delayed_event_content(self, event: JsonDict) -> JsonDict:
key = "content"
self.assertIn(key, event)
content = event[key]
self.assertIsInstance(content, dict)
return content
def _get_path_for_delayed_state(
room_id: str, event_type: str, state_key: str, delay_ms: int
) -> str:
return f"rooms/{room_id}/state/{event_type}/{state_key}?org.matrix.msc4140.delay={delay_ms}"


@@ -1,308 +0,0 @@
from http import HTTPStatus
from parameterized import parameterized_class
from twisted.test.proto_helpers import MemoryReactor
from synapse.api.errors import Codes
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
from synapse.rest import admin
from synapse.rest.client import login, room
from synapse.server import HomeServer
from synapse.types import JsonDict
from synapse.util import Clock
from tests.unittest import HomeserverTestCase
_STATE_EVENT_TEST_TYPE = "com.example.test"
# To stress-test parsing, include separator & sigil characters
_STATE_KEY_SUFFIX = "_state_key_suffix:!@#$123"
class OwnedStateBase(HomeserverTestCase):
servlets = [
admin.register_servlets,
room.register_servlets,
login.register_servlets,
]
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.creator_user_id = self.register_user("creator", "pass")
self.creator_access_token = self.login("creator", "pass")
self.user1_user_id = self.register_user("user1", "pass")
self.user1_access_token = self.login("user1", "pass")
self.room_id = self.helper.create_room_as(
self.creator_user_id,
tok=self.creator_access_token,
is_public=True,
extra_content={
"power_level_content_override": {
"events": {
_STATE_EVENT_TEST_TYPE: 0,
},
},
},
)
self.helper.join(
room=self.room_id, user=self.user1_user_id, tok=self.user1_access_token
)
class WithoutOwnedStateTestCase(OwnedStateBase):
def default_config(self) -> JsonDict:
config = super().default_config()
config["default_room_version"] = RoomVersions.V10.identifier
return config
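    # Room version 10 predates MSC3757: a state key beginning with "@" may only
    # be written by exactly that user, and suffixed keys are rejected outright,
    # as the tests below demonstrate.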
def test_user_can_set_state_with_own_userid_key(self) -> None:
self.helper.send_state(
self.room_id,
_STATE_EVENT_TEST_TYPE,
{},
state_key=f"{self.user1_user_id}",
tok=self.user1_access_token,
expect_code=HTTPStatus.OK,
)
def test_room_creator_cannot_set_state_with_own_suffixed_key(self) -> None:
body = self.helper.send_state(
self.room_id,
_STATE_EVENT_TEST_TYPE,
{},
state_key=f"{self.creator_user_id}{_STATE_KEY_SUFFIX}",
tok=self.creator_access_token,
expect_code=HTTPStatus.FORBIDDEN,
)
self.assertEqual(
body["errcode"],
Codes.FORBIDDEN,
body,
)
def test_room_creator_cannot_set_state_with_other_userid_key(self) -> None:
body = self.helper.send_state(
self.room_id,
_STATE_EVENT_TEST_TYPE,
{},
state_key=f"{self.user1_user_id}",
tok=self.creator_access_token,
expect_code=HTTPStatus.FORBIDDEN,
)
self.assertEqual(
body["errcode"],
Codes.FORBIDDEN,
body,
)
def test_room_creator_cannot_set_state_with_other_suffixed_key(self) -> None:
body = self.helper.send_state(
self.room_id,
_STATE_EVENT_TEST_TYPE,
{},
state_key=f"{self.user1_user_id}{_STATE_KEY_SUFFIX}",
tok=self.creator_access_token,
expect_code=HTTPStatus.FORBIDDEN,
)
self.assertEqual(
body["errcode"],
Codes.FORBIDDEN,
body,
)
def test_room_creator_cannot_set_state_with_nonmember_userid_key(self) -> None:
body = self.helper.send_state(
self.room_id,
_STATE_EVENT_TEST_TYPE,
{},
state_key="@notinroom:hs2",
tok=self.creator_access_token,
expect_code=HTTPStatus.FORBIDDEN,
)
self.assertEqual(
body["errcode"],
Codes.FORBIDDEN,
body,
)
def test_room_creator_cannot_set_state_with_malformed_userid_key(self) -> None:
body = self.helper.send_state(
self.room_id,
_STATE_EVENT_TEST_TYPE,
{},
state_key="@oops",
tok=self.creator_access_token,
expect_code=HTTPStatus.FORBIDDEN,
)
self.assertEqual(
body["errcode"],
Codes.FORBIDDEN,
body,
)
@parameterized_class(
("room_version",),
[(i,) for i, v in KNOWN_ROOM_VERSIONS.items() if v.msc3757_enabled],
)
class MSC3757OwnedStateTestCase(OwnedStateBase):
room_version: str
def default_config(self) -> JsonDict:
config = super().default_config()
config["default_room_version"] = self.room_version
return config
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
super().prepare(reactor, clock, hs)
self.user2_user_id = self.register_user("user2", "pass")
self.user2_access_token = self.login("user2", "pass")
self.helper.join(
room=self.room_id, user=self.user2_user_id, tok=self.user2_access_token
)
def test_user_can_set_state_with_own_suffixed_key(self) -> None:
self.helper.send_state(
self.room_id,
_STATE_EVENT_TEST_TYPE,
{},
state_key=f"{self.user1_user_id}{_STATE_KEY_SUFFIX}",
tok=self.user1_access_token,
expect_code=HTTPStatus.OK,
)
def test_room_creator_can_set_state_with_other_userid_key(self) -> None:
self.helper.send_state(
self.room_id,
_STATE_EVENT_TEST_TYPE,
{},
state_key=f"{self.user1_user_id}",
tok=self.creator_access_token,
expect_code=HTTPStatus.OK,
)
def test_room_creator_can_set_state_with_other_suffixed_key(self) -> None:
self.helper.send_state(
self.room_id,
_STATE_EVENT_TEST_TYPE,
{},
state_key=f"{self.user1_user_id}{_STATE_KEY_SUFFIX}",
tok=self.creator_access_token,
expect_code=HTTPStatus.OK,
)
def test_user_cannot_set_state_with_other_userid_key(self) -> None:
body = self.helper.send_state(
self.room_id,
_STATE_EVENT_TEST_TYPE,
{},
state_key=f"{self.user2_user_id}",
tok=self.user1_access_token,
expect_code=HTTPStatus.FORBIDDEN,
)
self.assertEqual(
body["errcode"],
Codes.FORBIDDEN,
body,
)
def test_user_cannot_set_state_with_other_suffixed_key(self) -> None:
body = self.helper.send_state(
self.room_id,
_STATE_EVENT_TEST_TYPE,
{},
state_key=f"{self.user2_user_id}{_STATE_KEY_SUFFIX}",
tok=self.user1_access_token,
expect_code=HTTPStatus.FORBIDDEN,
)
self.assertEqual(
body["errcode"],
Codes.FORBIDDEN,
body,
)
def test_user_cannot_set_state_with_unseparated_suffixed_key(self) -> None:
body = self.helper.send_state(
self.room_id,
_STATE_EVENT_TEST_TYPE,
{},
state_key=f"{self.user1_user_id}{_STATE_KEY_SUFFIX[1:]}",
tok=self.user1_access_token,
expect_code=HTTPStatus.FORBIDDEN,
)
self.assertEqual(
body["errcode"],
Codes.FORBIDDEN,
body,
)
def test_user_cannot_set_state_with_misplaced_userid_in_key(self) -> None:
body = self.helper.send_state(
self.room_id,
_STATE_EVENT_TEST_TYPE,
{},
# Still put @ at start of state key, because without it, there is no write protection at all
state_key=f"@prefix_{self.user1_user_id}{_STATE_KEY_SUFFIX}",
tok=self.user1_access_token,
expect_code=HTTPStatus.FORBIDDEN,
)
self.assertEqual(
body["errcode"],
Codes.FORBIDDEN,
body,
)
def test_room_creator_can_set_state_with_nonmember_userid_key(self) -> None:
self.helper.send_state(
self.room_id,
_STATE_EVENT_TEST_TYPE,
{},
state_key="@notinroom:hs2",
tok=self.creator_access_token,
expect_code=HTTPStatus.OK,
)
def test_room_creator_cannot_set_state_with_malformed_userid_key(self) -> None:
body = self.helper.send_state(
self.room_id,
_STATE_EVENT_TEST_TYPE,
{},
state_key="@oops",
tok=self.creator_access_token,
expect_code=HTTPStatus.BAD_REQUEST,
)
self.assertEqual(
body["errcode"],
Codes.BAD_JSON,
body,
)
def test_room_creator_cannot_set_state_with_improperly_suffixed_key(self) -> None:
body = self.helper.send_state(
self.room_id,
_STATE_EVENT_TEST_TYPE,
{},
state_key=f"{self.creator_user_id}@{_STATE_KEY_SUFFIX[1:]}",
tok=self.creator_access_token,
expect_code=HTTPStatus.BAD_REQUEST,
)
self.assertEqual(
body["errcode"],
Codes.BAD_JSON,
body,
)


@@ -2291,106 +2291,6 @@ class RoomMessageFilterTestCase(RoomBase):
self.assertEqual(len(chunk), 2, [event["content"] for event in chunk])
class RoomDelayedEventTestCase(RoomBase):
"""Tests delayed events."""
user_id = "@sid1:red"
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.room_id = self.helper.create_room_as(self.user_id)
@unittest.override_config({"max_event_delay_duration": "24h"})
def test_send_delayed_invalid_event(self) -> None:
"""Test sending a delayed event with invalid content."""
channel = self.make_request(
"PUT",
(
"rooms/%s/send/m.room.message/mid1?org.matrix.msc4140.delay=2000"
% self.room_id
).encode("ascii"),
{},
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertNotIn("org.matrix.msc4140.errcode", channel.json_body)
def test_delayed_event_unsupported_by_default(self) -> None:
"""Test that sending a delayed event is unsupported with the default config."""
channel = self.make_request(
"PUT",
(
"rooms/%s/send/m.room.message/mid1?org.matrix.msc4140.delay=2000"
% self.room_id
).encode("ascii"),
{"body": "test", "msgtype": "m.text"},
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertEqual(
"M_MAX_DELAY_UNSUPPORTED",
channel.json_body.get("org.matrix.msc4140.errcode"),
channel.json_body,
)
@unittest.override_config({"max_event_delay_duration": "1000"})
def test_delayed_event_exceeds_max_delay(self) -> None:
"""Test that sending a delayed event fails if its delay is longer than allowed."""
channel = self.make_request(
"PUT",
(
"rooms/%s/send/m.room.message/mid1?org.matrix.msc4140.delay=2000"
% self.room_id
).encode("ascii"),
{"body": "test", "msgtype": "m.text"},
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertEqual(
"M_MAX_DELAY_EXCEEDED",
channel.json_body.get("org.matrix.msc4140.errcode"),
channel.json_body,
)
@unittest.override_config({"max_event_delay_duration": "24h"})
def test_delayed_event_with_negative_delay(self) -> None:
"""Test that sending a delayed event fails if its delay is negative."""
channel = self.make_request(
"PUT",
(
"rooms/%s/send/m.room.message/mid1?org.matrix.msc4140.delay=-2000"
% self.room_id
).encode("ascii"),
{"body": "test", "msgtype": "m.text"},
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertEqual(
Codes.INVALID_PARAM, channel.json_body["errcode"], channel.json_body
)
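        # Error-code summary for the three failure modes above, as asserted:
        #   delays disabled (default config)  -> org.matrix.msc4140.errcode
        #                                        == "M_MAX_DELAY_UNSUPPORTED"
        #   delay exceeds the configured max  -> org.matrix.msc4140.errcode
        #                                        == "M_MAX_DELAY_EXCEEDED"
        #   negative delay                    -> standard errcode
        #                                        == Codes.INVALID_PARAM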
@unittest.override_config({"max_event_delay_duration": "24h"})
def test_send_delayed_message_event(self) -> None:
"""Test sending a valid delayed message event."""
channel = self.make_request(
"PUT",
(
"rooms/%s/send/m.room.message/mid1?org.matrix.msc4140.delay=2000"
% self.room_id
).encode("ascii"),
{"body": "test", "msgtype": "m.text"},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
@unittest.override_config({"max_event_delay_duration": "24h"})
def test_send_delayed_state_event(self) -> None:
"""Test sending a valid delayed state event."""
channel = self.make_request(
"PUT",
(
"rooms/%s/state/m.room.topic/?org.matrix.msc4140.delay=2000"
% self.room_id
).encode("ascii"),
{"topic": "This is a topic"},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
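    # For reference (a sketch of the request shape used throughout this class):
    # the MSC4140 delay rides along as a query parameter on the ordinary send
    # and state endpoints, e.g.
    #
    #   PUT /rooms/{room_id}/send/m.room.message/{txn_id}?org.matrix.msc4140.delay=2000
    #   PUT /rooms/{room_id}/state/m.room.topic/?org.matrix.msc4140.delay=2000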
class RoomSearchTestCase(unittest.HomeserverTestCase):
servlets = [
synapse.rest.admin.register_servlets_for_client_rest_resource,


@@ -89,7 +89,7 @@ class TestResourceLimitsServerNotices(unittest.HomeserverTestCase):
return_value="!something:localhost"
)
self._rlsn._store.add_tag_to_room = AsyncMock(return_value=None) # type: ignore[method-assign]
-        self._rlsn._store.get_tags_for_room = AsyncMock(return_value={})
+        self._rlsn._store.get_tags_for_room = AsyncMock(return_value={})  # type: ignore[method-assign]
@override_config({"hs_disabled": True})
def test_maybe_send_server_notice_disabled_hs(self) -> None:


@@ -27,13 +27,7 @@ from immutabledict import immutabledict
from twisted.test.proto_helpers import MemoryReactor
-from synapse.api.constants import (
-    Direction,
-    EventTypes,
-    JoinRules,
-    Membership,
-    RelationTypes,
-)
+from synapse.api.constants import Direction, EventTypes, Membership, RelationTypes
from synapse.api.filtering import Filter
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.events import FrozenEventV3
@@ -1160,7 +1154,7 @@ class GetCurrentStateDeltaMembershipChangesForUserTestCase(HomeserverTestCase):
room_id=room_id1,
event_id=None,
event_pos=dummy_state_pos,
-            membership=Membership.LEAVE,
+            membership="leave",
sender=None, # user1_id,
prev_event_id=join_response1["event_id"],
prev_event_pos=join_pos1,
@@ -1170,81 +1164,6 @@ class GetCurrentStateDeltaMembershipChangesForUserTestCase(HomeserverTestCase):
],
)
def test_state_reset2(self) -> None:
"""
Test a state reset scenario where the user gets removed from the room (when
there is no corresponding leave event)
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")
room_id1 = self.helper.create_room_as(user2_id, is_public=True, tok=user2_tok)
event_response = self.helper.send(room_id1, "test", tok=user2_tok)
event_id = event_response["event_id"]
user1_join_response = self.helper.join(room_id1, user1_id, tok=user1_tok)
user1_join_pos = self.get_success(
self.store.get_position_for_event(user1_join_response["event_id"])
)
before_reset_token = self.event_sources.get_current_token()
# Trigger a state reset
join_rule_event, join_rule_context = self.get_success(
create_event(
self.hs,
prev_event_ids=[event_id],
type=EventTypes.JoinRules,
state_key="",
content={"join_rule": JoinRules.INVITE},
sender=user2_id,
room_id=room_id1,
room_version=self.get_success(self.store.get_room_version_id(room_id1)),
)
)
_, join_rule_event_pos, _ = self.get_success(
self.persistence.persist_event(join_rule_event, join_rule_context)
)
# FIXME: We're manually busting the cache since
# https://github.com/element-hq/synapse/issues/17368 is not solved yet
self.store._membership_stream_cache.entity_has_changed(
user1_id, join_rule_event_pos.stream
)
after_reset_token = self.event_sources.get_current_token()
membership_changes = self.get_success(
self.store.get_current_state_delta_membership_changes_for_user(
user1_id,
from_key=before_reset_token.room_key,
to_key=after_reset_token.room_key,
)
)
# Let the whole diff show on failure
self.maxDiff = None
self.assertEqual(
membership_changes,
[
CurrentStateDeltaMembership(
room_id=room_id1,
event_id=None,
# The position where the state reset happened
event_pos=join_rule_event_pos,
membership=Membership.LEAVE,
sender=None,
prev_event_id=user1_join_response["event_id"],
prev_event_pos=user1_join_pos,
prev_membership="join",
prev_sender=user1_id,
),
],
)
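        # (Note the synthesized membership delta above: the state reset produces
        # no actual leave event, so `event_id` and `sender` are None and the
        # position is that of the event which caused the reset.)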
def test_excluded_room_ids(self) -> None:
"""
Test that the `excluded_room_ids` option excludes changes from the specified rooms.