Mirror of https://github.com/element-hq/synapse.git (synced 2025-12-17 02:10:27 +00:00)

Compare commits: anoa/valid...v1.139.0rc (17 commits)
| Author | SHA1 | Date |
|---|---|---|
| | e2ec3b7d0d | |
| | acb9ec3c38 | |
| | 9c4ba13a10 | |
| | 5857d2de59 | |
| | b10f3f5959 | |
| | fd29e3219c | |
| | d308469e90 | |
| | daf33e4954 | |
| | ddc7627b22 | |
| | 5be7679dd9 | |
| | e7d98d3429 | |
| | d05f44a1c6 | |
| | 8d5d87fb0a | |
| | 9a88d25f8e | |
| | 5a9ca1e3d9 | |
| | 83aca3f097 | |
| | d80f515622 | |
CHANGES.md (85 lines changed)
@@ -1,3 +1,88 @@
# Synapse 1.139.0rc3 (2025-09-25)

## Bugfixes

- Fix a bug introduced in 1.139.0rc1 where `run_coroutine_in_background(...)` incorrectly handled logcontexts, resulting in partially broken logging. ([\#18964](https://github.com/element-hq/synapse/issues/18964))


# Synapse 1.139.0rc2 (2025-09-23)

## Internal Changes

- Drop support for Ubuntu 24.10 Oracular Oriole, and add support for Ubuntu 25.04 Plucky Puffin. ([\#18962](https://github.com/element-hq/synapse/issues/18962))


# Synapse 1.139.0rc1 (2025-09-23)

## Features

- Add experimental support for [MSC4308: Thread Subscriptions extension to Sliding Sync](https://github.com/matrix-org/matrix-spec-proposals/pull/4308) when [MSC4306: Thread Subscriptions](https://github.com/matrix-org/matrix-spec-proposals/pull/4306) and [MSC4186: Simplified Sliding Sync](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) are enabled. ([\#18695](https://github.com/element-hq/synapse/issues/18695))
- Update push rules for experimental [MSC4306: Thread Subscriptions](https://github.com/matrix-org/matrix-doc/issues/4306) to follow a newer draft. ([\#18846](https://github.com/element-hq/synapse/issues/18846))
- Add `get_media_upload_limits_for_user` and `on_media_upload_limit_exceeded` module API callbacks to the media repository. ([\#18848](https://github.com/element-hq/synapse/issues/18848))
- Support [MSC4169](https://github.com/matrix-org/matrix-spec-proposals/pull/4169) for backwards-compatible redaction sending using the `/send` endpoint. Contributed by @SpiritCroc @ Beeper. ([\#18898](https://github.com/element-hq/synapse/issues/18898))
- Add an in-memory cache to `_get_e2e_cross_signing_signatures_for_devices` to reduce DB load. ([\#18899](https://github.com/element-hq/synapse/issues/18899))
- Update [MSC4190](https://github.com/matrix-org/matrix-spec-proposals/pull/4190) support to return correct errors and allow appservices to reset cross-signing keys without user-interactive authentication. Contributed by @tulir @ Beeper. ([\#18946](https://github.com/element-hq/synapse/issues/18946))

## Bugfixes

- Ensure all PDUs sent via `/send` pass canonical JSON checks. ([\#18641](https://github.com/element-hq/synapse/issues/18641))
- Fix a bug where we did not send invite revocations over federation. ([\#18823](https://github.com/element-hq/synapse/issues/18823))
- Fix prefixed support for [MSC4133](https://github.com/matrix-org/matrix-spec-proposals/pull/4133). ([\#18875](https://github.com/element-hq/synapse/issues/18875))
- Fix an open redirect in the legacy SSO flow with the `idp` query parameter. ([\#18909](https://github.com/element-hq/synapse/issues/18909))
- Fix a performance regression related to the experimental Delayed Events ([MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140)) feature. ([\#18926](https://github.com/element-hq/synapse/issues/18926))

## Updates to the Docker image

- Suppress the bulk of the "Applying schema" log noise when `SYNAPSE_LOG_TESTING` is set. ([\#18878](https://github.com/element-hq/synapse/issues/18878))

## Improved Documentation

- Clarify Python dependency constraints in our deprecation policy. ([\#18856](https://github.com/element-hq/synapse/issues/18856))
- Clarify the necessary `jwt_config` parameter in the OIDC documentation for authentik. Contributed by @maxkratz. ([\#18931](https://github.com/element-hq/synapse/issues/18931))

## Deprecations and Removals

- Remove the obsolete and experimental `/sync/e2ee` endpoint. ([\#18583](https://github.com/element-hq/synapse/issues/18583))

## Internal Changes

- Fix `LaterGauge` metrics to collect from all servers. ([\#18791](https://github.com/element-hq/synapse/issues/18791))
- Configure Synapse to run the [MSC4306: Thread Subscriptions](https://github.com/matrix-org/matrix-spec-proposals/pull/4306) Complement tests. ([\#18819](https://github.com/element-hq/synapse/issues/18819))
- Remove `sentinel` logcontext usage where we log in `setup`, `start` and `exit`. ([\#18870](https://github.com/element-hq/synapse/issues/18870))
- Use the `Enum`'s value for the dictionary key when responding to an admin request for experimental features. ([\#18874](https://github.com/element-hq/synapse/issues/18874))
- Start background tasks after we fork the process (daemonize). ([\#18886](https://github.com/element-hq/synapse/issues/18886))
- Better explain how we manage the logcontext in `run_in_background(...)` and `run_as_background_process(...)`. ([\#18900](https://github.com/element-hq/synapse/issues/18900), [\#18906](https://github.com/element-hq/synapse/issues/18906))
- Remove `sentinel` logcontext usage in `Clock` utilities like `looping_call` and `call_later`. ([\#18907](https://github.com/element-hq/synapse/issues/18907))
- Replace usages of the deprecated `pkg_resources` interface in preparation for setuptools dropping it soon. ([\#18910](https://github.com/element-hq/synapse/issues/18910))
- Split loading config out of homeserver `setup`. ([\#18933](https://github.com/element-hq/synapse/issues/18933))
- Fix `run_in_background` not being awaited properly in some tests, causing `LoggingContext` problems. ([\#18937](https://github.com/element-hq/synapse/issues/18937))
- Fix `run_as_background_process` not being awaited properly, causing `LoggingContext` problems in the experimental [MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140) delayed events implementation. ([\#18938](https://github.com/element-hq/synapse/issues/18938))
- Introduce `Clock.call_when_running(...)` to wrap startup code in a logcontext, ensuring we can identify which server generated the logs. ([\#18944](https://github.com/element-hq/synapse/issues/18944))
- Introduce `Clock.add_system_event_trigger(...)` to wrap system event callback code in a logcontext, ensuring we can identify which server generated the logs. ([\#18945](https://github.com/element-hq/synapse/issues/18945))

### Updates to locked dependencies

* Bump actions/setup-go from 5.5.0 to 6.0.0. ([\#18891](https://github.com/element-hq/synapse/issues/18891))
* Bump actions/setup-python from 5.6.0 to 6.0.0. ([\#18890](https://github.com/element-hq/synapse/issues/18890))
* Bump authlib from 1.6.1 to 1.6.3. ([\#18921](https://github.com/element-hq/synapse/issues/18921))
* Bump jsonschema from 4.25.0 to 4.25.1. ([\#18897](https://github.com/element-hq/synapse/issues/18897))
* Bump log from 0.4.27 to 0.4.28. ([\#18892](https://github.com/element-hq/synapse/issues/18892))
* Bump phonenumbers from 9.0.12 to 9.0.13. ([\#18893](https://github.com/element-hq/synapse/issues/18893))
* Bump pydantic from 2.11.7 to 2.11.9. ([\#18922](https://github.com/element-hq/synapse/issues/18922))
* Bump serde from 1.0.219 to 1.0.223. ([\#18920](https://github.com/element-hq/synapse/issues/18920))
* Bump serde_json from 1.0.143 to 1.0.145. ([\#18919](https://github.com/element-hq/synapse/issues/18919))
* Bump sigstore/cosign-installer from 3.9.2 to 3.10.0. ([\#18917](https://github.com/element-hq/synapse/issues/18917))
* Bump towncrier from 24.8.0 to 25.8.0. ([\#18894](https://github.com/element-hq/synapse/issues/18894))
* Bump types-psycopg2 from 2.9.21.20250809 to 2.9.21.20250915. ([\#18918](https://github.com/element-hq/synapse/issues/18918))
* Bump types-requests from 2.32.4.20250611 to 2.32.4.20250809. ([\#18895](https://github.com/element-hq/synapse/issues/18895))
* Bump types-setuptools from 80.9.0.20250809 to 80.9.0.20250822. ([\#18924](https://github.com/element-hq/synapse/issues/18924))


# Synapse 1.138.0 (2025-09-09)

No significant changes since 1.138.0rc1.
@@ -1 +0,0 @@
Remove obsolete and experimental `/sync/e2ee` endpoint.
@@ -1 +0,0 @@
Ensure all PDUs sent via `/send` pass canonical JSON checks.
@@ -1 +0,0 @@
Add experimental support for [MSC4308: Thread Subscriptions extension to Sliding Sync](https://github.com/matrix-org/matrix-spec-proposals/pull/4308) when [MSC4306: Thread Subscriptions](https://github.com/matrix-org/matrix-spec-proposals/pull/4306) and [MSC4186: Simplified Sliding Sync](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) are enabled.
@@ -1 +0,0 @@
Fix `LaterGauge` metrics to collect from all servers.
@@ -1 +0,0 @@
Configure Synapse to run MSC4306: Thread Subscriptions Complement tests.
@@ -1 +0,0 @@
Fix bug where we did not send invite revocations over federation.
@@ -1 +0,0 @@
Update push rules for experimental [MSC4306: Thread Subscriptions](https://github.com/matrix-org/matrix-doc/issues/4306) to follow newer draft.
@@ -1 +0,0 @@
Add `get_media_upload_limits_for_user` and `on_media_upload_limit_exceeded` module API callbacks for media repository.
@@ -1 +0,0 @@
Clarify Python dependency constraints in our deprecation policy.
@@ -1 +0,0 @@
Remove `sentinel` logcontext usage where we log in `setup`, `start` and exit.
@@ -1 +0,0 @@
Use the `Enum`'s value for the dictionary key when responding to an admin request for experimental features.
@@ -1 +0,0 @@
Fix prefixed support for MSC4133.
@@ -1 +0,0 @@
Suppress "Applying schema" log noise bulk when `SYNAPSE_LOG_TESTING` is set.
@@ -1 +0,0 @@
Start background tasks after we fork the process (daemonize).
@@ -1 +0,0 @@
Add an in-memory cache to `_get_e2e_cross_signing_signatures_for_devices` to reduce DB load.
@@ -1 +0,0 @@
Better explain how we manage the logcontext in `run_in_background(...)` and `run_as_background_process(...)`.
@@ -1 +0,0 @@
Better explain how we manage the logcontext in `run_in_background(...)` and `run_as_background_process(...)`.
@@ -1 +0,0 @@
Fix open redirect in legacy SSO flow with the `idp` query parameter.
@@ -1 +0,0 @@
Replace usages of the deprecated `pkg_resources` interface in preparation of setuptools dropping it soon.
@@ -1,2 +0,0 @@
Clarify necessary `jwt_config` parameter in OIDC documentation for authentik.
Contributed by @maxkratz.
debian/changelog (vendored, 18 lines changed)
@@ -1,3 +1,21 @@
matrix-synapse-py3 (1.139.0~rc3) stable; urgency=medium

  * New Synapse release 1.139.0rc3.

 -- Synapse Packaging team <packages@matrix.org>  Thu, 25 Sep 2025 12:13:23 +0100

matrix-synapse-py3 (1.139.0~rc2) stable; urgency=medium

  * New Synapse release 1.139.0rc2.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 23 Sep 2025 15:31:42 +0100

matrix-synapse-py3 (1.139.0~rc1) stable; urgency=medium

  * New Synapse release 1.139.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 23 Sep 2025 13:24:50 +0100

matrix-synapse-py3 (1.138.0) stable; urgency=medium

  * New Synapse release 1.138.0.
@@ -117,6 +117,29 @@ each upgrade are complete before moving on to the next upgrade, to avoid
stacking them up. You can monitor the currently running background updates with
[the Admin API](usage/administration/admin_api/background_updates.html#status).
# Upgrading to v1.139.0

## Drop support for Ubuntu 24.10 Oracular Oriole, and add support for Ubuntu 25.04 Plucky Puffin

Ubuntu 24.10 Oracular Oriole [has been end-of-life since 10 Jul
2025](https://endoflife.date/ubuntu). This release drops support for Ubuntu
24.10, and in its place adds support for Ubuntu 25.04 Plucky Puffin.

## `/register` requests from old application service implementations may break when using MAS

Application services that do not set `inhibit_login=true` when calling `POST
/_matrix/client/v3/register` will receive the error
`IO.ELEMENT.MSC4190.M_APPSERVICE_LOGIN_UNSUPPORTED` in response. This is a
result of [MSC4190: Device management for application
services](https://github.com/matrix-org/matrix-spec-proposals/pull/4190), which
adds new endpoints for application services to create encryption-ready devices,
rather than relying on `/login` or on `/register` without `inhibit_login=true`.

If an application service you use starts to fail with the mentioned error,
ensure it is up to date. If it is, then kindly let the author know that they
need to update their implementation to call `/register` with
`inhibit_login=true`.
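As a concrete illustration, such a registration request body looks roughly like the following sketch. The token and username here are hypothetical placeholders; `inhibit_login` and `m.login.application_service` come from the Matrix Client-Server API for appservice registration.

```python
import json

# Hypothetical appservice token -- a placeholder, not a real value.
AS_TOKEN = "as_token_placeholder"

# Body for POST /_matrix/client/v3/register. Setting "inhibit_login" to true
# asks the homeserver to register the user *without* creating a device or
# access token, which is what MSC4190-aware deployments (e.g. behind MAS)
# expect from application services.
register_body = {
    "type": "m.login.application_service",
    "username": "_example_bridge_user",  # hypothetical localpart
    "inhibit_login": True,
}

payload = json.dumps(register_body)
headers = {
    "Authorization": f"Bearer {AS_TOKEN}",
    "Content-Type": "application/json",
}

print(payload)
```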
# Upgrading to v1.136.0

## Deprecate `run_as_background_process` exported as part of the module API interface in favor of `ModuleApi.run_as_background_process`
@@ -101,7 +101,7 @@ module-name = "synapse.synapse_rust"

[tool.poetry]
name = "matrix-synapse"
version = "1.138.0"
version = "1.139.0rc3"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"
@@ -1,5 +1,5 @@
$schema: https://element-hq.github.io/synapse/latest/schema/v1/meta.schema.json
$id: https://element-hq.github.io/synapse/schema/synapse/v1.138/synapse-config.schema.json
$id: https://element-hq.github.io/synapse/schema/synapse/v1.139/synapse-config.schema.json
type: object
properties:
  modules:
@@ -32,7 +32,7 @@ DISTS = (
    "debian:sid",  # (rolling distro, no EOL)
    "ubuntu:jammy",  # 22.04 LTS (EOL 2027-04) (our EOL forced by Python 3.10 is 2026-10-04)
    "ubuntu:noble",  # 24.04 LTS (EOL 2029-06)
    "ubuntu:oracular",  # 24.10 (EOL 2025-07)
    "ubuntu:plucky",  # 25.04 (EOL 2026-01)
    "debian:trixie",  # (EOL not specified yet)
)
@@ -68,6 +68,18 @@ PROMETHEUS_METRIC_MISSING_FROM_LIST_TO_CHECK = ErrorCode(
    category="per-homeserver-tenant-metrics",
)

PREFER_SYNAPSE_CLOCK_CALL_WHEN_RUNNING = ErrorCode(
    "prefer-synapse-clock-call-when-running",
    "`synapse.util.Clock.call_when_running` should be used instead of `reactor.callWhenRunning`",
    category="synapse-reactor-clock",
)

PREFER_SYNAPSE_CLOCK_ADD_SYSTEM_EVENT_TRIGGER = ErrorCode(
    "prefer-synapse-clock-add-system-event-trigger",
    "`synapse.util.Clock.add_system_event_trigger` should be used instead of `reactor.addSystemEventTrigger`",
    category="synapse-reactor-clock",
)


class Sentinel(enum.Enum):
    # defining a sentinel in this way allows mypy to correctly handle the
@@ -229,9 +241,77 @@ class SynapsePlugin(Plugin):
        ):
            return check_is_cacheable_wrapper

        if fullname in (
            "twisted.internet.interfaces.IReactorCore.callWhenRunning",
            "synapse.types.ISynapseThreadlessReactor.callWhenRunning",
            "synapse.types.ISynapseReactor.callWhenRunning",
        ):
            return check_call_when_running

        if fullname in (
            "twisted.internet.interfaces.IReactorCore.addSystemEventTrigger",
            "synapse.types.ISynapseThreadlessReactor.addSystemEventTrigger",
            "synapse.types.ISynapseReactor.addSystemEventTrigger",
        ):
            return check_add_system_event_trigger

        return None


def check_call_when_running(ctx: MethodSigContext) -> CallableType:
    """
    Ensure that `reactor.callWhenRunning` callsites aren't used.

    `synapse.util.Clock.call_when_running` should always be used instead of
    `reactor.callWhenRunning`.

    Since `reactor.callWhenRunning` is a reactor callback, the callback will start out
    with the sentinel logcontext. `synapse.util.Clock` starts a default logcontext as we
    want to know which server the logs came from.

    Args:
        ctx: The `MethodSigContext` from mypy.
    """
    signature: CallableType = ctx.default_signature
    ctx.api.fail(
        (
            "Expected all `reactor.callWhenRunning` calls to use `synapse.util.Clock.call_when_running` instead. "
            "This is so all Synapse code runs with a logcontext as we want to know which server the logs came from."
        ),
        ctx.context,
        code=PREFER_SYNAPSE_CLOCK_CALL_WHEN_RUNNING,
    )

    return signature


def check_add_system_event_trigger(ctx: MethodSigContext) -> CallableType:
    """
    Ensure that `reactor.addSystemEventTrigger` callsites aren't used.

    `synapse.util.Clock.add_system_event_trigger` should always be used instead of
    `reactor.addSystemEventTrigger`.

    Since `reactor.addSystemEventTrigger` is a reactor callback, the callback will
    start out with the sentinel logcontext. `synapse.util.Clock` starts a default
    logcontext as we want to know which server the logs came from.

    Args:
        ctx: The `MethodSigContext` from mypy.
    """
    signature: CallableType = ctx.default_signature
    ctx.api.fail(
        (
            "Expected all `reactor.addSystemEventTrigger` calls to use `synapse.util.Clock.add_system_event_trigger` instead. "
            "This is so all Synapse code runs with a logcontext as we want to know which server the logs came from."
        ),
        ctx.context,
        code=PREFER_SYNAPSE_CLOCK_ADD_SYSTEM_EVENT_TRIGGER,
    )

    return signature


def analyze_prometheus_metric_classes(ctx: ClassDefContext) -> None:
    """
    Cross-check the list of Prometheus metric classes against the
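The motivation the docstrings above describe — reactor callbacks otherwise start under the sentinel logcontext — can be sketched in plain Python with `contextvars`. This is an illustrative model only, not Synapse's actual `Clock` implementation; the names `current_logcontext`, `reactor_call_when_running` and `call_when_running` are assumptions for the sketch.

```python
import contextvars
from typing import Callable, List

# Stand-in for Synapse's logcontext: tracks which server produced a log line.
current_logcontext: contextvars.ContextVar[str] = contextvars.ContextVar(
    "logcontext", default="sentinel"
)

# A toy "reactor": startup callbacks are invoked with no particular context,
# which is why a bare callWhenRunning callback sees the sentinel.
_startup_callbacks: List[Callable[[], None]] = []

def reactor_call_when_running(cb: Callable[[], None]) -> None:
    _startup_callbacks.append(cb)

def call_when_running(server_name: str, cb: Callable[[], None]) -> None:
    """Clock-style wrapper: enter a named logcontext before invoking `cb`."""
    def wrapped() -> None:
        token = current_logcontext.set(server_name)
        try:
            cb()
        finally:
            current_logcontext.reset(token)
    reactor_call_when_running(wrapped)

seen: List[str] = []

reactor_call_when_running(lambda: seen.append(current_logcontext.get()))
call_when_running("example.com", lambda: seen.append(current_logcontext.get()))

# Simulate reactor startup: run all registered callbacks.
for cb in _startup_callbacks:
    cb()

print(seen)  # the bare callback saw "sentinel"; the wrapped one saw "example.com"
```

The mypy plugin above simply makes the wrapped form the only one that type-checks.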
@@ -30,7 +30,7 @@ from signedjson.sign import sign_json

from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.util import json_encoder
from synapse.util.json import json_encoder


def main() -> None:
@@ -54,11 +54,11 @@ from twisted.internet import defer, reactor as reactor_

from synapse.config.database import DatabaseConnectionConfig
from synapse.config.homeserver import HomeServerConfig
from synapse.logging.context import (
    LoggingContext,
    make_deferred_yieldable,
    run_in_background,
)
from synapse.notifier import ReplicationNotifier
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.storage.database import DatabasePool, LoggingTransaction, make_conn
from synapse.storage.databases.main import FilteringWorkerStore
from synapse.storage.databases.main.account_data import AccountDataWorkerStore
@@ -98,8 +98,7 @@ from synapse.storage.databases.state.bg_updates import StateBackgroundUpdateStor
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
from synapse.types import ISynapseReactor
from synapse.util import SYNAPSE_VERSION, Clock
from synapse.util.stringutils import random_string
from synapse.util import SYNAPSE_VERSION

# Cast safety: Twisted does some naughty magic which replaces the
# twisted.internet.reactor module with a Reactor instance at runtime.
@@ -318,31 +317,16 @@ class Store(
    )


class MockHomeserver:
class MockHomeserver(HomeServer):
    DATASTORE_CLASS = DataStore

    def __init__(self, config: HomeServerConfig):
        self.clock = Clock(reactor)
        self.config = config
        self.hostname = config.server.server_name
        self.version_string = SYNAPSE_VERSION
        self.instance_id = random_string(5)

    def get_clock(self) -> Clock:
        return self.clock

    def get_reactor(self) -> ISynapseReactor:
        return reactor

    def get_instance_id(self) -> str:
        return self.instance_id

    def get_instance_name(self) -> str:
        return "master"

    def should_send_federation(self) -> bool:
        return False

    def get_replication_notifier(self) -> ReplicationNotifier:
        return ReplicationNotifier()
        super().__init__(
            hostname=config.server.server_name,
            config=config,
            reactor=reactor,
            version_string=f"Synapse/{SYNAPSE_VERSION}",
        )


class Porter:
@@ -351,12 +335,12 @@ class Porter:
        sqlite_config: Dict[str, Any],
        progress: "Progress",
        batch_size: int,
        hs_config: HomeServerConfig,
        hs: HomeServer,
    ):
        self.sqlite_config = sqlite_config
        self.progress = progress
        self.batch_size = batch_size
        self.hs_config = hs_config
        self.hs = hs

    async def setup_table(self, table: str) -> Tuple[str, int, int, int, int]:
        if table in APPEND_ONLY_TABLES:
@@ -676,8 +660,7 @@ class Porter:

        engine = create_engine(db_config.config)

        hs = MockHomeserver(self.hs_config)
        server_name = hs.hostname
        server_name = self.hs.hostname

        with make_conn(
            db_config=db_config,
@@ -688,16 +671,16 @@
            engine.check_database(
                db_conn, allow_outdated_version=allow_outdated_version
            )
            prepare_database(db_conn, engine, config=self.hs_config)
            prepare_database(db_conn, engine, config=self.hs.config)
            # Type safety: ignore that we're using Mock homeservers here.
            store = Store(
                DatabasePool(
                    hs,  # type: ignore[arg-type]
                    self.hs,
                    db_config,
                    engine,
                ),
                db_conn,
                hs,  # type: ignore[arg-type]
                self.hs,
            )
            db_conn.commit()
@@ -795,7 +778,7 @@
            return

        self.postgres_store = self.build_db_store(
            self.hs_config.database.get_single_database()
            self.hs.config.database.get_single_database()
        )

        await self.remove_ignored_background_updates_from_database()
@@ -1584,6 +1567,8 @@ def main() -> None:
    config = HomeServerConfig()
    config.parse_config_dict(hs_config, "", "")

    hs = MockHomeserver(config)

    def start(stdscr: Optional["curses.window"] = None) -> None:
        progress: Progress
        if stdscr:
@@ -1595,15 +1580,14 @@
            sqlite_config=sqlite_config,
            progress=progress,
            batch_size=args.batch_size,
            hs_config=config,
            hs=hs,
        )

        @defer.inlineCallbacks
        def run() -> Generator["defer.Deferred[Any]", Any, None]:
            with LoggingContext("synapse_port_db_run"):
                yield defer.ensureDeferred(porter.run())
            yield defer.ensureDeferred(porter.run())

        reactor.callWhenRunning(run)
        hs.get_clock().call_when_running(run)

        reactor.run()
@@ -74,7 +74,7 @@ def run_background_updates(hs: HomeServer) -> None:
        )
    )

    reactor.callWhenRunning(run)
    hs.get_clock().call_when_running(run)

    reactor.run()
@@ -43,9 +43,9 @@ from synapse.logging.opentracing import (
from synapse.metrics import SERVER_NAME_LABEL
from synapse.synapse_rust.http_client import HttpClient
from synapse.types import JsonDict, Requester, UserID, create_requester
from synapse.util import json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
from synapse.util.json import json_decoder

from . import introspection_response_timer
@@ -48,9 +48,9 @@ from synapse.logging.opentracing import (
from synapse.metrics import SERVER_NAME_LABEL
from synapse.synapse_rust.http_client import HttpClient
from synapse.types import Requester, UserID, create_requester
from synapse.util import json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
from synapse.util.json import json_decoder

from . import introspection_response_timer
@@ -30,7 +30,7 @@ from typing import Any, Dict, List, Optional, Union

from twisted.web import http

from synapse.util import json_decoder
from synapse.util.json import json_decoder

if typing.TYPE_CHECKING:
    from synapse.config.homeserver import HomeServerConfig
@@ -140,6 +140,9 @@ class Codes(str, Enum):
    # Part of MSC4155
    INVITE_BLOCKED = "ORG.MATRIX.MSC4155.M_INVITE_BLOCKED"

    # Part of MSC4190
    APPSERVICE_LOGIN_UNSUPPORTED = "IO.ELEMENT.MSC4190.M_APPSERVICE_LOGIN_UNSUPPORTED"

    # Part of MSC4306: Thread Subscriptions
    MSC4306_CONFLICTING_UNSUBSCRIPTION = (
        "IO.ELEMENT.MSC4306.M_CONFLICTING_UNSUBSCRIPTION"
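Error codes like these live on a `str`-subclassing `Enum`, which is why members compare equal to their wire strings while `.value` gives the plain string for responses (the distinction behind the changelog's \#18874 fix). A minimal standalone model — the two members are copied from the hunk above; the response body is illustrative:

```python
import json
from enum import Enum

class Codes(str, Enum):
    # Mirrors the style of Synapse's Codes enum; members copied from the hunk above.
    APPSERVICE_LOGIN_UNSUPPORTED = "IO.ELEMENT.MSC4190.M_APPSERVICE_LOGIN_UNSUPPORTED"
    INVITE_BLOCKED = "ORG.MATRIX.MSC4155.M_INVITE_BLOCKED"

# Because Codes subclasses str, a member compares equal to its wire string,
# so code can match errcodes received over the wire directly:
assert Codes.INVITE_BLOCKED == "ORG.MATRIX.MSC4155.M_INVITE_BLOCKED"

# When constructing an error response, .value makes the plain-string intent
# explicit rather than relying on how the enum member itself serializes:
body = {
    "errcode": Codes.APPSERVICE_LOGIN_UNSUPPORTED.value,
    "error": "Appservices must register with inhibit_login=true",
}
print(json.dumps(body))
```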
@@ -26,7 +26,7 @@ from synapse.api.errors import LimitExceededError
from synapse.config.ratelimiting import RatelimitSettings
from synapse.storage.databases.main import DataStore
from synapse.types import Requester
from synapse.util import Clock
from synapse.util.clock import Clock

if TYPE_CHECKING:
    # To avoid circular imports:
@@ -241,7 +241,7 @@ def redirect_stdio_to_logs() -> None:


def register_start(
    cb: Callable[P, Awaitable], *args: P.args, **kwargs: P.kwargs
    hs: "HomeServer", cb: Callable[P, Awaitable], *args: P.args, **kwargs: P.kwargs
) -> None:
    """Register a callback with the reactor, to be called once it is running

@@ -278,7 +278,8 @@ def register_start(
        # on as normal.
        os._exit(1)

    reactor.callWhenRunning(lambda: defer.ensureDeferred(wrapper()))
    clock = hs.get_clock()
    clock.call_when_running(lambda: defer.ensureDeferred(wrapper()))


def listen_metrics(bind_addresses: StrCollection, port: int) -> None:
@@ -517,7 +518,9 @@ async def start(hs: "HomeServer") -> None:
    # numbers of DNS requests don't starve out other users of the threadpool.
    resolver_threadpool = ThreadPool(name="gai_resolver")
    resolver_threadpool.start()
    reactor.addSystemEventTrigger("during", "shutdown", resolver_threadpool.stop)
    hs.get_clock().add_system_event_trigger(
        "during", "shutdown", resolver_threadpool.stop
    )
    reactor.installNameResolver(
        GAIResolver(reactor, getThreadPool=lambda: resolver_threadpool)
    )
@@ -604,7 +607,7 @@
        logger.info("Shutting down...")

    # Log when we start the shut down process.
    hs.get_reactor().addSystemEventTrigger("before", "shutdown", log_shutdown)
    hs.get_clock().add_system_event_trigger("before", "shutdown", log_shutdown)

    setup_sentry(hs)
    setup_sdnotify(hs)
@@ -719,7 +722,7 @@ def setup_sdnotify(hs: "HomeServer") -> None:
    # we're not using systemd.
    sdnotify(b"READY=1\nMAINPID=%i" % (os.getpid(),))

    hs.get_reactor().addSystemEventTrigger(
    hs.get_clock().add_system_event_trigger(
        "before", "shutdown", sdnotify, b"STOPPING=1"
    )
@@ -24,7 +24,7 @@ import logging
import os
import sys
import tempfile
from typing import List, Mapping, Optional, Sequence
from typing import List, Mapping, Optional, Sequence, Tuple

from twisted.internet import defer, task
@@ -256,7 +256,7 @@ class FileExfiltrationWriter(ExfiltrationWriter):
        return self.base_directory


def start(config_options: List[str]) -> None:
def load_config(argv_options: List[str]) -> Tuple[HomeServerConfig, argparse.Namespace]:
    parser = argparse.ArgumentParser(description="Synapse Admin Command")
    HomeServerConfig.add_arguments_to_parser(parser)
@@ -282,11 +282,15 @@ def start(config_options: List[str]) -> None:
    export_data_parser.set_defaults(func=export_data_command)

    try:
        config, args = HomeServerConfig.load_config_with_parser(parser, config_options)
        config, args = HomeServerConfig.load_config_with_parser(parser, argv_options)
    except ConfigError as e:
        sys.stderr.write("\n" + str(e) + "\n")
        sys.exit(1)

    return config, args


def start(config: HomeServerConfig, args: argparse.Namespace) -> None:
    if config.worker.worker_app is not None:
        assert config.worker.worker_app == "synapse.app.admin_cmd"
@@ -325,7 +329,7 @@
    # command.

    async def run() -> None:
        with LoggingContext("command"):
        with LoggingContext(name="command"):
            await _base.start(ss)
            await args.func(ss, args)
@@ -337,5 +341,6 @@


if __name__ == "__main__":
    with LoggingContext("main"):
        start(sys.argv[1:])
    homeserver_config, args = load_config(sys.argv[1:])
    with LoggingContext(name="main"):
        start(homeserver_config, args)
@@ -21,13 +21,14 @@

import sys

from synapse.app.generic_worker import start
from synapse.app.generic_worker import load_config, start
from synapse.util.logcontext import LoggingContext


def main() -> None:
    with LoggingContext("main"):
        start(sys.argv[1:])
    homeserver_config = load_config(sys.argv[1:])
    with LoggingContext(name="main"):
        start(homeserver_config)


if __name__ == "__main__":

@@ -21,13 +21,14 @@

import sys

from synapse.app.generic_worker import start
from synapse.app.generic_worker import load_config, start
from synapse.util.logcontext import LoggingContext


def main() -> None:
    with LoggingContext("main"):
        start(sys.argv[1:])
    homeserver_config = load_config(sys.argv[1:])
    with LoggingContext(name="main"):
        start(homeserver_config)


if __name__ == "__main__":

@@ -20,13 +20,14 @@

import sys

from synapse.app.generic_worker import start
from synapse.app.generic_worker import load_config, start
from synapse.util.logcontext import LoggingContext


def main() -> None:
    with LoggingContext("main"):
        start(sys.argv[1:])
    homeserver_config = load_config(sys.argv[1:])
    with LoggingContext(name="main"):
        start(homeserver_config)


if __name__ == "__main__":

@@ -21,13 +21,14 @@

import sys

from synapse.app.generic_worker import start
from synapse.app.generic_worker import load_config, start
from synapse.util.logcontext import LoggingContext


def main() -> None:
    with LoggingContext("main"):
        start(sys.argv[1:])
    homeserver_config = load_config(sys.argv[1:])
    with LoggingContext(name="main"):
        start(homeserver_config)


if __name__ == "__main__":

@@ -21,13 +21,14 @@

import sys

from synapse.app.generic_worker import start
from synapse.app.generic_worker import load_config, start
from synapse.util.logcontext import LoggingContext


def main() -> None:
    with LoggingContext("main"):
        start(sys.argv[1:])
    homeserver_config = load_config(sys.argv[1:])
    with LoggingContext(name="main"):
        start(homeserver_config)


if __name__ == "__main__":

@@ -21,13 +21,14 @@

import sys

from synapse.app.generic_worker import start
from synapse.app.generic_worker import load_config, start
from synapse.util.logcontext import LoggingContext


def main() -> None:
    with LoggingContext("main"):
        start(sys.argv[1:])
    homeserver_config = load_config(sys.argv[1:])
    with LoggingContext(name="main"):
        start(homeserver_config)


if __name__ == "__main__":
@@ -310,13 +310,26 @@ class GenericWorkerServer(HomeServer):
         self.get_replication_command_handler().start_replication(self)


-def start(config_options: List[str]) -> None:
+def load_config(argv_options: List[str]) -> HomeServerConfig:
+    """
+    Parse the commandline and config files (does not generate config)
+
+    Args:
+        argv_options: The options passed to Synapse. Usually `sys.argv[1:]`.
+
+    Returns:
+        Config object.
+    """
     try:
-        config = HomeServerConfig.load_config("Synapse worker", config_options)
+        config = HomeServerConfig.load_config("Synapse worker", argv_options)
     except ConfigError as e:
         sys.stderr.write("\n" + str(e) + "\n")
         sys.exit(1)

+    return config
+
+
+def start(config: HomeServerConfig) -> None:
     # For backwards compatibility let any of the old app names.
     assert config.worker.worker_app in (
         "synapse.app.appservice",

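The hunk above splits argv parsing (`load_config`) out of `start()`, so startup receives a parsed config object instead of raw argv. A minimal, self-contained sketch of that pattern (illustrative names and option handling, not Synapse's real `HomeServerConfig` API):

```python
# Sketch: separate "parse options into a config object" from "start the
# server with a config object", so callers can load once and reuse.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Config:
    worker_app: Optional[str]


def load_config(argv_options: List[str]) -> Config:
    # Hypothetical "--worker-app NAME" parsing; error handling elided.
    worker_app = None
    if "--worker-app" in argv_options:
        worker_app = argv_options[argv_options.index("--worker-app") + 1]
    return Config(worker_app=worker_app)


def start(config: Config) -> str:
    # Startup no longer re-parses argv; it just consumes the config.
    return f"starting {config.worker_app}"
```

The benefit is that tests and alternative entrypoints can construct a `Config` directly and call `start()` without faking a command line.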
@@ -356,11 +369,9 @@ def start(config_options: List[str]) -> None:
        handle_startup_exception(e)

     async def start() -> None:
-        # Re-establish log context now that we're back from the reactor
-        with LoggingContext("start"):
-            await _base.start(hs)
+        await _base.start(hs)

-    register_start(start)
+    register_start(hs, start)

     # redirect stdio to the logs, if configured.
     if not hs.config.logging.no_redirect_stdio:

@@ -370,8 +381,9 @@ def start(config_options: List[str]) -> None:


 def main() -> None:
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config)


 if __name__ == "__main__":

@@ -308,17 +308,21 @@ class SynapseHomeServer(HomeServer):
             logger.warning("Unrecognized listener type: %s", listener.type)


-def setup(config_options: List[str]) -> SynapseHomeServer:
+def load_or_generate_config(argv_options: List[str]) -> HomeServerConfig:
     """
     Parse the commandline and config files

     Supports generation of config files, so is used for the main homeserver app.

     Args:
-        config_options_options: The options passed to Synapse. Usually `sys.argv[1:]`.
+        argv_options: The options passed to Synapse. Usually `sys.argv[1:]`.

     Returns:
         A homeserver instance.
     """
     try:
         config = HomeServerConfig.load_or_generate_config(
-            "Synapse Homeserver", config_options
+            "Synapse Homeserver", argv_options
         )
     except ConfigError as e:
         sys.stderr.write("\n")

@@ -332,6 +336,20 @@ def setup(config_options: List[str]) -> SynapseHomeServer:
         # generating config files and shouldn't try to continue.
         sys.exit(0)

+    return config
+
+
+def setup(config: HomeServerConfig) -> SynapseHomeServer:
+    """
+    Create and setup a Synapse homeserver instance given a configuration.
+
+    Args:
+        config: The configuration for the homeserver.
+
+    Returns:
+        A homeserver instance.
+    """
+
     if config.worker.worker_app:
         raise ConfigError(
             "You have specified `worker_app` in the config but are attempting to start a non-worker "

@@ -377,19 +395,17 @@ def setup(config_options: List[str]) -> SynapseHomeServer:
        handle_startup_exception(e)

     async def start() -> None:
-        # Re-establish log context now that we're back from the reactor
-        with LoggingContext("start"):
-            # Load the OIDC provider metadatas, if OIDC is enabled.
-            if hs.config.oidc.oidc_enabled:
-                oidc = hs.get_oidc_handler()
-                # Loading the provider metadata also ensures the provider config is valid.
-                await oidc.load_metadata()
+        # Load the OIDC provider metadatas, if OIDC is enabled.
+        if hs.config.oidc.oidc_enabled:
+            oidc = hs.get_oidc_handler()
+            # Loading the provider metadata also ensures the provider config is valid.
+            await oidc.load_metadata()

-            await _base.start(hs)
+        await _base.start(hs)

-            hs.get_datastores().main.db_pool.updates.start_doing_background_updates()
+        hs.get_datastores().main.db_pool.updates.start_doing_background_updates()

-    register_start(start)
+    register_start(hs, start)

     return hs

@@ -407,10 +423,12 @@ def run(hs: HomeServer) -> None:


 def main() -> None:
+    homeserver_config = load_or_generate_config(sys.argv[1:])
+
     with LoggingContext("main"):
         # check base requirements
         check_requirements()
-        hs = setup(sys.argv[1:])
+        hs = setup(homeserver_config)

         # redirect stdio to the logs, if configured.
         if not hs.config.logging.no_redirect_stdio:

@@ -21,13 +21,14 @@
 import sys

-from synapse.app.generic_worker import start
+from synapse.app.generic_worker import load_config, start
 from synapse.util.logcontext import LoggingContext


 def main() -> None:
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config)


 if __name__ == "__main__":

@@ -21,13 +21,14 @@
 import sys

-from synapse.app.generic_worker import start
+from synapse.app.generic_worker import load_config, start
 from synapse.util.logcontext import LoggingContext


 def main() -> None:
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config)


 if __name__ == "__main__":

@@ -21,13 +21,14 @@
 import sys

-from synapse.app.generic_worker import start
+from synapse.app.generic_worker import load_config, start
 from synapse.util.logcontext import LoggingContext


 def main() -> None:
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config)


 if __name__ == "__main__":

@@ -21,13 +21,14 @@
 import sys

-from synapse.app.generic_worker import start
+from synapse.app.generic_worker import load_config, start
 from synapse.util.logcontext import LoggingContext


 def main() -> None:
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config)


 if __name__ == "__main__":

@@ -84,7 +84,7 @@ from synapse.logging.context import run_in_background
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.storage.databases.main import DataStore
 from synapse.types import DeviceListUpdates, JsonMapping
-from synapse.util import Clock
+from synapse.util.clock import Clock

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -646,12 +646,16 @@ class RootConfig:

     @classmethod
     def load_or_generate_config(
-        cls: Type[TRootConfig], description: str, argv: List[str]
+        cls: Type[TRootConfig], description: str, argv_options: List[str]
     ) -> Optional[TRootConfig]:
         """Parse the commandline and config files

         Supports generation of config files, so is used for the main homeserver app.

+        Args:
+            description: TODO
+            argv_options: The options passed to Synapse. Usually `sys.argv[1:]`.
+
         Returns:
             Config object, or None if --generate-config or --generate-keys was set
         """

@@ -747,7 +751,7 @@ class RootConfig:
         )

         cls.invoke_all_static("add_arguments", parser)
-        config_args = parser.parse_args(argv)
+        config_args = parser.parse_args(argv_options)

        config_files = find_config_files(search_paths=config_args.config_path)

@@ -556,6 +556,9 @@ class ExperimentalConfig(Config):
         # MSC4133: Custom profile fields
         self.msc4133_enabled: bool = experimental.get("msc4133_enabled", False)

+        # MSC4169: Backwards-compatible redaction sending using `/send`
+        self.msc4169_enabled: bool = experimental.get("msc4169_enabled", False)
+
         # MSC4210: Remove legacy mentions
         self.msc4210_enabled: bool = experimental.get("msc4210_enabled", False)

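The flag added above follows the usual pattern for experimental options: read from the `experimental` config section with an explicit `False` default, so an absent key means "disabled". A toy illustration (hypothetical class, not the real `ExperimentalConfig`):

```python
# Sketch: experimental feature flags default to off unless explicitly
# enabled in the config mapping.
from typing import Any, Dict


class ExperimentalFlags:
    def __init__(self, experimental: Dict[str, Any]) -> None:
        # MSC4169-style flag: absent key -> disabled.
        self.msc4169_enabled: bool = experimental.get("msc4169_enabled", False)
```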
@@ -38,7 +38,7 @@ from synapse.storage.databases.main import DataStore
 from synapse.synapse_rust.events import EventInternalMetadata
 from synapse.types import EventID, JsonDict, StrCollection
 from synapse.types.state import StateFilter
-from synapse.util import Clock
+from synapse.util.clock import Clock
 from synapse.util.stringutils import random_string

 if TYPE_CHECKING:

@@ -178,7 +178,7 @@ from synapse.types import (
     StrCollection,
     get_domain_from_id,
 )
-from synapse.util import Clock
+from synapse.util.clock import Clock
 from synapse.util.metrics import Measure
 from synapse.util.retryutils import filter_destinations_by_retry_limiter

@@ -36,7 +36,7 @@ from synapse.logging.opentracing import (
 )
 from synapse.metrics import SERVER_NAME_LABEL
 from synapse.types import JsonDict
-from synapse.util import json_decoder
+from synapse.util.json import json_decoder
 from synapse.util.metrics import measure_func

 if TYPE_CHECKING:

@@ -62,7 +62,7 @@ class DeactivateAccountHandler:
         # Start the user parter loop so it can resume parting users from rooms where
         # it left off (if it has work left to do).
         if hs.config.worker.worker_app is None:
-            hs.get_reactor().callWhenRunning(self._start_user_parting)
+            hs.get_clock().call_when_running(self._start_user_parting)
         else:
             self._notify_account_deactivated_client = (
                 ReplicationNotifyAccountDeactivatedServlet.make_client(hs)

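This hunk (and several like it below) routes startup hooks through the `Clock` facade instead of touching the reactor directly. A hedged sketch of why such a facade helps: call sites depend on one small interface, and tests can swap in a fake reactor. The `FakeReactor` and method names here are illustrative, not Synapse's real interfaces:

```python
# Sketch: a Clock facade delegating startup hooks to a reactor-like
# object, so tests can drive "startup" deterministically.
from typing import Callable, List


class FakeReactor:
    def __init__(self) -> None:
        self._startup_hooks: List[Callable[[], None]] = []

    def callWhenRunning(self, f: Callable[[], None]) -> None:
        self._startup_hooks.append(f)

    def run(self) -> None:
        # Fire all registered hooks, as a real reactor would on startup.
        for f in self._startup_hooks:
            f()


class Clock:
    def __init__(self, reactor: FakeReactor) -> None:
        self._reactor = reactor

    def call_when_running(self, f: Callable[[], None]) -> None:
        # Thin delegation; a real implementation could also wrap `f`
        # with logging/metrics context first.
        self._reactor.callWhenRunning(f)
```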
@@ -18,12 +18,15 @@ from typing import TYPE_CHECKING, List, Optional, Set, Tuple
 from twisted.internet.interfaces import IDelayedCall

 from synapse.api.constants import EventTypes
-from synapse.api.errors import ShadowBanError
+from synapse.api.errors import ShadowBanError, SynapseError
 from synapse.api.ratelimiting import Ratelimiter
 from synapse.config.workers import MAIN_PROCESS_INSTANCE_NAME
+from synapse.logging.context import make_deferred_yieldable
 from synapse.logging.opentracing import set_tag
 from synapse.metrics import SERVER_NAME_LABEL, event_processing_positions
-from synapse.metrics.background_process_metrics import run_as_background_process
+from synapse.metrics.background_process_metrics import (
+    run_as_background_process,
+)
 from synapse.replication.http.delayed_events import (
     ReplicationAddedDelayedEventRestServlet,
 )
@@ -45,6 +48,7 @@ from synapse.types import (
 )
 from synapse.util.events import generate_fake_event_id
 from synapse.util.metrics import Measure
+from synapse.util.sentinel import Sentinel

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -146,10 +150,37 @@ class DelayedEventsHandler:
         )

     async def _unsafe_process_new_event(self) -> None:
+        # We purposefully fetch the current max room stream ordering before
+        # doing anything else, as it could increment during processing of state
+        # deltas. We want to avoid updating `delayed_events_stream_pos` past
+        # the stream ordering of the state deltas we've processed. Otherwise
+        # we'll leave gaps in our processing.
+        room_max_stream_ordering = self._store.get_room_max_stream_ordering()
+
+        # Check that there are actually any delayed events to process. If not, bail early.
+        delayed_events_count = await self._store.get_count_of_delayed_events()
+        if delayed_events_count == 0:
+            # There are no delayed events to process. Update the
+            # `delayed_events_stream_pos` to the latest `events` stream pos and
+            # exit early.
+            self._event_pos = room_max_stream_ordering
+
+            logger.debug(
+                "No delayed events to process. Updating `delayed_events_stream_pos` to max stream ordering (%s)",
+                room_max_stream_ordering,
+            )
+
+            await self._store.update_delayed_events_stream_pos(room_max_stream_ordering)
+
+            event_processing_positions.labels(
+                name="delayed_events", **{SERVER_NAME_LABEL: self.server_name}
+            ).set(room_max_stream_ordering)
+
+            return
+
         # If self._event_pos is None then it means we haven't fetched it from the DB yet
         if self._event_pos is None:
             self._event_pos = await self._store.get_delayed_events_stream_pos()
-            room_max_stream_ordering = self._store.get_room_max_stream_ordering()
             if self._event_pos > room_max_stream_ordering:
                 # apparently, we've processed more events than exist in the database!
                 # this can happen if events are removed with history purge or similar.

@@ -167,7 +198,7 @@ class DelayedEventsHandler:
             self._clock, name="delayed_events_delta", server_name=self.server_name
         ):
             room_max_stream_ordering = self._store.get_room_max_stream_ordering()
-            if self._event_pos == room_max_stream_ordering:
+            if self._event_pos >= room_max_stream_ordering:
                 return

             logger.debug(

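The `==` to `>=` change above (repeated later for presence) makes the caught-up check robust when the persisted position has somehow moved past the current maximum, e.g. after a history purge. An assumption-level sketch of the guard, assuming positions are monotonically increasing integers:

```python
# Sketch: with `>=`, "already past the maximum" also counts as caught
# up, instead of falling through and reprocessing.
def caught_up(event_pos: int, room_max_stream_ordering: int) -> bool:
    """Return True when there is nothing new to process."""
    return event_pos >= room_max_stream_ordering
```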
@@ -202,23 +233,81 @@ class DelayedEventsHandler:
         Process current state deltas to cancel other users' pending delayed events
         that target the same state.
         """
+        # Get the senders of each delta's state event (as sender information is
+        # not currently stored in the `current_state_deltas` table).
+        event_id_and_sender_dict = await self._store.get_senders_for_event_ids(
+            [delta.event_id for delta in deltas if delta.event_id is not None]
+        )
+
         # Note: No need to batch as `get_current_state_deltas` will only ever
         # return 100 rows at a time.
         for delta in deltas:
+            logger.debug(
+                "Handling: %r %r, %s", delta.event_type, delta.state_key, delta.event_id
+            )
+
+            # `delta.event_id` and `delta.sender` can be `None` in a few valid
+            # cases (see the docstring of
+            # `get_current_state_delta_membership_changes_for_user` for details).
             if delta.event_id is None:
-                logger.debug(
-                    "Not handling delta for deleted state: %r %r",
+                # TODO: Differentiate between this being caused by a state reset
+                # which removed a user from a room, or the homeserver
+                # purposefully having left the room. We can do so by checking
+                # whether there are any local memberships still left in the
+                # room. If so, then this is the result of a state reset.
+                #
+                # If it is a state reset, we should avoid cancelling new,
+                # delayed state events due to old state resurfacing. So we
+                # should skip and log a warning in this case.
+                #
+                # If the homeserver has left the room, then we should cancel all
+                # delayed state events intended for this room, as there is no
+                # need to try and send a delayed event into a room we've left.
+                logger.warning(
+                    "Skipping state delta (%r, %r) without corresponding event ID. "
+                    "This can happen if the homeserver has left the room (in which "
+                    "case this can be ignored), or if there has been a state reset "
+                    "which has caused the sender to be kicked out of the room",
                     delta.event_type,
                     delta.state_key,
                 )
                 continue

-            logger.debug(
-                "Handling: %r %r, %s", delta.event_type, delta.state_key, delta.event_id
+            sender_str = event_id_and_sender_dict.get(
+                delta.event_id, Sentinel.UNSET_SENTINEL
             )

-            event = await self._store.get_event(delta.event_id, allow_none=True)
-            if not event:
+            if sender_str is None:
+                # An event exists, but the `sender` field was "null" and Synapse
+                # incorrectly accepted the event. This is not expected.
+                logger.error(
+                    "Skipping state delta with event ID '%s' as 'sender' was None. "
+                    "This is unexpected - please report it as a bug!",
+                    delta.event_id,
+                )
+                continue
+            if sender_str is Sentinel.UNSET_SENTINEL:
+                # We have an event ID, but the event was not found in the
+                # datastore. This can happen if a room, or its history, is
+                # purged. State deltas related to the room are left behind, but
+                # the event no longer exists.
+                #
+                # As we cannot get the sender of this event, we can't calculate
+                # whether to cancel delayed events related to this one. So we skip.
+                logger.debug(
+                    "Skipping state delta with event ID '%s' - the room, or its history, may have been purged",
+                    delta.event_id,
+                )
                 continue

-            sender = UserID.from_string(event.sender)
+            try:
+                sender = UserID.from_string(sender_str)
+            except SynapseError as e:
+                logger.error(
+                    "Skipping state delta with Matrix User ID '%s' that failed to parse: %s",
+                    sender_str,
+                    e,
+                )
+                continue

             next_send_ts = await self._store.cancel_delayed_state_events(
                 room_id=delta.room_id,

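The hunk above replaces a per-delta `get_event` call with one batched sender lookup, then uses a sentinel default to tell "event not found at all" (purged) apart from "event found but its sender is None" (invalid data). A sketch of that classification against a hypothetical pre-fetched mapping, not the real storage API:

```python
# Sketch: a sentinel default distinguishes a missing dict entry from an
# entry whose value is legitimately None.
import enum
from typing import Dict, Optional


class Sentinel(enum.Enum):
    UNSET_SENTINEL = "unset"


def classify_sender(senders: Dict[str, Optional[str]], event_id: str) -> str:
    sender = senders.get(event_id, Sentinel.UNSET_SENTINEL)
    if sender is Sentinel.UNSET_SENTINEL:
        return "purged"  # skip quietly: room or history was purged
    if sender is None:
        return "invalid"  # skip loudly: stored event had a null sender
    return sender  # a Matrix user ID such as "@alice:example.org"
```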
@@ -328,7 +417,7 @@ class DelayedEventsHandler:
             requester,
             (requester.user.to_string(), requester.device_id),
         )
-        await self._initialized_from_db
+        await make_deferred_yieldable(self._initialized_from_db)

         next_send_ts = await self._store.cancel_delayed_event(
             delay_id=delay_id,
@@ -354,7 +443,7 @@ class DelayedEventsHandler:
             requester,
             (requester.user.to_string(), requester.device_id),
         )
-        await self._initialized_from_db
+        await make_deferred_yieldable(self._initialized_from_db)

         next_send_ts = await self._store.restart_delayed_event(
             delay_id=delay_id,
@@ -380,7 +469,7 @@ class DelayedEventsHandler:
         # Use standard request limiter for sending delayed events on-demand,
         # as an on-demand send is similar to sending a regular event.
         await self._request_ratelimiter.ratelimit(requester)
-        await self._initialized_from_db
+        await make_deferred_yieldable(self._initialized_from_db)

         event, next_send_ts = await self._store.process_target_delayed_event(
             delay_id=delay_id,

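These three hunks wrap a shared "initialised from DB" Deferred in `make_deferred_yieldable` so Twisted logcontexts are handled correctly when several request handlers wait on the same startup work. As a rough asyncio analogy (asyncio Futures, unlike bare Twisted Deferreds, are safe to await from multiple coroutines, and there is no logcontext concern): many callers can await one shared future resolved exactly once at startup.

```python
# Sketch (asyncio analogy, not Twisted): multiple handlers await one
# shared initialisation future before touching any state.
import asyncio


async def main() -> list:
    initialized = asyncio.get_running_loop().create_future()

    async def handler(name: str) -> str:
        await initialized  # block until startup has finished
        return f"{name}: ready"

    async def startup() -> None:
        await asyncio.sleep(0)  # pretend to load state from the DB
        initialized.set_result(None)

    results, _ = await asyncio.gather(
        asyncio.gather(handler("cancel"), handler("restart")), startup()
    )
    return list(results)
```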
@@ -1002,7 +1002,7 @@ class DeviceWriterHandler(DeviceHandler):
         # rolling-restarting Synapse.
         if self._is_main_device_list_writer:
             # On start up check if there are any updates pending.
-            hs.get_reactor().callWhenRunning(self._handle_new_device_update_async)
+            hs.get_clock().call_when_running(self._handle_new_device_update_async)
             self.device_list_updater = DeviceListUpdater(hs, self)
         hs.get_federation_registry().register_edu_handler(
             EduTypes.DEVICE_LIST_UPDATE,

@@ -34,7 +34,7 @@ from synapse.logging.opentracing import (
     set_tag,
 )
 from synapse.types import JsonDict, Requester, StreamKeyType, UserID, get_domain_from_id
-from synapse.util import json_encoder
+from synapse.util.json import json_encoder
 from synapse.util.stringutils import random_string

 if TYPE_CHECKING:

@@ -44,9 +44,9 @@ from synapse.types import (
     get_domain_from_id,
     get_verify_key_from_cross_signing_key,
 )
-from synapse.util import json_decoder
 from synapse.util.async_helpers import Linearizer, concurrently_execute
 from synapse.util.cancellation import cancellable
+from synapse.util.json import json_decoder
 from synapse.util.retryutils import (
     NotRetryingDestination,
     filter_destinations_by_retry_limiter,

@@ -39,8 +39,8 @@ from synapse.http import RequestTimedOutError
 from synapse.http.client import SimpleHttpClient
 from synapse.http.site import SynapseRequest
 from synapse.types import JsonDict, Requester
-from synapse.util import json_decoder
 from synapse.util.hash import sha256_and_url_safe_base64
+from synapse.util.json import json_decoder
 from synapse.util.stringutils import (
     assert_valid_client_secret,
     random_string,

@@ -81,9 +81,10 @@ from synapse.types import (
     create_requester,
 )
 from synapse.types.state import StateFilter
-from synapse.util import json_decoder, json_encoder, log_failure, unwrapFirstError
+from synapse.util import log_failure, unwrapFirstError
 from synapse.util.async_helpers import Linearizer, gather_results
 from synapse.util.caches.expiringcache import ExpiringCache
+from synapse.util.json import json_decoder, json_encoder
 from synapse.util.metrics import measure_func
 from synapse.visibility import get_effective_room_visibility_from_state

@@ -1013,14 +1014,37 @@ class EventCreationHandler:
            await self.clock.sleep(random.randint(1, 10))
            raise ShadowBanError()

-        if ratelimit:
+        room_version = None
+
+        if (
+            event_dict["type"] == EventTypes.Redaction
+            and "redacts" in event_dict["content"]
+            and self.hs.config.experimental.msc4169_enabled
+        ):
             room_id = event_dict["room_id"]
             try:
                 room_version = await self.store.get_room_version(room_id)
             except NotFoundError:
+                # The room doesn't exist.
                 raise AuthError(403, f"User {requester.user} not in room {room_id}")

+            if not room_version.updated_redaction_rules:
+                # Legacy room versions need the "redacts" field outside of the event's
+                # content. However clients may still send it within the content, so move
+                # the field if necessary for compatibility.
+                redacts = event_dict.get("redacts") or event_dict["content"].pop(
+                    "redacts", None
+                )
+                if redacts is not None and "redacts" not in event_dict:
+                    event_dict["redacts"] = redacts
+
+        if ratelimit:
+            if room_version is None:
+                room_id = event_dict["room_id"]
+                try:
+                    room_version = await self.store.get_room_version(room_id)
+                except NotFoundError:
+                    raise AuthError(403, f"User {requester.user} not in room {room_id}")
+
             if room_version.updated_redaction_rules:
                 redacts = event_dict["content"].get("redacts")
             else:

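The MSC4169 compatibility shim above moves the `redacts` field between the event's top level and its `content`, depending on the room version's redaction rules. A hedged sketch of the move-if-necessary logic as a pure function (hypothetical helper, mirroring but not reproducing the handler code):

```python
# Sketch: in legacy room versions the redaction target lives at the
# event's top level; newer versions keep it in `content`. Move it out of
# `content` only when the legacy shape is required.
from typing import Any, Dict


def normalise_redacts(
    event_dict: Dict[str, Any], updated_redaction_rules: bool
) -> Dict[str, Any]:
    if not updated_redaction_rules:
        redacts = event_dict.get("redacts") or event_dict["content"].pop(
            "redacts", None
        )
        if redacts is not None and "redacts" not in event_dict:
            event_dict["redacts"] = redacts
    return event_dict
```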
@@ -67,8 +67,9 @@ from synapse.http.site import SynapseRequest
 from synapse.logging.context import make_deferred_yieldable
 from synapse.module_api import ModuleApi
 from synapse.types import JsonDict, UserID, map_username_to_mxid_localpart
-from synapse.util import Clock, json_decoder
 from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
+from synapse.util.clock import Clock
+from synapse.util.json import json_decoder
 from synapse.util.macaroons import MacaroonGenerator, OidcSessionData
 from synapse.util.templates import _localpart_from_email_filter

@@ -541,7 +541,7 @@ class WorkerPresenceHandler(BasePresenceHandler):
             self.send_stop_syncing, UPDATE_SYNCING_USERS_MS
         )

-        hs.get_reactor().addSystemEventTrigger(
+        hs.get_clock().add_system_event_trigger(
             "before",
             "shutdown",
             run_as_background_process,
@@ -842,7 +842,7 @@ class PresenceHandler(BasePresenceHandler):
         # have not yet been persisted
         self.unpersisted_users_changes: Set[str] = set()

-        hs.get_reactor().addSystemEventTrigger(
+        hs.get_clock().add_system_event_trigger(
             "before",
             "shutdown",
             run_as_background_process,
@@ -1548,7 +1548,7 @@ class PresenceHandler(BasePresenceHandler):
             self.clock, name="presence_delta", server_name=self.server_name
         ):
             room_max_stream_ordering = self.store.get_room_max_stream_ordering()
-            if self._event_pos == room_max_stream_ordering:
+            if self._event_pos >= room_max_stream_ordering:
                 return

             logger.debug(

@@ -13,7 +13,6 @@
 #

-import enum
 import logging
 from itertools import chain
 from typing import (
@@ -75,6 +74,7 @@ from synapse.types.handlers.sliding_sync import (
 )
 from synapse.types.state import StateFilter
 from synapse.util import MutableOverlayMapping
+from synapse.util.sentinel import Sentinel

 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -83,12 +83,6 @@ if TYPE_CHECKING:
 logger = logging.getLogger(__name__)


-class Sentinel(enum.Enum):
-    # defining a sentinel in this way allows mypy to correctly handle the
-    # type of a dictionary lookup and subsequent type narrowing.
-    UNSET_SENTINEL = object()
-
-
 # Helper definition for the types that we might return. We do this to avoid
 # copying data between types (which can be expensive for many rooms).
 RoomsForUserType = Union[RoomsForUserStateReset, RoomsForUser, RoomsForUserSlidingSync]

@@ -27,7 +27,7 @@ from twisted.web.client import PartialDownloadError

 from synapse.api.constants import LoginType
 from synapse.api.errors import Codes, LoginError, SynapseError
-from synapse.util import json_decoder
+from synapse.util.json import json_decoder

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -87,8 +87,8 @@ from synapse.logging.context import make_deferred_yieldable, run_in_background
 from synapse.logging.opentracing import set_tag, start_active_span, tags
 from synapse.metrics import SERVER_NAME_LABEL
 from synapse.types import ISynapseReactor, StrSequence
-from synapse.util import json_decoder
 from synapse.util.async_helpers import timeout_deferred
+from synapse.util.json import json_decoder

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -49,7 +49,7 @@ from synapse.http.federation.well_known_resolver import WellKnownResolver
 from synapse.http.proxyagent import ProxyAgent
 from synapse.logging.context import make_deferred_yieldable, run_in_background
 from synapse.types import ISynapseReactor
-from synapse.util import Clock
+from synapse.util.clock import Clock

 logger = logging.getLogger(__name__)

@@ -27,7 +27,6 @@ from typing import Callable, Dict, Optional, Tuple
 import attr

 from twisted.internet import defer
-from twisted.internet.interfaces import IReactorTime
 from twisted.web.client import RedirectAgent
 from twisted.web.http import stringToDatetime
 from twisted.web.http_headers import Headers
@@ -35,8 +34,10 @@ from twisted.web.iweb import IAgent, IResponse

 from synapse.http.client import BodyExceededMaxSize, read_body_with_max_size
 from synapse.logging.context import make_deferred_yieldable
-from synapse.util import Clock, json_decoder
+from synapse.types import ISynapseThreadlessReactor
 from synapse.util.caches.ttlcache import TTLCache
+from synapse.util.clock import Clock
+from synapse.util.json import json_decoder
 from synapse.util.metrics import Measure

 # period to cache .well-known results for by default
@@ -88,7 +89,7 @@ class WellKnownResolver:
     def __init__(
         self,
         server_name: str,
-        reactor: IReactorTime,
+        reactor: ISynapseThreadlessReactor,
         agent: IAgent,
         user_agent: bytes,
         well_known_cache: Optional[TTLCache[bytes, Optional[bytes]]] = None,

@@ -89,8 +89,8 @@ from synapse.logging.context import make_deferred_yieldable, run_in_background
 from synapse.logging.opentracing import set_tag, start_active_span, tags
 from synapse.metrics import SERVER_NAME_LABEL
 from synapse.types import JsonDict
-from synapse.util import json_decoder
 from synapse.util.async_helpers import AwakenableSleeper, Linearizer, timeout_deferred
+from synapse.util.json import json_decoder
 from synapse.util.metrics import Measure
 from synapse.util.stringutils import parse_and_validate_server_name

@@ -52,10 +52,11 @@ from zope.interface import implementer

 from twisted.internet import defer, interfaces, reactor
 from twisted.internet.defer import CancelledError
-from twisted.internet.interfaces import IReactorTime
 from twisted.python import failure
 from twisted.web import resource

+from synapse.types import ISynapseThreadlessReactor
+
 try:
     from twisted.web.pages import notFound
 except ImportError:
@@ -77,10 +78,11 @@ from synapse.api.errors import (
 from synapse.config.homeserver import HomeServerConfig
 from synapse.logging.context import defer_to_thread, preserve_fn, run_in_background
 from synapse.logging.opentracing import active_span, start_active_span, trace_servlet
-from synapse.util import Clock, json_encoder
 from synapse.util.caches import intern_dict
 from synapse.util.cancellation import is_function_cancellable
+from synapse.util.clock import Clock
 from synapse.util.iterutils import chunk_seq
+from synapse.util.json import json_encoder

 if TYPE_CHECKING:
     import opentracing
@@ -410,7 +412,7 @@ class DirectServeJsonResource(_AsyncResource):
         clock: Optional[Clock] = None,
     ):
         if clock is None:
-            clock = Clock(cast(IReactorTime, reactor))
+            clock = Clock(cast(ISynapseThreadlessReactor, reactor))

         super().__init__(clock, extract_context)
         self.canonical_json = canonical_json
@@ -589,7 +591,7 @@ class DirectServeHtmlResource(_AsyncResource):
         clock: Optional[Clock] = None,
     ):
         if clock is None:
-            clock = Clock(cast(IReactorTime, reactor))
+            clock = Clock(cast(ISynapseThreadlessReactor, reactor))

         super().__init__(clock, extract_context)

@@ -51,7 +51,7 @@ from synapse.api.errors import Codes, SynapseError
|
||||
from synapse.http import redact_uri
|
||||
from synapse.http.server import HttpServer
|
||||
from synapse.types import JsonDict, RoomAlias, RoomID, StrCollection
|
||||
from synapse.util import json_decoder
|
||||
from synapse.util.json import json_decoder
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
|
||||
@@ -802,13 +802,15 @@ def run_in_background(
     deferred returned by the function completes.

     To explain how the log contexts work here:
-    - When `run_in_background` is called, the current context is stored ("original"),
-      we kick off the background task in the current context, and we restore that
-      original context before returning
-    - When the background task finishes, we don't want to leak our context into the
-      reactor which would erroneously get attached to the next operation picked up by
-      the event loop. We add a callback to the deferred which will clear the logging
-      context after it finishes and yields control back to the reactor.
+    - When `run_in_background` is called, the calling logcontext is stored
+      ("original"), we kick off the background task in the current context, and we
+      restore that original context before returning.
+    - For a completed deferred, that's the end of the story.
+    - For an incomplete deferred, when the background task finishes, we don't want to
+      leak our context into the reactor which would erroneously get attached to the
+      next operation picked up by the event loop. We add a callback to the deferred
+      which will clear the logging context after it finishes and yields control back to
+      the reactor.

     Useful for wrapping functions that return a deferred or coroutine, which you don't
     yield or await on (for instance because you want to pass it to
@@ -857,22 +859,36 @@ def run_in_background(

     # The deferred has already completed
     if d.called and not d.paused:
-        # The function should have maintained the logcontext, so we can
-        # optimise out the messing about
+        # If the function messes with logcontexts, we can assume it follows the Synapse
+        # logcontext rules (Rules for functions returning awaitables: "If the awaitable
+        # is already complete, the function returns with the same logcontext it started
+        # with."). If the function doesn't touch logcontexts at all, we can also assume
+        # the logcontext is unchanged.
+        #
+        # Either way, the function should have maintained the calling logcontext, so we
+        # can avoid messing with it further. Additionally, if the deferred has already
+        # completed, then it would be a mistake to then add a deferred callback (below)
+        # to reset the logcontext to the sentinel logcontext as that would run
+        # immediately (remember our goal is to maintain the calling logcontext when we
+        # return).
         return d

-    # The function may have reset the context before returning, so we need to restore it
-    # now.
+    # Since the function we called may follow the Synapse logcontext rules (Rules for
+    # functions returning awaitables: "If the awaitable is incomplete, the function
+    # clears the logcontext before returning"), the function may have reset the
+    # logcontext before returning, so we need to restore the calling logcontext now
+    # before we return ourselves.
+    #
+    # Our goal is to have the caller logcontext unchanged after firing off the
+    # background task and returning.
     set_current_context(calling_context)

-    # The original logcontext will be restored when the deferred completes, but
-    # there is nothing waiting for it, so it will get leaked into the reactor (which
-    # would then get picked up by the next thing the reactor does). We therefore
-    # need to reset the logcontext here (set the `sentinel` logcontext) before
-    # yielding control back to the reactor.
+    # If the function we called is playing nice and following the Synapse logcontext
+    # rules, it will restore the original calling logcontext when the deferred completes;
+    # but there is nothing waiting for it, so it will get leaked into the reactor (which
+    # would then get picked up by the next thing the reactor does). We therefore need to
+    # reset the logcontext here (set the `sentinel` logcontext) before yielding control
+    # back to the reactor.
     #
     # (If this feels asymmetric, consider it this way: we are
     # effectively forking a new thread of execution. We are
@@ -894,10 +910,9 @@ def run_coroutine_in_background(
     Useful for wrapping coroutines that you don't yield or await on (for
     instance because you want to pass it to deferred.gatherResults()).

-    This is a special case of `run_in_background` where we can accept a
-    coroutine directly rather than a function. We can do this because coroutines
-    do not run until called, and so calling an async function without awaiting
-    cannot change the log contexts.
+    This is a special case of `run_in_background` where we can accept a coroutine
+    directly rather than a function. We can do this because coroutines do not continue
+    running once they have yielded.

    This is an ergonomic helper so we can do this:
    ```python
@@ -908,33 +923,7 @@ def run_coroutine_in_background(
        run_in_background(lambda: func1(arg1))
        ```
     """
-    calling_context = current_context()
-
-    # Wrap the coroutine in a deferred, which will have the side effect of executing the
-    # coroutine in the background.
-    d = defer.ensureDeferred(coroutine)
-
-    # The function may have reset the context before returning, so we need to restore it
-    # now.
-    #
-    # Our goal is to have the caller logcontext unchanged after firing off the
-    # background task and returning.
-    set_current_context(calling_context)
-
-    # The original logcontext will be restored when the deferred completes, but
-    # there is nothing waiting for it, so it will get leaked into the reactor (which
-    # would then get picked up by the next thing the reactor does). We therefore
-    # need to reset the logcontext here (set the `sentinel` logcontext) before
-    # yielding control back to the reactor.
-    #
-    # (If this feels asymmetric, consider it this way: we are
-    # effectively forking a new thread of execution. We are
-    # probably currently within a ``with LoggingContext()`` block,
-    # which is supposed to have a single entry and exit point. But
-    # by spawning off another deferred, we are effectively
-    # adding a new exit point.)
-    d.addBoth(_set_context_cb, SENTINEL_CONTEXT)
-    return d
+    return run_in_background(lambda: coroutine)


T = TypeVar("T")
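The delegation to `run_in_background(lambda: coroutine)` is safe because a coroutine object does no work (and so cannot touch any logcontext) until something starts driving it. A small standalone demonstration of that laziness, using plain `asyncio` rather than Twisted (`func1`, `side_effects`, and `demo` are invented for illustration):

```python
import asyncio

side_effects = []


async def func1(arg):
    # The body only runs once the coroutine is driven by an event loop.
    side_effects.append(arg)
    return arg


def demo():
    coro = func1("hello")       # creates the coroutine object; nothing runs yet
    before = list(side_effects)  # snapshot taken before the coroutine is driven
    result = asyncio.run(coro)   # only now does the body execute
    return before, result
```

Because creating the coroutine has no side effects, wrapping it in a zero-argument lambda and handing it to `run_in_background` is equivalent to the hand-rolled logcontext bookkeeping that this hunk deletes.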
@@ -60,8 +60,18 @@ class PeriodicallyFlushingMemoryHandler(MemoryHandler):
         else:
             reactor_to_use = reactor

-        # call our hook when the reactor start up
-        reactor_to_use.callWhenRunning(on_reactor_running)
+        # Call our hook when the reactor starts up.
+        #
+        # type-ignore: Ideally, we'd use `Clock.call_when_running(...)`, but
+        # `PeriodicallyFlushingMemoryHandler` is instantiated via Python logging
+        # configuration, so it's not straightforward to pass in the homeserver's clock
+        # (and we don't want to burden other people's logging config with the details).
+        #
+        # The important reason why we want to use `Clock.call_when_running` is so that
+        # the callback runs with a logcontext as we want to know which server the logs
+        # came from. But since we don't log anything in the callback, it's safe to
+        # ignore the lint here.
+        reactor_to_use.callWhenRunning(on_reactor_running)  # type: ignore[prefer-synapse-clock-call-when-running]

     def shouldFlush(self, record: LogRecord) -> bool:
         """
@@ -204,7 +204,7 @@ from twisted.web.http import Request
 from twisted.web.http_headers import Headers

 from synapse.config import ConfigError
-from synapse.util import json_decoder, json_encoder
+from synapse.util.json import json_decoder, json_encoder

 if TYPE_CHECKING:
     from synapse.http.site import SynapseRequest

@@ -54,8 +54,8 @@ from synapse.logging.context import (
     make_deferred_yieldable,
     run_in_background,
 )
-from synapse.util import Clock
 from synapse.util.async_helpers import DeferredEvent
+from synapse.util.clock import Clock
 from synapse.util.stringutils import is_ascii

 if TYPE_CHECKING:

@@ -55,7 +55,7 @@ from synapse.api.errors import NotFoundError
 from synapse.logging.context import defer_to_thread, run_in_background
 from synapse.logging.opentracing import start_active_span, trace, trace_with_opname
 from synapse.media._base import ThreadedFileSender
-from synapse.util import Clock
+from synapse.util.clock import Clock
 from synapse.util.file_consumer import BackgroundFileConsumer

 from ..types import JsonDict

@@ -27,7 +27,7 @@ import attr
 from synapse.media.preview_html import parse_html_description
 from synapse.types import JsonDict
-from synapse.util import json_decoder
+from synapse.util.json import json_decoder

 if TYPE_CHECKING:
     from lxml import etree

@@ -46,9 +46,9 @@ from synapse.media.oembed import OEmbedProvider
 from synapse.media.preview_html import decode_body, parse_html_to_open_graph
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.types import JsonDict, UserID
-from synapse.util import json_encoder
 from synapse.util.async_helpers import ObservableDeferred
 from synapse.util.caches.expiringcache import ExpiringCache
+from synapse.util.json import json_encoder
 from synapse.util.stringutils import random_string

 if TYPE_CHECKING:

@@ -158,9 +158,9 @@ from synapse.types import (
     create_requester,
 )
 from synapse.types.state import StateFilter
-from synapse.util import Clock
 from synapse.util.async_helpers import maybe_awaitable
 from synapse.util.caches.descriptors import CachedFunction, cached as _cached
+from synapse.util.clock import Clock
 from synapse.util.frozenutils import freeze

 if TYPE_CHECKING:

@@ -29,7 +29,7 @@ import logging
 from typing import List, Optional, Tuple, Type, TypeVar

 from synapse.replication.tcp.streams._base import StreamRow
-from synapse.util import json_decoder, json_encoder
+from synapse.util.json import json_decoder, json_encoder

 logger = logging.getLogger(__name__)

@@ -27,7 +27,7 @@ from prometheus_client import Counter, Histogram
 from synapse.logging import opentracing
 from synapse.logging.context import make_deferred_yieldable
 from synapse.metrics import SERVER_NAME_LABEL
-from synapse.util import json_decoder, json_encoder
+from synapse.util.json import json_decoder, json_encoder

 if TYPE_CHECKING:
     from txredisapi import ConnectionHandler

@@ -55,7 +55,7 @@ from synapse.replication.tcp.commands import (
     ServerCommand,
     parse_command_from_line,
 )
-from synapse.util import Clock
+from synapse.util.clock import Clock
 from synapse.util.stringutils import random_string

 if TYPE_CHECKING:
@@ -399,10 +399,15 @@ class SigningKeyUploadServlet(RestServlet):
         if not keys_are_different:
             return 200, {}

+        # MSC4190 can skip UIA for replacing cross-signing keys as well.
+        is_appservice_with_msc4190 = (
+            requester.app_service and requester.app_service.msc4190_device_management
+        )
+
         # The keys are different; is x-signing set up? If no, then this is first-time
         # setup, and that is allowed without UIA, per MSC3967.
         # If yes, then we need to authenticate the change.
-        if is_cross_signing_setup:
+        if is_cross_signing_setup and not is_appservice_with_msc4190:
            # With MSC3861, UIA is not possible. Instead, the auth service has to
            # explicitly mark the master key as replaceable.
            if self.hs.config.mas.enabled:
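The authentication decision in the hunk above reduces to a small predicate. A toy restatement (a hypothetical helper, not Synapse's actual API): user-interactive auth is needed only when cross-signing keys already exist *and* the requester is not an MSC4190-enabled appservice; first-time setup is always allowed without UIA, per MSC3967.

```python
def uia_required(
    is_cross_signing_setup: bool,
    is_appservice_with_msc4190: bool,
) -> bool:
    # Replacing existing cross-signing keys requires UIA, unless the
    # request comes from an appservice with MSC4190 device management.
    return is_cross_signing_setup and not is_appservice_with_msc4190
```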
@@ -216,6 +216,13 @@ class LoginRestServlet(RestServlet):
                 "This login method is only valid for application services"
             )

+        if appservice.msc4190_device_management:
+            raise SynapseError(
+                400,
+                "This appservice has MSC4190 enabled, so appservice login cannot be used.",
+                errcode=Codes.APPSERVICE_LOGIN_UNSUPPORTED,
+            )
+
         if appservice.is_rate_limited():
             await self._address_ratelimiter.ratelimit(
                 None, request.getClientAddress().host
@@ -782,8 +782,12 @@ class RegisterRestServlet(RestServlet):
             user_id, appservice = await self.registration_handler.appservice_register(
                 username, as_token
             )
-            if appservice.msc4190_device_management:
-                body["inhibit_login"] = True
+            if appservice.msc4190_device_management and not body.get("inhibit_login"):
+                raise SynapseError(
+                    400,
+                    "This appservice has MSC4190 enabled, so the inhibit_login parameter must be set to true.",
+                    errcode=Codes.APPSERVICE_LOGIN_UNSUPPORTED,
+                )

             return await self._create_registration_details(
                 user_id,

@@ -923,6 +927,12 @@ class RegisterAppServiceOnlyRestServlet(RestServlet):
                 "Registration has been disabled. Only m.login.application_service registrations are allowed.",
                 errcode=Codes.FORBIDDEN,
             )
+        if not body.get("inhibit_login"):
+            raise SynapseError(
+                400,
+                "This server uses OAuth2, so the inhibit_login parameter must be set to true for appservice registrations.",
+                errcode=Codes.APPSERVICE_LOGIN_UNSUPPORTED,
+            )

         kind = parse_string(request, "kind", default="user")
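Both registration servlets above enforce the same body-level precondition: instead of silently forcing `inhibit_login`, the client must now set it explicitly. A minimal standalone sketch of that check (a hypothetical helper, not part of Synapse):

```python
def validate_register_body(body: dict, msc4190_enabled: bool) -> None:
    """Reject a registration body that omits inhibit_login when MSC4190 applies."""
    if msc4190_enabled and not body.get("inhibit_login"):
        raise ValueError(
            "inhibit_login must be set to true when MSC4190 is enabled"
        )
```

The behavioral change for clients: previously a registering appservice got no access token back regardless; now an appservice that wants one (or forgot the parameter) receives an explicit 400 with `APPSERVICE_LOGIN_UNSUPPORTED` rather than a surprising response shape.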
@@ -58,8 +58,8 @@ from synapse.logging.opentracing import log_kv, set_tag, trace_with_opname
 from synapse.rest.admin.experimental_features import ExperimentalFeature
 from synapse.types import JsonDict, Requester, SlidingSyncStreamToken, StreamToken
 from synapse.types.rest.client import SlidingSyncBody
-from synapse.util import json_decoder
 from synapse.util.caches.lrucache import LruCache
+from synapse.util.json import json_decoder

 from ._base import client_patterns, set_timeline_upper_limit

@@ -180,6 +180,8 @@ class VersionsRestServlet(RestServlet):
                     "org.matrix.msc4155": self.config.experimental.msc4155_enabled,
                     # MSC4306: Support for thread subscriptions
                     "org.matrix.msc4306": self.config.experimental.msc4306_enabled,
+                    # MSC4169: Backwards-compatible redaction sending using `/send`
+                    "com.beeper.msc4169": self.config.experimental.msc4169_enabled,
                 },
             },
         )

@@ -38,8 +38,8 @@ from synapse.http.servlet import (
 from synapse.storage.keys import FetchKeyResultForRemote
 from synapse.types import JsonDict
 from synapse.types.rest import RequestBodyModel
-from synapse.util import json_decoder
 from synapse.util.async_helpers import yieldable_gather_results
+from synapse.util.json import json_decoder

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -28,7 +28,7 @@ from synapse.api.errors import NotFoundError
 from synapse.http.server import DirectServeJsonResource
 from synapse.http.site import SynapseRequest
 from synapse.types import JsonDict
-from synapse.util import json_encoder
+from synapse.util.json import json_encoder
 from synapse.util.stringutils import parse_server_name

 if TYPE_CHECKING:

@@ -156,7 +156,7 @@ from synapse.storage.controllers import StorageControllers
 from synapse.streams.events import EventSources
 from synapse.synapse_rust.rendezvous import RendezvousHandler
 from synapse.types import DomainSpecificString, ISynapseReactor
-from synapse.util import Clock
+from synapse.util.clock import Clock
 from synapse.util.distributor import Distributor
 from synapse.util.macaroons import MacaroonGenerator
 from synapse.util.ratelimitutils import FederationRateLimiter
@@ -1007,7 +1007,7 @@ class HomeServer(metaclass=abc.ABCMeta):
         )

         media_threadpool.start()
-        self.get_reactor().addSystemEventTrigger(
+        self.get_clock().add_system_event_trigger(
             "during", "shutdown", media_threadpool.stop
         )

@@ -29,8 +29,8 @@ from synapse.storage.database import (
     make_in_list_sql_clause,  # noqa: F401
 )
 from synapse.types import get_domain_from_id
-from synapse.util import json_decoder
 from synapse.util.caches.descriptors import CachedFunction
+from synapse.util.json import json_decoder

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -45,7 +45,8 @@ from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.storage.engines import PostgresEngine
 from synapse.storage.types import Connection, Cursor
 from synapse.types import JsonDict, StrCollection
-from synapse.util import Clock, json_encoder
+from synapse.util.clock import Clock
+from synapse.util.json import json_encoder

 from . import engines

@@ -682,6 +682,8 @@ class StateStorageController:
             - the stream id which these results go up to
             - list of current_state_delta_stream rows. If it is empty, we are
               up to date.
+
+        A maximum of 100 rows will be returned.
         """
         # FIXME(faster_joins): what do we do here?
         # https://github.com/matrix-org/synapse/issues/13008
@@ -48,9 +48,9 @@ from synapse.storage.databases.main.push_rule import PushRulesWorkerStore
 from synapse.storage.invite_rule import InviteRulesConfig
 from synapse.storage.util.id_generators import MultiWriterIdGenerator
 from synapse.types import JsonDict, JsonMapping
-from synapse.util import json_encoder
 from synapse.util.caches.descriptors import cached
 from synapse.util.caches.stream_change_cache import StreamChangeCache
+from synapse.util.json import json_encoder

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -42,8 +42,8 @@ from synapse.storage.databases.main.roommember import RoomMemberWorkerStore
 from synapse.storage.types import Cursor
 from synapse.storage.util.sequence import build_sequence_generator
 from synapse.types import DeviceListUpdates, JsonMapping
-from synapse.util import json_encoder
 from synapse.util.caches.descriptors import _CacheContext, cached
+from synapse.util.json import json_encoder

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -83,6 +83,10 @@ class ApplicationServiceWorkerStore(RoomMemberWorkerStore):
             hs.hostname, hs.config.appservice.app_service_config_files
         )
         self.exclusive_user_regex = _make_exclusive_regex(self.services_cache)
+        # When OAuth is enabled, force all appservices to enable MSC4190 too.
+        if hs.config.mas.enabled or hs.config.experimental.msc3861.enabled:
+            for appservice in self.services_cache:
+                appservice.msc4190_device_management = True

         def get_max_as_txn_id(txn: Cursor) -> int:
             logger.warning("Falling back to slow query, you should port to postgres")
@@ -32,7 +32,7 @@ from synapse.storage.database import (
 )
 from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
 from synapse.storage.databases.main.events_worker import EventsWorkerStore
-from synapse.util import json_encoder
+from synapse.util.json import json_encoder

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -455,7 +455,7 @@ class ClientIpWorkerStore(ClientIpBackgroundUpdateStore, MonthlyActiveUsersWorke
         self._client_ip_looper = self._clock.looping_call(
             self._update_client_ips_batch, 5 * 1000
         )
-        self.hs.get_reactor().addSystemEventTrigger(
+        self.hs.get_clock().add_system_event_trigger(
             "before", "shutdown", self._update_client_ips_batch
         )

@@ -22,7 +22,8 @@ from synapse.storage._base import SQLBaseStore, db_to_json
 from synapse.storage.database import LoggingTransaction, StoreError
 from synapse.storage.engines import PostgresEngine
 from synapse.types import JsonDict, RoomID
-from synapse.util import json_encoder, stringutils as stringutils
+from synapse.util import stringutils
+from synapse.util.json import json_encoder

 logger = logging.getLogger(__name__)
@@ -182,6 +183,21 @@ class DelayedEventsStore(SQLBaseStore):
             "restart_delayed_event", restart_delayed_event_txn
         )

+    async def get_count_of_delayed_events(self) -> int:
+        """Returns the number of pending delayed events in the DB."""
+
+        def _get_count_of_delayed_events(txn: LoggingTransaction) -> int:
+            sql = "SELECT count(*) FROM delayed_events"
+
+            txn.execute(sql)
+            resp = txn.fetchone()
+            return resp[0] if resp is not None else 0
+
+        return await self.db_pool.runInteraction(
+            "get_count_of_delayed_events",
+            _get_count_of_delayed_events,
+        )
+
     async def get_all_delayed_events_for_user(
         self,
         user_localpart: str,
@@ -53,10 +53,11 @@ from synapse.storage.database import (
 )
 from synapse.storage.util.id_generators import MultiWriterIdGenerator
 from synapse.types import JsonDict, StrCollection
-from synapse.util import Duration, json_encoder
+from synapse.util import Duration
 from synapse.util.caches.expiringcache import ExpiringCache
 from synapse.util.caches.stream_change_cache import StreamChangeCache
 from synapse.util.iterutils import batch_iter
+from synapse.util.json import json_encoder
 from synapse.util.stringutils import parse_and_validate_server_name

 if TYPE_CHECKING:

@@ -64,11 +64,11 @@ from synapse.types import (
     StrCollection,
     get_verify_key_from_cross_signing_key,
 )
-from synapse.util import json_decoder, json_encoder
 from synapse.util.caches.descriptors import cached, cachedList
 from synapse.util.caches.stream_change_cache import StreamChangeCache
 from synapse.util.cancellation import cancellable
 from synapse.util.iterutils import batch_iter
+from synapse.util.json import json_decoder, json_encoder
 from synapse.util.stringutils import shortstr

 if TYPE_CHECKING:
Some files were not shown because too many files have changed in this diff.