Compare commits

61 Commits

Author SHA1 Message Date
Andrew Morgan
036fb87584 1.139.2 2025-10-07 16:30:03 +01:00
Andrew Morgan
5e3839e2af Update KeyUploadServlet to handle case where client sends device_keys: null (#19023) 2025-10-07 16:28:26 +01:00
Andrew Morgan
76b012c3f5 1.139.1 2025-10-07 11:58:08 +01:00
Till
26aaaf9e48 Validate the body of requests to /keys/upload (#17097)
Co-authored-by: Andrew Morgan <andrew@amorgan.xyz>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
Co-authored-by: Eric Eastwood <erice@element.io>
2025-10-07 11:34:07 +01:00
Andrew Morgan
4a37c4d87a Remove unstable prefixes for MSC2732: Olm fallback keys (#18996)
Co-authored-by: Eric Eastwood <erice@element.io>
2025-10-07 11:34:03 +01:00
Andrew Morgan
d67280f5d8 Remove unstable prefixes for MSC2732
This MSC was accepted in 2022. We shouldn't need to continue supporting the unstable field names.
2025-10-07 11:33:58 +01:00
Andrew Morgan
0aeb95fb07 Add MAS note to 1.139.0 changelog 2025-09-30 12:05:28 +01:00
Andrew Morgan
72020f3f2c 1.139.0 2025-09-30 11:58:59 +01:00
Andrew Morgan
e2ec3b7d0d 1.139.0rc3 2025-09-25 12:14:20 +01:00
Eric Eastwood
acb9ec3c38 Fix run_coroutine_in_background(...) incorrectly handling logcontext (#18964)
Regressed in
https://github.com/element-hq/synapse/pull/18900#discussion_r2331554278
(see conversation there for more context)


### How is this a regression?

> To give this an update with more hindsight; this logic *was* redundant
with the early return and it is safe to remove this complexity

> 
> It seems like this actually has to do with completed vs incomplete
deferreds...
> 
> To explain how things previously worked *without* the early-return
shortcut:
> 
> With the normal case of **incomplete awaitable**, we store the
`calling_context` and the `f` function is called and runs until it
yields to the reactor. Because `f` follows the logcontext rules, it sets
the `sentinel` logcontext. Then in `run_in_background(...)`, we restore
the `calling_context`, store the current `ctx` (which is `sentinel`) and
return. When the deferred completes, we restore `ctx` (which is
`sentinel`) before yielding to the reactor again (all good)
> 
> With the other case where we see a **completed awaitable**, we store
the `calling_context` and the `f` function is called and runs to
completion (no logcontext change). *This is where the shortcut would
kick in but I'm going to continue explaining as if we commented out the
shortcut.* -- Then in `run_in_background(...)`, we restore the
`calling_context`, store the current `ctx` (which is same as the
`calling_context`). Because the deferred is already completed, our extra
callback is called immediately and we restore `ctx` (which is same as
the `calling_context`). Since we never yield to the reactor, the
`calling_context` is perfect as that's what we want again (all good)
> 
> ---
> 
> But this also means that our early-return shortcut is no longer just
an optimization and is *necessary* to act correctly in the **completed
awaitable** case as we want to return with the `calling_context` and not
reset to the `sentinel` context. I've updated the comment in
https://github.com/element-hq/synapse/pull/18964 to explain the
necessity as it's currently just described as an optimization.
> 
> But because we made the same change to
`run_coroutine_in_background(...)` which didn't have the same
early-return shortcut, we regressed the correct behavior. This is
being fixed in https://github.com/element-hq/synapse/pull/18964
>
>
> *-- @MadLittleMods,
https://github.com/element-hq/synapse/pull/18900#discussion_r2373582917*
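
To make the two cases concrete, here is a toy, self-contained model of the
bookkeeping described above. It deliberately uses a module-level variable in
place of Synapse's real logcontext machinery, so `current_context`,
`SENTINEL`, and `run_in_background_model` are illustrative stand-ins rather
than Synapse's API:

```python
from twisted.internet import defer

SENTINEL = "sentinel"
current_context = SENTINEL  # stand-in for Synapse's real logcontext state


def run_in_background_model(f):
    """Toy model of the context bookkeeping in run_in_background(...)."""
    global current_context
    calling_context = current_context

    d = defer.maybeDeferred(f)

    if d.called and not d.paused:
        # Completed awaitable: `f` never yielded to the reactor, so the
        # calling context is still active and nothing needs restoring.
        return d

    # Incomplete awaitable: `f` yielded to the reactor and, per the
    # logcontext rules, reset the context to the sentinel. Restore the
    # calling context for our caller, and arrange to reset back to the
    # sentinel when the deferred completes so nothing leaks to the reactor.
    ctx = current_context  # the sentinel in this branch
    current_context = calling_context

    def _restore(result):
        global current_context
        current_context = ctx
        return result

    d.addBoth(_restore)
    return d
```

The early-return shortcut is what `run_coroutine_in_background(...)` was
missing; as the quote explains, it is necessary for correctness in the
completed case, not just an optimization.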

### How did we find this problem?

Spawning from @wrjlewis
[seeing](https://matrix.to/#/!SGNQGPGUwtcPBUotTL:matrix.org/$h3TxxPVlqC6BTL07dbrsz6PmaUoZxLiXnSTEY-QYDtA?via=jki.re&via=matrix.org&via=element.io)
`Starting metrics collection 'typing.get_new_events' from sentinel
context: metrics will be lost` in the logs:

<details>
<summary>More logs</summary>

```
synapse.http.request_metrics - 222 - ERROR - sentinel - Trying to stop RequestMetrics in the sentinel context.
2025-09-23 14:43:19,712 - synapse.util.metrics - 212 - WARNING - sentinel - Starting metrics collection 'typing.get_new_events' from sentinel context: metrics will be lost
2025-09-23 14:43:19,713 - synapse.rest.client.sync - 851 - INFO - sentinel - Client has disconnected; not serializing response.
2025-09-23 14:43:19,713 - synapse.http.server - 825 - WARNING - sentinel - Not sending response to request <XForwardedForRequest at 0x7f23e8111ed0 method='POST' uri='/_matrix/client/unstable/org.matrix.simplified_msc3575/sync?pos=281963%2Fs929324_147053_10_2652457_147960_2013_25554_4709564_0_164_2&timeout=30000' clientproto='HTTP/1.1' site='8008'>, already disconnected.
2025-09-23 14:43:19,713 - synapse.access.http.8008 - 515 - INFO - sentinel - 92.40.194.87 - 8008 - {@me:wi11.co.uk} Processed request: 30.005sec/-8.041sec (0.001sec, 0.000sec) (0.000sec/0.002sec/2) 0B 200! "POST /_matrix/client/unstable/org.matrix.simplified_msc3575/
```

</details>

From the logs there, we can see things relating to
`typing.get_new_events` and
`/_matrix/client/unstable/org.matrix.simplified_msc3575/sync`, which led
me to try out Sliding Sync with the typing extension enabled and allowed
me to reproduce the problem locally. Sliding Sync is a unique scenario,
as it's the only place we use `gather_optional_coroutines(...)` ->
`run_coroutine_in_background(...)` (introduced in
https://github.com/element-hq/synapse/pull/17884), which is why it alone
exhibits this behavior.


### Testing strategy

1. Configure Synapse to enable
[MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186):
Simplified Sliding Sync, which is actually gated behind the
[MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) flag:
    ```yaml
    experimental_features:
      msc3575_enabled: true
    ```
1. Start Synapse: `poetry run synapse_homeserver --config-path
homeserver.yaml`
1. Make a Sliding Sync request with one of the extensions enabled
    ```http
    POST http://localhost:8008/_matrix/client/unstable/org.matrix.simplified_msc3575/sync
    {
      "lists": {},
      "room_subscriptions": {
        "!FlgJYGQKAIvAscfBhq:my.synapse.linux.server": {
          "required_state": [],
          "timeline_limit": 1
        }
      },
      "extensions": {
        "typing": {
          "enabled": true
        }
      }
    }
    ```
1. Open your homeserver logs and notice warnings about `Starting ...
from sentinel context: metrics will be lost`
2025-09-25 12:13:01 +01:00
Andrew Morgan
9c4ba13a10 Add entry to v1.139.0 upgrade notes about appservices and /register requests 2025-09-23 16:27:38 +01:00
Andrew Morgan
5857d2de59 Note ubuntu release support update in the upgrade notes 2025-09-23 15:34:26 +01:00
Andrew Morgan
b10f3f5959 1.139.0rc2 2025-09-23 15:31:49 +01:00
Andrew Morgan
fd29e3219c Drop support for Ubuntu 24.10 'Oracular Oriole', add support for Ubuntu 25.04 'Plucky Puffin' (#18962) 2025-09-23 15:28:40 +01:00
Andrew Morgan
d308469e90 Update changelog to move MSC4190 entry to Features 2025-09-23 14:28:38 +01:00
Andrew Morgan
daf33e4954 1.139.0rc1 2025-09-23 13:28:34 +01:00
Andrew Morgan
ddc7627b22 Fix performance regression related to delayed events processing (#18926) 2025-09-23 09:47:30 +01:00
Eric Eastwood
5be7679dd9 Split loading config vs homeserver setup (#18933)
This allows us to get access to `server_name` so we can use it when
creating the `LoggingContext("main")` in the future (pre-requisite for
https://github.com/element-hq/synapse/pull/18868).

This also allows us more flexibility to parse config however we want and
set up a Synapse homeserver, like what we do in [Synapse Pro for Small
Hosts](https://github.com/element-hq/synapse-small-hosts).

Split out from https://github.com/element-hq/synapse/pull/18868
2025-09-22 14:53:02 -05:00
Eric Eastwood
e7d98d3429 Remove sentinel logcontext in Clock utilities (looping_call, looping_call_now, call_later) (#18907)
Part of https://github.com/element-hq/synapse/issues/18905

Lints for ensuring we use `Clock.call_later` instead of
`reactor.callLater`, etc are coming in
https://github.com/element-hq/synapse/pull/18944

### Testing strategy

1. Configure Synapse to log at the `DEBUG` level
1. Start Synapse: `poetry run synapse_homeserver --config-path
homeserver.yaml`
1. Wait 10 seconds for the [database profiling
loop](9cc4001778/synapse/storage/database.py (L711))
to execute
1. Notice the logcontext being used for the `Total database time` log
line

Before (`sentinel`):

```
2025-09-10 16:36:58,651 - synapse.storage.TIME - 707 - DEBUG - sentinel - Total database time: 0.646% {room_forgetter_stream_pos(2): 0.131%, reap_monthly_active_users(1): 0.083%, get_device_change_last_converted_pos(1): 0.078%}
```

After (`looping_call`):

```
2025-09-10 16:36:58,651 - synapse.storage.TIME - 707 - DEBUG - looping_call - Total database time: 0.646% {room_forgetter_stream_pos(2): 0.131%, reap_monthly_active_users(1): 0.083%, get_device_change_last_converted_pos(1): 0.078%}
```
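
For reference, a hedged sketch of the scheduling pattern this affects (the
interval argument to `Clock.looping_call` is in milliseconds):

```python
import logging

from synapse.util import Clock  # Synapse's wrapper around the reactor clock

logger = logging.getLogger(__name__)


def report_stats() -> None:
    # With Clock.looping_call, this runs in a "looping_call" logcontext
    # (as in the "After" log line above) instead of the sentinel.
    logger.debug("collecting stats")


def schedule(clock: Clock) -> None:
    # Preferred over using reactor.callLater / twisted's LoopingCall directly.
    clock.looping_call(report_stats, 10 * 1000)  # every 10 seconds
```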
2025-09-22 14:51:13 -05:00
Eric Eastwood
d05f44a1c6 Introduce Clock.add_system_event_trigger(...) to include logcontext by default (#18945)
Introduce `Clock.add_system_event_trigger(...)` to wrap system event
callback code in a logcontext, ensuring we can identify which server
generated the logs.

Background:

>  Ideally, nothing from the Synapse homeserver would be logged against the `sentinel` 
>  logcontext as we want to know which server the logs came from. In practice, this is not 
>  always the case yet especially outside of request handling. 
>   
>  Global things outside of Synapse (e.g. Twisted reactor code) should run in the 
>  `sentinel` logcontext. It's only when it calls into application code that a logcontext 
>  gets activated. This means the reactor should be started in the `sentinel` logcontext, 
>  and any time an awaitable yields control back to the reactor, it should reset the 
>  logcontext to be the `sentinel` logcontext. This is important to avoid leaking the 
>  current logcontext to the reactor (which would then get picked up and associated with 
>  the next thing the reactor does). 
>
> *-- `docs/log_contexts.md`*

Also adds a lint to prefer `Clock.add_system_event_trigger(...)` over
`reactor.addSystemEventTrigger(...)`

Part of https://github.com/element-hq/synapse/issues/18905
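
A hypothetical usage sketch; the wrapper is assumed to mirror
`reactor.addSystemEventTrigger(phase, eventType, callable, ...)`:

```python
import logging

from synapse.util import Clock

logger = logging.getLogger(__name__)


def on_shutdown() -> None:
    # Runs in a real logcontext instead of the sentinel, so the log line
    # identifies which server produced it.
    logger.info("shutting down")


def register(clock: Clock) -> None:
    # Preferred (and now linted for) over reactor.addSystemEventTrigger(...).
    clock.add_system_event_trigger("before", "shutdown", on_shutdown)
```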
2025-09-22 11:47:22 -05:00
Eric Eastwood
8d5d87fb0a Fix run_as_background_process not being awaited properly, causing LoggingContext problems (#18938)
Basically, searching for any instance of
`run_as_background_process(...)` and making sure we wrap the deferred in
`make_deferred_yieldable(...)` if we try to `await` the result to make
it follow the [Synapse logcontext
rules](https://github.com/element-hq/synapse/blob/develop/docs/log_contexts.md).
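
As an illustration of the fixed pattern (a sketch only; `_do_cleanup` and
`caller` are hypothetical):

```python
from synapse.logging.context import make_deferred_yieldable
from synapse.metrics.background_process_metrics import run_as_background_process


async def _do_cleanup() -> None:
    ...  # some background work


async def caller() -> None:
    # Awaiting the deferred from run_as_background_process directly violates
    # the logcontext rules; wrapping it in make_deferred_yieldable restores
    # the caller's logcontext when the await resumes.
    await make_deferred_yieldable(
        run_as_background_process("do_cleanup", _do_cleanup)
    )
```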

Part of https://github.com/element-hq/synapse/issues/18905
2025-09-22 11:02:08 -05:00
Eric Eastwood
9a88d25f8e Fix run_in_background not being awaited properly, causing LoggingContext problems (#18937)
Basically, searching for any instance of `run_in_background(...)` and
making sure we wrap the deferred in `make_deferred_yieldable(...)` if we
try to `await` the result to make it follow the [Synapse logcontext
rules](https://github.com/element-hq/synapse/blob/develop/docs/log_contexts.md).

Turns out, we only have this problem in some tests (phew)

Part of https://github.com/element-hq/synapse/issues/18905
2025-09-22 10:55:45 -05:00
Eric Eastwood
5a9ca1e3d9 Introduce Clock.call_when_running(...) to include logcontext by default (#18944)
Introduce `Clock.call_when_running(...)` to wrap startup code in a
logcontext, ensuring we can identify which server generated the logs.

Background:

>  Ideally, nothing from the Synapse homeserver would be logged against the `sentinel` 
>  logcontext as we want to know which server the logs came from. In practice, this is not 
>  always the case yet especially outside of request handling. 
>   
>  Global things outside of Synapse (e.g. Twisted reactor code) should run in the 
>  `sentinel` logcontext. It's only when it calls into application code that a logcontext 
>  gets activated. This means the reactor should be started in the `sentinel` logcontext, 
>  and any time an awaitable yields control back to the reactor, it should reset the 
>  logcontext to be the `sentinel` logcontext. This is important to avoid leaking the 
>  current logcontext to the reactor (which would then get picked up and associated with 
>  the next thing the reactor does). 
>
> *-- `docs/log_contexts.md`*

Also adds a lint to prefer `Clock.call_when_running(...)` over
`reactor.callWhenRunning(...)`

Part of https://github.com/element-hq/synapse/issues/18905
2025-09-22 10:27:59 -05:00
SpiritCroc
83aca3f097 Implement MSC4169: backwards-compatible redaction sending for rooms < v11 using the /send endpoint (#18898)
Implement
[MSC4169](https://github.com/matrix-org/matrix-spec-proposals/pull/4169)

While there is a dedicated API endpoint for redactions, being able to
send redactions using the normal send endpoint is useful when using
[MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140)
for sending delayed redactions to replicate expiring messages. Currently
this only works on rooms >= v11, but fails with an internal server
error on older room versions when the `redacts` field is set in the
content, since older rooms require that field to be outside of
`content`. We can address this by copying it over if necessary.

Relevant spec at
https://spec.matrix.org/v1.8/rooms/v11/#moving-the-redacts-property-of-mroomredaction-events-to-a-content-property
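
A minimal sketch of that copy (a hypothetical helper, not Synapse's actual
code; room versions are simplified to integers for the comparison):

```python
from typing import Any, Dict


def copy_redacts_for_old_rooms(event: Dict[str, Any], room_version: int) -> Dict[str, Any]:
    """Rooms before v11 expect `redacts` at the event's top level; newer
    clients may place it inside `content`, so copy it over if needed."""
    content = event.get("content", {})
    if room_version < 11 and "redacts" not in event and "redacts" in content:
        event = dict(event)  # avoid mutating the caller's event
        event["redacts"] = content["redacts"]
    return event
```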

---------

Co-authored-by: Tulir Asokan <tulir@maunium.net>
2025-09-22 14:50:52 +01:00
Tulir Asokan
d80f515622 Update MSC4190 support (#18946) 2025-09-22 14:45:05 +01:00
Max Kratz
4367fb2d07 OIDC doc: adds missing jwt_config values to authentik example (#18931)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-09-18 15:05:41 +01:00
Andrew Morgan
b596faa4ec Cache _get_e2e_cross_signing_signatures_for_devices (#18899) 2025-09-18 12:06:08 +01:00
Eric Eastwood
6f9fab1089 Fix open redirect in legacy SSO flow (idp) (#18909)
- Validate the `idp` parameter to only accept the ones that are known in
the config file
- URL-encode the `idp` parameter for safety's sake (this is the main
fix)

Fix https://github.com/matrix-org/internal-config/issues/1651 (internal
link)

Regressed in https://github.com/element-hq/synapse/pull/17972
2025-09-17 13:54:47 -05:00
Eric Eastwood
84d64251dc Remove sentinel logcontext where we log in setup, start and exit (#18870)
Remove `sentinel` logcontext where we log in `setup`, `start`, and exit.

Instead of having one giant PR that removes all places we use `sentinel`
logcontext, I've decided to tackle this more piecemeal. This PR covers
the code paths hit if you just start up Synapse and exit it with no
requests or activity going on in between.

Part of https://github.com/element-hq/synapse/issues/18905 (Remove
`sentinel` logcontext where we log in Synapse)

Prerequisite for https://github.com/element-hq/synapse/pull/18868.
Logging with the `sentinel` logcontext means we won't know which server
the log came from.



### Why


9cc4001778/docs/log_contexts.md (L71-L81)

(docs updated in https://github.com/element-hq/synapse/pull/18900)


### Testing strategy

1. Run Synapse normally and with `daemonize: true`: `poetry run
synapse_homeserver --config-path homeserver.yaml`
1. Execute some requests
1. Shut down the server
1. Look for any bad log entries in your homeserver logs:
   - `Expected logging context sentinel but found main`
   - `Expected logging context main was lost`
   - `Expected previous context`
   - `utime went backwards!`/`stime went backwards!`
   - `Called stop on logcontext POST-0 without recording a start rusage`
1. Look for any logs coming from the `sentinel` context


With these changes, you should only see the following logs (not from
Synapse) using the `sentinel` context if you start up Synapse and exit:

`homeserver.log`
```
2025-09-10 14:45:39,924 - asyncio - 64 - DEBUG - sentinel - Using selector: EpollSelector

2025-09-10 14:45:40,562 - twisted - 281 - INFO - sentinel - Received SIGINT, shutting down.

2025-09-10 14:45:40,562 - twisted - 281 - INFO - sentinel - (TCP Port 9322 Closed)
2025-09-10 14:45:40,563 - twisted - 281 - INFO - sentinel - (TCP Port 8008 Closed)
2025-09-10 14:45:40,563 - twisted - 281 - INFO - sentinel - (TCP Port 9093 Closed)
2025-09-10 14:45:40,564 - twisted - 281 - INFO - sentinel - Main loop terminated.
```
2025-09-16 17:15:08 -05:00
dependabot[bot]
2bed3fb566 Bump serde from 1.0.219 to 1.0.223 (#18920)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-15 20:05:23 +01:00
dependabot[bot]
2c60b67a95 Bump types-setuptools from 80.9.0.20250809 to 80.9.0.20250822 (#18924)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-15 17:37:43 +01:00
dependabot[bot]
6358afff8d Bump pydantic from 2.11.7 to 2.11.9 (#18922)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-15 17:37:24 +01:00
dependabot[bot]
f7b547e2d8 Bump authlib from 1.6.1 to 1.6.3 (#18921)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-15 17:35:11 +01:00
dependabot[bot]
8f7bd946de Bump serde_json from 1.0.143 to 1.0.145 (#18919)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-15 17:31:12 +01:00
dependabot[bot]
4f80fa4b0a Bump types-psycopg2 from 2.9.21.20250809 to 2.9.21.20250915 (#18918)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-15 17:29:49 +01:00
dependabot[bot]
b2592667a4 Bump sigstore/cosign-installer from 3.9.2 to 3.10.0 (#18917)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-15 17:26:04 +01:00
Eric Eastwood
769d30a247 Clarify Python dependency constraints (#18856)
Clarify Python dependency constraints

Spawning from
https://github.com/element-hq/synapse/pull/18852#issuecomment-3212003675
as I don't actually know the exact rule of thumb. It's unclear to me
what we care about exactly. Our [deprecation
policy](https://element-hq.github.io/synapse/latest/deprecation_policy.html)
mentions Debian oldstable support, at least for the version of SQLite.
But then we only refer to Debian stable for the Twisted dependency.
2025-09-15 09:45:41 -05:00
Eric Eastwood
7ecfe8b1a8 Better explain which context the task is run in when using run_in_background(...) or run_as_background_process(...) (#18906)
Follow-up to https://github.com/element-hq/synapse/pull/18900
2025-09-12 09:29:35 -05:00
Hugh Nimmo-Smith
e1036ffa48 Add get_media_upload_limits_for_user and on_media_upload_limit_exceeded callbacks to module API (#18848)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-09-12 12:26:19 +01:00
Andrew Morgan
8c98cf7e55 Remove usage of deprecated pkg_resources interface (#18910) 2025-09-12 10:57:04 +01:00
Kegan Dougal
ec64c3e88d Ensure we /send PDUs which pass canonical JSON checks (#18641)
### Pull Request Checklist

Fixes https://github.com/element-hq/synapse/issues/18554

Looks like this was missed when it was
[implemented](2277df2a1e).
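
For context, canonical JSON requires integers to fit in the signed 53-bit
range and forbids floats. An illustrative validator (not Synapse's
implementation) might look like:

```python
from typing import Any

# Integer range mandated by the Matrix canonical JSON rules.
CANONICAL_MAX_INT = 2**53 - 1
CANONICAL_MIN_INT = -CANONICAL_MAX_INT


def validate_canonical(value: Any) -> None:
    """Recursively reject values that cannot appear in canonical JSON."""
    if isinstance(value, bool):
        return  # checked before int, since bool is an int subclass
    if isinstance(value, int):
        if not (CANONICAL_MIN_INT <= value <= CANONICAL_MAX_INT):
            raise ValueError(f"integer out of canonical JSON range: {value}")
    elif isinstance(value, float):
        raise ValueError("floats are not allowed in canonical JSON")
    elif isinstance(value, dict):
        for v in value.values():
            validate_canonical(v)
    elif isinstance(value, list):
        for v in value:
            validate_canonical(v)
```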

* [x] Pull request is based on the develop branch
* [x] Pull request includes a [changelog
file](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#changelog).
The entry should:
- Be a short description of your change which makes sense to users.
"Fixed a bug that prevented receiving messages from other servers."
instead of "Moved X method from `EventStore` to `EventWorkerStore`.".
  - Use markdown where necessary, mostly for `code blocks`.
  - End with either a period (.) or an exclamation mark (!).
  - Start with a capital letter.
- Feel free to credit yourself, by adding a sentence "Contributed by
@github_username." or "Contributed by [Your Name]." to the end of the
entry.
* [x] [Code
style](https://element-hq.github.io/synapse/latest/code_style.html) is
correct (run the
[linters](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))

---------

Co-authored-by: reivilibre <oliverw@element.io>
2025-09-12 08:54:20 +00:00
reivilibre
ada3a3b2b3 Add experimental support for MSC4308: Thread Subscriptions extension to Sliding Sync when MSC4306 and MSC4186 are enabled. (#18695)
Closes: #18436

Implements:
https://github.com/matrix-org/matrix-spec-proposals/pull/4308

Follows: #18674

Adds an extension to Sliding Sync and a companion
endpoint needed for backpaginating missed thread subscription changes,
as described in MSC4308.

---------

Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-09-11 14:45:04 +01:00
Eric Eastwood
9cc4001778 Better explain logcontext in run_in_background(...) and run_as_background_process(...) (#18900)
Also adds a section in the docs explaining the `sentinel` logcontext.

Spawning from https://github.com/element-hq/synapse/pull/18870


### Testing strategy

1. Run Synapse normally and with `daemonize: true`: `poetry run
synapse_homeserver --config-path homeserver.yaml`
1. Execute some requests
1. Shut down the server
1. Look for any bad log entries in your homeserver logs:
   - `Expected logging context sentinel but found main`
   - `Expected logging context main was lost`
   - `Expected previous context`
   - `utime went backwards!`/`stime went backwards!`
   - `Called stop on logcontext POST-0 without recording a start rusage`
   - `Background process re-entered without a proc`

Twisted trial tests:

1. Run the full Twisted trial test suite.
1. Check the logs for `Test starting with non-sentinel logging context ...`
2025-09-10 10:22:53 -05:00
reivilibre
c68c5dd07b Update push rules for experimental MSC4306: Thread Subscriptions to follow newer draft. (#18846)
Follows: #18762

Implements: MSC4306

Closes: #18431
Closes: #18437

Move the MSC4306 push rules to a new kind `postcontent` 

Prevent users from creating user-defined `postcontent` rules 

---------

Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2025-09-09 18:37:04 +01:00
dependabot[bot]
92bdf77c3f Bump jsonschema from 4.25.0 to 4.25.1 (#18897)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-09 16:41:19 +01:00
dependabot[bot]
e43bf10187 Bump types-requests from 2.32.4.20250611 to 2.32.4.20250809 (#18895)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-09 16:39:44 +01:00
dependabot[bot]
6146dbad3e Bump towncrier from 24.8.0 to 25.8.0 (#18894)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-09 16:39:17 +01:00
Eric Eastwood
ca655e4020 Start background tasks after we fork the process (daemonize) (#18886)
Spawning from https://github.com/element-hq/synapse/pull/18871

[This change](6ce2f3e59d)
was originally used to fix CPU time going backwards when we `daemonize`.

While we don't seem to run into this problem on `develop`, I still
think this is a good change to make. We don't need background tasks
running on a process that will soon be forcefully exited and where the
reactor isn't even running yet. We now kick off the background tasks
(`run_as_background_process`) after we have forked the process and
started the reactor.

Also, as a simple note, we don't need background tasks running in both halves of a fork.
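
Illustrative ordering only (a heavily simplified sketch of daemonizing, not
Synapse's actual code):

```python
import os
from typing import Callable


def daemonize_then_start(start_background_tasks: Callable[[], None]) -> None:
    if os.fork() > 0:
        # The parent half of the fork exits immediately; it must never have
        # scheduled background tasks of its own.
        os._exit(0)
    os.setsid()
    # Only the surviving child, whose reactor is about to run, kicks off
    # background tasks.
    start_background_tasks()
```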
2025-09-09 10:10:34 -05:00
dependabot[bot]
7951d41b4e Bump phonenumbers from 9.0.12 to 9.0.13 (#18893)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-09 15:53:24 +01:00
dependabot[bot]
e235099ab9 Bump log from 0.4.27 to 0.4.28 (#18892)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-09 15:52:59 +01:00
dependabot[bot]
3e865e403b Bump actions/setup-go from 5.5.0 to 6.0.0 (#18891)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-09 15:52:05 +01:00
dependabot[bot]
35e7e659f6 Bump actions/setup-python from 5.6.0 to 6.0.0 (#18890)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-09 15:49:22 +01:00
Andrew Morgan
39e4f27347 Merge branch 'master' into develop 2025-09-09 12:30:12 +01:00
reivilibre
6fe8137a4a Configure Synapse to run MSC4306: Thread Subscriptions Complement tests. (#18819)
Pairs with: https://github.com/matrix-org/complement/pull/795

Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2025-09-09 11:40:10 +01:00
David Baker
d48e69ad4c Fix prefixed support for MSC4133 (#18875)
This fixes two bugs that affect the availability of MSC4133 until the
next spec release.

1. The servlet didn't recognise the unstable endpoint even when the
homeserver advertised it
2. The HS didn't advertise support for the stable prefixed version

This would only have been a problem until the next spec release, but it's
nice to have it work before then.
2025-09-09 09:53:08 +01:00
Amin Farjadi
74fdbc7b75 Fix typo in structured_logging.md for file handler config (#18872) 2025-09-09 09:51:36 +01:00
Jason Little
4d55f2f301 fix: Use the Enum's value for the dictionary key when responding to an admin request for experimental features (#18874)
While exploring bringing up `orjson`, an interesting flaw was exposed.
The stdlib `json` encoder seems to be OK with coercing a `str` from an
`Enum` (specifically, a class deriving from both `str` and `Enum`). The
`orjson` encoder does not like that this is a class and not a proper
`str` per spec. Using the `.value` of the enum as the key for the dict
produced while answering a `GET` admin request for experimental features
seems to fix this.
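
A small sketch of the failure mode (the commented lines assume `orjson` is
installed):

```python
import json
from enum import Enum


class Feature(str, Enum):
    MSC3881 = "msc3881"


features = {Feature.MSC3881: True}

json.dumps(features)  # stdlib coerces the str-subclass key: '{"msc3881": true}'
# import orjson
# orjson.dumps(features)  # TypeError: keys must be str (without OPT_NON_STR_KEYS)

json.dumps({Feature.MSC3881.value: True})  # the fix: use the Enum's value
```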
2025-09-09 09:50:09 +01:00
reivilibre
dfccde9f60 Remove obsolete and experimental /sync/e2ee endpoint. (#18583)
Introduced in: https://github.com/element-hq/synapse/pull/17167

The endpoint was part of experiments for MSC3575 but does not feature in
that MSC.

Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2025-09-09 09:28:45 +01:00
Erik Johnston
4b43e6fe02 Handle rescinding invites over federation (#18823)
We should send events that rescind invites over federation.

Similarly, we should handle receiving such events. Unfortunately, the
protocol doesn't make it possible to fully auth such events, and so we
can only handle the case where the original inviter rescinded the invite
(rather than a room admin).

Complement test: https://github.com/matrix-org/complement/pull/797
2025-09-08 10:55:48 +01:00
Eric Eastwood
b2997a8f20 Suppress "Applying schema" log noise bulk when running Complement tests (#18878)
If Synapse is under test (`SYNAPSE_LOG_TESTING` is set), we don't care
about seeing the "Applying schema" log lines at the INFO level every
time we run the tests (it's 100 lines of bulk for each homeserver).

```
synapse_main | 2025-08-29 22:34:03,453 - synapse.storage.prepare_database - 433 - INFO - main - Applying schema deltas for v73
synapse_main | 2025-08-29 22:34:03,454 - synapse.storage.prepare_database - 541 - INFO - main - Applying schema 73/01event_failed_pull_attempts.sql
synapse_main | 2025-08-29 22:34:03,463 - synapse.storage.prepare_database - 541 - INFO - main - Applying schema 73/02add_pusher_enabled.sql
synapse_main | 2025-08-29 22:34:03,473 - synapse.storage.prepare_database - 541 - INFO - main - Applying schema 73/02room_id_indexes_for_purging.sql
synapse_main | 2025-08-29 22:34:03,482 - synapse.storage.prepare_database - 541 - INFO - main - Applying schema 73/03pusher_device_id.sql
synapse_main | 2025-08-29 22:34:03,492 - synapse.storage.prepare_database - 541 - INFO - main - Applying schema 73/03users_approved_column.sql
synapse_main | 2025-08-29 22:34:03,502 - synapse.storage.prepare_database - 541 - INFO - main - Applying schema 73/04partial_join_details.sql
synapse_main | 2025-08-29 22:34:03,513 - synapse.storage.prepare_database - 541 - INFO - main - Applying schema 73/04pending_device_list_updates.sql
...
```


The Synapse logs are visible when a Complement test fails or you use
`COMPLEMENT_ALWAYS_PRINT_SERVER_LOGS=1`. This spawned from a Complement
test with three homeservers, where I wanted less log noise to scroll
through.
2025-09-02 13:34:47 -05:00
Eric Eastwood
bff4a11b3f Re-introduce: Fix LaterGauge metrics to collect from all servers (#18791)
Re-introduce https://github.com/element-hq/synapse/pull/18751, which was
reverted in https://github.com/element-hq/synapse/pull/18789 (that PR
explains why it was reverted in the first place).

- Adds a `cleanup` pattern that cleans up metrics from each homeserver
in the tests. Previously, the list of hooks built up until our CI
machines couldn't operate properly, see
https://github.com/element-hq/synapse/pull/18789
- Fix long-standing issue with `synapse_background_update_status`
metrics only tracking the last database listed in the config (see
https://github.com/element-hq/synapse/pull/18791#discussion_r2261706749)
2025-09-02 12:14:27 -05:00
386 changed files with 4734 additions and 1771 deletions

@@ -120,7 +120,7 @@ jobs:
         uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
       - name: Install Cosign
-        uses: sigstore/cosign-installer@d58896d6a1865668819e1d91763c7751a165e159 # v3.9.2
+        uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
       - name: Calculate docker image tag
         uses: docker/metadata-action@c1e51972afc2121e065aed6d45c65596fe445f3f # v5.8.0

@@ -24,7 +24,7 @@ jobs:
           mdbook-version: '0.4.17'
       - name: Setup python
-        uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.x"

@@ -64,7 +64,7 @@ jobs:
         run: echo 'window.SYNAPSE_VERSION = "${{ needs.pre.outputs.branch-version }}";' > ./docs/website_files/version.js
       - name: Setup python
-        uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.x"

@@ -93,7 +93,7 @@ jobs:
           -e POSTGRES_PASSWORD=postgres \
           -e POSTGRES_INITDB_ARGS="--lc-collate C --lc-ctype C --encoding UTF8" \
           postgres:${{ matrix.postgres-version }}
-      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.x"
       - run: pip install .[all,test]
@@ -209,7 +209,7 @@ jobs:
       - name: Prepare Complement's Prerequisites
         run: synapse/.ci/scripts/setup_complement_prerequisites.sh
-      - uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
+      - uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
           cache-dependency-path: complement/go.sum
           go-version-file: complement/go.mod

@@ -17,7 +17,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
-      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: '3.x'
       - run: pip install tomli

@@ -28,7 +28,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
-      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.x"
       - id: set-distros
@@ -74,7 +74,7 @@ jobs:
             ${{ runner.os }}-buildx-
       - name: Set up python
-        uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.x"
@@ -134,7 +134,7 @@ jobs:
     steps:
       - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
-      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           # setup-python@v4 doesn't impose a default python version. Need to use 3.x
           # here, because `python` on osx points to Python 2.7.
@@ -166,7 +166,7 @@ jobs:
     steps:
       - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
-      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.10"

@@ -15,7 +15,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
-      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.x"
       - name: Install check-jsonschema
@@ -41,7 +41,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
-      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.x"
       - name: Install PyYAML

@@ -107,7 +107,7 @@ jobs:
     steps:
       - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
-      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.x"
       - run: "pip install 'click==8.1.1' 'GitPython>=3.1.20'"
@@ -117,7 +117,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
-      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.x"
       - run: .ci/scripts/check_lockfile.py
@@ -199,7 +199,7 @@ jobs:
         with:
           ref: ${{ github.event.pull_request.head.sha }}
           fetch-depth: 0
-      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.x"
       - run: "pip install 'towncrier>=18.6.0rc1'"
@@ -327,7 +327,7 @@ jobs:
     if: ${{ needs.changes.outputs.linting_readme == 'true' }}
     steps:
       - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
-      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.x"
       - run: "pip install rstcheck"
@@ -377,7 +377,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
-      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.x"
       - id: get-matrix
@@ -468,7 +468,7 @@ jobs:
           sudo apt-get -qq install build-essential libffi-dev python3-dev \
             libxml2-dev libxslt-dev xmlsec1 zlib1g-dev libjpeg-dev libwebp-dev
-      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
+      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: '3.9'
@@ -727,7 +727,7 @@ jobs:
       - name: Prepare Complement's Prerequisites
         run: synapse/.ci/scripts/setup_complement_prerequisites.sh
-      - uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
+      - uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
           cache-dependency-path: complement/go.sum
           go-version-file: complement/go.mod

@@ -182,7 +182,7 @@ jobs:
       - name: Prepare Complement's Prerequisites
         run: synapse/.ci/scripts/setup_complement_prerequisites.sh
-      - uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
+      - uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
         with:
           cache-dependency-path: complement/go.sum
           go-version-file: complement/go.mod

@@ -1,12 +1,126 @@
-# Synapse 1.138.1 (2025-09-24)
+# Synapse 1.139.2 (2025-10-07)
 
 ## Bugfixes
 
+- Fix a bug introduced in 1.139.1 where a client could receive an Internal Server Error if they set `device_keys: null` in the request to [`POST /_matrix/client/v3/keys/upload`](https://spec.matrix.org/v1.16/client-server-api/#post_matrixclientv3keysupload). ([\#19023](https://github.com/element-hq/synapse/issues/19023))
+
+# Synapse 1.139.1 (2025-10-07)
+
+## Security Fixes
+
+- Fix [CVE-2025-61672](https://www.cve.org/CVERecord?id=CVE-2025-61672) / [GHSA-fh66-fcv5-jjfr](https://github.com/element-hq/synapse/security/advisories/GHSA-fh66-fcv5-jjfr). Lack of validation for device keys in Synapse before 1.139.1 allows an attacker registered on the victim homeserver to degrade federation functionality, unpredictably breaking outbound federation to other homeservers. ([\#17097](https://github.com/element-hq/synapse/issues/17097))
+
+## Deprecations and Removals
+
+- Drop support for unstable field names from the long-accepted [MSC2732](https://github.com/matrix-org/matrix-spec-proposals/pull/2732) (Olm fallback keys) proposal. This change allows unit tests to pass following the security patch above. ([\#18996](https://github.com/element-hq/synapse/issues/18996))
+
+# Synapse 1.139.0 (2025-09-30)
+
+### `/register` requests from old application service implementations may break when using MAS
+
+If you are using Matrix Authentication Service (MAS), as of this release any
+Application Services that do not set `inhibit_login=true` when calling `POST
+/_matrix/client/v3/register` will receive the error
+`IO.ELEMENT.MSC4190.M_APPSERVICE_LOGIN_UNSUPPORTED` in response. Please see [the
+upgrade
+notes](https://element-hq.github.io/synapse/develop/upgrade.html#register-requests-from-old-application-service-implementations-may-break-when-using-mas)
+for more information.
+
+No significant changes since 1.139.0rc3.
+
+# Synapse 1.139.0rc3 (2025-09-25)
+
+## Bugfixes
+
+- Fix a bug introduced in 1.139.0rc1 where `run_coroutine_in_background(...)` incorrectly handled logcontexts, resulting in partially broken logging. ([\#18964](https://github.com/element-hq/synapse/issues/18964))
+
+# Synapse 1.139.0rc2 (2025-09-23)
+
+## Internal Changes
+
+- Drop support for Ubuntu 24.10 Oracular Oriole, and add support for Ubuntu 25.04 Plucky Puffin. ([\#18962](https://github.com/element-hq/synapse/issues/18962))
+
+# Synapse 1.139.0rc1 (2025-09-23)
+
+## Features
+
+- Add experimental support for [MSC4308: Thread Subscriptions extension to Sliding Sync](https://github.com/matrix-org/matrix-spec-proposals/pull/4308) when [MSC4306: Thread Subscriptions](https://github.com/matrix-org/matrix-spec-proposals/pull/4306) and [MSC4186: Simplified Sliding Sync](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) are enabled. ([\#18695](https://github.com/element-hq/synapse/issues/18695))
+- Update push rules for experimental [MSC4306: Thread Subscriptions](https://github.com/matrix-org/matrix-doc/issues/4306) to follow a newer draft. ([\#18846](https://github.com/element-hq/synapse/issues/18846))
+- Add `get_media_upload_limits_for_user` and `on_media_upload_limit_exceeded` module API callbacks to the media repository. ([\#18848](https://github.com/element-hq/synapse/issues/18848))
+- Support [MSC4169](https://github.com/matrix-org/matrix-spec-proposals/pull/4169) for backwards-compatible redaction sending using the `/send` endpoint. Contributed by @SpiritCroc @ Beeper. ([\#18898](https://github.com/element-hq/synapse/issues/18898))
+- Add an in-memory cache to `_get_e2e_cross_signing_signatures_for_devices` to reduce DB load. ([\#18899](https://github.com/element-hq/synapse/issues/18899))
+- Update [MSC4190](https://github.com/matrix-org/matrix-spec-proposals/pull/4190) support to return correct errors and allow appservices to reset cross-signing keys without user-interactive authentication. Contributed by @tulir @ Beeper. ([\#18946](https://github.com/element-hq/synapse/issues/18946))
+
+## Bugfixes
+
+- Ensure all PDUs sent via `/send` pass canonical JSON checks. ([\#18641](https://github.com/element-hq/synapse/issues/18641))
+- Fix bug where we did not send invite revocations over federation. ([\#18823](https://github.com/element-hq/synapse/issues/18823))
+- Fix prefixed support for [MSC4133](https://github.com/matrix-org/matrix-spec-proposals/pull/4133). ([\#18875](https://github.com/element-hq/synapse/issues/18875))
+- Fix open redirect in legacy SSO flow with the `idp` query parameter. ([\#18909](https://github.com/element-hq/synapse/issues/18909))
+- Fix a performance regression related to the experimental Delayed Events ([MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140)) feature. ([\#18926](https://github.com/element-hq/synapse/issues/18926))
+
+## Updates to the Docker image
+
+- Suppress "Applying schema" log noise bulk when `SYNAPSE_LOG_TESTING` is set. ([\#18878](https://github.com/element-hq/synapse/issues/18878))
+
+## Improved Documentation
+
+- Clarify Python dependency constraints in our deprecation policy. ([\#18856](https://github.com/element-hq/synapse/issues/18856))
+- Clarify necessary `jwt_config` parameter in OIDC documentation for authentik. Contributed by @maxkratz. ([\#18931](https://github.com/element-hq/synapse/issues/18931))
+
+## Deprecations and Removals
+
+- Remove obsolete and experimental `/sync/e2ee` endpoint. ([\#18583](https://github.com/element-hq/synapse/issues/18583))
+
+## Internal Changes
+
+- Fix `LaterGauge` metrics to collect from all servers. ([\#18791](https://github.com/element-hq/synapse/issues/18791))
+- Configure Synapse to run [MSC4306: Thread Subscriptions](https://github.com/matrix-org/matrix-spec-proposals/pull/4306) Complement tests. ([\#18819](https://github.com/element-hq/synapse/issues/18819))
+- Remove `sentinel` logcontext usage where we log in `setup`, `start` and `exit`. ([\#18870](https://github.com/element-hq/synapse/issues/18870))
+- Use the `Enum`'s value for the dictionary key when responding to an admin request for experimental features. ([\#18874](https://github.com/element-hq/synapse/issues/18874))
+- Start background tasks after we fork the process (daemonize). ([\#18886](https://github.com/element-hq/synapse/issues/18886))
+- Better explain how we manage the logcontext in `run_in_background(...)` and `run_as_background_process(...)`. ([\#18900](https://github.com/element-hq/synapse/issues/18900), [\#18906](https://github.com/element-hq/synapse/issues/18906))
+- Remove `sentinel` logcontext usage in `Clock` utilities like `looping_call` and `call_later`. ([\#18907](https://github.com/element-hq/synapse/issues/18907))
+- Replace usages of the deprecated `pkg_resources` interface in preparation of setuptools dropping it soon. ([\#18910](https://github.com/element-hq/synapse/issues/18910))
+- Split loading config from homeserver `setup`. ([\#18933](https://github.com/element-hq/synapse/issues/18933))
+- Fix `run_in_background` not being awaited properly in some tests causing `LoggingContext` problems. ([\#18937](https://github.com/element-hq/synapse/issues/18937))
+- Fix `run_as_background_process` not being awaited properly causing `LoggingContext` problems in experimental [MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140): Delayed events implementation. ([\#18938](https://github.com/element-hq/synapse/issues/18938))
+- Introduce `Clock.call_when_running(...)` to wrap startup code in a logcontext, ensuring we can identify which server generated the logs. ([\#18944](https://github.com/element-hq/synapse/issues/18944))
+- Introduce `Clock.add_system_event_trigger(...)` to wrap system event callback code in a logcontext, ensuring we can identify which server generated the logs. ([\#18945](https://github.com/element-hq/synapse/issues/18945))
+
+### Updates to locked dependencies
+
+* Bump actions/setup-go from 5.5.0 to 6.0.0. ([\#18891](https://github.com/element-hq/synapse/issues/18891))
+* Bump actions/setup-python from 5.6.0 to 6.0.0. ([\#18890](https://github.com/element-hq/synapse/issues/18890))
+* Bump authlib from 1.6.1 to 1.6.3. ([\#18921](https://github.com/element-hq/synapse/issues/18921))
+* Bump jsonschema from 4.25.0 to 4.25.1. ([\#18897](https://github.com/element-hq/synapse/issues/18897))
+* Bump log from 0.4.27 to 0.4.28. ([\#18892](https://github.com/element-hq/synapse/issues/18892))
+* Bump phonenumbers from 9.0.12 to 9.0.13. ([\#18893](https://github.com/element-hq/synapse/issues/18893))
+* Bump pydantic from 2.11.7 to 2.11.9. ([\#18922](https://github.com/element-hq/synapse/issues/18922))
+* Bump serde from 1.0.219 to 1.0.223. ([\#18920](https://github.com/element-hq/synapse/issues/18920))
+* Bump serde_json from 1.0.143 to 1.0.145. ([\#18919](https://github.com/element-hq/synapse/issues/18919))
+* Bump sigstore/cosign-installer from 3.9.2 to 3.10.0. ([\#18917](https://github.com/element-hq/synapse/issues/18917))
+* Bump towncrier from 24.8.0 to 25.8.0. ([\#18894](https://github.com/element-hq/synapse/issues/18894))
+* Bump types-psycopg2 from 2.9.21.20250809 to 2.9.21.20250915. ([\#18918](https://github.com/element-hq/synapse/issues/18918))
+* Bump types-requests from 2.32.4.20250611 to 2.32.4.20250809. ([\#18895](https://github.com/element-hq/synapse/issues/18895))
+* Bump types-setuptools from 80.9.0.20250809 to 80.9.0.20250822. ([\#18924](https://github.com/element-hq/synapse/issues/18924))
+
 # Synapse 1.138.0 (2025-09-09)
 
 No significant changes since 1.138.0rc1.

Cargo.lock (generated)

@@ -753,9 +753,9 @@ checksum = "241eaef5fd12c88705a01fc1066c48c4b36e0dd4377dcdc7ec3942cea7a69956"
 
 [[package]]
 name = "log"
-version = "0.4.27"
+version = "0.4.28"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "13dc2df351e3202783a1fe0d44375f7295ffb4049267b0f3018346dc122a1d94"
+checksum = "34080505efa8e45a4b816c349525ebe327ceaa8559756f0356cba97ef3bf7432"
 
 [[package]]
 name = "lru-slab"
@@ -1250,18 +1250,28 @@ dependencies = [
 
 [[package]]
 name = "serde"
-version = "1.0.219"
+version = "1.0.224"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5f0e2c6ed6606019b4e29e69dbaba95b11854410e5347d525002456dbbb786b6"
+checksum = "6aaeb1e94f53b16384af593c71e20b095e958dab1d26939c1b70645c5cfbcc0b"
 dependencies = [
+ "serde_core",
  "serde_derive",
 ]
 
+[[package]]
+name = "serde_core"
+version = "1.0.224"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "32f39390fa6346e24defbcdd3d9544ba8a19985d0af74df8501fbfe9a64341ab"
+dependencies = [
+ "serde_derive",
+]
+
 [[package]]
 name = "serde_derive"
-version = "1.0.219"
+version = "1.0.224"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5b0276cf7f2c73365f7157c8123c21cd9a50fbbd844757af28ca1f5925fc2a00"
+checksum = "87ff78ab5e8561c9a675bfc1785cb07ae721f0ee53329a595cefd8c04c2ac4e0"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -1270,14 +1280,15 @@ dependencies = [
 
 [[package]]
 name = "serde_json"
-version = "1.0.143"
+version = "1.0.145"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d401abef1d108fbd9cbaebc3e46611f4b1021f714a0597a71f41ee463f5f4a5a"
+checksum = "402a6f66d8c709116cf22f558eab210f5a50187f702eb4d7e5ef38d9a7f1c79c"
 dependencies = [
  "itoa",
  "memchr",
  "ryu",
  "serde",
+ "serde_core",
 ]
 
 [[package]]
debian/changelog (vendored)

@@ -1,8 +1,38 @@
-matrix-synapse-py3 (1.138.1) stable; urgency=medium
+matrix-synapse-py3 (1.139.2) stable; urgency=medium
 
-  * New Synapse release 1.138.1.
+  * New Synapse release 1.139.2.
 
- -- Synapse Packaging team <packages@matrix.org>  Wed, 24 Sep 2025 11:32:38 +0100
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 07 Oct 2025 16:29:47 +0100
+
+matrix-synapse-py3 (1.139.1) stable; urgency=medium
+
+  * New Synapse release 1.139.1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 07 Oct 2025 11:46:51 +0100
+
+matrix-synapse-py3 (1.139.0) stable; urgency=medium
+
+  * New Synapse release 1.139.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 30 Sep 2025 11:58:55 +0100
+
+matrix-synapse-py3 (1.139.0~rc3) stable; urgency=medium
+
+  * New Synapse release 1.139.0rc3.
+
+ -- Synapse Packaging team <packages@matrix.org>  Thu, 25 Sep 2025 12:13:23 +0100
+
+matrix-synapse-py3 (1.139.0~rc2) stable; urgency=medium
+
+  * New Synapse release 1.139.0rc2.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 23 Sep 2025 15:31:42 +0100
+
+matrix-synapse-py3 (1.139.0~rc1) stable; urgency=medium
+
+  * New Synapse release 1.139.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 23 Sep 2025 13:24:50 +0100
 
 matrix-synapse-py3 (1.138.0) stable; urgency=medium

@@ -133,6 +133,8 @@ experimental_features:
   msc3984_appservice_key_query: true
   # Invite filtering
   msc4155_enabled: true
+  # Thread Subscriptions
+  msc4306_enabled: true
 
 server_notices:
   system_mxid_localpart: _server

@@ -77,6 +77,13 @@ loggers:
     #}
     synapse.visibility.filtered_event_debug:
         level: DEBUG
+
+    {#
+      If Synapse is under test, we don't care about seeing the "Applying schema" log
+      lines at the INFO level every time we run the tests (it's 100 lines of bulk)
+    #}
+    synapse.storage.prepare_database:
+        level: WARN
 {% endif %}
 
 root:
root: root:

@@ -1,13 +1,11 @@
Deprecation Policy for Platform Dependencies # Deprecation Policy
============================================
Synapse has a number of platform dependencies, including Python, Rust, Synapse has a number of **platform dependencies** (Python, Rust, PostgreSQL, and SQLite)
PostgreSQL and SQLite. This document outlines the policy towards which versions and **application dependencies** (Python and Rust packages). This document outlines the
we support, and when we drop support for versions in the future. policy towards which versions we support, and when we drop support for versions in the
future.
## Platform Dependencies
Policy
------
Synapse follows the upstream support life cycles for Python and PostgreSQL, Synapse follows the upstream support life cycles for Python and PostgreSQL,
i.e. when a version reaches End of Life Synapse will withdraw support for that i.e. when a version reaches End of Life Synapse will withdraw support for that
@@ -26,8 +24,8 @@ The oldest supported version of SQLite is the version
[provided](https://packages.debian.org/bullseye/libsqlite3-0) by [provided](https://packages.debian.org/bullseye/libsqlite3-0) by
[Debian oldstable](https://wiki.debian.org/DebianOldStable). [Debian oldstable](https://wiki.debian.org/DebianOldStable).
Context
------- ### Context
It is important for system admins to have a clear understanding of the platform It is important for system admins to have a clear understanding of the platform
requirements of Synapse and its deprecation policies so that they can requirements of Synapse and its deprecation policies so that they can
@@ -50,4 +48,42 @@ the ecosystem.
On a similar note, SQLite does not generally have a concept of "supported On a similar note, SQLite does not generally have a concept of "supported
release"; bugfixes are published for the latest minor release only. We chose to release"; bugfixes are published for the latest minor release only. We chose to
track Debian's oldstable as this is relatively conservative, predictably updated track Debian's oldstable as this is relatively conservative, predictably updated
and is consistent with the `.deb` packages released by Matrix.org. and is consistent with the `.deb` packages released by Matrix.org.
## Application dependencies

For application-level Python dependencies, we often specify loose version constraints
(e.g. `>=X.Y.Z`) to remain forwards compatible with new versions. Upper bounds (`<A.B.C`)
are only added when necessary to prevent known incompatibilities.

When selecting a minimum version, we are mindful of the impact on downstream package
maintainers, but our primary focus is on the maintainability and progress of Synapse
itself.

For developers, a Python dependency version can be considered a "no-brainer" upgrade once
it is available in both the latest [Debian Stable](https://packages.debian.org/stable/)
and [Ubuntu LTS](https://launchpad.net/ubuntu) repositories; at that point it needs no
extra scrutiny or consideration.

We aggressively update Rust dependencies. Since these are statically linked and managed
entirely by `cargo` during the build, they pose no ongoing maintenance burden on others.
This allows us to freely upgrade to leverage the latest ecosystem advancements, provided
they don't introduce system-level dependencies of their own.
### Context

Because Python dependencies can easily be managed in a virtual environment, we are less
concerned about the criteria for selecting minimum versions. The only real concern is
making sure we're not making it unnecessarily difficult for downstream package
maintainers. Generally, this just means avoiding the bleeding edge for a few months.

The situation for Rust dependencies is fundamentally different. For packagers, the
concerns around Python dependency versions do not apply. The `cargo` tool handles
downloading and building all libraries to satisfy dependencies, and these libraries are
statically linked into the final binary. This means that, from a packager's perspective,
Rust dependency versions are an internal build detail, not a runtime dependency to be
managed on the target system. Consequently, we have even greater flexibility to upgrade
Rust dependencies as needed for the project. Some distros (e.g. Fedora) do package Rust
libraries, but this appears to be the outlier rather than the norm.

View File

@@ -59,6 +59,28 @@ def do_request_handling():
logger.debug("phew") logger.debug("phew")
``` ```
### The `sentinel` context

The default logcontext is `synapse.logging.context.SENTINEL_CONTEXT`, which is an empty
sentinel value representing the root logcontext. It is what is used when no other
logcontext has been set. The phrase "clear/reset the logcontext" means setting the
current logcontext to the `sentinel` logcontext.

No CPU/database usage metrics are recorded against the `sentinel` logcontext.

Ideally, nothing from the Synapse homeserver would be logged against the `sentinel`
logcontext, as we want to know which server the logs came from. In practice, this is not
yet always the case, especially outside of request handling.

Global things outside of Synapse (e.g. Twisted reactor code) should run in the
`sentinel` logcontext; a logcontext only gets activated when the reactor calls into
application code. This means the reactor should be started in the `sentinel` logcontext,
and any time an awaitable yields control back to the reactor, it should reset the
logcontext to be the `sentinel` logcontext. This is important to avoid leaking the
current logcontext to the reactor (which would then get picked up and associated with
the next thing the reactor does).
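As a rough illustration of those rules, here is a minimal sketch (the helpers are real
ones from `synapse.logging.context`, but the scenario itself is contrived):

```python
from synapse.logging.context import (
    LoggingContext,
    PreserveLoggingContext,
    current_context,
)

def main() -> None:
    # Outside of application code we are in the sentinel logcontext.
    print(current_context())  # the sentinel logcontext

    with LoggingContext(name="request-1"):
        # Application code runs in a named logcontext, so log lines and
        # CPU/database usage can be attributed to this server and request.
        print(current_context())  # the "request-1" logcontext

        # Anything that yields control back to the reactor should first
        # reset to the sentinel logcontext, e.g. via PreserveLoggingContext,
        # so the reactor does not inherit our logcontext.
        with PreserveLoggingContext():
            print(current_context())  # the sentinel logcontext again
```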
## Using logcontexts with awaitables

Awaitables break the linear flow of code so that there is no longer a single entry point

View File

@@ -64,3 +64,68 @@ If multiple modules implement this callback, they will be considered in order. I
returns `True`, Synapse falls through to the next one. The value of the first callback that
returns `False` will be used. If this happens, Synapse will not call any of the subsequent
implementations of this callback.
### `get_media_upload_limits_for_user`

_First introduced in Synapse v1.139.0_

```python
async def get_media_upload_limits_for_user(user_id: str, size: int) -> Optional[List[synapse.module_api.MediaUploadLimit]]
```

**<span style="color:red">
Caution: This callback is currently experimental. The method signature or behaviour
may change without notice.
</span>**

Called when processing a request to store content in the media repository. This can be
used to dynamically override the
[media upload limits configuration](../usage/configuration/config_documentation.html#media_upload_limits).

The arguments passed to this callback are:

* `user_id`: The Matrix user ID of the user (e.g. `@alice:example.com`) making the request.
* `size`: The size, in bytes, of the content the user is attempting to upload.

If the callback returns a list then it will be used as the limits instead of those in the
configuration (if any). If an empty list is returned then no limits are applied
(**warning:** users will be able to upload as much data as they desire).

If multiple modules implement this callback, they will be considered in order. If a
callback returns `None`, Synapse falls through to the next one. The value of the first
callback that does not return `None` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback.

If there are no registered modules, or if all modules return `None`, then the default
[media upload limits configuration](../usage/configuration/config_documentation.html#media_upload_limits)
will be used.
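As a concrete (hypothetical) example, a module might exempt one specific user from all
limits while everyone else falls through to the configured defaults. This sketch assumes
the callback is registered via `ModuleApi.register_media_repository_callbacks`, as with
the other media repository callbacks:

```python
from typing import Any, Dict, List, Optional

from synapse.module_api import MediaUploadLimit, ModuleApi

class MediaLimitsExample:
    def __init__(self, config: Dict[str, Any], api: ModuleApi):
        api.register_media_repository_callbacks(
            get_media_upload_limits_for_user=self.get_media_upload_limits_for_user,
        )

    async def get_media_upload_limits_for_user(
        self, user_id: str, size: int
    ) -> Optional[List[MediaUploadLimit]]:
        if user_id == "@admin:example.com":
            # An empty list means no limits are applied to this user.
            return []
        # None falls through to the next module, or to the configured limits.
        return None
```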
### `on_media_upload_limit_exceeded`

_First introduced in Synapse v1.139.0_

```python
async def on_media_upload_limit_exceeded(user_id: str, limit: synapse.module_api.MediaUploadLimit, sent_bytes: int, attempted_bytes: int) -> None
```

**<span style="color:red">
Caution: This callback is currently experimental. The method signature or behaviour
may change without notice.
</span>**

Called when a user attempts to upload media that would exceed a
[configured media upload limit](../usage/configuration/config_documentation.html#media_upload_limits).
This callback will only be called on workers which handle
[POST /_matrix/media/v3/upload](https://spec.matrix.org/v1.15/client-server-api/#post_matrixmediav3upload)
requests.

This could be used to inform the user, through some external method, that they have
reached a media upload limit.

The arguments passed to this callback are:

* `user_id`: The Matrix user ID of the user (e.g. `@alice:example.com`) making the request.
* `limit`: The `synapse.module_api.MediaUploadLimit` representing the limit that was reached.
* `sent_bytes`: The number of bytes already sent during the period of the limit.
* `attempted_bytes`: The number of bytes that the user attempted to send.
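A minimal (hypothetical) implementation might simply log the event; a real module could
notify the user through an out-of-band channel instead. Again, this assumes registration
via `ModuleApi.register_media_repository_callbacks`:

```python
import logging
from typing import Any, Dict

from synapse.module_api import MediaUploadLimit, ModuleApi

logger = logging.getLogger(__name__)

class UploadLimitNotifier:
    def __init__(self, config: Dict[str, Any], api: ModuleApi):
        api.register_media_repository_callbacks(
            on_media_upload_limit_exceeded=self.on_media_upload_limit_exceeded,
        )

    async def on_media_upload_limit_exceeded(
        self,
        user_id: str,
        limit: MediaUploadLimit,
        sent_bytes: int,
        attempted_bytes: int,
    ) -> None:
        # `sent_bytes` counts what was already uploaded in the limit's time
        # period; `attempted_bytes` is what the user tried to add on top.
        logger.info(
            "%s exceeded a media upload limit: %d bytes sent, %d attempted",
            user_id,
            sent_bytes,
            attempted_bytes,
        )
```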

View File

@@ -186,6 +186,7 @@ oidc_providers:
4. Note the slug of your application, Client ID and Client Secret.

Note: RSA keys must be used for signing for Authentik, ECC keys do not work.
+Note: The provider must have a signing key set and must not use an encryption key.

Synapse config:

```yaml
@@ -204,6 +205,12 @@ oidc_providers:
    config:
      localpart_template: "{{ user.preferred_username }}"
      display_name_template: "{{ user.preferred_username|capitalize }}" # TO BE FILLED: If your users have names in Authentik and you want those in Synapse, this should be replaced with user.name|capitalize.
[...]
jwt_config:
enabled: true
secret: "your client secret" # TO BE FILLED (same as `client_secret` above)
algorithm: "RS256"
# (...other fields)
```

### Dex

View File

@@ -35,7 +35,7 @@ handlers:
loggers:
  synapse:
    level: INFO
-    handlers: [remote]
+    handlers: [file]
  synapse.storage.SQL:
    level: WARNING
```

View File

@@ -117,6 +117,29 @@ each upgrade are complete before moving on to the next upgrade, to avoid
stacking them up. You can monitor the currently running background updates with
[the Admin API](usage/administration/admin_api/background_updates.html#status).
# Upgrading to v1.139.0

## Drop support for Ubuntu 24.10 Oracular Oriole, and add support for Ubuntu 25.04 Plucky Puffin

Ubuntu 24.10 Oracular Oriole [has been end-of-life since 10 Jul
2025](https://endoflife.date/ubuntu). This release drops support for Ubuntu
24.10, and in its place adds support for Ubuntu 25.04 Plucky Puffin.

## `/register` requests from old application service implementations may break when using MAS

Application Services that do not set `inhibit_login=true` when calling `POST
/_matrix/client/v3/register` will receive the error
`IO.ELEMENT.MSC4190.M_APPSERVICE_LOGIN_UNSUPPORTED` in response. This is a
result of [MSC4190: Device management for application
services](https://github.com/matrix-org/matrix-spec-proposals/pull/4190), which
adds new endpoints for application services to create encryption-ready devices,
rather than relying on `/login` or on `/register` without `inhibit_login=true`.

If an application service you use starts to fail with the mentioned error,
ensure it is up to date. If it is, then kindly let the author know that they
need to update their implementation to call `/register` with
`inhibit_login=true`.
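For application service authors, the fix amounts to setting `inhibit_login` in the
request body. A hedged sketch of such a registration call (using the `requests` library;
the homeserver URL, username, and token are placeholders):

```python
import requests

resp = requests.post(
    "https://synapse.example.com/_matrix/client/v3/register",
    params={"kind": "user"},
    headers={"Authorization": "Bearer <as_token>"},
    json={
        "type": "m.login.application_service",
        "username": "bridge_alice",
        # Don't create a device/access token as part of registration.
        "inhibit_login": True,
    },
)
resp.raise_for_status()
```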
# Upgrading to v1.136.0

## Deprecate `run_as_background_process` exported as part of the module API interface in favor of `ModuleApi.run_as_background_process`

View File

@@ -2168,9 +2168,12 @@ max_upload_size: 60M
### `media_upload_limits`

*(array)* A list of media upload limits defining how much data a given user can upload in a given time period.
+These limits are applied in addition to the `max_upload_size` limit above (which applies to individual uploads).
An empty list means no limits are applied.
+These settings can be overridden using the `get_media_upload_limits_for_user` module API [callback](../../modules/media_repository_callbacks.md#get_media_upload_limits_for_user).

Defaults to `[]`.

Example configuration:

poetry.lock (generated)
View File

@@ -34,15 +34,15 @@ tests-mypy = ["mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" a
[[package]]
name = "authlib"
-version = "1.6.1"
+version = "1.6.3"
description = "The ultimate Python library in building OAuth and OpenID Connect servers and clients."
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"all\" or extra == \"jwt\" or extra == \"oidc\""
files = [
-    {file = "authlib-1.6.1-py2.py3-none-any.whl", hash = "sha256:e9d2031c34c6309373ab845afc24168fe9e93dc52d252631f52642f21f5ed06e"},
-    {file = "authlib-1.6.1.tar.gz", hash = "sha256:4dffdbb1460ba6ec8c17981a4c67af7d8af131231b5a36a88a1e8c80c111cdfd"},
+    {file = "authlib-1.6.3-py2.py3-none-any.whl", hash = "sha256:7ea0f082edd95a03b7b72edac65ec7f8f68d703017d7e37573aee4fc603f2a48"},
+    {file = "authlib-1.6.3.tar.gz", hash = "sha256:9f7a982cc395de719e4c2215c5707e7ea690ecf84f1ab126f28c053f4219e610"},
]

[package.dependencies]
@@ -919,14 +919,14 @@ i18n = ["Babel (>=2.7)"]
[[package]]
name = "jsonschema"
-version = "4.25.0"
+version = "4.25.1"
description = "An implementation of JSON Schema validation for Python"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
-    {file = "jsonschema-4.25.0-py3-none-any.whl", hash = "sha256:24c2e8da302de79c8b9382fee3e76b355e44d2a4364bb207159ce10b517bd716"},
-    {file = "jsonschema-4.25.0.tar.gz", hash = "sha256:e63acf5c11762c0e6672ffb61482bdf57f0876684d8d249c0fe2d730d48bc55f"},
+    {file = "jsonschema-4.25.1-py3-none-any.whl", hash = "sha256:3fba0169e345c7175110351d456342c364814cfcf3b964ba4587f22915230a63"},
+    {file = "jsonschema-4.25.1.tar.gz", hash = "sha256:e4a9655ce0da0c0b67a085847e00a3a51449e1157f4f75e9fb5aa545e122eb85"},
]

[package.dependencies]
@@ -1531,14 +1531,14 @@ files = [
[[package]]
name = "phonenumbers"
-version = "9.0.12"
+version = "9.0.13"
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
optional = false
python-versions = "*"
groups = ["main"]
files = [
-    {file = "phonenumbers-9.0.12-py2.py3-none-any.whl", hash = "sha256:900633afc3e12191458d710262df5efc117838bd1e2e613b64fa254a86bb20a1"},
-    {file = "phonenumbers-9.0.12.tar.gz", hash = "sha256:ccadff6b949494bd606836d8c9678bee5b55cb1cbad1e98bf7adae108e6fd0be"},
+    {file = "phonenumbers-9.0.13-py2.py3-none-any.whl", hash = "sha256:b97661e177773e7509c6d503e0f537cd0af22aa3746231654590876eb9430915"},
+    {file = "phonenumbers-9.0.13.tar.gz", hash = "sha256:eca06e01382412c45316868f86a44bb217c02f9ee7196589041556a2f54a7639"},
]

[[package]]
@@ -1774,14 +1774,14 @@ files = [
[[package]]
name = "pydantic"
-version = "2.11.7"
+version = "2.11.9"
description = "Data validation using Python type hints"
optional = false
python-versions = ">=3.9"
groups = ["main", "dev"]
files = [
-    {file = "pydantic-2.11.7-py3-none-any.whl", hash = "sha256:dde5df002701f6de26248661f6835bbe296a47bf73990135c7d07ce741b9623b"},
-    {file = "pydantic-2.11.7.tar.gz", hash = "sha256:d989c3c6cb79469287b1569f7447a17848c998458d49ebe294e975b9baf0f0db"},
+    {file = "pydantic-2.11.9-py3-none-any.whl", hash = "sha256:c42dd626f5cfc1c6950ce6205ea58c93efa406da65f479dcb4029d5934857da2"},
+    {file = "pydantic-2.11.9.tar.gz", hash = "sha256:6b8ffda597a14812a7975c90b82a8a2e777d9257aba3453f973acd3c032a18e2"},
]

[package.dependencies]
@@ -2747,14 +2747,14 @@ files = [
[[package]]
name = "towncrier"
-version = "24.8.0"
+version = "25.8.0"
description = "Building newsfiles for your project."
optional = false
-python-versions = ">=3.8"
+python-versions = ">=3.9"
groups = ["dev"]
files = [
-    {file = "towncrier-24.8.0-py3-none-any.whl", hash = "sha256:9343209592b839209cdf28c339ba45792fbfe9775b5f9c177462fd693e127d8d"},
-    {file = "towncrier-24.8.0.tar.gz", hash = "sha256:013423ee7eed102b2f393c287d22d95f66f1a3ea10a4baa82d298001a7f18af3"},
+    {file = "towncrier-25.8.0-py3-none-any.whl", hash = "sha256:b953d133d98f9aeae9084b56a3563fd2519dfc6ec33f61c9cd2c61ff243fb513"},
+    {file = "towncrier-25.8.0.tar.gz", hash = "sha256:eef16d29f831ad57abb3ae32a0565739866219f1ebfbdd297d32894eb9940eb1"},
]

[package.dependencies]
@@ -2971,14 +2971,14 @@ files = [
[[package]]
name = "types-psycopg2"
-version = "2.9.21.20250809"
+version = "2.9.21.20250915"
description = "Typing stubs for psycopg2"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
-    {file = "types_psycopg2-2.9.21.20250809-py3-none-any.whl", hash = "sha256:59b7b0ed56dcae9efae62b8373497274fc1a0484bdc5135cdacbe5a8f44e1d7b"},
-    {file = "types_psycopg2-2.9.21.20250809.tar.gz", hash = "sha256:b7c2cbdcf7c0bd16240f59ba694347329b0463e43398de69784ea4dee45f3c6d"},
+    {file = "types_psycopg2-2.9.21.20250915-py3-none-any.whl", hash = "sha256:eefe5ccdc693fc086146e84c9ba437bb278efe1ef330b299a0cb71169dc6c55f"},
+    {file = "types_psycopg2-2.9.21.20250915.tar.gz", hash = "sha256:bfeb8f54c32490e7b5edc46215ab4163693192bc90407b4a023822de9239f5c8"},
]

[[package]]
@@ -3011,14 +3011,14 @@ files = [
[[package]]
name = "types-requests"
-version = "2.32.4.20250611"
+version = "2.32.4.20250809"
description = "Typing stubs for requests"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
-    {file = "types_requests-2.32.4.20250611-py3-none-any.whl", hash = "sha256:ad2fe5d3b0cb3c2c902c8815a70e7fb2302c4b8c1f77bdcd738192cdb3878072"},
-    {file = "types_requests-2.32.4.20250611.tar.gz", hash = "sha256:741c8777ed6425830bf51e54d6abe245f79b4dcb9019f1622b773463946bf826"},
+    {file = "types_requests-2.32.4.20250809-py3-none-any.whl", hash = "sha256:f73d1832fb519ece02c85b1f09d5f0dd3108938e7d47e7f94bbfa18a6782b163"},
+    {file = "types_requests-2.32.4.20250809.tar.gz", hash = "sha256:d8060de1c8ee599311f56ff58010fb4902f462a1470802cf9f6ed27bc46c4df3"},
]

[package.dependencies]
@@ -3026,14 +3026,14 @@ urllib3 = ">=2"
[[package]]
name = "types-setuptools"
-version = "80.9.0.20250809"
+version = "80.9.0.20250822"
description = "Typing stubs for setuptools"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
-    {file = "types_setuptools-80.9.0.20250809-py3-none-any.whl", hash = "sha256:7c6539b4c7ac7b4ab4db2be66d8a58fb1e28affa3ee3834be48acafd94f5976a"},
-    {file = "types_setuptools-80.9.0.20250809.tar.gz", hash = "sha256:e986ba37ffde364073d76189e1d79d9928fb6f5278c7d07589cde353d0218864"},
+    {file = "types_setuptools-80.9.0.20250822-py3-none-any.whl", hash = "sha256:53bf881cb9d7e46ed12c76ef76c0aaf28cfe6211d3fab12e0b83620b1a8642c3"},
+    {file = "types_setuptools-80.9.0.20250822.tar.gz", hash = "sha256:070ea7716968ec67a84c7f7768d9952ff24d28b65b6594797a464f1b3066f965"},
]

[[package]]

View File

@@ -101,7 +101,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
-version = "1.138.1"
+version = "1.139.2"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"

View File

@@ -289,10 +289,10 @@ pub const BASE_APPEND_CONTENT_RULES: &[PushRule] = &[PushRule {
    default_enabled: true,
}];

-pub const BASE_APPEND_UNDERRIDE_RULES: &[PushRule] = &[
+pub const BASE_APPEND_POSTCONTENT_RULES: &[PushRule] = &[
    PushRule {
-        rule_id: Cow::Borrowed("global/content/.io.element.msc4306.rule.unsubscribed_thread"),
+        rule_id: Cow::Borrowed("global/postcontent/.io.element.msc4306.rule.unsubscribed_thread"),
-        priority_class: 1,
+        priority_class: 6,
        conditions: Cow::Borrowed(&[Condition::Known(
            KnownCondition::Msc4306ThreadSubscription { subscribed: false },
        )]),
@@ -301,8 +301,8 @@ pub const BASE_APPEND_UNDERRIDE_RULES: &[PushRule] = &[
        default_enabled: true,
    },
    PushRule {
-        rule_id: Cow::Borrowed("global/content/.io.element.msc4306.rule.subscribed_thread"),
+        rule_id: Cow::Borrowed("global/postcontent/.io.element.msc4306.rule.subscribed_thread"),
-        priority_class: 1,
+        priority_class: 6,
        conditions: Cow::Borrowed(&[Condition::Known(
            KnownCondition::Msc4306ThreadSubscription { subscribed: true },
        )]),
@@ -310,6 +310,9 @@ pub const BASE_APPEND_UNDERRIDE_RULES: &[PushRule] = &[
        default: true,
        default_enabled: true,
    },
+];
+
+pub const BASE_APPEND_UNDERRIDE_RULES: &[PushRule] = &[
    PushRule {
        rule_id: Cow::Borrowed("global/underride/.m.rule.call"),
        priority_class: 1,
@@ -726,6 +729,7 @@ lazy_static! {
        .iter()
        .chain(BASE_APPEND_OVERRIDE_RULES.iter())
        .chain(BASE_APPEND_CONTENT_RULES.iter())
+        .chain(BASE_APPEND_POSTCONTENT_RULES.iter())
        .chain(BASE_APPEND_UNDERRIDE_RULES.iter())
        .map(|rule| { (&*rule.rule_id, rule) })
        .collect();

View File

@@ -527,6 +527,7 @@ impl PushRules {
            .chain(base_rules::BASE_APPEND_OVERRIDE_RULES.iter())
            .chain(self.content.iter())
            .chain(base_rules::BASE_APPEND_CONTENT_RULES.iter())
+            .chain(base_rules::BASE_APPEND_POSTCONTENT_RULES.iter())
            .chain(self.room.iter())
            .chain(self.sender.iter())
            .chain(self.underride.iter())

View File

@@ -1,5 +1,5 @@
$schema: https://element-hq.github.io/synapse/latest/schema/v1/meta.schema.json
-$id: https://element-hq.github.io/synapse/schema/synapse/v1.138/synapse-config.schema.json
+$id: https://element-hq.github.io/synapse/schema/synapse/v1.139/synapse-config.schema.json
type: object
properties:
  modules:
@@ -2415,8 +2415,15 @@ properties:
      A list of media upload limits defining how much data a given user can
      upload in a given time period.
+      These limits are applied in addition to the `max_upload_size` limit above
+      (which applies to individual uploads).
      An empty list means no limits are applied.
+      These settings can be overridden using the `get_media_upload_limits_for_user`
+      module API [callback](../../modules/media_repository_callbacks.md#get_media_upload_limits_for_user).
    default: []
    items:
      time_period:

View File

@@ -32,7 +32,7 @@ DISTS = (
"debian:sid", # (rolling distro, no EOL) "debian:sid", # (rolling distro, no EOL)
"ubuntu:jammy", # 22.04 LTS (EOL 2027-04) (our EOL forced by Python 3.10 is 2026-10-04) "ubuntu:jammy", # 22.04 LTS (EOL 2027-04) (our EOL forced by Python 3.10 is 2026-10-04)
"ubuntu:noble", # 24.04 LTS (EOL 2029-06) "ubuntu:noble", # 24.04 LTS (EOL 2029-06)
"ubuntu:oracular", # 24.10 (EOL 2025-07) "ubuntu:plucky", # 25.04 (EOL 2026-01)
"debian:trixie", # (EOL not specified yet) "debian:trixie", # (EOL not specified yet)
) )

View File

@@ -230,6 +230,7 @@ test_packages=(
    ./tests/msc3967
    ./tests/msc4140
    ./tests/msc4155
+    ./tests/msc4306
)

# Enable dirty runs, so tests will reuse the same container where possible.

View File

@@ -68,6 +68,18 @@ PROMETHEUS_METRIC_MISSING_FROM_LIST_TO_CHECK = ErrorCode(
category="per-homeserver-tenant-metrics", category="per-homeserver-tenant-metrics",
) )
PREFER_SYNAPSE_CLOCK_CALL_WHEN_RUNNING = ErrorCode(
    "prefer-synapse-clock-call-when-running",
    "`synapse.util.Clock.call_when_running` should be used instead of `reactor.callWhenRunning`",
    category="synapse-reactor-clock",
)

PREFER_SYNAPSE_CLOCK_ADD_SYSTEM_EVENT_TRIGGER = ErrorCode(
    "prefer-synapse-clock-add-system-event-trigger",
    "`synapse.util.Clock.add_system_event_trigger` should be used instead of `reactor.addSystemEventTrigger`",
    category="synapse-reactor-clock",
)
class Sentinel(enum.Enum):
    # defining a sentinel in this way allows mypy to correctly handle the
@@ -229,9 +241,77 @@ class SynapsePlugin(Plugin):
        ):
            return check_is_cacheable_wrapper
        if fullname in (
            "twisted.internet.interfaces.IReactorCore.callWhenRunning",
            "synapse.types.ISynapseThreadlessReactor.callWhenRunning",
            "synapse.types.ISynapseReactor.callWhenRunning",
        ):
            return check_call_when_running

        if fullname in (
            "twisted.internet.interfaces.IReactorCore.addSystemEventTrigger",
            "synapse.types.ISynapseThreadlessReactor.addSystemEventTrigger",
            "synapse.types.ISynapseReactor.addSystemEventTrigger",
        ):
            return check_add_system_event_trigger
        return None
def check_call_when_running(ctx: MethodSigContext) -> CallableType:
    """
    Ensure that `reactor.callWhenRunning` callsites aren't used.

    `synapse.util.Clock.call_when_running` should always be used instead of
    `reactor.callWhenRunning`.

    Since `reactor.callWhenRunning` is a reactor callback, the callback will start out
    with the sentinel logcontext. `synapse.util.Clock` starts a default logcontext as we
    want to know which server the logs came from.

    Args:
        ctx: The `MethodSigContext` from mypy.
    """
    signature: CallableType = ctx.default_signature
    ctx.api.fail(
        (
            "Expected all `reactor.callWhenRunning` calls to use `synapse.util.Clock.call_when_running` instead. "
            "This is so all Synapse code runs with a logcontext as we want to know which server the logs came from."
        ),
        ctx.context,
        code=PREFER_SYNAPSE_CLOCK_CALL_WHEN_RUNNING,
    )
    return signature

def check_add_system_event_trigger(ctx: MethodSigContext) -> CallableType:
    """
    Ensure that `reactor.addSystemEventTrigger` callsites aren't used.

    `synapse.util.Clock.add_system_event_trigger` should always be used instead of
    `reactor.addSystemEventTrigger`.

    Since `reactor.addSystemEventTrigger` is a reactor callback, the callback will start
    out with the sentinel logcontext. `synapse.util.Clock` starts a default logcontext
    as we want to know which server the logs came from.

    Args:
        ctx: The `MethodSigContext` from mypy.
    """
    signature: CallableType = ctx.default_signature
    ctx.api.fail(
        (
            "Expected all `reactor.addSystemEventTrigger` calls to use `synapse.util.Clock.add_system_event_trigger` instead. "
            "This is so all Synapse code runs with a logcontext as we want to know which server the logs came from."
        ),
        ctx.context,
        code=PREFER_SYNAPSE_CLOCK_ADD_SYSTEM_EVENT_TRIGGER,
    )
    return signature
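To make the rule concrete, the two spellings look like this (a sketch; `hs` is assumed
to be a `HomeServer` and `start_up` any zero-argument callable):

```python
# Flagged by the plugin: the callback would run under the sentinel
# logcontext, so its logs could not be attributed to a server.
hs.get_reactor().callWhenRunning(start_up)

# Preferred: Clock wraps the callback in a default logcontext.
hs.get_clock().call_when_running(start_up)
```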
def analyze_prometheus_metric_classes(ctx: ClassDefContext) -> None:
    """
    Cross-check the list of Prometheus metric classes against the

View File

@@ -30,7 +30,7 @@ from signedjson.sign import sign_json
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.crypto.event_signing import add_hashes_and_signatures
-from synapse.util import json_encoder
+from synapse.util.json import json_encoder

def main() -> None:

View File

@@ -153,9 +153,13 @@ def get_registered_paths_for_default(
""" """
hs = MockHomeserver(base_config, worker_app) hs = MockHomeserver(base_config, worker_app)
# TODO We only do this to avoid an error, but don't need the database etc # TODO We only do this to avoid an error, but don't need the database etc
hs.setup() hs.setup()
return get_registered_paths_for_hs(hs) registered_paths = get_registered_paths_for_hs(hs)
hs.cleanup()
return registered_paths
def elide_http_methods_if_unconflicting( def elide_http_methods_if_unconflicting(

View File

@@ -54,11 +54,11 @@ from twisted.internet import defer, reactor as reactor_
from synapse.config.database import DatabaseConnectionConfig
from synapse.config.homeserver import HomeServerConfig
from synapse.logging.context import (
-    LoggingContext,
    make_deferred_yieldable,
    run_in_background,
)
-from synapse.notifier import ReplicationNotifier
+from synapse.server import HomeServer
+from synapse.storage import DataStore
from synapse.storage.database import DatabasePool, LoggingTransaction, make_conn
from synapse.storage.databases.main import FilteringWorkerStore
from synapse.storage.databases.main.account_data import AccountDataWorkerStore
@@ -98,7 +98,7 @@ from synapse.storage.databases.state.bg_updates import StateBackgroundUpdateStor
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
from synapse.types import ISynapseReactor
-from synapse.util import SYNAPSE_VERSION, Clock
+from synapse.util import SYNAPSE_VERSION

# Cast safety: Twisted does some naughty magic which replaces the
# twisted.internet.reactor module with a Reactor instance at runtime.
@@ -317,27 +317,16 @@ class Store(
)

-class MockHomeserver:
+class MockHomeserver(HomeServer):
+    DATASTORE_CLASS = DataStore

    def __init__(self, config: HomeServerConfig):
-        self.clock = Clock(reactor)
-        self.config = config
-        self.hostname = config.server.server_name
-        self.version_string = SYNAPSE_VERSION
-
-    def get_clock(self) -> Clock:
-        return self.clock
-
-    def get_reactor(self) -> ISynapseReactor:
-        return reactor
-
-    def get_instance_name(self) -> str:
-        return "master"
-
-    def should_send_federation(self) -> bool:
-        return False
-
-    def get_replication_notifier(self) -> ReplicationNotifier:
-        return ReplicationNotifier()
+        super().__init__(
+            hostname=config.server.server_name,
+            config=config,
+            reactor=reactor,
+            version_string=f"Synapse/{SYNAPSE_VERSION}",
+        )

class Porter:
@@ -346,12 +335,12 @@ class Porter:
        sqlite_config: Dict[str, Any],
        progress: "Progress",
        batch_size: int,
-        hs_config: HomeServerConfig,
+        hs: HomeServer,
    ):
        self.sqlite_config = sqlite_config
        self.progress = progress
        self.batch_size = batch_size
-        self.hs_config = hs_config
+        self.hs = hs

    async def setup_table(self, table: str) -> Tuple[str, int, int, int, int]:
        if table in APPEND_ONLY_TABLES:
if table in APPEND_ONLY_TABLES: if table in APPEND_ONLY_TABLES:
@@ -671,8 +660,7 @@ class Porter:
        engine = create_engine(db_config.config)

-        hs = MockHomeserver(self.hs_config)
-        server_name = hs.hostname
+        server_name = self.hs.hostname

        with make_conn(
            db_config=db_config,
@@ -683,9 +671,17 @@ class Porter:
            engine.check_database(
                db_conn, allow_outdated_version=allow_outdated_version
            )
-            prepare_database(db_conn, engine, config=self.hs_config)
+            prepare_database(db_conn, engine, config=self.hs.config)
            # Type safety: ignore that we're using Mock homeservers here.
-            store = Store(DatabasePool(hs, db_config, engine), db_conn, hs)  # type: ignore[arg-type]
+            store = Store(
+                DatabasePool(
+                    self.hs,
+                    db_config,
+                    engine,
+                ),
+                db_conn,
+                self.hs,
+            )
            db_conn.commit()

        return store
@@ -782,7 +778,7 @@ class Porter:
            return

        self.postgres_store = self.build_db_store(
-            self.hs_config.database.get_single_database()
+            self.hs.config.database.get_single_database()
        )

        await self.remove_ignored_background_updates_from_database()
@@ -1571,6 +1567,8 @@ def main() -> None:
    config = HomeServerConfig()
    config.parse_config_dict(hs_config, "", "")

+    hs = MockHomeserver(config)

    def start(stdscr: Optional["curses.window"] = None) -> None:
        progress: Progress
        if stdscr:
@@ -1582,15 +1580,14 @@ def main() -> None:
            sqlite_config=sqlite_config,
            progress=progress,
            batch_size=args.batch_size,
-            hs_config=config,
+            hs=hs,
        )

        @defer.inlineCallbacks
        def run() -> Generator["defer.Deferred[Any]", Any, None]:
-            with LoggingContext("synapse_port_db_run"):
-                yield defer.ensureDeferred(porter.run())
+            yield defer.ensureDeferred(porter.run())

-        reactor.callWhenRunning(run)
+        hs.get_clock().call_when_running(run)

        reactor.run()

View File

@@ -74,7 +74,7 @@ def run_background_updates(hs: HomeServer) -> None:
        )
    )

-    reactor.callWhenRunning(run)
+    hs.get_clock().call_when_running(run)

    reactor.run()
@@ -120,6 +120,13 @@ def main() -> None:
    # DB.
    hs.setup()

+    # This will cause all of the relevant storage classes to be instantiated and call
+    # `register_background_update_handler(...)`,
+    # `register_background_index_update(...)`,
+    # `register_background_validate_constraint(...)`, etc so they are available to use
+    # if we are asked to run those background updates.
+    hs.get_storage_controllers()

    if args.run_background_updates:
        run_background_updates(hs)

View File

@@ -43,9 +43,9 @@ from synapse.logging.opentracing import (
from synapse.metrics import SERVER_NAME_LABEL
from synapse.synapse_rust.http_client import HttpClient
from synapse.types import JsonDict, Requester, UserID, create_requester
-from synapse.util import json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
+from synapse.util.json import json_decoder

from . import introspection_response_timer

View File

@@ -48,9 +48,9 @@ from synapse.logging.opentracing import (
from synapse.metrics import SERVER_NAME_LABEL
from synapse.synapse_rust.http_client import HttpClient
from synapse.types import Requester, UserID, create_requester
-from synapse.util import json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
+from synapse.util.json import json_decoder

from . import introspection_response_timer

View File

@@ -30,7 +30,7 @@ from typing import Any, Dict, List, Optional, Union
from twisted.web import http

-from synapse.util import json_decoder
+from synapse.util.json import json_decoder

if typing.TYPE_CHECKING:
    from synapse.config.homeserver import HomeServerConfig
@@ -140,6 +140,9 @@ class Codes(str, Enum):
    # Part of MSC4155
    INVITE_BLOCKED = "ORG.MATRIX.MSC4155.M_INVITE_BLOCKED"

+    # Part of MSC4190
+    APPSERVICE_LOGIN_UNSUPPORTED = "IO.ELEMENT.MSC4190.M_APPSERVICE_LOGIN_UNSUPPORTED"

    # Part of MSC4306: Thread Subscriptions
    MSC4306_CONFLICTING_UNSUBSCRIPTION = (
        "IO.ELEMENT.MSC4306.M_CONFLICTING_UNSUBSCRIPTION"

View File

@@ -26,7 +26,7 @@ from synapse.api.errors import LimitExceededError
from synapse.config.ratelimiting import RatelimitSettings
from synapse.storage.databases.main import DataStore
from synapse.types import Requester
-from synapse.util import Clock
+from synapse.util.clock import Clock

if TYPE_CHECKING:
    # To avoid circular imports:

View File

@@ -22,6 +22,7 @@
"""Contains the URL paths to prefix various aspects of the server with.""" """Contains the URL paths to prefix various aspects of the server with."""
import hmac import hmac
import urllib.parse
from hashlib import sha256 from hashlib import sha256
from typing import Optional from typing import Optional
from urllib.parse import urlencode, urljoin from urllib.parse import urlencode, urljoin
@@ -96,11 +97,21 @@ class LoginSSORedirectURIBuilder:
        serialized_query_parameters = urlencode({"redirectUrl": client_redirect_url})

        if idp_id:
+            # Since this is a user-controlled string, make it safe to include in a URL path.
+            url_encoded_idp_id = urllib.parse.quote(
+                idp_id,
+                # Since this defaults to `safe="/"`, we have to override it. We're
+                # working with an individual URL path parameter so there shouldn't be
+                # any slashes in it which could change the request path.
+                safe="",
+                encoding="utf8",
+            )
            resultant_url = urljoin(
                # We have to add a trailing slash to the base URL to ensure that the
                # last path segment is not stripped away when joining with another path.
                f"{base_url}/",
-                f"{idp_id}?{serialized_query_parameters}",
+                f"{url_encoded_idp_id}?{serialized_query_parameters}",
            )
        else:
            resultant_url = f"{base_url}?{serialized_query_parameters}"
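To see why overriding `safe` matters here (by default `urllib.parse.quote` leaves `/`
unescaped), compare the two calls on a contrived, hostile `idp_id`:

```python
import urllib.parse

# Default safe="/": a crafted idp_id could still alter the request path.
urllib.parse.quote("evil/../admin")           # -> 'evil/../admin'

# safe="": slashes are percent-encoded and can't change the path.
urllib.parse.quote("evil/../admin", safe="")  # -> 'evil%2F..%2Fadmin'
```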

View File

@@ -72,7 +72,7 @@ from synapse.events.auto_accept_invites import InviteAutoAccepter
from synapse.events.presence_router import load_legacy_presence_router
from synapse.handlers.auth import load_legacy_password_auth_providers
from synapse.http.site import SynapseSite
-from synapse.logging.context import PreserveLoggingContext
+from synapse.logging.context import LoggingContext, PreserveLoggingContext
from synapse.logging.opentracing import init_tracer
from synapse.metrics import install_gc_manager, register_threadpool
from synapse.metrics.background_process_metrics import run_as_background_process
@@ -183,25 +183,23 @@ def start_reactor(
        if gc_thresholds:
            gc.set_threshold(*gc_thresholds)
        install_gc_manager()
-        run_command()

-    # make sure that we run the reactor with the sentinel log context,
-    # otherwise other PreserveLoggingContext instances will get confused
-    # and complain when they see the logcontext arbitrarily swapping
-    # between the sentinel and `run` logcontexts.
-    #
-    # We also need to drop the logcontext before forking if we're daemonizing,
-    # otherwise the cputime metrics get confused about the per-thread resource usage
-    # appearing to go backwards.
-    with PreserveLoggingContext():
-        if daemonize:
-            assert pid_file is not None
-            if print_pidfile:
-                print(pid_file)
-            daemonize_process(pid_file, logger)
-        run()
+        # Reset the logging context when we start the reactor (whenever we yield control
+        # to the reactor, the `sentinel` logging context needs to be set so we don't
+        # leak the current logging context and erroneously apply it to the next task the
+        # reactor event loop picks up)
+        with PreserveLoggingContext():
+            run_command()
+
+    if daemonize:
+        assert pid_file is not None
+        if print_pidfile:
+            print(pid_file)
+        daemonize_process(pid_file, logger)
+    run()
def quit_with_error(error_string: str) -> NoReturn:
@@ -243,7 +241,7 @@ def redirect_stdio_to_logs() -> None:
def register_start(
-    cb: Callable[P, Awaitable], *args: P.args, **kwargs: P.kwargs
+    hs: "HomeServer", cb: Callable[P, Awaitable], *args: P.args, **kwargs: P.kwargs
) -> None:
    """Register a callback with the reactor, to be called once it is running
@@ -280,7 +278,8 @@ def register_start(
            # on as normal.
            os._exit(1)

-    reactor.callWhenRunning(lambda: defer.ensureDeferred(wrapper()))
+    clock = hs.get_clock()
+    clock.call_when_running(lambda: defer.ensureDeferred(wrapper()))

def listen_metrics(bind_addresses: StrCollection, port: int) -> None:
@@ -519,7 +518,9 @@ async def start(hs: "HomeServer") -> None:
    # numbers of DNS requests don't starve out other users of the threadpool.
    resolver_threadpool = ThreadPool(name="gai_resolver")
    resolver_threadpool.start()
-    reactor.addSystemEventTrigger("during", "shutdown", resolver_threadpool.stop)
+    hs.get_clock().add_system_event_trigger(
+        "during", "shutdown", resolver_threadpool.stop
+    )
    reactor.installNameResolver(
        GAIResolver(reactor, getThreadPool=lambda: resolver_threadpool)
    )
@@ -601,18 +602,38 @@ async def start(hs: "HomeServer") -> None:
    hs.get_datastores().main.db_pool.start_profiling()
    hs.get_pusherpool().start()

+    def log_shutdown() -> None:
+        with LoggingContext("log_shutdown"):
+            logger.info("Shutting down...")

    # Log when we start the shut down process.
-    hs.get_reactor().addSystemEventTrigger(
-        "before", "shutdown", logger.info, "Shutting down..."
-    )
+    hs.get_clock().add_system_event_trigger("before", "shutdown", log_shutdown)

    setup_sentry(hs)
    setup_sdnotify(hs)

-    # If background tasks are running on the main process or this is the worker in
-    # charge of them, start collecting the phone home stats and shared usage metrics.
+    # Register background tasks required by this server. This must be done
+    # somewhat manually due to the background tasks not being registered
+    # unless handlers are instantiated.
+    #
+    # While we could "start" these before the reactor runs, nothing will happen until
+    # the reactor is running, so we may as well do it here in `start`.
+    #
+    # Additionally, this means we also start them after we daemonize and fork the
+    # process which means we can avoid any potential problems with cputime metrics
+    # getting confused about the per-thread resource usage appearing to go backwards
+    # because we're comparing the resource usage (`rusage`) from the original process to
+    # the forked process.
    if hs.config.worker.run_background_tasks:
+        hs.start_background_tasks()
+
+        # TODO: This should be moved to the same pattern we use for other background
+        # tasks: Add to `REQUIRED_ON_BACKGROUND_TASK_STARTUP` and rely on
+        # `start_background_tasks` to start it.
        await hs.get_common_usage_metrics_manager().setup()

+        # TODO: This feels like another pattern that should be refactored as one of the
+        # `REQUIRED_ON_BACKGROUND_TASK_STARTUP`
        start_phone_stats_home(hs)

    # We now freeze all allocated objects in the hopes that (almost)
@@ -701,7 +722,7 @@ def setup_sdnotify(hs: "HomeServer") -> None:
    # we're not using systemd.
    sdnotify(b"READY=1\nMAINPID=%i" % (os.getpid(),))

-    hs.get_reactor().addSystemEventTrigger(
+    hs.get_clock().add_system_event_trigger(
        "before", "shutdown", sdnotify, b"STOPPING=1"
    )

View File

@@ -24,7 +24,7 @@ import logging
import os
import sys
import tempfile
-from typing import List, Mapping, Optional, Sequence
+from typing import List, Mapping, Optional, Sequence, Tuple

from twisted.internet import defer, task
@@ -256,7 +256,7 @@ class FileExfiltrationWriter(ExfiltrationWriter):
        return self.base_directory

-def start(config_options: List[str]) -> None:
+def load_config(argv_options: List[str]) -> Tuple[HomeServerConfig, argparse.Namespace]:
    parser = argparse.ArgumentParser(description="Synapse Admin Command")
    HomeServerConfig.add_arguments_to_parser(parser)
@@ -282,11 +282,15 @@ def start(config_options: List[str]) -> None:
    export_data_parser.set_defaults(func=export_data_command)

    try:
-        config, args = HomeServerConfig.load_config_with_parser(parser, config_options)
+        config, args = HomeServerConfig.load_config_with_parser(parser, argv_options)
    except ConfigError as e:
        sys.stderr.write("\n" + str(e) + "\n")
        sys.exit(1)

+    return config, args

+def start(config: HomeServerConfig, args: argparse.Namespace) -> None:
    if config.worker.worker_app is not None:
        assert config.worker.worker_app == "synapse.app.admin_cmd"
@@ -325,7 +329,7 @@ def start(config_options: List[str]) -> None:
    # command.
    async def run() -> None:
-        with LoggingContext("command"):
+        with LoggingContext(name="command"):
            await _base.start(ss)
            await args.func(ss, args)
@@ -337,5 +341,6 @@ def start(config_options: List[str]) -> None:
if __name__ == "__main__":
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config, args = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config, args)

View File

@@ -21,13 +21,14 @@
import sys

-from synapse.app.generic_worker import start
+from synapse.app.generic_worker import load_config, start
from synapse.util.logcontext import LoggingContext

def main() -> None:
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config)

if __name__ == "__main__":

View File

@@ -21,13 +21,14 @@
import sys

-from synapse.app.generic_worker import start
+from synapse.app.generic_worker import load_config, start
from synapse.util.logcontext import LoggingContext

def main() -> None:
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config)

if __name__ == "__main__":

View File

@@ -20,13 +20,14 @@
import sys

-from synapse.app.generic_worker import start
+from synapse.app.generic_worker import load_config, start
from synapse.util.logcontext import LoggingContext

def main() -> None:
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config)

if __name__ == "__main__":

View File

@@ -21,13 +21,14 @@
import sys

-from synapse.app.generic_worker import start
+from synapse.app.generic_worker import load_config, start
from synapse.util.logcontext import LoggingContext

def main() -> None:
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config)

if __name__ == "__main__":

View File

@@ -21,13 +21,14 @@
import sys

-from synapse.app.generic_worker import start
+from synapse.app.generic_worker import load_config, start
from synapse.util.logcontext import LoggingContext

def main() -> None:
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config)

if __name__ == "__main__":

View File

@@ -21,13 +21,14 @@
import sys

-from synapse.app.generic_worker import start
+from synapse.app.generic_worker import load_config, start
from synapse.util.logcontext import LoggingContext

def main() -> None:
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config)

if __name__ == "__main__":

View File

@@ -310,13 +310,26 @@ class GenericWorkerServer(HomeServer):
        self.get_replication_command_handler().start_replication(self)

-def start(config_options: List[str]) -> None:
+def load_config(argv_options: List[str]) -> HomeServerConfig:
+    """
+    Parse the commandline and config files (does not generate config)
+
+    Args:
+        argv_options: The options passed to Synapse. Usually `sys.argv[1:]`.
+
+    Returns:
+        Config object.
+    """
    try:
-        config = HomeServerConfig.load_config("Synapse worker", config_options)
+        config = HomeServerConfig.load_config("Synapse worker", argv_options)
    except ConfigError as e:
        sys.stderr.write("\n" + str(e) + "\n")
        sys.exit(1)

+    return config

+def start(config: HomeServerConfig) -> None:
    # For backwards compatibility let any of the old app names.
    assert config.worker.worker_app in (
        "synapse.app.appservice",
@@ -355,7 +368,10 @@ def start(config_options: List[str]) -> None:
    except Exception as e:
        handle_startup_exception(e)

-    register_start(_base.start, hs)
+    async def start() -> None:
+        await _base.start(hs)
+
+    register_start(hs, start)

    # redirect stdio to the logs, if configured.
    if not hs.config.logging.no_redirect_stdio:
@@ -365,8 +381,9 @@ def start(config_options: List[str]) -> None:
def main() -> None:
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config)

if __name__ == "__main__":

View File

@@ -308,17 +308,21 @@ class SynapseHomeServer(HomeServer):
logger.warning("Unrecognized listener type: %s", listener.type) logger.warning("Unrecognized listener type: %s", listener.type)
def setup(config_options: List[str]) -> SynapseHomeServer: def load_or_generate_config(argv_options: List[str]) -> HomeServerConfig:
""" """
Parse the commandline and config files
Supports generation of config files, so is used for the main homeserver app.
Args: Args:
config_options_options: The options passed to Synapse. Usually `sys.argv[1:]`. argv_options: The options passed to Synapse. Usually `sys.argv[1:]`.
Returns: Returns:
A homeserver instance. A homeserver instance.
""" """
try: try:
config = HomeServerConfig.load_or_generate_config( config = HomeServerConfig.load_or_generate_config(
"Synapse Homeserver", config_options "Synapse Homeserver", argv_options
) )
except ConfigError as e: except ConfigError as e:
sys.stderr.write("\n") sys.stderr.write("\n")
@@ -332,6 +336,20 @@ def setup(config_options: List[str]) -> SynapseHomeServer:
        # generating config files and shouldn't try to continue.
        sys.exit(0)

+    return config

+def setup(config: HomeServerConfig) -> SynapseHomeServer:
+    """
+    Create and setup a Synapse homeserver instance given a configuration.
+
+    Args:
+        config: The configuration for the homeserver.
+
+    Returns:
+        A homeserver instance.
+    """
    if config.worker.worker_app:
        raise ConfigError(
            "You have specified `worker_app` in the config but are attempting to start a non-worker "
@@ -387,7 +405,7 @@ def setup(config_options: List[str]) -> SynapseHomeServer:
    hs.get_datastores().main.db_pool.updates.start_doing_background_updates()

-    register_start(start)
+    register_start(hs, start)

    return hs
@@ -405,10 +423,12 @@ def run(hs: HomeServer) -> None:
def main() -> None:
+    homeserver_config = load_or_generate_config(sys.argv[1:])
    with LoggingContext("main"):
        # check base requirements
        check_requirements()
-        hs = setup(sys.argv[1:])
+        hs = setup(homeserver_config)

        # redirect stdio to the logs, if configured.
        if not hs.config.logging.no_redirect_stdio:

View File

@@ -21,13 +21,14 @@
import sys

-from synapse.app.generic_worker import start
+from synapse.app.generic_worker import load_config, start
from synapse.util.logcontext import LoggingContext

def main() -> None:
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config)

if __name__ == "__main__":

View File

@@ -21,13 +21,14 @@
import sys

-from synapse.app.generic_worker import start
+from synapse.app.generic_worker import load_config, start
from synapse.util.logcontext import LoggingContext

def main() -> None:
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config)

if __name__ == "__main__":

View File

@@ -21,13 +21,14 @@
import sys

-from synapse.app.generic_worker import start
+from synapse.app.generic_worker import load_config, start
from synapse.util.logcontext import LoggingContext

def main() -> None:
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config)

if __name__ == "__main__":

View File

@@ -21,13 +21,14 @@
import sys

-from synapse.app.generic_worker import start
+from synapse.app.generic_worker import load_config, start
from synapse.util.logcontext import LoggingContext

def main() -> None:
-    with LoggingContext("main"):
-        start(sys.argv[1:])
+    homeserver_config = load_config(sys.argv[1:])
+    with LoggingContext(name="main"):
+        start(homeserver_config)

if __name__ == "__main__":

View File

@@ -84,7 +84,7 @@ from synapse.logging.context import run_in_background
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.storage.databases.main import DataStore
 from synapse.types import DeviceListUpdates, JsonMapping
-from synapse.util import Clock
+from synapse.util.clock import Clock

 if TYPE_CHECKING:
     from synapse.server import HomeServer

View File

@@ -22,6 +22,7 @@
 import argparse
 import errno
+import importlib.resources as importlib_resources
 import logging
 import os
 import re
@@ -46,7 +47,6 @@ from typing import (
 import attr
 import jinja2
-import pkg_resources
 import yaml

 from synapse.types import StrSequence
@@ -174,8 +174,8 @@ class Config:
         self.root = root_config

         # Get the path to the default Synapse template directory
-        self.default_template_dir = pkg_resources.resource_filename(
-            "synapse", "res/templates"
+        self.default_template_dir = str(
+            importlib_resources.files("synapse").joinpath("res").joinpath("templates")
         )

     @staticmethod
@@ -646,12 +646,16 @@ class RootConfig:
     @classmethod
     def load_or_generate_config(
-        cls: Type[TRootConfig], description: str, argv: List[str]
+        cls: Type[TRootConfig], description: str, argv_options: List[str]
     ) -> Optional[TRootConfig]:
         """Parse the commandline and config files

         Supports generation of config files, so is used for the main homeserver app.

+        Args:
+            description: TODO
+            argv_options: The options passed to Synapse. Usually `sys.argv[1:]`.
+
         Returns:
             Config object, or None if --generate-config or --generate-keys was set
         """
@@ -747,7 +751,7 @@ class RootConfig:
         )

         cls.invoke_all_static("add_arguments", parser)
-        config_args = parser.parse_args(argv)
+        config_args = parser.parse_args(argv_options)

         config_files = find_config_files(search_paths=config_args.config_path)
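
Note: the `pkg_resources` calls removed above map onto the stdlib `importlib.resources` API as sketched below. This is a minimal illustration, not Synapse code; it assumes Python 3.9+ and that the `synapse` package is importable (any installed package with data files behaves the same way).

    import importlib.resources as importlib_resources

    # files() returns a Traversable rooted at the package; joinpath() walks
    # into packaged data without extracting it to a temporary location.
    template_dir = str(
        importlib_resources.files("synapse").joinpath("res").joinpath("templates")
    )
    print(template_dir)

    # Reading a packaged data file, mirroring the oembed change further down:
    providers_path = (
        importlib_resources.files("synapse").joinpath("res").joinpath("providers.json")
    )
    with providers_path.open("r", encoding="utf-8") as f:
        first_line = f.readline()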

View File

@@ -556,6 +556,9 @@ class ExperimentalConfig(Config):
         # MSC4133: Custom profile fields
         self.msc4133_enabled: bool = experimental.get("msc4133_enabled", False)

+        # MSC4169: Backwards-compatible redaction sending using `/send`
+        self.msc4169_enabled: bool = experimental.get("msc4169_enabled", False)
+
         # MSC4210: Remove legacy mentions
         self.msc4210_enabled: bool = experimental.get("msc4210_enabled", False)
@@ -590,5 +593,5 @@ class ExperimentalConfig(Config):
         self.msc4293_enabled: bool = experimental.get("msc4293_enabled", False)

         # MSC4306: Thread Subscriptions
-        # (and MSC4308: sliding sync extension for thread subscriptions)
+        # (and MSC4308: Thread Subscriptions extension to Sliding Sync)
         self.msc4306_enabled: bool = experimental.get("msc4306_enabled", False)

View File

@@ -18,13 +18,13 @@
 # [This file includes modifications made by New Vector Limited]
 #
 #
+import importlib.resources as importlib_resources
 import json
 import re
 from typing import Any, Dict, Iterable, List, Optional, Pattern
 from urllib import parse as urlparse

 import attr
-import pkg_resources

 from synapse.types import JsonDict, StrSequence
@@ -64,7 +64,12 @@ class OembedConfig(Config):
         """
         # Whether to use the packaged providers.json file.
         if not oembed_config.get("disable_default_providers") or False:
-            with pkg_resources.resource_stream("synapse", "res/providers.json") as s:
+            path = (
+                importlib_resources.files("synapse")
+                .joinpath("res")
+                .joinpath("providers.json")
+            )
+            with path.open("r", encoding="utf-8") as s:
                 providers = json.load(s)
                 yield from self._parse_and_validate_provider(

View File

@@ -120,11 +120,19 @@ def parse_thumbnail_requirements(
 @attr.s(auto_attribs=True, slots=True, frozen=True)
 class MediaUploadLimit:
-    """A limit on the amount of data a user can upload in a given time
-    period."""
+    """
+    Represents a limit on the amount of data a user can upload in a given time
+    period.
+
+    These can be configured through the `media_upload_limits` [config option](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#media_upload_limits)
+    or via the `get_media_upload_limits_for_user` module API [callback](https://element-hq.github.io/synapse/latest/modules/media_repository_callbacks.html#get_media_upload_limits_for_user).
+    """

     max_bytes: int
+    """The maximum number of bytes that can be uploaded in the given time period."""
+
     time_period_ms: int
+    """The time period in milliseconds."""


 class ContentRepositoryConfig(Config):

View File

@@ -38,7 +38,7 @@ from synapse.storage.databases.main import DataStore
 from synapse.synapse_rust.events import EventInternalMetadata
 from synapse.types import EventID, JsonDict, StrCollection
 from synapse.types.state import StateFilter
-from synapse.util import Clock
+from synapse.util.clock import Clock
 from synapse.util.stringutils import random_string

 if TYPE_CHECKING:

View File

@@ -37,6 +37,7 @@ Events are replicated via a separate events stream.
 """

 import logging
+from enum import Enum
 from typing import (
     TYPE_CHECKING,
     Dict,
@@ -67,6 +68,25 @@ if TYPE_CHECKING:

 logger = logging.getLogger(__name__)

+
+class QueueNames(str, Enum):
+    PRESENCE_MAP = "presence_map"
+    KEYED_EDU = "keyed_edu"
+    KEYED_EDU_CHANGED = "keyed_edu_changed"
+    EDUS = "edus"
+    POS_TIME = "pos_time"
+    PRESENCE_DESTINATIONS = "presence_destinations"
+
+
+queue_name_to_gauge_map: Dict[QueueNames, LaterGauge] = {}
+for queue_name in QueueNames:
+    queue_name_to_gauge_map[queue_name] = LaterGauge(
+        name=f"synapse_federation_send_queue_{queue_name.value}_size",
+        desc="",
+        labelnames=[SERVER_NAME_LABEL],
+    )
+

 class FederationRemoteSendQueue(AbstractFederationSender):
     """A drop in replacement for FederationSender"""
@@ -111,23 +131,16 @@ class FederationRemoteSendQueue(AbstractFederationSender):
         # we make a new function, so we need to make a new function so the inner
         # lambda binds to the queue rather than to the name of the queue which
         # changes. ARGH.
-        def register(name: str, queue: Sized) -> None:
-            LaterGauge(
-                name="synapse_federation_send_queue_%s_size" % (queue_name,),
-                desc="",
-                labelnames=[SERVER_NAME_LABEL],
-                caller=lambda: {(self.server_name,): len(queue)},
+        def register(queue_name: QueueNames, queue: Sized) -> None:
+            queue_name_to_gauge_map[queue_name].register_hook(
+                homeserver_instance_id=hs.get_instance_id(),
+                hook=lambda: {(self.server_name,): len(queue)},
             )

-        for queue_name in [
-            "presence_map",
-            "keyed_edu",
-            "keyed_edu_changed",
-            "edus",
-            "pos_time",
-            "presence_destinations",
-        ]:
-            register(queue_name, getattr(self, queue_name))
+        for queue_name in QueueNames:
+            queue = getattr(self, queue_name.value)
+            assert isinstance(queue, Sized)
+            register(queue_name, queue=queue)

         self.clock.looping_call(self._clear_queue, 30 * 1000)
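
Note: the move from constructing a `LaterGauge` per homeserver instance to a module-level gauge plus `register_hook()` recurs throughout this changeset (federation sender, presence, HTTP server below). A rough sketch of the idea follows; `LaterGauge`, `register_hook`, and the `homeserver_instance_id` keying are Synapse internals, so the `Toy*` names here are invented stand-ins rather than the real API.

    from typing import Callable, Dict, List, Optional, Tuple

    MetricValue = Dict[Tuple[str, ...], int]

    class ToyLaterGauge:
        """Invented stand-in: values are computed by hooks at scrape time."""

        def __init__(self, name: str, desc: str, labelnames: List[str]) -> None:
            self.name = name
            self._hooks: Dict[Optional[str], Callable[[], MetricValue]] = {}

        def register_hook(
            self,
            homeserver_instance_id: Optional[str],
            hook: Callable[[], MetricValue],
        ) -> None:
            # Keyed by instance id, so a second homeserver in the same process
            # (e.g. in the test suite) replaces its own hook instead of
            # registering a duplicate collector for the same metric name.
            self._hooks[homeserver_instance_id] = hook

        def collect(self) -> MetricValue:
            out: MetricValue = {}
            for hook in self._hooks.values():
                out.update(hook())
            return out

    # Module level: created exactly once per process, like the gauges above.
    pending_gauge = ToyLaterGauge("toy_pending_size", "", ["server_name"])

    class ToySender:
        def __init__(self, instance_id: str, server_name: str) -> None:
            self.pending = ["edu-1", "edu-2"]
            pending_gauge.register_hook(
                homeserver_instance_id=instance_id,
                hook=lambda: {(server_name,): len(self.pending)},
            )

    ToySender("instance-1", "example.org")
    print(pending_gauge.collect())  # {('example.org',): 2}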

View File

@@ -150,6 +150,7 @@ from prometheus_client import Counter
 from twisted.internet import defer

 import synapse.metrics
+from synapse.api.constants import EventTypes, Membership
 from synapse.api.presence import UserPresenceState
 from synapse.events import EventBase
 from synapse.federation.sender.per_destination_queue import (
@@ -177,7 +178,7 @@ from synapse.types import (
     StrCollection,
     get_domain_from_id,
 )
-from synapse.util import Clock
+from synapse.util.clock import Clock
 from synapse.util.metrics import Measure
 from synapse.util.retryutils import filter_destinations_by_retry_limiter
@@ -199,6 +200,24 @@ sent_pdus_destination_dist_total = Counter(
     labelnames=[SERVER_NAME_LABEL],
 )

+transaction_queue_pending_destinations_gauge = LaterGauge(
+    name="synapse_federation_transaction_queue_pending_destinations",
+    desc="",
+    labelnames=[SERVER_NAME_LABEL],
+)
+
+transaction_queue_pending_pdus_gauge = LaterGauge(
+    name="synapse_federation_transaction_queue_pending_pdus",
+    desc="",
+    labelnames=[SERVER_NAME_LABEL],
+)
+
+transaction_queue_pending_edus_gauge = LaterGauge(
+    name="synapse_federation_transaction_queue_pending_edus",
+    desc="",
+    labelnames=[SERVER_NAME_LABEL],
+)
+
 # Time (in s) to wait before trying to wake up destinations that have
 # catch-up outstanding.
 # Please note that rate limiting still applies, so while the loop is
@@ -398,11 +417,9 @@ class FederationSender(AbstractFederationSender):
         # map from destination to PerDestinationQueue
         self._per_destination_queues: Dict[str, PerDestinationQueue] = {}

-        LaterGauge(
-            name="synapse_federation_transaction_queue_pending_destinations",
-            desc="",
-            labelnames=[SERVER_NAME_LABEL],
-            caller=lambda: {
+        transaction_queue_pending_destinations_gauge.register_hook(
+            homeserver_instance_id=hs.get_instance_id(),
+            hook=lambda: {
                 (self.server_name,): sum(
                     1
                     for d in self._per_destination_queues.values()
@@ -410,22 +427,17 @@ class FederationSender(AbstractFederationSender):
                 )
             },
         )
-
-        LaterGauge(
-            name="synapse_federation_transaction_queue_pending_pdus",
-            desc="",
-            labelnames=[SERVER_NAME_LABEL],
-            caller=lambda: {
+        transaction_queue_pending_pdus_gauge.register_hook(
+            homeserver_instance_id=hs.get_instance_id(),
+            hook=lambda: {
                 (self.server_name,): sum(
                     d.pending_pdu_count() for d in self._per_destination_queues.values()
                 )
             },
         )
-        LaterGauge(
-            name="synapse_federation_transaction_queue_pending_edus",
-            desc="",
-            labelnames=[SERVER_NAME_LABEL],
-            caller=lambda: {
+        transaction_queue_pending_edus_gauge.register_hook(
+            homeserver_instance_id=hs.get_instance_id(),
+            hook=lambda: {
                 (self.server_name,): sum(
                     d.pending_edu_count() for d in self._per_destination_queues.values()
                 )
@@ -644,6 +656,31 @@ class FederationSender(AbstractFederationSender):
             )
             return

+        # If we've rescinded an invite then we want to tell the
+        # other server.
+        if (
+            event.type == EventTypes.Member
+            and event.membership == Membership.LEAVE
+            and event.sender != event.state_key
+        ):
+            # We check if this leave event is rescinding an invite
+            # by looking if there is an invite event for the user in
+            # the auth events. It could otherwise be a kick or
+            # unban, which we don't want to send (if the user wasn't
+            # already in the room).
+            auth_events = await self.store.get_events_as_list(
+                event.auth_event_ids()
+            )
+            for auth_event in auth_events:
+                if (
+                    auth_event.type == EventTypes.Member
+                    and auth_event.state_key == event.state_key
+                    and auth_event.membership == Membership.INVITE
+                ):
+                    destinations = set(destinations)
+                    destinations.add(get_domain_from_id(event.state_key))
+                    break
+
         sharded_destinations = {
             d
             for d in destinations
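
Note: to make the rescind check concrete, here is a hypothetical pair of membership events (all IDs and server names are invented). The leave passes the check because its sender differs from its state_key and its auth events contain the invite; a kick of a long-departed user would fail it, since no invite for that user appears in its auth events.

    # Invite from a local user to a user on another server.
    invite_event = {
        "event_id": "$invite",
        "type": "m.room.member",
        "sender": "@alice:example.org",
        "state_key": "@bob:remote.example",
        "content": {"membership": "invite"},
    }

    # Later leave event sent by the inviter: a rescission. Because
    # remote.example may not otherwise count as being in the room, it has to
    # be added to the destinations explicitly for bob's server to hear about it.
    rescind_event = {
        "event_id": "$rescind",
        "type": "m.room.member",
        "sender": "@alice:example.org",       # != state_key, so not a self-leave
        "state_key": "@bob:remote.example",
        "content": {"membership": "leave"},
        "auth_events": ["$invite"],           # contains the invite => rescission
    }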

View File

@@ -26,7 +26,7 @@ from synapse.api.constants import EduTypes
 from synapse.api.errors import HttpResponseException
 from synapse.events import EventBase
 from synapse.federation.persistence import TransactionActions
-from synapse.federation.units import Edu, Transaction
+from synapse.federation.units import Edu, Transaction, serialize_and_filter_pdus
 from synapse.logging.opentracing import (
     extract_text_map,
     set_tag,
@@ -36,7 +36,7 @@ from synapse.logging.opentracing import (
 )
 from synapse.metrics import SERVER_NAME_LABEL
 from synapse.types import JsonDict
-from synapse.util import json_decoder
+from synapse.util.json import json_decoder
 from synapse.util.metrics import measure_func

 if TYPE_CHECKING:
@@ -119,7 +119,7 @@ class TransactionManager:
             transaction_id=txn_id,
             origin=self.server_name,
             destination=destination,
-            pdus=[p.get_pdu_json() for p in pdus],
+            pdus=serialize_and_filter_pdus(pdus),
             edus=[edu.get_dict() for edu in edus],
         )

View File

@@ -135,7 +135,7 @@ class PublicRoomList(BaseFederationServlet):
         if not self.allow_access:
             raise FederationDeniedError(origin)

-        limit = parse_integer_from_args(query, "limit", 0)
+        limit: Optional[int] = parse_integer_from_args(query, "limit", 0)
         since_token = parse_string_from_args(query, "since", None)
         include_all_networks = parse_boolean_from_args(
             query, "include_all_networks", default=False

View File

@@ -62,7 +62,7 @@ class DeactivateAccountHandler:
         # Start the user parter loop so it can resume parting users from rooms where
         # it left off (if it has work left to do).
         if hs.config.worker.worker_app is None:
-            hs.get_reactor().callWhenRunning(self._start_user_parting)
+            hs.get_clock().call_when_running(self._start_user_parting)
         else:
             self._notify_account_deactivated_client = (
                 ReplicationNotifyAccountDeactivatedServlet.make_client(hs)

View File

@@ -21,9 +21,12 @@ from synapse.api.constants import EventTypes
 from synapse.api.errors import ShadowBanError, SynapseError
 from synapse.api.ratelimiting import Ratelimiter
 from synapse.config.workers import MAIN_PROCESS_INSTANCE_NAME
+from synapse.logging.context import make_deferred_yieldable
 from synapse.logging.opentracing import set_tag
 from synapse.metrics import SERVER_NAME_LABEL, event_processing_positions
-from synapse.metrics.background_process_metrics import run_as_background_process
+from synapse.metrics.background_process_metrics import (
+    run_as_background_process,
+)
 from synapse.replication.http.delayed_events import (
     ReplicationAddedDelayedEventRestServlet,
 )
@@ -414,7 +417,7 @@ class DelayedEventsHandler:
             requester,
             (requester.user.to_string(), requester.device_id),
         )
-        await self._initialized_from_db
+        await make_deferred_yieldable(self._initialized_from_db)

         next_send_ts = await self._store.cancel_delayed_event(
             delay_id=delay_id,
@@ -440,7 +443,7 @@ class DelayedEventsHandler:
             requester,
             (requester.user.to_string(), requester.device_id),
         )
-        await self._initialized_from_db
+        await make_deferred_yieldable(self._initialized_from_db)

         next_send_ts = await self._store.restart_delayed_event(
             delay_id=delay_id,
@@ -466,7 +469,7 @@ class DelayedEventsHandler:
         # Use standard request limiter for sending delayed events on-demand,
         # as an on-demand send is similar to sending a regular event.
         await self._request_ratelimiter.ratelimit(requester)
-        await self._initialized_from_db
+        await make_deferred_yieldable(self._initialized_from_db)

         event, next_send_ts = await self._store.process_target_delayed_event(
             delay_id=delay_id,

View File

@@ -1002,7 +1002,7 @@ class DeviceWriterHandler(DeviceHandler):
         # rolling-restarting Synapse.
         if self._is_main_device_list_writer:
             # On start up check if there are any updates pending.
-            hs.get_reactor().callWhenRunning(self._handle_new_device_update_async)
+            hs.get_clock().call_when_running(self._handle_new_device_update_async)

         self.device_list_updater = DeviceListUpdater(hs, self)
         hs.get_federation_registry().register_edu_handler(
             EduTypes.DEVICE_LIST_UPDATE,

View File

@@ -34,7 +34,7 @@ from synapse.logging.opentracing import (
     set_tag,
 )
 from synapse.types import JsonDict, Requester, StreamKeyType, UserID, get_domain_from_id
-from synapse.util import json_encoder
+from synapse.util.json import json_encoder
 from synapse.util.stringutils import random_string

 if TYPE_CHECKING:

View File

@@ -44,9 +44,9 @@ from synapse.types import (
     get_domain_from_id,
     get_verify_key_from_cross_signing_key,
 )
-from synapse.util import json_decoder
 from synapse.util.async_helpers import Linearizer, concurrently_execute
 from synapse.util.cancellation import cancellable
+from synapse.util.json import json_decoder
 from synapse.util.retryutils import (
     NotRetryingDestination,
     filter_destinations_by_retry_limiter,
@@ -57,7 +57,6 @@ if TYPE_CHECKING:

 logger = logging.getLogger(__name__)

-
 ONE_TIME_KEY_UPLOAD = "one_time_key_upload_lock"
@@ -848,14 +847,22 @@ class E2eKeysHandler:
         """
         time_now = self.clock.time_msec()

-        # TODO: Validate the JSON to make sure it has the right keys.
         device_keys = keys.get("device_keys", None)
         if device_keys:
+            log_kv(
+                {
+                    "message": "Updating device_keys for user.",
+                    "user_id": user_id,
+                    "device_id": device_id,
+                }
+            )
             await self.upload_device_keys_for_user(
                 user_id=user_id,
                 device_id=device_id,
                 keys={"device_keys": device_keys},
             )
+        else:
+            log_kv({"message": "Did not update device_keys", "reason": "not a dict"})

         one_time_keys = keys.get("one_time_keys", None)
         if one_time_keys:
@@ -873,10 +880,9 @@ class E2eKeysHandler:
             log_kv(
                 {"message": "Did not update one_time_keys", "reason": "no keys given"}
             )
-        fallback_keys = keys.get("fallback_keys") or keys.get(
-            "org.matrix.msc2732.fallback_keys"
-        )
-        if fallback_keys and isinstance(fallback_keys, dict):
+
+        fallback_keys = keys.get("fallback_keys")
+        if fallback_keys:
             log_kv(
                 {
                     "message": "Updating fallback_keys for device.",
@@ -885,8 +891,6 @@ class E2eKeysHandler:
                 }
             )
             await self.store.set_e2e_fallback_keys(user_id, device_id, fallback_keys)
-        elif fallback_keys:
-            log_kv({"message": "Did not update fallback_keys", "reason": "not a dict"})
         else:
             log_kv(
                 {"message": "Did not update fallback_keys", "reason": "no keys given"}

View File

@@ -248,9 +248,10 @@ class FederationEventHandler:
             self.room_queues[room_id].append((pdu, origin))
             return

-        # If we're not in the room just ditch the event entirely. This is
-        # probably an old server that has come back and thinks we're still in
-        # the room (or we've been rejoined to the room by a state reset).
+        # If we're not in the room just ditch the event entirely (and not
+        # invited). This is probably an old server that has come back and thinks
+        # we're still in the room (or we've been rejoined to the room by a state
+        # reset).
         #
         # Note that if we were never in the room then we would have already
         # dropped the event, since we wouldn't know the room version.
@@ -258,6 +259,43 @@ class FederationEventHandler:
             room_id, self.server_name
         )
         if not is_in_room:
+            # Check if this is a leave event rescinding an invite
+            if (
+                pdu.type == EventTypes.Member
+                and pdu.membership == Membership.LEAVE
+                and pdu.state_key != pdu.sender
+                and self._is_mine_id(pdu.state_key)
+            ):
+                (
+                    membership,
+                    membership_event_id,
+                ) = await self._store.get_local_current_membership_for_user_in_room(
+                    pdu.state_key, pdu.room_id
+                )
+                if (
+                    membership == Membership.INVITE
+                    and membership_event_id
+                    and membership_event_id
+                    in pdu.auth_event_ids()  # The invite should be in the auth events of the rescission.
+                ):
+                    invite_event = await self._store.get_event(
+                        membership_event_id, allow_none=True
+                    )
+                    # We cannot fully auth the rescission event, but we can
+                    # check if the sender of the leave event is the same as the
+                    # invite.
+                    #
+                    # Technically, a room admin could rescind the invite, but we
+                    # have no way of knowing who is and isn't a room admin.
+                    if invite_event and pdu.sender == invite_event.sender:
+                        # Handle the rescission event
+                        pdu.internal_metadata.outlier = True
+                        pdu.internal_metadata.out_of_band_membership = True
+                        context = EventContext.for_outlier(self._storage_controllers)
+                        await self.persist_events_and_notify(room_id, [(pdu, context)])
+                        return
+
             logger.info(
                 "Ignoring PDU from %s as we're not in the room",
                 origin,

View File

@@ -39,8 +39,8 @@ from synapse.http import RequestTimedOutError
 from synapse.http.client import SimpleHttpClient
 from synapse.http.site import SynapseRequest
 from synapse.types import JsonDict, Requester
-from synapse.util import json_decoder
 from synapse.util.hash import sha256_and_url_safe_base64
+from synapse.util.json import json_decoder
 from synapse.util.stringutils import (
     assert_valid_client_secret,
     random_string,

View File

@@ -81,9 +81,10 @@ from synapse.types import (
     create_requester,
 )
 from synapse.types.state import StateFilter
-from synapse.util import json_decoder, json_encoder, log_failure, unwrapFirstError
+from synapse.util import log_failure, unwrapFirstError
 from synapse.util.async_helpers import Linearizer, gather_results
 from synapse.util.caches.expiringcache import ExpiringCache
+from synapse.util.json import json_decoder, json_encoder
 from synapse.util.metrics import measure_func
 from synapse.visibility import get_effective_room_visibility_from_state
@@ -1013,14 +1014,37 @@ class EventCreationHandler:
             await self.clock.sleep(random.randint(1, 10))
             raise ShadowBanError()

-        if ratelimit:
+        room_version = None
+        if (
+            event_dict["type"] == EventTypes.Redaction
+            and "redacts" in event_dict["content"]
+            and self.hs.config.experimental.msc4169_enabled
+        ):
             room_id = event_dict["room_id"]
             try:
                 room_version = await self.store.get_room_version(room_id)
             except NotFoundError:
+                # The room doesn't exist.
                 raise AuthError(403, f"User {requester.user} not in room {room_id}")

+            if not room_version.updated_redaction_rules:
+                # Legacy room versions need the "redacts" field outside of the event's
+                # content. However clients may still send it within the content, so move
+                # the field if necessary for compatibility.
+                redacts = event_dict.get("redacts") or event_dict["content"].pop(
+                    "redacts", None
+                )
+                if redacts is not None and "redacts" not in event_dict:
+                    event_dict["redacts"] = redacts
+
+        if ratelimit:
+            if room_version is None:
+                room_id = event_dict["room_id"]
+                try:
+                    room_version = await self.store.get_room_version(room_id)
+                except NotFoundError:
+                    raise AuthError(403, f"User {requester.user} not in room {room_id}")
+
             if room_version.updated_redaction_rules:
                 redacts = event_dict["content"].get("redacts")
             else:
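
Note: the compatibility shim above can be exercised in isolation. A sketch of just the field move for room versions that expect "redacts" at the event's top level; the function name is invented, and the dict shape follows the code above.

    from typing import Any, Dict

    def move_redacts_for_legacy_rooms(event_dict: Dict[str, Any]) -> Dict[str, Any]:
        """Hoist "redacts" out of content when the room version keeps it at
        the top level (mirrors the branch above)."""
        redacts = event_dict.get("redacts") or event_dict["content"].pop(
            "redacts", None
        )
        if redacts is not None and "redacts" not in event_dict:
            event_dict["redacts"] = redacts
        return event_dict

    event = {
        "type": "m.room.redaction",
        "room_id": "!room:example.org",
        "content": {"redacts": "$target"},
    }
    print(move_redacts_for_legacy_rooms(event))
    # {'type': 'm.room.redaction', 'room_id': '!room:example.org',
    #  'content': {}, 'redacts': '$target'}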

View File

@@ -67,8 +67,9 @@ from synapse.http.site import SynapseRequest
 from synapse.logging.context import make_deferred_yieldable
 from synapse.module_api import ModuleApi
 from synapse.types import JsonDict, UserID, map_username_to_mxid_localpart
-from synapse.util import Clock, json_decoder
 from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
+from synapse.util.clock import Clock
+from synapse.util.json import json_decoder
 from synapse.util.macaroons import MacaroonGenerator, OidcSessionData
 from synapse.util.templates import _localpart_from_email_filter

View File

@@ -173,6 +173,18 @@ state_transition_counter = Counter(
     labelnames=["locality", "from", "to", SERVER_NAME_LABEL],
 )

+presence_user_to_current_state_size_gauge = LaterGauge(
+    name="synapse_handlers_presence_user_to_current_state_size",
+    desc="",
+    labelnames=[SERVER_NAME_LABEL],
+)
+
+presence_wheel_timer_size_gauge = LaterGauge(
+    name="synapse_handlers_presence_wheel_timer_size",
+    desc="",
+    labelnames=[SERVER_NAME_LABEL],
+)
+
 # If a user was last active in the last LAST_ACTIVE_GRANULARITY, consider them
 # "currently_active"
 LAST_ACTIVE_GRANULARITY = 60 * 1000
@@ -529,7 +541,7 @@ class WorkerPresenceHandler(BasePresenceHandler):
             self.send_stop_syncing, UPDATE_SYNCING_USERS_MS
         )

-        hs.get_reactor().addSystemEventTrigger(
+        hs.get_clock().add_system_event_trigger(
             "before",
             "shutdown",
             run_as_background_process,
@@ -779,11 +791,9 @@ class PresenceHandler(BasePresenceHandler):
             EduTypes.PRESENCE, self.incoming_presence
         )

-        LaterGauge(
-            name="synapse_handlers_presence_user_to_current_state_size",
-            desc="",
-            labelnames=[SERVER_NAME_LABEL],
-            caller=lambda: {(self.server_name,): len(self.user_to_current_state)},
+        presence_user_to_current_state_size_gauge.register_hook(
+            homeserver_instance_id=hs.get_instance_id(),
+            hook=lambda: {(self.server_name,): len(self.user_to_current_state)},
         )

         # The per-device presence state, maps user to devices to per-device presence state.
@@ -832,7 +842,7 @@ class PresenceHandler(BasePresenceHandler):
         # have not yet been persisted
         self.unpersisted_users_changes: Set[str] = set()

-        hs.get_reactor().addSystemEventTrigger(
+        hs.get_clock().add_system_event_trigger(
             "before",
             "shutdown",
             run_as_background_process,
@@ -882,11 +892,9 @@ class PresenceHandler(BasePresenceHandler):
             60 * 1000,
         )

-        LaterGauge(
-            name="synapse_handlers_presence_wheel_timer_size",
-            desc="",
-            labelnames=[SERVER_NAME_LABEL],
-            caller=lambda: {(self.server_name,): len(self.wheel_timer)},
+        presence_wheel_timer_size_gauge.register_hook(
+            homeserver_instance_id=hs.get_instance_id(),
+            hook=lambda: {(self.server_name,): len(self.wheel_timer)},
         )

         # Used to handle sending of presence to newly joined users/servers

View File

@@ -211,7 +211,7 @@ class SlidingSyncHandler:
         Args:
             sync_config: Sync configuration
-            to_token: The point in the stream to sync up to.
+            to_token: The latest point in the stream to sync up to.
             from_token: The point in the stream to sync from. Token of the end of the
                 previous batch. May be `None` if this is the initial sync request.
         """

View File

@@ -27,7 +27,7 @@ from typing import (
     cast,
 )

-from typing_extensions import assert_never
+from typing_extensions import TypeAlias, assert_never

 from synapse.api.constants import AccountDataTypes, EduTypes
 from synapse.handlers.receipts import ReceiptEventSource
@@ -40,6 +40,7 @@ from synapse.types import (
     SlidingSyncStreamToken,
     StrCollection,
     StreamToken,
+    ThreadSubscriptionsToken,
 )
 from synapse.types.handlers.sliding_sync import (
     HaveSentRoomFlag,
@@ -54,6 +55,13 @@ from synapse.util.async_helpers import (
     gather_optional_coroutines,
 )

+_ThreadSubscription: TypeAlias = (
+    SlidingSyncResult.Extensions.ThreadSubscriptionsExtension.ThreadSubscription
+)
+_ThreadUnsubscription: TypeAlias = (
+    SlidingSyncResult.Extensions.ThreadSubscriptionsExtension.ThreadUnsubscription
+)
+
 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -68,6 +76,7 @@ class SlidingSyncExtensionHandler:
         self.event_sources = hs.get_event_sources()
         self.device_handler = hs.get_device_handler()
         self.push_rules_handler = hs.get_push_rules_handler()
+        self._enable_thread_subscriptions = hs.config.experimental.msc4306_enabled

     @trace
     async def get_extensions_response(
@@ -93,7 +102,7 @@ class SlidingSyncExtensionHandler:
             actual_room_ids: The actual room IDs in the the Sliding Sync response.
             actual_room_response_map: A map of room ID to room results in the the
                 Sliding Sync response.
-            to_token: The point in the stream to sync up to.
+            to_token: The latest point in the stream to sync up to.
             from_token: The point in the stream to sync from.
         """
@@ -156,18 +165,32 @@ class SlidingSyncExtensionHandler:
                 from_token=from_token,
             )

+        thread_subs_coro = None
+        if (
+            sync_config.extensions.thread_subscriptions is not None
+            and self._enable_thread_subscriptions
+        ):
+            thread_subs_coro = self.get_thread_subscriptions_extension_response(
+                sync_config=sync_config,
+                thread_subscriptions_request=sync_config.extensions.thread_subscriptions,
+                to_token=to_token,
+                from_token=from_token,
+            )
+
         (
             to_device_response,
             e2ee_response,
             account_data_response,
             receipts_response,
             typing_response,
+            thread_subs_response,
         ) = await gather_optional_coroutines(
             to_device_coro,
             e2ee_coro,
             account_data_coro,
             receipts_coro,
             typing_coro,
+            thread_subs_coro,
         )

         return SlidingSyncResult.Extensions(
@@ -176,6 +199,7 @@ class SlidingSyncExtensionHandler:
             account_data=account_data_response,
             receipts=receipts_response,
             typing=typing_response,
+            thread_subscriptions=thread_subs_response,
         )

     def find_relevant_room_ids_for_extension(
@@ -877,3 +901,72 @@ class SlidingSyncExtensionHandler:
         return SlidingSyncResult.Extensions.TypingExtension(
             room_id_to_typing_map=room_id_to_typing_map,
         )
+
+    async def get_thread_subscriptions_extension_response(
+        self,
+        sync_config: SlidingSyncConfig,
+        thread_subscriptions_request: SlidingSyncConfig.Extensions.ThreadSubscriptionsExtension,
+        to_token: StreamToken,
+        from_token: Optional[SlidingSyncStreamToken],
+    ) -> Optional[SlidingSyncResult.Extensions.ThreadSubscriptionsExtension]:
+        """Handle Thread Subscriptions extension (MSC4308)
+
+        Args:
+            sync_config: Sync configuration
+            thread_subscriptions_request: The thread_subscriptions extension from the request
+            to_token: The point in the stream to sync up to.
+            from_token: The point in the stream to sync from.
+
+        Returns:
+            the response (None if empty or thread subscriptions are disabled)
+        """
+        if not thread_subscriptions_request.enabled:
+            return None
+
+        limit = thread_subscriptions_request.limit
+
+        if from_token:
+            from_stream_id = from_token.stream_token.thread_subscriptions_key
+        else:
+            from_stream_id = StreamToken.START.thread_subscriptions_key
+        to_stream_id = to_token.thread_subscriptions_key
+
+        updates = await self.store.get_latest_updated_thread_subscriptions_for_user(
+            user_id=sync_config.user.to_string(),
+            from_id=from_stream_id,
+            to_id=to_stream_id,
+            limit=limit,
+        )
+
+        if len(updates) == 0:
+            return None
+
+        subscribed_threads: Dict[str, Dict[str, _ThreadSubscription]] = {}
+        unsubscribed_threads: Dict[str, Dict[str, _ThreadUnsubscription]] = {}
+
+        for stream_id, room_id, thread_root_id, subscribed, automatic in updates:
+            if subscribed:
+                subscribed_threads.setdefault(room_id, {})[thread_root_id] = (
+                    _ThreadSubscription(
+                        automatic=automatic,
+                        bump_stamp=stream_id,
+                    )
+                )
+            else:
+                unsubscribed_threads.setdefault(room_id, {})[thread_root_id] = (
+                    _ThreadUnsubscription(bump_stamp=stream_id)
+                )
+
+        prev_batch = None
+        if len(updates) == limit:
+            # Tell the client about a potential gap where there may be more
+            # thread subscriptions for it to backpaginate.
+            # We subtract one because the 'later in the stream' bound is inclusive,
+            # and we already saw the element at index 0.
+            prev_batch = ThreadSubscriptionsToken(updates[0][0] - 1)
+
+        return SlidingSyncResult.Extensions.ThreadSubscriptionsExtension(
+            subscribed=subscribed_threads,
+            unsubscribed=unsubscribed_threads,
+            prev_batch=prev_batch,
+        )
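
Note: a worked example of the prev_batch arithmetic at the end of that method, with invented stream ids. Per the comment in the code about already having seen the element at index 0, this assumes index 0 holds the numerically lowest (earliest) stream id in the page.

    # (stream_id, room_id, thread_root_id, subscribed, automatic) rows,
    # exactly `limit` of them, so earlier updates may have been clipped.
    limit = 3
    updates = [
        (45, "!room:a", "$thread1", True, False),
        (46, "!room:a", "$thread2", True, True),
        (47, "!room:b", "$thread3", False, False),
    ]

    prev_batch_stream_id = None
    if len(updates) == limit:
        # The backpagination bound is inclusive and stream id 45 was already
        # returned in this page, so the gap token points one below it.
        prev_batch_stream_id = updates[0][0] - 1

    assert prev_batch_stream_id == 44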

View File

@@ -20,7 +20,6 @@
 #
 import itertools
 import logging
-from enum import Enum
 from typing import (
     TYPE_CHECKING,
     AbstractSet,
@@ -28,14 +27,11 @@ from typing import (
     Dict,
     FrozenSet,
     List,
-    Literal,
     Mapping,
     Optional,
     Sequence,
     Set,
     Tuple,
-    Union,
-    overload,
 )

 import attr
@@ -120,25 +116,6 @@ LAZY_LOADED_MEMBERS_CACHE_MAX_SIZE = 100
 SyncRequestKey = Tuple[Any, ...]

-
-class SyncVersion(Enum):
-    """
-    Enum for specifying the version of sync request. This is used to key which type of
-    sync response that we are generating.
-
-    This is different than the `sync_type` you might see used in other code below; which
-    specifies the sub-type sync request (e.g. initial_sync, full_state_sync,
-    incremental_sync) and is really only relevant for the `/sync` v2 endpoint.
-    """
-
-    # These string values are semantically significant because they are used in the the
-    # metrics
-
-    # Traditional `/sync` endpoint
-    SYNC_V2 = "sync_v2"
-    # Part of MSC3575 Sliding Sync
-    E2EE_SYNC = "e2ee_sync"
-

 @attr.s(slots=True, frozen=True, auto_attribs=True)
 class SyncConfig:
     user: UserID
@@ -308,26 +285,6 @@ class SyncResult:
         )


-@attr.s(slots=True, frozen=True, auto_attribs=True)
-class E2eeSyncResult:
-    """
-    Attributes:
-        next_batch: Token for the next sync
-        to_device: List of direct messages for the device.
-        device_lists: List of user_ids whose devices have changed
-        device_one_time_keys_count: Dict of algorithm to count for one time keys
-            for this device
-        device_unused_fallback_key_types: List of key types that have an unused fallback
-            key
-    """
-
-    next_batch: StreamToken
-    to_device: List[JsonDict]
-    device_lists: DeviceListUpdates
-    device_one_time_keys_count: JsonMapping
-    device_unused_fallback_key_types: List[str]
-
-
 class SyncHandler:
     def __init__(self, hs: "HomeServer"):
         self.server_name = hs.hostname
@@ -373,52 +330,15 @@ class SyncHandler:
         self.rooms_to_exclude_globally = hs.config.server.rooms_to_exclude_from_sync

-    @overload
     async def wait_for_sync_for_user(
         self,
         requester: Requester,
         sync_config: SyncConfig,
-        sync_version: Literal[SyncVersion.SYNC_V2],
         request_key: SyncRequestKey,
         since_token: Optional[StreamToken] = None,
         timeout: int = 0,
         full_state: bool = False,
-    ) -> SyncResult: ...
-
-    @overload
-    async def wait_for_sync_for_user(
-        self,
-        requester: Requester,
-        sync_config: SyncConfig,
-        sync_version: Literal[SyncVersion.E2EE_SYNC],
-        request_key: SyncRequestKey,
-        since_token: Optional[StreamToken] = None,
-        timeout: int = 0,
-        full_state: bool = False,
-    ) -> E2eeSyncResult: ...
-
-    @overload
-    async def wait_for_sync_for_user(
-        self,
-        requester: Requester,
-        sync_config: SyncConfig,
-        sync_version: SyncVersion,
-        request_key: SyncRequestKey,
-        since_token: Optional[StreamToken] = None,
-        timeout: int = 0,
-        full_state: bool = False,
-    ) -> Union[SyncResult, E2eeSyncResult]: ...
-
-    async def wait_for_sync_for_user(
-        self,
-        requester: Requester,
-        sync_config: SyncConfig,
-        sync_version: SyncVersion,
-        request_key: SyncRequestKey,
-        since_token: Optional[StreamToken] = None,
-        timeout: int = 0,
-        full_state: bool = False,
-    ) -> Union[SyncResult, E2eeSyncResult]:
+    ) -> SyncResult:
         """Get the sync for a client if we have new data for it now. Otherwise
         wait for new data to arrive on the server. If the timeout expires, then
         return an empty sync result.
@@ -433,8 +353,7 @@ class SyncHandler:
             full_state: Whether to return the full state for each room.

         Returns:
-            When `SyncVersion.SYNC_V2`, returns a full `SyncResult`.
-            When `SyncVersion.E2EE_SYNC`, returns a `E2eeSyncResult`.
+            returns a full `SyncResult`.
         """
@@ -446,7 +365,6 @@ class SyncHandler:
             request_key,
             self._wait_for_sync_for_user,
             sync_config,
-            sync_version,
             since_token,
             timeout,
             full_state,
@@ -455,48 +373,14 @@ class SyncHandler:
         logger.debug("Returning sync response for %s", user_id)
         return res

-    @overload
     async def _wait_for_sync_for_user(
         self,
         sync_config: SyncConfig,
-        sync_version: Literal[SyncVersion.SYNC_V2],
         since_token: Optional[StreamToken],
         timeout: int,
         full_state: bool,
         cache_context: ResponseCacheContext[SyncRequestKey],
-    ) -> SyncResult: ...
-
-    @overload
-    async def _wait_for_sync_for_user(
-        self,
-        sync_config: SyncConfig,
-        sync_version: Literal[SyncVersion.E2EE_SYNC],
-        since_token: Optional[StreamToken],
-        timeout: int,
-        full_state: bool,
-        cache_context: ResponseCacheContext[SyncRequestKey],
-    ) -> E2eeSyncResult: ...
-
-    @overload
-    async def _wait_for_sync_for_user(
-        self,
-        sync_config: SyncConfig,
-        sync_version: SyncVersion,
-        since_token: Optional[StreamToken],
-        timeout: int,
-        full_state: bool,
-        cache_context: ResponseCacheContext[SyncRequestKey],
-    ) -> Union[SyncResult, E2eeSyncResult]: ...
-
-    async def _wait_for_sync_for_user(
-        self,
-        sync_config: SyncConfig,
-        sync_version: SyncVersion,
-        since_token: Optional[StreamToken],
-        timeout: int,
-        full_state: bool,
-        cache_context: ResponseCacheContext[SyncRequestKey],
-    ) -> Union[SyncResult, E2eeSyncResult]:
+    ) -> SyncResult:
         """The start of the machinery that produces a /sync response.

         See https://spec.matrix.org/v1.1/client-server-api/#syncing for full details.
@@ -517,7 +401,7 @@ class SyncHandler:
         else:
             sync_type = "incremental_sync"

-        sync_label = f"{sync_version}:{sync_type}"
+        sync_label = f"sync_v2:{sync_type}"

         context = current_context()
         if context:
@@ -578,19 +462,15 @@ class SyncHandler:
         if timeout == 0 or since_token is None or full_state:
             # we are going to return immediately, so don't bother calling
             # notifier.wait_for_events.
-            result: Union[
-                SyncResult, E2eeSyncResult
-            ] = await self.current_sync_for_user(
-                sync_config, sync_version, since_token, full_state=full_state
+            result = await self.current_sync_for_user(
+                sync_config, since_token, full_state=full_state
             )
         else:
             # Otherwise, we wait for something to happen and report it to the user.
             async def current_sync_callback(
                 before_token: StreamToken, after_token: StreamToken
-            ) -> Union[SyncResult, E2eeSyncResult]:
-                return await self.current_sync_for_user(
-                    sync_config, sync_version, since_token
-                )
+            ) -> SyncResult:
+                return await self.current_sync_for_user(sync_config, since_token)

             result = await self.notifier.wait_for_events(
                 sync_config.user.to_string(),
@@ -623,43 +503,15 @@ class SyncHandler:
         return result

-    @overload
     async def current_sync_for_user(
         self,
         sync_config: SyncConfig,
-        sync_version: Literal[SyncVersion.SYNC_V2],
         since_token: Optional[StreamToken] = None,
         full_state: bool = False,
-    ) -> SyncResult: ...
-
-    @overload
-    async def current_sync_for_user(
-        self,
-        sync_config: SyncConfig,
-        sync_version: Literal[SyncVersion.E2EE_SYNC],
-        since_token: Optional[StreamToken] = None,
-        full_state: bool = False,
-    ) -> E2eeSyncResult: ...
-
-    @overload
-    async def current_sync_for_user(
-        self,
-        sync_config: SyncConfig,
-        sync_version: SyncVersion,
-        since_token: Optional[StreamToken] = None,
-        full_state: bool = False,
-    ) -> Union[SyncResult, E2eeSyncResult]: ...
-
-    async def current_sync_for_user(
-        self,
-        sync_config: SyncConfig,
-        sync_version: SyncVersion,
-        since_token: Optional[StreamToken] = None,
-        full_state: bool = False,
-    ) -> Union[SyncResult, E2eeSyncResult]:
+    ) -> SyncResult:
         """
         Generates the response body of a sync result, represented as a
-        `SyncResult`/`E2eeSyncResult`.
+        `SyncResult`.

         This is a wrapper around `generate_sync_result` which starts an open tracing
         span to track the sync. See `generate_sync_result` for the next part of your
@@ -672,28 +524,15 @@ class SyncHandler:
             full_state: Whether to return the full state for each room.

         Returns:
-            When `SyncVersion.SYNC_V2`, returns a full `SyncResult`.
-            When `SyncVersion.E2EE_SYNC`, returns a `E2eeSyncResult`.
+            returns a full `SyncResult`.
         """
         with start_active_span("sync.current_sync_for_user"):
             log_kv({"since_token": since_token})

             # Go through the `/sync` v2 path
-            if sync_version == SyncVersion.SYNC_V2:
-                sync_result: Union[
-                    SyncResult, E2eeSyncResult
-                ] = await self.generate_sync_result(
-                    sync_config, since_token, full_state
-                )
-            # Go through the MSC3575 Sliding Sync `/sync/e2ee` path
-            elif sync_version == SyncVersion.E2EE_SYNC:
-                sync_result = await self.generate_e2ee_sync_result(
-                    sync_config, since_token
-                )
-            else:
-                raise Exception(
-                    f"Unknown sync_version (this is a Synapse problem): {sync_version}"
-                )
+            sync_result = await self.generate_sync_result(
+                sync_config, since_token, full_state
+            )

             set_tag(SynapseTags.SYNC_RESULT, bool(sync_result))
             return sync_result
@@ -1968,102 +1807,6 @@ class SyncHandler:
             next_batch=sync_result_builder.now_token,
         )

-    async def generate_e2ee_sync_result(
-        self,
-        sync_config: SyncConfig,
-        since_token: Optional[StreamToken] = None,
-    ) -> E2eeSyncResult:
-        """
-        Generates the response body of a MSC3575 Sliding Sync `/sync/e2ee` result.
-
-        This is represented by a `E2eeSyncResult` struct, which is built from small
-        pieces using a `SyncResultBuilder`. The `sync_result_builder` is passed as a
-        mutable ("inout") parameter to various helper functions. These retrieve and
-        process the data which forms the sync body, often writing to the
-        `sync_result_builder` to store their output.
-
-        At the end, we transfer data from the `sync_result_builder` to a new `E2eeSyncResult`
-        instance to signify that the sync calculation is complete.
-        """
-        user_id = sync_config.user.to_string()
-        app_service = self.store.get_app_service_by_user_id(user_id)
-        if app_service:
-            # We no longer support AS users using /sync directly.
-            # See https://github.com/matrix-org/matrix-doc/issues/1144
-            raise NotImplementedError()
-
-        sync_result_builder = await self.get_sync_result_builder(
-            sync_config,
-            since_token,
-            full_state=False,
-        )
-
-        # 1. Calculate `to_device` events
-        await self._generate_sync_entry_for_to_device(sync_result_builder)
-
-        # 2. Calculate `device_lists`
-        # Device list updates are sent if a since token is provided.
-        device_lists = DeviceListUpdates()
-        include_device_list_updates = bool(since_token and since_token.device_list_key)
-        if include_device_list_updates:
-            # Note that _generate_sync_entry_for_rooms sets sync_result_builder.joined, which
-            # is used in calculate_user_changes below.
-            #
-            # TODO: Running `_generate_sync_entry_for_rooms()` is a lot of work just to
-            # figure out the membership changes/derived info needed for
-            # `_generate_sync_entry_for_device_list()`. In the future, we should try to
-            # refactor this away.
-            (
-                newly_joined_rooms,
-                newly_left_rooms,
-            ) = await self._generate_sync_entry_for_rooms(sync_result_builder)
-
-            # This uses the sync_result_builder.joined which is set in
-            # `_generate_sync_entry_for_rooms`, if that didn't find any joined
-            # rooms for some reason it is a no-op.
-            (
-                newly_joined_or_invited_or_knocked_users,
-                newly_left_users,
-            ) = sync_result_builder.calculate_user_changes()
-
-            # include_device_list_updates can only be True if we have a
-            # since token.
-            assert since_token is not None
-            device_lists = await self._device_handler.generate_sync_entry_for_device_list(
-                user_id=user_id,
-                since_token=since_token,
-                now_token=sync_result_builder.now_token,
-                joined_room_ids=sync_result_builder.joined_room_ids,
-                newly_joined_rooms=newly_joined_rooms,
-                newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
-                newly_left_rooms=newly_left_rooms,
-                newly_left_users=newly_left_users,
-            )
-
-        # 3. Calculate `device_one_time_keys_count` and `device_unused_fallback_key_types`
-        device_id = sync_config.device_id
-        one_time_keys_count: JsonMapping = {}
-        unused_fallback_key_types: List[str] = []
-        if device_id:
-            # TODO: We should have a way to let clients differentiate between the states of:
-            #   * no change in OTK count since the provided since token
-            #   * the server has zero OTKs left for this device
-            #   Spec issue: https://github.com/matrix-org/matrix-doc/issues/3298
-            one_time_keys_count = await self.store.count_e2e_one_time_keys(
-                user_id, device_id
-            )
-            unused_fallback_key_types = list(
-                await self.store.get_e2e_unused_fallback_key_types(user_id, device_id)
-            )
-
-        return E2eeSyncResult(
-            to_device=sync_result_builder.to_device,
-            device_lists=device_lists,
-            device_one_time_keys_count=one_time_keys_count,
-            device_unused_fallback_key_types=unused_fallback_key_types,
-            next_batch=sync_result_builder.now_token,
-        )

     async def get_sync_result_builder(
         self,
         sync_config: SyncConfig,

View File

@@ -9,7 +9,7 @@ from synapse.storage.databases.main.thread_subscriptions import (
     AutomaticSubscriptionConflicted,
     ThreadSubscription,
 )
-from synapse.types import EventOrderings, UserID
+from synapse.types import EventOrderings, StreamKeyType, UserID

 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -22,6 +22,7 @@ class ThreadSubscriptionsHandler:
         self.store = hs.get_datastores().main
         self.event_handler = hs.get_event_handler()
         self.auth = hs.get_auth()
+        self._notifier = hs.get_notifier()

     async def get_thread_subscription_settings(
         self,
@@ -132,6 +133,15 @@ class ThreadSubscriptionsHandler:
                 errcode=Codes.MSC4306_CONFLICTING_UNSUBSCRIPTION,
             )

+        if outcome is not None:
+            # wake up user streams (e.g. sliding sync) on the same worker
+            self._notifier.on_new_event(
+                StreamKeyType.THREAD_SUBSCRIPTIONS,
+                # outcome is a stream_id
+                outcome,
+                users=[user_id.to_string()],
+            )
+
         return outcome

     async def unsubscribe_user_from_thread(
@@ -162,8 +172,19 @@ class ThreadSubscriptionsHandler:
             logger.info("rejecting thread subscriptions change (thread not accessible)")
             raise NotFoundError("No such thread root")

-        return await self.store.unsubscribe_user_from_thread(
+        outcome = await self.store.unsubscribe_user_from_thread(
             user_id.to_string(),
             event.room_id,
             thread_root_event_id,
         )
+        if outcome is not None:
+            # wake up user streams (e.g. sliding sync) on the same worker
+            self._notifier.on_new_event(
+                StreamKeyType.THREAD_SUBSCRIPTIONS,
+                # outcome is a stream_id
+                outcome,
+                users=[user_id.to_string()],
+            )
+        return outcome

View File

@@ -27,7 +27,7 @@ from twisted.web.client import PartialDownloadError

 from synapse.api.constants import LoginType
 from synapse.api.errors import Codes, LoginError, SynapseError
-from synapse.util import json_decoder
+from synapse.util.json import json_decoder

 if TYPE_CHECKING:
     from synapse.server import HomeServer

View File

@@ -87,8 +87,8 @@ from synapse.logging.context import make_deferred_yieldable, run_in_background
 from synapse.logging.opentracing import set_tag, start_active_span, tags
 from synapse.metrics import SERVER_NAME_LABEL
 from synapse.types import ISynapseReactor, StrSequence
-from synapse.util import json_decoder
 from synapse.util.async_helpers import timeout_deferred
+from synapse.util.json import json_decoder

 if TYPE_CHECKING:
     from synapse.server import HomeServer

View File

@@ -49,7 +49,7 @@ from synapse.http.federation.well_known_resolver import WellKnownResolver
 from synapse.http.proxyagent import ProxyAgent
 from synapse.logging.context import make_deferred_yieldable, run_in_background
 from synapse.types import ISynapseReactor
-from synapse.util import Clock
+from synapse.util.clock import Clock

 logger = logging.getLogger(__name__)

View File

@@ -27,7 +27,6 @@ from typing import Callable, Dict, Optional, Tuple
 import attr
 
 from twisted.internet import defer
-from twisted.internet.interfaces import IReactorTime
 from twisted.web.client import RedirectAgent
 from twisted.web.http import stringToDatetime
 from twisted.web.http_headers import Headers
@@ -35,8 +34,10 @@ from twisted.web.iweb import IAgent, IResponse
 from synapse.http.client import BodyExceededMaxSize, read_body_with_max_size
 from synapse.logging.context import make_deferred_yieldable
-from synapse.util import Clock, json_decoder
+from synapse.types import ISynapseThreadlessReactor
 from synapse.util.caches.ttlcache import TTLCache
+from synapse.util.clock import Clock
+from synapse.util.json import json_decoder
 from synapse.util.metrics import Measure
 
 # period to cache .well-known results for by default
@@ -88,7 +89,7 @@ class WellKnownResolver:
     def __init__(
         self,
         server_name: str,
-        reactor: IReactorTime,
+        reactor: ISynapseThreadlessReactor,
         agent: IAgent,
         user_agent: bytes,
         well_known_cache: Optional[TTLCache[bytes, Optional[bytes]]] = None,

View File

@@ -89,8 +89,8 @@ from synapse.logging.context import make_deferred_yieldable, run_in_background
 from synapse.logging.opentracing import set_tag, start_active_span, tags
 from synapse.metrics import SERVER_NAME_LABEL
 from synapse.types import JsonDict
-from synapse.util import json_decoder
 from synapse.util.async_helpers import AwakenableSleeper, Linearizer, timeout_deferred
+from synapse.util.json import json_decoder
 from synapse.util.metrics import Measure
 from synapse.util.stringutils import parse_and_validate_server_name

View File

@@ -164,11 +164,13 @@ def _get_in_flight_counts() -> Mapping[Tuple[str, ...], int]:
     return counts
 
-LaterGauge(
+in_flight_requests = LaterGauge(
     name="synapse_http_server_in_flight_requests_count",
     desc="",
     labelnames=["method", "servlet", SERVER_NAME_LABEL],
-    caller=_get_in_flight_counts,
+)
+in_flight_requests.register_hook(
+    homeserver_instance_id=None, hook=_get_in_flight_counts
 )

View File

@@ -52,10 +52,11 @@ from zope.interface import implementer
 from twisted.internet import defer, interfaces, reactor
 from twisted.internet.defer import CancelledError
-from twisted.internet.interfaces import IReactorTime
 from twisted.python import failure
 from twisted.web import resource
 
+from synapse.types import ISynapseThreadlessReactor
+
 try:
     from twisted.web.pages import notFound
 except ImportError:
@@ -77,10 +78,11 @@ from synapse.api.errors import (
 from synapse.config.homeserver import HomeServerConfig
 from synapse.logging.context import defer_to_thread, preserve_fn, run_in_background
 from synapse.logging.opentracing import active_span, start_active_span, trace_servlet
-from synapse.util import Clock, json_encoder
 from synapse.util.caches import intern_dict
 from synapse.util.cancellation import is_function_cancellable
+from synapse.util.clock import Clock
 from synapse.util.iterutils import chunk_seq
+from synapse.util.json import json_encoder
 
 if TYPE_CHECKING:
     import opentracing
@@ -410,7 +412,7 @@ class DirectServeJsonResource(_AsyncResource):
         clock: Optional[Clock] = None,
     ):
         if clock is None:
-            clock = Clock(cast(IReactorTime, reactor))
+            clock = Clock(cast(ISynapseThreadlessReactor, reactor))
 
         super().__init__(clock, extract_context)
         self.canonical_json = canonical_json
@@ -589,7 +591,7 @@ class DirectServeHtmlResource(_AsyncResource):
         clock: Optional[Clock] = None,
     ):
         if clock is None:
-            clock = Clock(cast(IReactorTime, reactor))
+            clock = Clock(cast(ISynapseThreadlessReactor, reactor))
 
         super().__init__(clock, extract_context)

View File

@@ -51,7 +51,7 @@ from synapse.api.errors import Codes, SynapseError
 from synapse.http import redact_uri
 from synapse.http.server import HttpServer
 from synapse.types import JsonDict, RoomAlias, RoomID, StrCollection
-from synapse.util import json_decoder
+from synapse.util.json import json_decoder
 
 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -130,6 +130,16 @@ def parse_integer(
     return parse_integer_from_args(args, name, default, required, negative)
 
+@overload
+def parse_integer_from_args(
+    args: Mapping[bytes, Sequence[bytes]],
+    name: str,
+    default: int,
+    required: Literal[False] = False,
+    negative: bool = False,
+) -> int: ...
+
 @overload
 def parse_integer_from_args(
     args: Mapping[bytes, Sequence[bytes]],
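The new first `@overload` exists so that a caller passing a non-`None` `default` (with `required` left as `False`) gets a plain `int` back rather than `Optional[int]`. The same typing pattern in miniature (names hypothetical):

```python
from typing import Literal, Optional, overload

@overload
def parse(raw: Optional[str], default: int, required: Literal[False] = False) -> int: ...
@overload
def parse(raw: Optional[str], default: Optional[int] = None, required: bool = False) -> Optional[int]: ...

def parse(raw: Optional[str], default: Optional[int] = None, required: bool = False) -> Optional[int]:
    if raw is not None:
        return int(raw)
    if required:
        raise ValueError("missing required value")
    return default

n: int = parse(None, default=5)  # matches the first overload, so no Optional to narrow
```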

View File

@@ -227,7 +227,16 @@ LoggingContextOrSentinel = Union["LoggingContext", "_Sentinel"]
 class _Sentinel:
-    """Sentinel to represent the root context"""
+    """
+    Sentinel to represent the root context
+
+    This should only be used for tasks outside of Synapse, like when we yield control
+    back to the Twisted reactor (event loop), so we don't leak the current logging
+    context to other tasks that are scheduled next in the event loop.
+
+    Nothing from the Synapse homeserver should be logged with the sentinel context, i.e.
+    we should always know which server the logs are coming from.
+    """
 
     __slots__ = ["previous_context", "finished", "request", "tag"]
@@ -616,9 +625,17 @@ class LoggingContextFilter(logging.Filter):
 class PreserveLoggingContext:
-    """Context manager which replaces the logging context
-
-    The previous logging context is restored on exit."""
+    """
+    Context manager which replaces the logging context
+
+    The previous logging context is restored on exit.
+
+    `make_deferred_yieldable` is pretty much equivalent to using `with
+    PreserveLoggingContext():` (using the default sentinel context), i.e. it clears the
+    logcontext before awaiting (and so before execution passes back to the reactor) and
+    restores the old context once the awaitable completes (execution passes from the
+    reactor back to the code).
+    """
 
     __slots__ = ["_old_context", "_new_context"]
@@ -784,6 +801,17 @@ def run_in_background(
     return from the function, and that the sentinel context is set once the
     deferred returned by the function completes.
 
+    To explain how the log contexts work here:
+
+    - When `run_in_background` is called, the calling logcontext is stored
+      ("original"), we kick off the background task in the current context, and we
+      restore that original context before returning.
+    - For a completed deferred, that's the end of the story.
+    - For an incomplete deferred, when the background task finishes, we don't want
+      to leak our context into the reactor, where it would erroneously get attached
+      to the next operation picked up by the event loop. We add a callback to the
+      deferred which will clear the logging context after it finishes and yields
+      control back to the reactor.
+
     Useful for wrapping functions that return a deferred or coroutine, which you don't
     yield or await on (for instance because you want to pass it to
     deferred.gatherResults()).
@@ -795,9 +823,15 @@ def run_in_background(
     `f` doesn't raise any deferred exceptions, otherwise a scary-looking
     CRITICAL error about an unhandled error will be logged without much
     indication about where it came from.
+
+    Returns:
+        Deferred which returns the result of func, or `None` if func raises.
+        Note that the returned Deferred does not follow the synapse logcontext
+        rules.
     """
-    current = current_context()
+    calling_context = current_context()
     try:
+        # (kick off the task in the current context)
         res = f(*args, **kwargs)
     except Exception:
         # the assumption here is that the caller doesn't want to be disturbed
@@ -806,6 +840,9 @@ def run_in_background(
     # `res` may be a coroutine, `Deferred`, some other kind of awaitable, or a plain
     # value. Convert it to a `Deferred`.
+    #
+    # Wrapping the value in a deferred has the side effect of executing the coroutine,
+    # if it is one. If it's already a deferred, then we can just use that.
     d: "defer.Deferred[R]"
     if isinstance(res, typing.Coroutine):
         # Wrap the coroutine in a `Deferred`.
@@ -820,20 +857,38 @@ def run_in_background(
         # `res` is a plain value. Wrap it in a `Deferred`.
         d = defer.succeed(res)
 
+    # The deferred has already completed
     if d.called and not d.paused:
-        # The function should have maintained the logcontext, so we can
-        # optimise out the messing about
+        # If the function messes with logcontexts, we can assume it follows the Synapse
+        # logcontext rules (Rules for functions returning awaitables: "If the awaitable
+        # is already complete, the function returns with the same logcontext it started
+        # with."). If the function doesn't touch logcontexts at all, we can also assume
+        # the logcontext is unchanged.
+        #
+        # Either way, the function should have maintained the calling logcontext, so we
+        # can avoid messing with it further. Additionally, if the deferred has already
+        # completed, then it would be a mistake to add a deferred callback (below)
+        # to reset the logcontext to the sentinel logcontext, as that would run
+        # immediately (remember: our goal is to maintain the calling logcontext when we
+        # return).
         return d
 
-    # The function may have reset the context before returning, so
-    # we need to restore it now.
-    ctx = set_current_context(current)
+    # Since the function we called may follow the Synapse logcontext rules (Rules for
+    # functions returning awaitables: "If the awaitable is incomplete, the function
+    # clears the logcontext before returning"), the function may have reset the
+    # logcontext before returning, so we need to restore the calling logcontext now
+    # before we return ourselves.
+    #
+    # Our goal is to have the caller logcontext unchanged after firing off the
+    # background task and returning.
+    set_current_context(calling_context)
 
-    # The original context will be restored when the deferred
-    # completes, but there is nothing waiting for it, so it will
-    # get leaked into the reactor or some other function which
-    # wasn't expecting it. We therefore need to reset the context
-    # here.
+    # If the function we called is playing nice and following the Synapse logcontext
+    # rules, it will restore the original calling logcontext when the deferred completes;
+    # but there is nothing waiting for it, so it will get leaked into the reactor (which
+    # would then get picked up by the next thing the reactor does). We therefore need to
+    # reset the logcontext here (set the `sentinel` logcontext) before yielding control
+    # back to the reactor.
     #
     # (If this feels asymmetric, consider it this way: we are
     # effectively forking a new thread of execution. We are
@@ -841,7 +896,7 @@ def run_in_background(
     # which is supposed to have a single entry and exit point. But
     # by spawning off another deferred, we are effectively
     # adding a new exit point.)
-    d.addBoth(_set_context_cb, ctx)
+    d.addBoth(_set_context_cb, SENTINEL_CONTEXT)
 
     return d
@@ -855,57 +910,63 @@ def run_coroutine_in_background(
     Useful for wrapping coroutines that you don't yield or await on (for
     instance because you want to pass it to deferred.gatherResults()).
 
-    This is a special case of `run_in_background` where we can accept a
-    coroutine directly rather than a function. We can do this because coroutines
-    do not run until called, and so calling an async function without awaiting
-    cannot change the log contexts.
+    This is a special case of `run_in_background` where we can accept a coroutine
+    directly rather than a function. We can do this because coroutines do not continue
+    running once they have yielded.
+
+    This is an ergonomic helper so we can do this:
+    ```python
+    run_coroutine_in_background(func1(arg1))
+    ```
+
+    Rather than having to do this:
+    ```python
+    run_in_background(lambda: func1(arg1))
+    ```
     """
-
-    current = current_context()
-    d = defer.ensureDeferred(coroutine)
-
-    # The function may have reset the context before returning, so
-    # we need to restore it now.
-    ctx = set_current_context(current)
-
-    # The original context will be restored when the deferred
-    # completes, but there is nothing waiting for it, so it will
-    # get leaked into the reactor or some other function which
-    # wasn't expecting it. We therefore need to reset the context
-    # here.
-    #
-    # (If this feels asymmetric, consider it this way: we are
-    # effectively forking a new thread of execution. We are
-    # probably currently within a ``with LoggingContext()`` block,
-    # which is supposed to have a single entry and exit point. But
-    # by spawning off another deferred, we are effectively
-    # adding a new exit point.)
-    d.addBoth(_set_context_cb, ctx)
-
-    return d
+    return run_in_background(lambda: coroutine)
 T = TypeVar("T")
 
 def make_deferred_yieldable(deferred: "defer.Deferred[T]") -> "defer.Deferred[T]":
-    """Given a deferred, make it follow the Synapse logcontext rules:
-
-    If the deferred has completed, essentially does nothing (just returns another
-    completed deferred with the result/failure).
-
-    If the deferred has not yet completed, resets the logcontext before
-    returning a deferred. Then, when the deferred completes, restores the
-    current logcontext before running callbacks/errbacks.
-
-    (This is more-or-less the opposite operation to run_in_background.)
-    """
+    """
+    Given a deferred, make it follow the Synapse logcontext rules:
+
+    - If the deferred has completed, essentially does nothing (just returns another
+      completed deferred with the result/failure).
+    - If the deferred has not yet completed, resets the logcontext before returning an
+      incomplete deferred. Then, when the deferred completes, restores the current
+      logcontext before running callbacks/errbacks.
+
+    This means the resultant deferred can be awaited without leaking the current
+    logcontext to the reactor (which would then get erroneously picked up by the next
+    thing the reactor does), and also means that the logcontext is preserved when the
+    deferred completes.
+
+    (This is more-or-less the opposite operation to run_in_background in terms of how it
+    handles log contexts.)
+
+    Pretty much equivalent to using `with PreserveLoggingContext():`, i.e. it clears the
+    logcontext before awaiting (and so before execution passes back to the reactor) and
+    restores the old context once the awaitable completes (execution passes from the
+    reactor back to the code).
+    """
+    # The deferred has already completed
     if deferred.called and not deferred.paused:
         # it looks like this deferred is ready to run any callbacks we give it
         # immediately. We may as well optimise out the logcontext faffery.
         return deferred
 
-    # ok, we can't be sure that a yield won't block, so let's reset the
-    # logcontext, and add a callback to the deferred to restore it.
+    # Our goal is to have the caller logcontext unchanged after they yield/await the
+    # returned deferred.
+    #
+    # When the caller yields/awaits the returned deferred, it may yield
+    # control back to the reactor. To avoid leaking the current logcontext to the
+    # reactor (which would then get erroneously picked up by the next thing the reactor
+    # does) while the deferred runs in the reactor event loop, we reset the logcontext
+    # and add a callback to the deferred to restore it so the caller's logcontext is
+    # active when the deferred completes.
     prev_context = set_current_context(SENTINEL_CONTEXT)
     deferred.addBoth(_set_context_cb, prev_context)
     return deferred
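Taken together, the two helpers are duals: `run_in_background` forks work off the current logcontext, and `make_deferred_yieldable` lets you await a deferred that doesn't follow the logcontext rules without leaking your context into the reactor. A sketch of the usual pairing (`fetch_a`/`fetch_b` are hypothetical):

```python
from twisted.internet import defer

from synapse.logging.context import make_deferred_yieldable, run_in_background

async def fetch_both(fetch_a, fetch_b):
    # Fork both fetches off the current logcontext without awaiting them yet.
    d1 = run_in_background(fetch_a)
    d2 = run_in_background(fetch_b)

    # gatherResults returns a deferred that does NOT follow the logcontext rules,
    # so wrap it in make_deferred_yieldable before awaiting.
    return await make_deferred_yieldable(
        defer.gatherResults([d1, d2], consumeErrors=True)
    )
```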

View File

@@ -60,8 +60,18 @@ class PeriodicallyFlushingMemoryHandler(MemoryHandler):
         else:
             reactor_to_use = reactor
 
-        # call our hook when the reactor start up
-        reactor_to_use.callWhenRunning(on_reactor_running)
+        # Call our hook when the reactor starts up
+        #
+        # type-ignore: Ideally, we'd use `Clock.call_when_running(...)`, but
+        # `PeriodicallyFlushingMemoryHandler` is instantiated via Python logging
+        # configuration, so it's not straightforward to pass in the homeserver's clock
+        # (and we don't want to burden other people's logging config with the details).
+        #
+        # The important reason why we want to use `Clock.call_when_running` is so that
+        # the callback runs with a logcontext, as we want to know which server the logs
+        # came from. But since we don't log anything in the callback, it's safe to
+        # ignore the lint here.
+        reactor_to_use.callWhenRunning(on_reactor_running)  # type: ignore[prefer-synapse-clock-call-when-running]
 
     def shouldFlush(self, record: LogRecord) -> bool:
         """

View File

@@ -204,7 +204,7 @@ from twisted.web.http import Request
 from twisted.web.http_headers import Headers
 
 from synapse.config import ConfigError
-from synapse.util import json_decoder, json_encoder
+from synapse.util.json import json_decoder, json_encoder
 
 if TYPE_CHECKING:
     from synapse.http.site import SynapseRequest

View File

@@ -54,8 +54,8 @@ from synapse.logging.context import (
     make_deferred_yieldable,
     run_in_background,
 )
-from synapse.util import Clock
 from synapse.util.async_helpers import DeferredEvent
+from synapse.util.clock import Clock
 from synapse.util.stringutils import is_ascii
 
 if TYPE_CHECKING:

View File

@@ -179,11 +179,13 @@ class MediaRepository:
         # We get the media upload limits and sort them in descending order of
         # time period, so that we can apply some optimizations.
-        self.media_upload_limits = hs.config.media.media_upload_limits
-        self.media_upload_limits.sort(
+        self.default_media_upload_limits = hs.config.media.media_upload_limits
+        self.default_media_upload_limits.sort(
             key=lambda limit: limit.time_period_ms, reverse=True
         )
 
+        self.media_repository_callbacks = hs.get_module_api_callbacks().media_repository
+
     def _start_update_recently_accessed(self) -> Deferred:
         return run_as_background_process(
             "update_recently_accessed_media",
@@ -340,16 +342,27 @@ class MediaRepository:
         # Check that the user has not exceeded any of the media upload limits.
 
+        # Use limits from module API if provided
+        media_upload_limits = (
+            await self.media_repository_callbacks.get_media_upload_limits_for_user(
+                auth_user.to_string()
+            )
+        )
+        # Otherwise use the default limits from config
+        if media_upload_limits is None:
+            # Note: the media upload limits are sorted so larger time periods are
+            # first.
+            media_upload_limits = self.default_media_upload_limits
+
         # This is the total size of media uploaded by the user in the last
         # `time_period_ms` milliseconds, or None if we haven't checked yet.
         uploaded_media_size: Optional[int] = None
 
-        # Note: the media upload limits are sorted so larger time periods are
-        # first.
-        for limit in self.media_upload_limits:
+        for limit in media_upload_limits:
             # We only need to check the amount of media uploaded by the user in
             # this latest (smaller) time period if the amount of media uploaded
-            # in a previous (larger) time period is above the limit.
+            # in a previous (larger) time period is below the limit.
             #
             # This optimization means that in the common case where the user
             # hasn't uploaded much media, we only need to query the database
@@ -363,6 +376,12 @@ class MediaRepository:
             )
 
             if uploaded_media_size + content_length > limit.max_bytes:
+                await self.media_repository_callbacks.on_media_upload_limit_exceeded(
+                    user_id=auth_user.to_string(),
+                    limit=limit,
+                    sent_bytes=uploaded_media_size,
+                    attempted_bytes=content_length,
+                )
                 raise SynapseError(
                     400, "Media upload limit exceeded", Codes.RESOURCE_LIMIT_EXCEEDED
                 )
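The reason the limits are sorted by descending time period: usage inside a smaller window can never exceed usage inside a larger window, so the larger-window total acts as an upper bound and often lets the smaller windows skip their database queries entirely. A standalone sketch of that check loop (the lookup function is hypothetical):

```python
from typing import Awaitable, Callable, List, Optional, Tuple

async def check_upload_limits(
    limits: List[Tuple[int, int]],  # (time_period_ms, max_bytes), sorted descending by period
    uploaded_bytes_since: Callable[[int], Awaitable[int]],  # hypothetical DB lookup
    content_length: int,
) -> None:
    uploaded: Optional[int] = None
    for time_period_ms, max_bytes in limits:
        # The total from a previous (larger) window bounds this window's usage,
        # so if that bound plus the new upload fits under this limit, skip the query.
        if uploaded is not None and uploaded + content_length <= max_bytes:
            continue
        uploaded = await uploaded_bytes_since(time_period_ms)
        if uploaded + content_length > max_bytes:
            raise RuntimeError("Media upload limit exceeded")
```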

View File

@@ -55,7 +55,7 @@ from synapse.api.errors import NotFoundError
 from synapse.logging.context import defer_to_thread, run_in_background
 from synapse.logging.opentracing import start_active_span, trace, trace_with_opname
 from synapse.media._base import ThreadedFileSender
-from synapse.util import Clock
+from synapse.util.clock import Clock
 from synapse.util.file_consumer import BackgroundFileConsumer
 
 from ..types import JsonDict

View File

@@ -27,7 +27,7 @@ import attr
 
 from synapse.media.preview_html import parse_html_description
 from synapse.types import JsonDict
-from synapse.util import json_decoder
+from synapse.util.json import json_decoder
 
 if TYPE_CHECKING:
     from lxml import etree

View File

@@ -46,9 +46,9 @@ from synapse.media.oembed import OEmbedProvider
 from synapse.media.preview_html import decode_body, parse_html_to_open_graph
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.types import JsonDict, UserID
-from synapse.util import json_encoder
 from synapse.util.async_helpers import ObservableDeferred
 from synapse.util.caches.expiringcache import ExpiringCache
+from synapse.util.json import json_encoder
 from synapse.util.stringutils import random_string
 
 if TYPE_CHECKING:

View File

@@ -43,7 +43,7 @@ from typing import (
 )
 
 import attr
-from pkg_resources import parse_version
+from packaging.version import parse as parse_version
 from prometheus_client import (
     CollectorRegistry,
     Counter,
@@ -73,8 +73,6 @@ logger = logging.getLogger(__name__)
 METRICS_PREFIX = "/_synapse/metrics"
 
-all_gauges: Dict[str, Collector] = {}
-
 HAVE_PROC_SELF_STAT = os.path.exists("/proc/self/stat")
 
 SERVER_NAME_LABEL = "server_name"
@@ -163,42 +161,110 @@ class LaterGauge(Collector):
     name: str
     desc: str
     labelnames: Optional[StrSequence] = attr.ib(hash=False)
 
-    # callback: should either return a value (if there are no labels for this metric),
-    # or dict mapping from a label tuple to a value
-    caller: Callable[
-        [], Union[Mapping[Tuple[str, ...], Union[int, float]], Union[int, float]]
-    ]
+    _instance_id_to_hook_map: Dict[
+        Optional[str],  # instance_id
+        Callable[
+            [], Union[Mapping[Tuple[str, ...], Union[int, float]], Union[int, float]]
+        ],
+    ] = attr.ib(factory=dict, hash=False)
+    """
+    Map from homeserver instance_id to a callback. Each callback should either return a
+    value (if there are no labels for this metric), or a dict mapping from a label tuple
+    to a value.
+
+    We use `instance_id` instead of `server_name` because it's possible to have multiple
+    workers running in the same process with the same `server_name`.
+    """
 
     def collect(self) -> Iterable[Metric]:
         # The decision to add `SERVER_NAME_LABEL` is from the `LaterGauge` usage itself
         # (we don't enforce it here, one level up).
         g = GaugeMetricFamily(self.name, self.desc, labels=self.labelnames)  # type: ignore[missing-server-name-label]
 
-        try:
-            calls = self.caller()
-        except Exception:
-            logger.exception("Exception running callback for LaterGauge(%s)", self.name)
-            yield g
-            return
+        for homeserver_instance_id, hook in self._instance_id_to_hook_map.items():
+            try:
+                hook_result = hook()
+            except Exception:
+                logger.exception(
+                    "Exception running callback for LaterGauge(%s) for homeserver_instance_id=%s",
+                    self.name,
+                    homeserver_instance_id,
+                )
+                # Continue to return the rest of the metrics that aren't broken
+                continue
 
-        if isinstance(calls, (int, float)):
-            g.add_metric([], calls)
-        else:
-            for k, v in calls.items():
-                g.add_metric(k, v)
+            if isinstance(hook_result, (int, float)):
+                g.add_metric([], hook_result)
+            else:
+                for k, v in hook_result.items():
+                    g.add_metric(k, v)
 
         yield g
 
+    def register_hook(
+        self,
+        *,
+        homeserver_instance_id: Optional[str],
+        hook: Callable[
+            [], Union[Mapping[Tuple[str, ...], Union[int, float]], Union[int, float]]
+        ],
+    ) -> None:
+        """
+        Register a callback/hook that will be called to generate metric samples for
+        the gauge.
+
+        Args:
+            homeserver_instance_id: The unique ID for this Synapse process instance
+                (`hs.get_instance_id()`) that this hook is associated with. This can be
+                used later to look up all hooks associated with a given server name in
+                order to unregister them. This should only be omitted for global hooks
+                that work across all homeservers.
+            hook: A callback that should either return a value (if there are no
+                labels for this metric), or a dict mapping from a label tuple to a value.
+        """
+        # We shouldn't have multiple hooks registered for the same homeserver `instance_id`.
+        existing_hook = self._instance_id_to_hook_map.get(homeserver_instance_id)
+        assert existing_hook is None, (
+            f"LaterGauge(name={self.name}) hook already registered for homeserver_instance_id={homeserver_instance_id}. "
+            "This is likely a Synapse bug and you forgot to unregister the previous hooks for "
+            "the server (especially in tests)."
+        )
+
+        self._instance_id_to_hook_map[homeserver_instance_id] = hook
+
+    def unregister_hooks_for_homeserver_instance_id(
+        self, homeserver_instance_id: str
+    ) -> None:
+        """
+        Unregister all hooks associated with the given homeserver `instance_id`. This
+        should be called when a homeserver is shut down to avoid extra hooks sitting
+        around.
+
+        Args:
+            homeserver_instance_id: The unique ID for this Synapse process instance to
+                unregister hooks for (`hs.get_instance_id()`).
+        """
+        self._instance_id_to_hook_map.pop(homeserver_instance_id, None)
+
     def __attrs_post_init__(self) -> None:
-        self._register()
-
-    def _register(self) -> None:
-        if self.name in all_gauges.keys():
-            logger.warning("%s already registered, reregistering", self.name)
-            REGISTRY.unregister(all_gauges.pop(self.name))
-
         REGISTRY.register(self)
-        all_gauges[self.name] = self
+
+        # We shouldn't have multiple metrics with the same name. Typically, metrics
+        # should be created globally so you shouldn't be running into this and this will
+        # catch any stupid mistakes. The `REGISTRY.register(self)` call above will also
+        # raise an error if the metric already exists but to make things explicit, we'll
+        # also check here.
+        existing_gauge = all_later_gauges_to_clean_up_on_shutdown.get(self.name)
+        assert existing_gauge is None, f"LaterGauge(name={self.name}) already exists."
+
+        # Keep track of the gauge so we can clean it up later.
+        all_later_gauges_to_clean_up_on_shutdown[self.name] = self
+
+
+all_later_gauges_to_clean_up_on_shutdown: Dict[str, LaterGauge] = {}
+"""
+Track all `LaterGauge` instances so we can remove any associated hooks during homeserver
+shutdown.
+"""
 
 # `MetricsEntry` only makes sense when it is a `Protocol`,
@@ -250,7 +316,7 @@ class InFlightGauge(Generic[MetricsEntry], Collector):
         # Protects access to _registrations
         self._lock = threading.Lock()
 
-        self._register_with_collector()
+        REGISTRY.register(self)
 
     def register(
         self,
@@ -341,14 +407,6 @@ class InFlightGauge(Generic[MetricsEntry], Collector):
         gauge.add_metric(labels=key, value=getattr(metrics, name))
 
         yield gauge
 
-    def _register_with_collector(self) -> None:
-        if self.name in all_gauges.keys():
-            logger.warning("%s already registered, reregistering", self.name)
-            REGISTRY.unregister(all_gauges.pop(self.name))
-
-        REGISTRY.register(self)
-        all_gauges[self.name] = self
-
 
 class GaugeHistogramMetricFamilyWithLabels(GaugeHistogramMetricFamily):
     """

View File

@@ -228,6 +228,11 @@ def run_as_background_process(
     clock.looping_call and friends (or for firing-and-forgetting in the middle of a
     normal synapse async function).
 
+    Because the returned Deferred does not follow the synapse logcontext rules, awaiting
+    the result of this function will result in the log context being cleared (bad). In
+    order to properly await the result of this function and maintain the current log
+    context, use `make_deferred_yieldable`.
+
     Args:
         desc: a description for this background process type
         server_name: The homeserver name that this background process is being run for
@@ -280,6 +285,20 @@ def run_as_background_process(
             name=desc, **{SERVER_NAME_LABEL: server_name}
         ).dec()
 
+    # To explain how the log contexts work here:
+    # - When `run_as_background_process` is called, the current context is stored
+    #   (using `PreserveLoggingContext`), we kick off the background task, and we
+    #   restore the original context before returning (also part of
+    #   `PreserveLoggingContext`).
+    # - The background task runs in its own new logcontext named after `desc`.
+    # - When the background task finishes, we don't want to leak our background context
+    #   into the reactor, where it would erroneously get attached to the next operation
+    #   picked up by the event loop. We use `PreserveLoggingContext` to set the
+    #   `sentinel` context, which means the new `BackgroundProcessLoggingContext` will
+    #   remember the `sentinel` context as its previous context to return to when it
+    #   exits and yields control back to the reactor.
+    #
+    # TODO: Why can't we simplify to using `return run_in_background(run)`?
     with PreserveLoggingContext():
         # Note that we return a Deferred here so that it can be used in a
         # looping_call and other places that expect a Deferred.
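The warning in the new docstring is worth showing concretely: if a caller does need to wait for the background process, it should wrap the returned Deferred. A sketch, assuming the `desc`/`server_name`/`func` argument order documented in the Args above:

```python
from synapse.logging.context import make_deferred_yieldable
from synapse.metrics.background_process_metrics import run_as_background_process

async def trigger_and_wait(hs, do_work):
    # The returned Deferred does NOT follow the Synapse logcontext rules, so
    # awaiting it directly would leave us on the sentinel context afterwards.
    d = run_as_background_process("example_work", hs.hostname, do_work)
    # Wrapping restores our calling logcontext when the work completes.
    return await make_deferred_yieldable(d)
```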

View File

@@ -50,6 +50,7 @@ from synapse.api.constants import ProfileFields
 from synapse.api.errors import SynapseError
 from synapse.api.presence import UserPresenceState
 from synapse.config import ConfigError
+from synapse.config.repository import MediaUploadLimit
 from synapse.events import EventBase
 from synapse.events.presence_router import (
     GET_INTERESTED_USERS_CALLBACK,
@@ -94,7 +95,9 @@ from synapse.module_api.callbacks.account_validity_callbacks import (
 )
 from synapse.module_api.callbacks.media_repository_callbacks import (
     GET_MEDIA_CONFIG_FOR_USER_CALLBACK,
+    GET_MEDIA_UPLOAD_LIMITS_FOR_USER_CALLBACK,
     IS_USER_ALLOWED_TO_UPLOAD_MEDIA_OF_SIZE_CALLBACK,
+    ON_MEDIA_UPLOAD_LIMIT_EXCEEDED_CALLBACK,
 )
 from synapse.module_api.callbacks.ratelimit_callbacks import (
     GET_RATELIMIT_OVERRIDE_FOR_USER_CALLBACK,
@@ -155,9 +158,9 @@ from synapse.types import (
     create_requester,
 )
 from synapse.types.state import StateFilter
-from synapse.util import Clock
 from synapse.util.async_helpers import maybe_awaitable
 from synapse.util.caches.descriptors import CachedFunction, cached as _cached
+from synapse.util.clock import Clock
 from synapse.util.frozenutils import freeze
 
 if TYPE_CHECKING:
@@ -205,6 +208,7 @@ __all__ = [
     "RoomAlias",
     "UserProfile",
     "RatelimitOverride",
+    "MediaUploadLimit",
 ]
 
 logger = logging.getLogger(__name__)
@@ -462,6 +466,12 @@ class ModuleApi:
         is_user_allowed_to_upload_media_of_size: Optional[
             IS_USER_ALLOWED_TO_UPLOAD_MEDIA_OF_SIZE_CALLBACK
         ] = None,
+        get_media_upload_limits_for_user: Optional[
+            GET_MEDIA_UPLOAD_LIMITS_FOR_USER_CALLBACK
+        ] = None,
+        on_media_upload_limit_exceeded: Optional[
+            ON_MEDIA_UPLOAD_LIMIT_EXCEEDED_CALLBACK
+        ] = None,
     ) -> None:
         """Registers callbacks for media repository capabilities.
 
         Added in Synapse v1.132.0.
@@ -469,6 +479,8 @@ class ModuleApi:
         return self._callbacks.media_repository.register_callbacks(
             get_media_config_for_user=get_media_config_for_user,
             is_user_allowed_to_upload_media_of_size=is_user_allowed_to_upload_media_of_size,
+            get_media_upload_limits_for_user=get_media_upload_limits_for_user,
+            on_media_upload_limit_exceeded=on_media_upload_limit_exceeded,
         )
 
     def register_third_party_rules_callbacks(

View File

@@ -15,6 +15,7 @@
 import logging
 from typing import TYPE_CHECKING, Awaitable, Callable, List, Optional
 
+from synapse.config.repository import MediaUploadLimit
 from synapse.types import JsonDict
 from synapse.util.async_helpers import delay_cancellation
 from synapse.util.metrics import Measure
@@ -28,6 +29,14 @@ GET_MEDIA_CONFIG_FOR_USER_CALLBACK = Callable[[str], Awaitable[Optional[JsonDict]]]
 IS_USER_ALLOWED_TO_UPLOAD_MEDIA_OF_SIZE_CALLBACK = Callable[[str, int], Awaitable[bool]]
 
+GET_MEDIA_UPLOAD_LIMITS_FOR_USER_CALLBACK = Callable[
+    [str], Awaitable[Optional[List[MediaUploadLimit]]]
+]
+
+ON_MEDIA_UPLOAD_LIMIT_EXCEEDED_CALLBACK = Callable[
+    [str, MediaUploadLimit, int, int], Awaitable[None]
+]
+
 
 class MediaRepositoryModuleApiCallbacks:
     def __init__(self, hs: "HomeServer") -> None:
@@ -39,6 +48,12 @@ class MediaRepositoryModuleApiCallbacks:
         self._is_user_allowed_to_upload_media_of_size_callbacks: List[
             IS_USER_ALLOWED_TO_UPLOAD_MEDIA_OF_SIZE_CALLBACK
         ] = []
+        self._get_media_upload_limits_for_user_callbacks: List[
+            GET_MEDIA_UPLOAD_LIMITS_FOR_USER_CALLBACK
+        ] = []
+        self._on_media_upload_limit_exceeded_callbacks: List[
+            ON_MEDIA_UPLOAD_LIMIT_EXCEEDED_CALLBACK
+        ] = []
 
     def register_callbacks(
         self,
@@ -46,6 +61,12 @@ class MediaRepositoryModuleApiCallbacks:
         is_user_allowed_to_upload_media_of_size: Optional[
             IS_USER_ALLOWED_TO_UPLOAD_MEDIA_OF_SIZE_CALLBACK
         ] = None,
+        get_media_upload_limits_for_user: Optional[
+            GET_MEDIA_UPLOAD_LIMITS_FOR_USER_CALLBACK
+        ] = None,
+        on_media_upload_limit_exceeded: Optional[
+            ON_MEDIA_UPLOAD_LIMIT_EXCEEDED_CALLBACK
+        ] = None,
     ) -> None:
         """Register callbacks from module for each hook."""
         if get_media_config_for_user is not None:
@@ -56,6 +77,16 @@ class MediaRepositoryModuleApiCallbacks:
                 is_user_allowed_to_upload_media_of_size
             )
 
+        if get_media_upload_limits_for_user is not None:
+            self._get_media_upload_limits_for_user_callbacks.append(
+                get_media_upload_limits_for_user
+            )
+
+        if on_media_upload_limit_exceeded is not None:
+            self._on_media_upload_limit_exceeded_callbacks.append(
+                on_media_upload_limit_exceeded
+            )
+
     async def get_media_config_for_user(self, user_id: str) -> Optional[JsonDict]:
         for callback in self._get_media_config_for_user_callbacks:
             with Measure(
@@ -83,3 +114,47 @@ class MediaRepositoryModuleApiCallbacks:
             return res
 
         return True
+
+    async def get_media_upload_limits_for_user(
+        self, user_id: str
+    ) -> Optional[List[MediaUploadLimit]]:
+        """
+        Get the first non-None list of MediaUploadLimits for the user from the
+        registered callbacks.
+
+        If a list is returned it will be sorted in descending order of duration.
+        """
+        for callback in self._get_media_upload_limits_for_user_callbacks:
+            with Measure(
+                self.clock,
+                name=f"{callback.__module__}.{callback.__qualname__}",
+                server_name=self.server_name,
+            ):
+                res: Optional[List[MediaUploadLimit]] = await delay_cancellation(
+                    callback(user_id)
+                )
+                if res is not None:  # to allow [] to be returned, meaning no limit
+                    # We sort them in descending order of time period
+                    res.sort(key=lambda limit: limit.time_period_ms, reverse=True)
+                    return res
+
+        return None
+
+    async def on_media_upload_limit_exceeded(
+        self,
+        user_id: str,
+        limit: MediaUploadLimit,
+        sent_bytes: int,
+        attempted_bytes: int,
+    ) -> None:
+        for callback in self._on_media_upload_limit_exceeded_callbacks:
+            with Measure(
+                self.clock,
+                name=f"{callback.__module__}.{callback.__qualname__}",
+                server_name=self.server_name,
+            ):
+                # Use a copy of the data in case the module modifies it
+                limit_copy = MediaUploadLimit(
+                    max_bytes=limit.max_bytes, time_period_ms=limit.time_period_ms
+                )
+                await delay_cancellation(
+                    callback(user_id, limit_copy, sent_bytes, attempted_bytes)
+                )
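For module authors, the two new hooks compose like this. A hedged sketch of a module that gives one user a tighter limit and logs when any limit trips (the `register_media_repository_callbacks` names come from this diff; the module class and values are illustrative):

```python
import logging
from typing import List, Optional

from synapse.config.repository import MediaUploadLimit
from synapse.module_api import ModuleApi

logger = logging.getLogger(__name__)

class UploadLimitModule:
    def __init__(self, config: dict, api: ModuleApi):
        self._api = api
        api.register_media_repository_callbacks(
            get_media_upload_limits_for_user=self.get_media_upload_limits_for_user,
            on_media_upload_limit_exceeded=self.on_media_upload_limit_exceeded,
        )

    async def get_media_upload_limits_for_user(
        self, user_id: str
    ) -> Optional[List[MediaUploadLimit]]:
        if user_id == "@untrusted:example.org":
            # 10 MiB per day; Synapse sorts the returned list itself.
            return [MediaUploadLimit(max_bytes=10 * 1024 * 1024, time_period_ms=86_400_000)]
        # None defers to the next callback / the server defaults; an empty
        # list would mean "no limits" for this user.
        return None

    async def on_media_upload_limit_exceeded(
        self, user_id: str, limit: MediaUploadLimit, sent_bytes: int, attempted_bytes: int
    ) -> None:
        logger.info(
            "user %s hit upload limit (%d bytes per %d ms): sent=%d attempted=%d",
            user_id, limit.max_bytes, limit.time_period_ms, sent_bytes, attempted_bytes,
        )
```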

View File

@@ -86,6 +86,24 @@ users_woken_by_stream_counter = Counter(
     labelnames=["stream", SERVER_NAME_LABEL],
 )
 
+notifier_listeners_gauge = LaterGauge(
+    name="synapse_notifier_listeners",
+    desc="",
+    labelnames=[SERVER_NAME_LABEL],
+)
+
+notifier_rooms_gauge = LaterGauge(
+    name="synapse_notifier_rooms",
+    desc="",
+    labelnames=[SERVER_NAME_LABEL],
+)
+
+notifier_users_gauge = LaterGauge(
+    name="synapse_notifier_users",
+    desc="",
+    labelnames=[SERVER_NAME_LABEL],
+)
+
 T = TypeVar("T")
@@ -281,28 +299,20 @@ class Notifier:
             )
         }
 
-        LaterGauge(
-            name="synapse_notifier_listeners",
-            desc="",
-            labelnames=[SERVER_NAME_LABEL],
-            caller=count_listeners,
+        notifier_listeners_gauge.register_hook(
+            homeserver_instance_id=hs.get_instance_id(), hook=count_listeners
         )
-
-        LaterGauge(
-            name="synapse_notifier_rooms",
-            desc="",
-            labelnames=[SERVER_NAME_LABEL],
-            caller=lambda: {
+        notifier_rooms_gauge.register_hook(
+            homeserver_instance_id=hs.get_instance_id(),
+            hook=lambda: {
                 (self.server_name,): count(
                     bool, list(self.room_to_user_streams.values())
                 )
            },
         )
-        LaterGauge(
-            name="synapse_notifier_users",
-            desc="",
-            labelnames=[SERVER_NAME_LABEL],
-            caller=lambda: {(self.server_name,): len(self.user_to_user_stream)},
+        notifier_users_gauge.register_hook(
+            homeserver_instance_id=hs.get_instance_id(),
+            hook=lambda: {(self.server_name,): len(self.user_to_user_stream)},
         )
 
     def add_replication_callback(self, cb: Callable[[], None]) -> None:
@@ -522,6 +532,7 @@ class Notifier:
             StreamKeyType.TO_DEVICE,
             StreamKeyType.TYPING,
             StreamKeyType.UN_PARTIAL_STATED_ROOMS,
+            StreamKeyType.THREAD_SUBSCRIPTIONS,
         ],
         new_token: int,
         users: Optional[Collection[Union[str, UserID]]] = None,

View File

@@ -91,7 +91,7 @@ def _rule_to_template(rule: PushRule) -> Optional[Dict[str, Any]]:
     unscoped_rule_id = _rule_id_from_namespaced(rule.rule_id)
     template_name = _priority_class_to_template_name(rule.priority_class)
 
-    if template_name in ["override", "underride"]:
+    if template_name in ["override", "underride", "postcontent"]:
         templaterule = {"conditions": rule.conditions, "actions": rule.actions}
     elif template_name in ["sender", "room"]:
         templaterule = {"actions": rule.actions}

View File

@@ -19,10 +19,14 @@
 #
 #
 
+# Integer literals for push rule `kind`s.
+# This is used to store them in the database.
 PRIORITY_CLASS_MAP = {
     "underride": 1,
     "sender": 2,
     "room": 3,
+    # MSC4306
+    "postcontent": 6,
     "content": 4,
     "override": 5,
 }

View File

@@ -44,6 +44,7 @@ from synapse.replication.tcp.streams import (
     UnPartialStatedEventStream,
     UnPartialStatedRoomStream,
 )
+from synapse.replication.tcp.streams._base import ThreadSubscriptionsStream
 from synapse.replication.tcp.streams.events import (
     EventsStream,
     EventsStreamEventRow,
@@ -255,6 +256,12 @@ class ReplicationDataHandler:
                 self._state_storage_controller.notify_event_un_partial_stated(
                     row.event_id
                 )
+        elif stream_name == ThreadSubscriptionsStream.NAME:
+            self.notifier.on_new_event(
+                StreamKeyType.THREAD_SUBSCRIPTIONS,
+                token,
+                users=[row.user_id for row in rows],
+            )
 
         await self._presence_handler.process_replication_rows(
             stream_name, instance_name, token, rows

Some files were not shown because too many files have changed in this diff.