Compare commits

...

94 Commits

Author SHA1 Message Date
Half-Shot
0ce56923da Update delayed events to support no tokens 2025-10-27 16:46:00 +00:00
Shay
f1695ac20e Add an admin API to get the space hierarchy (#19021)
It is often useful when investigating a space to get information about
that space and its children. This PR adds an Admin API to return
information about a space and its children, regardless of room
membership. It will not fetch information over federation about remote
rooms that the server is not participating in.
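As a rough illustration, querying the new endpoint from a script might look like the following (the exact route and response shape are assumptions based on this description; check the Admin API documentation for the authoritative details):

```python
import requests

# Hypothetical sketch: the route and response fields are assumed from the
# PR description, not confirmed against the Admin API docs.
resp = requests.get(
    "https://synapse.example.com/_synapse/admin/v1/rooms/%21space%3Aexample.com/hierarchy",
    headers={"Authorization": "Bearer <admin_access_token>"},
)
resp.raise_for_status()
for room in resp.json().get("rooms", []):
    print(room["room_id"], room.get("name"))
```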
2025-10-24 15:32:16 -05:00
Andrew Ferrazzutti
9d81bb703c Always treat RETURNING as supported by SQL engines (#19047)
Can do this now that SQLite 3.35.0 added support for `RETURNING`.

> The RETURNING syntax has been supported by SQLite since version 3.35.0
(2021-03-12).
>
> *-- https://sqlite.org/lang_returning.html*

This also bumps the minimum supported SQLite version according to
Synapse's [deprecation
policy](https://element-hq.github.io/synapse/latest/deprecation_policy.html#platform-dependencies).

Fix https://github.com/element-hq/synapse/issues/17577
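For illustration, here's what `RETURNING` looks like from Python's `sqlite3` module (a minimal sketch, assuming the interpreter is linked against SQLite 3.35.0+):

```python
import sqlite3

print(sqlite3.sqlite_version)  # needs to be >= 3.35.0 for RETURNING

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, body TEXT)")

# RETURNING hands back the generated row id without a second query.
row = conn.execute(
    "INSERT INTO events (body) VALUES (?) RETURNING id", ("hello",)
).fetchone()
print(row[0])  # -> 1
```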
2025-10-24 13:21:49 -05:00
dependabot[bot]
40893be93c Bump idna from 3.10 to 3.11 (#19053)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-24 13:24:16 +01:00
dependabot[bot]
1419b35a40 Bump ijson from 3.4.0 to 3.4.0.post0 (#19051)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-24 13:18:25 +01:00
dependabot[bot]
a2fa61d1b5 Bump msgpack from 1.1.1 to 1.1.2 (#19050)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-24 13:17:35 +01:00
Andrew Morgan
123eff1bc0 Update poetry dev dependencies name (#19081) 2025-10-24 11:19:40 +01:00
Andrew Morgan
a092d2053a Fix deprecation warning in release script (#19080) 2025-10-24 11:19:04 +01:00
Andrew Morgan
45a042ae88 Remove cibuildwheel pp38* skip selector (#19085) 2025-10-24 10:39:29 +01:00
Andrew Morgan
72d0de9f30 Don't exit the release script if there are uncommitted changes (#19088) 2025-10-24 10:39:06 +01:00
Andrew Morgan
5556b491c1 Spruce up generated announcement text in the release script (#19089) 2025-10-24 10:19:44 +01:00
Bryce Servis
b835eb253c Make optional networking and security settings for Redis more apparent in workers.md (#19073)
I couldn't really find any documentation regarding how to set up TLS
communication between Synapse and Redis, so I looked through the source
code and found it. I figured I should go ahead and document it here.
2025-10-23 10:10:10 -05:00
Andrew Ferrazzutti
fc244bb592 Use type hinting generics in standard collections (#19046)
aka PEP 585, added in Python 3.9

 - https://peps.python.org/pep-0585/
 - https://docs.astral.sh/ruff/rules/non-pep585-annotation/
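A before/after sketch of what this change means in practice:

```python
# Before (pre-PEP 585): generic aliases imported from typing
from typing import Dict, List

def names_by_id(ids: List[int]) -> Dict[int, str]:
    return {i: f"user-{i}" for i in ids}

# After (PEP 585, Python 3.9+): the built-in types are generic themselves
def names_by_id_585(ids: list[int]) -> dict[int, str]:
    return {i: f"user-{i}" for i in ids}
```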
2025-10-22 16:48:19 -05:00
Eric Eastwood
cba3a814c6 Fix lints on develop (#19092)
Snuck in with
ff242faad0
2025-10-22 10:39:04 -05:00
Andrew Morgan
3b59ac3b69 Merge branch 'release-v1.141' into develop 2025-10-21 16:48:09 +01:00
Andrew Morgan
ff242faad0 Don't exit the release script if there are uncommitted changes
Instead, allow the user to fix them and retry.
2025-10-21 16:40:26 +01:00
Andrew Morgan
6c16734cf3 Revert "newsfile"
This reverts commit 4427908340.

This should not have been committed to `develop`.
2025-10-21 14:18:40 +01:00
Andrew Morgan
4427908340 newsfile 2025-10-21 14:17:53 +01:00
Kieran Lane
2f65b9e001 Update oidc_session_no_samesite cookie to be Secure (#19079) 2025-10-21 13:35:55 +01:00
Andrew Morgan
1271e896b5 1.141.0rc1 2025-10-21 11:12:59 +01:00
Andrew Morgan
418c9f3fe5 Prevent bcrypt from raising a ValueError and log (#19078) 2025-10-21 10:52:28 +01:00
Eric Eastwood
eac862629f Revert "Move start_doing_background_updates() to SynapseHomeServer.start_background_tasks() (#19036)" (#19059)
### Why

See
https://github.com/element-hq/synapse/pull/19036#discussion_r2427070612

Revert while I figure out the tests in
https://github.com/element-hq/synapse/pull/19057
2025-10-20 10:55:41 -05:00
Ben Banfield-Zanin
67f22a200d Update Docker images to use Debian trixie (13) and thus Python 3.13 (#19064) 2025-10-20 16:49:17 +01:00
Andrew Morgan
da6c0cae96 Merge branch 'master' into develop 2025-10-14 16:58:19 +01:00
Andrew Morgan
b8f6ad2736 Move storage provider compatibility notice to the top of the changelog 2025-10-14 15:27:34 +01:00
Andrew Morgan
ecc90593cb 1.140.0 2025-10-14 15:26:15 +01:00
Andrew Morgan
a4f9274107 Fix indentation of sighup handler calling code (#19060) 2025-10-14 15:10:48 +01:00
Tulir Asokan
ec7554b768 Stabilize support for MSC4326: Device masquerading for appservices (#19033)
Note: the code references MSC3202, which is what MSC4326 was split off
from. Only MSC4326 has been accepted; MSC3202 has not been accepted yet.
2025-10-13 11:13:07 -05:00
Eric Eastwood
d2c582ef3c Move unique snowflake homeserver background tasks to start_background_tasks (#19037)
(the standard pattern for this kind of thing)
2025-10-13 10:19:09 -05:00
Eric Eastwood
2d07bd7fd2 Update TODO list of conflicting areas where we encounter metrics being clobbered (ApplicationService) (#19040)
These errors are harmless and are a long-standing issue that is just now
being logged, see https://github.com/element-hq/synapse/issues/19042

```
2025-10-10 15:30:00,026 - synapse.util.metrics - 330 - ERROR - notify_interested_services-0 - Metric named cache_lru_cache__matches_user_in_member_list_example.com already registered for server example.com
	2025-10-10 16:30:00.167
2025-10-10 15:30:00,026 - synapse.util.metrics - 330 - ERROR - notify_interested_services-0 - Metric named cache_lru_cache_is_interested_in_room_example.com already registered for server example.com
	2025-10-10 16:30:00.167
2025-10-10 15:30:00,025 - synapse.util.metrics - 330 - ERROR - notify_interested_services-0 - Metric named cache_lru_cache_is_interested_in_event_example.com already registered for server example.com
	2025-10-10 16:29:15.560
2025-10-10 15:29:15,449 - synapse.util.metrics - 330 - ERROR - notify_interested_services_ephemeral-0 - Metric named cache_lru_cache__matches_user_in_member_list_example.com already registered for server example.com
	2025-10-10 16:29:15.560
2025-10-10 15:29:15,449 - synapse.util.metrics - 330 - ERROR - notify_interested_services_ephemeral-0 - Metric named cache_lru_cache_is_interested_in_room_example.com already registered for server example.com
```
2025-10-13 10:15:47 -05:00
Andrew Morgan
a7303c5311 Fix deprecated token field in release script (#19039) 2025-10-13 14:31:09 +01:00
Tulir Asokan
690b3a4fcc Allow using MSC4190 features without opt-in (#19031) 2025-10-13 13:07:11 +00:00
Eric Eastwood
d399d7649a Move start_doing_background_updates() to SynapseHomeServer.start_background_tasks() (#19036)
(more sane standard location for this sort of thing)

The one difference here is that previously,
`start_doing_background_updates()` only ran on the main Synapse instance.
But since it now lives in `start_background_tasks()`, it will run on
whichever worker is configured to `run_background_tasks`. Doesn't seem
like a problem though.
2025-10-10 14:30:38 -05:00
Andrew Morgan
9d9275da5a Merge branch 'release-v1.140' into develop 2025-10-10 15:30:59 +01:00
Andrew Morgan
ef80338c2d Add s3 warning to changelog and upgrade notes 2025-10-10 12:09:14 +01:00
Andrew Morgan
be75de2cfc changelog updates 2025-10-10 11:52:07 +01:00
Andrew Morgan
07cfb69778 Changelog updates 2025-10-10 11:28:56 +01:00
Andrew Morgan
c0d6998dea 1.140.0rc1 2025-10-10 11:24:27 +01:00
Andrew Morgan
8390138fa4 Add 'Fetch Event' Admin API page to the docs SUMMARY.md
Otherwise it won't appear on the documentation website's sidebar.
2025-10-10 11:20:48 +01:00
Andrew Morgan
627be7e0a7 Add 'Fetch Event' Admin API page to the docs SUMMARY.md
Otherwise it won't appear on the documentation website's sidebar.
2025-10-10 11:20:04 +01:00
Eric Eastwood
47fb4b43ca Introduce RootConfig.validate_config() which can be subclassed in HomeServerConfig to do cross-config class validation (#19027)
This means we
can move the open registration config validation from `setup()` to
`HomeServerConfig.validate_config()` (much more sane).

Spawning from looking at this area of code in
https://github.com/element-hq/synapse/pull/19015
2025-10-09 14:56:22 -05:00
Eric Eastwood
715cc5ee37 Split homeserver creation and setup (#19015)
### Background

As part of Element's plan to support a light form of vhosting (virtual
hosting), we're currently diving into the details and implications of
running multiple instances of Synapse in the same Python process.

"Clean tenant provisioning" tracked internally by
https://github.com/element-hq/synapse-small-hosts/issues/221


### Partial startup problem

In the context of Synapse Pro for Small Hosts, since the Twisted reactor
is already running (from the `multi_synapse` shard process itself), when
provisioning a homeserver tenant, the `reactor.callWhenRunning(...)`
callbacks will be invoked immediately. This includes Synapse's
[`start`](0615b64bb4/synapse/app/homeserver.py (L418-L429))
callback, which sets up everything (including listeners, background
tasks, etc.). If we encounter an error at this point, we are partially
set up, but the exception will [bubble back to
us](8be122186b/multi_synapse/app/shard.py (L114-L121))
without us having a handle to the homeserver yet, so we can't call
`hs.shutdown()` and clean everything up.


### What does this PR do?

Structures Synapse so we split creating the homeserver instance from
setting everything up. This way we have access to `hs` if anything goes
wrong during setup and can subsequently `hs.shutdown()` to clean
everything up.
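A minimal sketch of the resulting shape (the function names here are illustrative, not Synapse's exact API):

```python
# Illustrative only: `create_homeserver`/`setup_homeserver` are assumed
# names standing in for the split-out steps.
hs = create_homeserver(config)  # cheap: build the object, get a handle
try:
    setup_homeserver(hs)  # listeners, background tasks, etc.
except Exception:
    # We now have `hs` in hand, so a partial startup can be unwound.
    hs.shutdown()
    raise
```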
2025-10-09 13:12:10 -05:00
Andrew Morgan
d440cfc9e2 Allow any release script command to accept --gh-token (#19035) 2025-10-09 17:15:54 +01:00
fkwp
18f07fdc4c Add MatrixRTC backend/services discovery endpoint (#18967)
Co-authored-by: Andrew Morgan <andrew@amorgan.xyz>
2025-10-09 17:15:47 +01:00
Andrew Morgan
e3344dc0c3 Expose defer_to_threadpool in the module API (#19032) 2025-10-09 15:15:13 +01:00
Andrew Morgan
bcbbccca23 Swap macos-13 with macos-15-intel GHA runner in CI (#19025) 2025-10-08 12:58:42 +01:00
Shay
8f01eb8ee0 Add an Admin API to fetch an event by ID (#18963)
Adds an endpoint to allow server admins to fetch an event regardless of
their membership in the originating room.
2025-10-08 11:38:15 +01:00
Andrew Morgan
21d125e29a Merge branch 'master' into develop 2025-10-08 10:20:14 +01:00
Andrew Morgan
638fa0f33d Merge branch 'release-v1.139' 2025-10-08 10:19:59 +01:00
Andrew Morgan
38afd10823 Merge branch 'master' into develop 2025-10-08 10:16:17 +01:00
Andrew Morgan
87cfe56d14 Merge branch 'release-v1.138' 2025-10-08 10:16:04 +01:00
Eric Eastwood
631eed91f1 Fix bad merge with start_background_tasks (#19013)
This was originally removed in
https://github.com/element-hq/synapse/pull/18886 but it looks like it
snuck back in via https://github.com/element-hq/synapse/pull/18828 during
a [bad merge](4cd3d9172e).

Noticed while looking at Synapse setup and startup (just by
happenstance).

I don't think this has adverse effects on Synapse actually working and
`start_background_tasks()` can be called multiple times.


### Is there a good way to audit all of these merges?

I would like to see the conflicts for each merge.

This works, but it's still hard to notice when anything is wrong:

```
git log --remerge-diff <commit-sha>
```

> shows the difference from mechanical merge result and the result that
is actually recorded in a merge commit

via
https://stackoverflow.com/questions/15277708/how-do-you-see-show-a-git-merge-conflict-resolution-that-was-done-given-a-mer/71181334#71181334

The following is better: specify the version range from the commit right
before the merge up to the merge itself. You can even specify which file
to look at, to make it more obvious with the hindsight we have now.

```
git log --remerge-diff <merge-commit-sha>~1..<merge-commit-sha> -- synapse/server.py
```

Example:
```
git log --remerge-diff 4cd3d9172ed7b87e509746851a376c861a27820e~1..4cd3d9172ed7b87e509746851a376c861a27820e -- synapse/server.py
```
2025-10-07 13:29:22 -05:00
Eric Eastwood
7b8831310f No need to have version_string as an argument since it's always the same (#19012)
Assuming we're happy with
https://github.com/element-hq/synapse/pull/19011, this PR makes sense.
2025-10-07 13:27:24 -05:00
dependabot[bot]
fb12d516cd Bump authlib from 1.6.4 to 1.6.5 (#19019)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 18:00:46 +01:00
dependabot[bot]
dde4e0e83d Bump types-pyyaml from 6.0.12.20250809 to 6.0.12.20250915 (#19018)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 18:00:28 +01:00
dependabot[bot]
8696551e7f Bump pydantic from 2.11.9 to 2.11.10 (#19017)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 18:00:02 +01:00
dependabot[bot]
28bc486bff Bump prometheus-client from 0.22.1 to 0.23.1 (#19016)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 17:59:39 +01:00
Eric Eastwood
ca27938257 Align Synapse version string to use SYNAPSE_VERSION (#19011)
See https://github.com/matrix-org/synapse/pull/12973 where we previously
used `version_string="Synapse/" +
get_distribution_version_string("matrix-synapse")` everywhere, and then
updated to use `version_string=f"Synapse/{SYNAPSE_VERSION}"` everywhere
except `synapse/app/homeserver.py` (why?!?!?!). This seems
more like a typo than something done on purpose, especially without any
context in the comments or PR. The whole point of that PR was to
solve the missing git info in version strings.

For reference, here is what both variables look like for me locally on
the latest `develop`:

 - `SYNAPSE_VERSION`: `1.139.0 (b=develop,1d2ddbc76e,dirty)`
 - `VERSION`: `1.139.0`

The only reason we might want to keep this is to hide the branch name
(some sensitive name that exposes a security fix, etc.). But we don't
hide anything:

`https://matrix.org/_matrix/federation/v1/version`
```json
{
  "server": {
    "name": "Synapse",
    "version": "1.139.0rc3 (b=matrix-org-hotfixes-priv,f538ed5ac3)"
  }
}
```

On `matrix.org`, the `Server` response header is masked as `cloudflare`,
which would otherwise show `1.139.0rc3` for everything from the main
process.

---

This is spawning from looking at the way we set up and start Synapse for
homeserver tenant provisioning in the Synapse Pro for Small Hosts
project (https://github.com/element-hq/synapse-small-hosts/issues/221)
2025-10-07 10:44:56 -05:00
Andrew Morgan
036fb87584 1.139.2 2025-10-07 16:30:03 +01:00
Andrew Morgan
abe974cd2b 1.138.4 2025-10-07 16:28:59 +01:00
Andrew Morgan
5e3839e2af Update KeyUploadServlet to handle case where client sends device_keys: null (#19023) 2025-10-07 16:28:26 +01:00
Andrew Morgan
0ae1f105b2 Update KeyUploadServlet to handle case where client sends device_keys: null (#19023) 2025-10-07 16:27:58 +01:00
Andrew Morgan
2443760d0d Update KeyUploadServlet to handle case where client sends device_keys: null (#19023) 2025-10-07 16:23:55 +01:00
Andrew Morgan
4f7ffc13a7 Merge branch 'master' into develop 2025-10-07 14:57:04 +01:00
Andrew Morgan
340bdd896a Merge branch 'release-v1.138' 2025-10-07 14:56:48 +01:00
Andrew Morgan
957456ed3a Merge branch 'master' into develop 2025-10-07 13:55:58 +01:00
Andrew Morgan
459ebe07fc Merge branch 'release-v1.139' 2025-10-07 13:55:48 +01:00
Andrew Morgan
527e831b61 1.138.3 2025-10-07 12:54:43 +01:00
Andrew Morgan
76b012c3f5 1.139.1 2025-10-07 11:58:08 +01:00
Till
7069636c2d Validate the body of requests to /keys/upload (#17097)
Co-authored-by: Andrew Morgan <andrew@amorgan.xyz>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
Co-authored-by: Eric Eastwood <erice@element.io>
2025-10-07 11:41:00 +01:00
Andrew Morgan
dde1e012a4 Remove unstable prefixes for MSC2732: Olm fallback keys (#18996)
Co-authored-by: Eric Eastwood <erice@element.io>
2025-10-07 11:40:55 +01:00
Andrew Morgan
533d5e0a7a Remove unstable prefixes for MSC2732
This MSC was accepted in 2022. We shouldn't need to continue supporting the unstable field names.
2025-10-07 11:40:50 +01:00
Till
26aaaf9e48 Validate the body of requests to /keys/upload (#17097)
Co-authored-by: Andrew Morgan <andrew@amorgan.xyz>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
Co-authored-by: Eric Eastwood <erice@element.io>
2025-10-07 11:34:07 +01:00
Andrew Morgan
4a37c4d87a Remove unstable prefixes for MSC2732: Olm fallback keys (#18996)
Co-authored-by: Eric Eastwood <erice@element.io>
2025-10-07 11:34:03 +01:00
Andrew Morgan
d67280f5d8 Remove unstable prefixes for MSC2732
This MSC was accepted in 2022. We shouldn't need to continue supporting the unstable field names.
2025-10-07 11:33:58 +01:00
Till
42bbff8294 Validate the body of requests to /keys/upload (#17097)
Co-authored-by: Andrew Morgan <andrew@amorgan.xyz>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
Co-authored-by: Eric Eastwood <erice@element.io>
2025-10-07 11:27:53 +01:00
Andrew Morgan
5465c68553 Remove unstable prefixes for MSC2732: Olm fallback keys (#18996)
Co-authored-by: Eric Eastwood <erice@element.io>
2025-10-07 11:15:35 +01:00
Francesco Stefanini
1d2ddbc76e Fix bug where ephemeral events were not filtered by room ID (#19002)
Co-authored-by: Andrew Morgan <andrew@amorgan.xyz>
2025-10-03 13:19:57 +01:00
Eric Eastwood
70c044db8e Remove deprecated LoggingContext.set_current_context/LoggingContext.current_context methods (#18989)
These were added for backwards compatibility (and essentially
deprecated) in https://github.com/matrix-org/synapse/pull/7408
(2020-05-04) because
[`synapse-s3-storage-provider`](https://github.com/matrix-org/synapse-s3-storage-provider)
previously relied on them -- but `synapse-s3-storage-provider` has since
been
[updated](https://github.com/matrix-org/synapse-s3-storage-provider/pull/36)
to no longer use them.
2025-10-02 13:21:37 -05:00
Eric Eastwood
6835e7be0d Wrap the Rust HTTP client with make_deferred_yieldable (#18903)
Wrap the Rust HTTP client with `make_deferred_yieldable` so downstream
usage doesn't need to use `PreserveLoggingContext()` or
`make_deferred_yieldable`.

> it seems like we should have some wrapper around it that uses
[`make_deferred_yieldable(...)`](40edb10a98/docs/log_contexts.md (where-you-create-a-new-awaitable-make-it-follow-the-rules))
to make things right so we don't have to do this in the downstream code.
>
> *-- @MadLittleMods,
https://github.com/element-hq/synapse/pull/18357#discussion_r2294941827*
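The wrapping pattern looks roughly like this (a sketch; `rust_request` is a stand-in for the Rust-backed call, not the actual API):

```python
from synapse.logging.context import make_deferred_yieldable

async def request(rust_request, *args: str) -> bytes:
    # The raw Deferred from the Rust client doesn't follow Synapse's
    # logcontext rules on its own.
    deferred = rust_request(*args)
    # make_deferred_yieldable restores the caller's logcontext when the
    # Deferred resolves, so callers can simply `await` the result.
    return await make_deferred_yieldable(deferred)
```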

Spawning from wanting to [remove `PreserveLoggingContext()` from the
codebase](https://github.com/element-hq/synapse/pull/18870) and thinking
that we [shouldn't have to pollute all downstream usage with
`PreserveLoggingContext()` or
`make_deferred_yieldable`](https://github.com/element-hq/synapse/pull/18357#discussion_r2294941827)

Part of https://github.com/element-hq/synapse/issues/18905 (Remove
`sentinel` logcontext where we log in Synapse)
2025-10-02 13:00:50 -05:00
Eric Eastwood
d27ff161f5 Add debug logs wherever we change current logcontext (#18966)
Add debug logs wherever we change current logcontext (`LoggingContext`).
I've had to make this same set of changes over and over as I've been
debugging things, so it seems useful enough to include by default.

Instead of tracing things at the `set_current_context(...)` level, I've
added the debug logging on all of the utilities that utilize
`set_current_context(...)`. It's much easier to reason about the log
context changing because of `PreserveLoggingContext` changing things
than an opaque `set_current_context(...)` call.
2025-10-02 11:51:17 -05:00
Eric Eastwood
06a84f4fe0 Revert "Switch to OpenTracing's ContextVarsScopeManager (#18849)" (#19007)
Revert https://github.com/element-hq/synapse/pull/18849

Go back to our custom `LogContextScopeManager` after trying
OpenTracing's `ContextVarsScopeManager`.

Fix https://github.com/element-hq/synapse/issues/19004

### Why revert?

For reference, with the normal reactor, `ContextVarsScopeManager` worked
just as well as our custom `LogContextScopeManager` as far as I can tell
(and even better in some cases). But since Twisted appears to not fully
support `ContextVar`s, it doesn't work as expected in all cases.
Compounding things, `ContextVarsScopeManager` was causing errors with
the experimental `SYNAPSE_ASYNC_IO_REACTOR` option.

Since we're not getting the full benefit that we originally desired, we
might as well revert and figure out alternatives for extending the
logcontext lifetimes to support the use case we were trying to unlock
(c.f. https://github.com/element-hq/synapse/pull/18804).

See
https://github.com/element-hq/synapse/issues/19004#issuecomment-3358052171
for more info.


### Does this require backporting and patch releases?

No. Since `ContextVarsScopeManager` operates just as well with the
normal reactor and was only causing actual errors with the experimental
`SYNAPSE_ASYNC_IO_REACTOR` option, I don't think this requires us to
backport and make patch releases at all.



### Maintain cross-links between main trace and background process work

In order to maintain the functionality introduced in https://github.com/element-hq/synapse/pull/18932 (cross-links between the background process trace and currently active trace), we also needed a small change.

Previously, when we were using `ContextVarsScopeManager`, it tracked the tracing scope across the logcontext changes without issue. Now that we're using our own custom `LogContextScopeManager` again, we need to capture the active span from the logcontext before we reset to the sentinel context because of the `PreserveLoggingContext()` below.
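Roughly, the ordering constraint looks like this (a sketch; `tracer` and `start_task` are stand-ins, not Synapse's actual API):

```python
from synapse.logging.context import PreserveLoggingContext

def run_with_cross_link(tracer, start_task):
    # Capture the active span while the caller's logcontext is still
    # current; once PreserveLoggingContext() switches us to the sentinel
    # context, the span would no longer be reachable from there.
    active_span = tracer.active_span
    with PreserveLoggingContext():
        start_task(parent_span=active_span)
```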

Added some tests to ensure we maintain the `run_as_background` tracing behavior regardless of the tracing scope manager we use.
2025-10-02 11:27:26 -05:00
Eric Eastwood
1c093509ce Switch task scheduler from raw logcontext manipulation (set_current_context) to utils (PreserveLoggingContext) (#18990)
Prefer the utils over raw logcontext manipulation.

Spawning from adding some logcontext debug logs in
https://github.com/element-hq/synapse/pull/18966 and since we're not
logging at the `set_current_context(...)` level (see reasoning there),
this removes some usage of `set_current_context(...)`.
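The shape of the change, roughly (a before/after sketch, not the exact code):

```python
from synapse.logging.context import PreserveLoggingContext

# Before (raw manipulation): easy to forget the restore on error paths.
#     prev = set_current_context(SENTINEL_CONTEXT)
#     try:
#         ...  # work outside the caller's logcontext
#     finally:
#         set_current_context(prev)

# After: the context manager restores the previous logcontext for us.
with PreserveLoggingContext():
    pass  # work that should not run under the caller's logcontext
```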
2025-10-02 10:22:25 -05:00
dependabot[bot]
0615b64bb4 Bump phonenumbers from 9.0.14 to 9.0.15 (#18991)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-01 13:50:12 +01:00
Andrew Morgan
c284d8cb24 Merge branch 'master' into develop 2025-10-01 09:42:18 +01:00
Andrew Morgan
5fff5a1893 Merge branch 'develop' of github.com:element-hq/synapse into develop 2025-10-01 09:40:38 +01:00
Andrew Morgan
765817a1ad Merge branch 'release-v1.139' 2025-10-01 09:40:14 +01:00
Devon Hudson
396de6544a Cleanly shutdown SynapseHomeServer object (#18828)
This PR aims to allow for a clean shutdown of the `SynapseHomeServer`
object so that it can be fully deleted and cleaned up by garbage
collection without shutting down the entire Python process.

Fix https://github.com/element-hq/synapse-small-hosts/issues/50

Co-authored-by: Eric Eastwood <erice@element.io>
2025-10-01 02:42:09 +00:00
Sebastian Spaeth
d1c96ee0f2 Fix rc_room_creation and rc_reports docs - remove per_user typo (#18998) 2025-09-30 15:17:11 -05:00
Eric Eastwood
5adb08f3c9 Remove MockClock() (#18992)
Spawning from adding some logcontext debug logs in
https://github.com/element-hq/synapse/pull/18966 and since we're not
logging at the `set_current_context(...)` level (see reasoning there),
this removes some usage of `set_current_context(...)`.

Specifically, `MockClock.call_later(...)` doesn't handle logcontexts
correctly. It uses the calling logcontext as the callback context
(wrong, as the logcontext could finish before the callback finishes) and
it doesn't reset back to the sentinel context before handing back to the
reactor. It has been like this since it was [introduced 10+ years
ago](38da9884e7).
Instead of fixing the implementation, which would just be a copy of our
normal `Clock`, we can just remove `MockClock`.
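For contrast, a rough sketch of the logcontext-correct shape (approximately what the real `Clock.call_later(...)` does):

```python
from synapse.logging.context import PreserveLoggingContext

def call_later(reactor, delay, callback, *args):
    def wrapped(*a):
        # Don't borrow the (possibly already-finished) calling logcontext.
        with PreserveLoggingContext():
            callback(*a)

    # Reset to the sentinel context before handing back to the reactor.
    with PreserveLoggingContext():
        return reactor.callLater(delay, wrapped, *args)
```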
2025-09-30 11:27:29 -05:00
Andrew Morgan
2aab171042 Remove unstable prefixes for MSC2732
This MSC was accepted in 2022. We shouldn't need to continue supporting the unstable field names.
2025-09-30 17:10:32 +01:00
Andrew Morgan
0aeb95fb07 Add MAS note to 1.139.0 changelog 2025-09-30 12:05:28 +01:00
Andrew Morgan
72020f3f2c 1.139.0 2025-09-30 11:58:59 +01:00
Andrew Morgan
ad8dcc2119 Remove internal ReplicationUploadKeysForUserRestServlet (#18988) 2025-09-30 11:12:14 +01:00
642 changed files with 10304 additions and 7240 deletions

View File

@@ -99,24 +99,24 @@ set_output("trial_test_matrix", test_matrix)
# First calculate the various sytest jobs.
#
-# For each type of test we only run on bullseye on PRs
+# For each type of test we only run on bookworm on PRs
sytest_tests = [
{
"sytest-tag": "bullseye",
"sytest-tag": "bookworm",
},
{
"sytest-tag": "bullseye",
"sytest-tag": "bookworm",
"postgres": "postgres",
},
{
"sytest-tag": "bullseye",
"sytest-tag": "bookworm",
"postgres": "multi-postgres",
"workers": "workers",
},
{
"sytest-tag": "bullseye",
"sytest-tag": "bookworm",
"postgres": "multi-postgres",
"workers": "workers",
"reactor": "asyncio",
@@ -127,11 +127,11 @@ if not IS_PR:
sytest_tests.extend(
[
{
"sytest-tag": "bullseye",
"sytest-tag": "bookworm",
"reactor": "asyncio",
},
{
"sytest-tag": "bullseye",
"sytest-tag": "bookworm",
"postgres": "postgres",
"reactor": "asyncio",
},

View File

@@ -139,9 +139,9 @@ jobs:
fail-fast: false
matrix:
include:
-- sytest-tag: bullseye
+- sytest-tag: bookworm
-- sytest-tag: bullseye
+- sytest-tag: bookworm
postgres: postgres
workers: workers
redis: redis

View File

@@ -114,8 +114,8 @@ jobs:
os:
- ubuntu-24.04
- ubuntu-24.04-arm
- macos-13 # This uses x86-64
- macos-14 # This uses arm64
- macos-15-intel # This uses x86-64
# is_pr is a flag used to exclude certain jobs from the matrix on PRs.
# It is not read by the rest of the workflow.
is_pr:
@@ -124,7 +124,7 @@ jobs:
exclude:
# Don't build macos wheels on PR CI.
- is_pr: true
os: "macos-13"
os: "macos-15-intel"
- is_pr: true
os: "macos-14"
# Don't build aarch64 wheels on PR CI.

View File

@@ -108,11 +108,11 @@ jobs:
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
container:
-# We're using debian:bullseye because it uses Python 3.9 which is our minimum supported Python version.
+# We're using bookworm because that's what Debian oldstable is at the time of writing.
# This job is a canary to warn us about unreleased twisted changes that would cause problems for us if
# they were to be released immediately. For simplicity's sake (and to save CI runners) we use the oldest
# version, assuming that any incompatibilities on newer versions would also be present on the oldest.
-image: matrixdotorg/sytest-synapse:bullseye
+image: matrixdotorg/sytest-synapse:bookworm
volumes:
- ${{ github.workspace }}:/src

View File

@@ -1,3 +1,193 @@
# Synapse 1.141.0rc1 (2025-10-21)
## Deprecation of macOS Python wheels
The team has decided to deprecate and eventually stop publishing Python wheels
for macOS. This is a burden on the team, and we're not aware of any parties
that use them. Synapse docker images will continue to work on macOS, as will
building Synapse from source (though note this requires a Rust compiler).
Publishing macOS Python wheels will continue for the next few releases. If you
do make use of these wheels downstream, please reach out to us in
[#synapse-dev:matrix.org](https://matrix.to/#/#synapse-dev:matrix.org). We'd
love to hear from you!
## Features
- Allow using [MSC4190](https://github.com/matrix-org/matrix-spec-proposals/pull/4190) behavior without the opt-in registration flag. Contributed by @tulir @ Beeper. ([\#19031](https://github.com/element-hq/synapse/issues/19031))
- Stabilized support for [MSC4326](https://github.com/matrix-org/matrix-spec-proposals/pull/4326): Device masquerading for appservices. Contributed by @tulir @ Beeper. ([\#19033](https://github.com/element-hq/synapse/issues/19033))
## Bugfixes
- Fix a bug introduced in 1.136.0 that would prevent Synapse from being able to be `reload`-ed more than once when running under systemd. ([\#19060](https://github.com/element-hq/synapse/issues/19060))
- Fix a bug introduced in 1.140.0 where an internal server error could be raised when hashing user passwords that are too long. ([\#19078](https://github.com/element-hq/synapse/issues/19078))
## Updates to the Docker image
- Update docker image to use Debian trixie as the base and thus Python 3.13. ([\#19064](https://github.com/element-hq/synapse/issues/19064))
## Internal Changes
- Move unique snowflake homeserver background tasks to `start_background_tasks` (the standard pattern for this kind of thing). ([\#19037](https://github.com/element-hq/synapse/issues/19037))
- Drop a deprecated field of the `PyGitHub` dependency in the release script and raise the dependency's minimum version to `1.59.0`. ([\#19039](https://github.com/element-hq/synapse/issues/19039))
- Update TODO list of conflicting areas where we encounter metrics being clobbered (`ApplicationService`). ([\#19040](https://github.com/element-hq/synapse/issues/19040))
# Synapse 1.140.0 (2025-10-14)
## Compatibility notice for users of `synapse-s3-storage-provider`
Deployments that make use of the
[synapse-s3-storage-provider](https://github.com/matrix-org/synapse-s3-storage-provider)
module must upgrade to
[v1.6.0](https://github.com/matrix-org/synapse-s3-storage-provider/releases/tag/v1.6.0).
Using older versions of the module with this release of Synapse will prevent
users from being able to upload or download media.
No significant changes since 1.140.0rc1.
# Synapse 1.140.0rc1 (2025-10-10)
## Features
- Add [a new Media Query by ID Admin API](https://element-hq.github.io/synapse/v1.140/admin_api/media_admin_api.html#query-a-piece-of-media-by-id) that allows server admins to query and investigate the metadata of local or cached remote media via
the `origin/media_id` identifier found in a [Matrix Content URI](https://spec.matrix.org/v1.14/client-server-api/#matrix-content-mxc-uris). ([\#18911](https://github.com/element-hq/synapse/issues/18911))
- Add [a new Fetch Event Admin API](https://element-hq.github.io/synapse/v1.140/admin_api/fetch_event.html) to fetch an event by ID. ([\#18963](https://github.com/element-hq/synapse/issues/18963))
- Update [MSC4284: Policy Servers](https://github.com/matrix-org/matrix-spec-proposals/pull/4284) implementation to support signatures when available. ([\#18934](https://github.com/element-hq/synapse/issues/18934))
- Add experimental implementation of the `GET /_matrix/client/v1/rtc/transports` endpoint for the latest draft of [MSC4143: MatrixRTC](https://github.com/matrix-org/matrix-spec-proposals/pull/4143). ([\#18967](https://github.com/element-hq/synapse/issues/18967))
- Expose a `defer_to_threadpool` function in the Synapse Module API that allows modules to run a function on a separate thread in a custom threadpool. ([\#19032](https://github.com/element-hq/synapse/issues/19032))
## Bugfixes
- Fix room upgrade `room_config` argument and documentation for `user_may_create_room` spam-checker callback. ([\#18721](https://github.com/element-hq/synapse/issues/18721))
- Compute a user's last seen timestamp from their devices' last seen timestamps instead of IPs, because the latter are automatically cleared according to `user_ips_max_age`. ([\#18948](https://github.com/element-hq/synapse/issues/18948))
- Fix bug where ephemeral events were not filtered by room ID. Contributed by @frastefanini. ([\#19002](https://github.com/element-hq/synapse/issues/19002))
- Update Synapse main process version string to include git info. ([\#19011](https://github.com/element-hq/synapse/issues/19011))
## Improved Documentation
- Explain how `Deferred` callbacks interact with logcontexts. ([\#18914](https://github.com/element-hq/synapse/issues/18914))
- Fix documentation for `rc_room_creation` and `rc_reports` to clarify that a `per_user` rate limit is not supported. ([\#18998](https://github.com/element-hq/synapse/issues/18998))
## Deprecations and Removals
- Remove deprecated `LoggingContext.set_current_context`/`LoggingContext.current_context` methods which already have equivalent bare methods in `synapse.logging.context`. ([\#18989](https://github.com/element-hq/synapse/issues/18989))
- Drop support for unstable field names from the long-accepted [MSC2732](https://github.com/matrix-org/matrix-spec-proposals/pull/2732) (Olm fallback keys) proposal. ([\#18996](https://github.com/element-hq/synapse/issues/18996))
## Internal Changes
- Cleanly shut down the `SynapseHomeServer` object, allowing artifacts of embedded small hosts to be properly garbage collected. ([\#18828](https://github.com/element-hq/synapse/issues/18828))
- Update OEmbed providers to use 'X' instead of 'Twitter' in URL previews, following a rebrand. Contributed by @HammyHavoc. ([\#18767](https://github.com/element-hq/synapse/issues/18767))
- Fix `server_name` in logging context for multiple Synapse instances in one process. ([\#18868](https://github.com/element-hq/synapse/issues/18868))
- Wrap the Rust HTTP client with `make_deferred_yieldable` so it follows Synapse logcontext rules. ([\#18903](https://github.com/element-hq/synapse/issues/18903))
- Fix the GitHub Actions workflow that moves issues labeled "X-Needs-Info" to the "Needs info" column on the team's internal triage board. ([\#18913](https://github.com/element-hq/synapse/issues/18913))
- Disconnect background process work from request trace. ([\#18932](https://github.com/element-hq/synapse/issues/18932))
- Reduce overall number of calls to `_get_e2e_cross_signing_signatures_for_devices` by increasing the batch size of devices the query is called with, reducing DB load. ([\#18939](https://github.com/element-hq/synapse/issues/18939))
- Update error code used when an appservice tries to masquerade as an unknown device using [MSC4326](https://github.com/matrix-org/matrix-spec-proposals/pull/4326). Contributed by @tulir @ Beeper. ([\#18947](https://github.com/element-hq/synapse/issues/18947))
- Fix `no active span when trying to log` tracing error on startup (when OpenTracing is enabled). ([\#18959](https://github.com/element-hq/synapse/issues/18959))
- Fix `run_coroutine_in_background(...)` incorrectly handling logcontext. ([\#18964](https://github.com/element-hq/synapse/issues/18964))
- Add debug logs wherever we change current logcontext. ([\#18966](https://github.com/element-hq/synapse/issues/18966))
- Update dockerfile metadata to fix broken link; point to documentation website. ([\#18971](https://github.com/element-hq/synapse/issues/18971))
- Note that the code is additionally licensed under the [Element Commercial license](https://github.com/element-hq/synapse/blob/develop/LICENSE-COMMERCIAL) in SPDX expression field configs. ([\#18973](https://github.com/element-hq/synapse/issues/18973))
- Fix logcontext handling in `timeout_deferred` tests. ([\#18974](https://github.com/element-hq/synapse/issues/18974))
- Remove internal `ReplicationUploadKeysForUserRestServlet` as a follow-up to the work in https://github.com/element-hq/synapse/pull/18581 that moved device changes off the main process. ([\#18988](https://github.com/element-hq/synapse/issues/18988))
- Switch task scheduler from raw logcontext manipulation to using the dedicated logcontext utils. ([\#18990](https://github.com/element-hq/synapse/issues/18990))
- Remove `MockClock()` in tests. ([\#18992](https://github.com/element-hq/synapse/issues/18992))
- Switch back to our own custom `LogContextScopeManager` instead of OpenTracing's `ContextVarsScopeManager` which was causing problems when using the experimental `SYNAPSE_ASYNC_IO_REACTOR` option with tracing enabled. ([\#19007](https://github.com/element-hq/synapse/issues/19007))
- Remove `version_string` argument from `HomeServer` since it's always the same. ([\#19012](https://github.com/element-hq/synapse/issues/19012))
- Remove duplicate call to `hs.start_background_tasks()` introduced from a bad merge. ([\#19013](https://github.com/element-hq/synapse/issues/19013))
- Split homeserver creation (`create_homeserver`) and setup (`setup`). ([\#19015](https://github.com/element-hq/synapse/issues/19015))
- Swap near-end-of-life `macos-13` GitHub Actions runner for the `macos-15-intel` variant. ([\#19025](https://github.com/element-hq/synapse/issues/19025))
- Introduce `RootConfig.validate_config()` which can be subclassed in `HomeServerConfig` to do cross-config class validation. ([\#19027](https://github.com/element-hq/synapse/issues/19027))
- Allow any command of the `release.py` script to accept a `--gh-token` argument. ([\#19035](https://github.com/element-hq/synapse/issues/19035))
### Updates to locked dependencies
* Bump Swatinem/rust-cache from 2.8.0 to 2.8.1. ([\#18949](https://github.com/element-hq/synapse/issues/18949))
* Bump actions/cache from 4.2.4 to 4.3.0. ([\#18983](https://github.com/element-hq/synapse/issues/18983))
* Bump anyhow from 1.0.99 to 1.0.100. ([\#18950](https://github.com/element-hq/synapse/issues/18950))
* Bump authlib from 1.6.3 to 1.6.4. ([\#18957](https://github.com/element-hq/synapse/issues/18957))
* Bump authlib from 1.6.4 to 1.6.5. ([\#19019](https://github.com/element-hq/synapse/issues/19019))
* Bump bcrypt from 4.3.0 to 5.0.0. ([\#18984](https://github.com/element-hq/synapse/issues/18984))
* Bump docker/login-action from 3.5.0 to 3.6.0. ([\#18978](https://github.com/element-hq/synapse/issues/18978))
* Bump lxml from 6.0.0 to 6.0.2. ([\#18979](https://github.com/element-hq/synapse/issues/18979))
* Bump phonenumbers from 9.0.13 to 9.0.14. ([\#18954](https://github.com/element-hq/synapse/issues/18954))
* Bump phonenumbers from 9.0.14 to 9.0.15. ([\#18991](https://github.com/element-hq/synapse/issues/18991))
* Bump prometheus-client from 0.22.1 to 0.23.1. ([\#19016](https://github.com/element-hq/synapse/issues/19016))
* Bump pydantic from 2.11.9 to 2.11.10. ([\#19017](https://github.com/element-hq/synapse/issues/19017))
* Bump pygithub from 2.7.0 to 2.8.1. ([\#18952](https://github.com/element-hq/synapse/issues/18952))
* Bump regex from 1.11.2 to 1.11.3. ([\#18981](https://github.com/element-hq/synapse/issues/18981))
* Bump serde from 1.0.224 to 1.0.226. ([\#18953](https://github.com/element-hq/synapse/issues/18953))
* Bump serde from 1.0.226 to 1.0.228. ([\#18982](https://github.com/element-hq/synapse/issues/18982))
* Bump setuptools-rust from 1.11.1 to 1.12.0. ([\#18980](https://github.com/element-hq/synapse/issues/18980))
* Bump twine from 6.1.0 to 6.2.0. ([\#18985](https://github.com/element-hq/synapse/issues/18985))
* Bump types-pyyaml from 6.0.12.20250809 to 6.0.12.20250915. ([\#19018](https://github.com/element-hq/synapse/issues/19018))
* Bump types-requests from 2.32.4.20250809 to 2.32.4.20250913. ([\#18951](https://github.com/element-hq/synapse/issues/18951))
* Bump typing-extensions from 4.14.1 to 4.15.0. ([\#18956](https://github.com/element-hq/synapse/issues/18956))
# Synapse 1.139.2 (2025-10-07)
## Bugfixes
- Fix a bug introduced in 1.139.1 where a client could receive an Internal Server Error if they set `device_keys: null` in the request to [`POST /_matrix/client/v3/keys/upload`](https://spec.matrix.org/v1.16/client-server-api/#post_matrixclientv3keysupload). ([\#19023](https://github.com/element-hq/synapse/issues/19023))
# Synapse 1.139.1 (2025-10-07)
## Security Fixes
- Fix [CVE-2025-61672](https://www.cve.org/CVERecord?id=CVE-2025-61672) / [GHSA-fh66-fcv5-jjfr](https://github.com/element-hq/synapse/security/advisories/GHSA-fh66-fcv5-jjfr). Lack of validation for device keys in Synapse before 1.139.1 allows an attacker registered on the victim homeserver to degrade federation functionality, unpredictably breaking outbound federation to other homeservers. ([\#17097](https://github.com/element-hq/synapse/issues/17097))
## Deprecations and Removals
- Drop support for unstable field names from the long-accepted [MSC2732](https://github.com/matrix-org/matrix-spec-proposals/pull/2732) (Olm fallback keys) proposal. This change allows unit tests to pass following the security patch above. ([\#18996](https://github.com/element-hq/synapse/issues/18996))
# Synapse 1.138.4 (2025-10-07)
## Bugfixes
- Fix a bug introduced in 1.138.3 where a client could receive an Internal Server Error if they set `device_keys: null` in the request to [`POST /_matrix/client/v3/keys/upload`](https://spec.matrix.org/v1.16/client-server-api/#post_matrixclientv3keysupload). ([\#19023](https://github.com/element-hq/synapse/issues/19023))
# Synapse 1.138.3 (2025-10-07)
## Security Fixes
- Fix [CVE-2025-61672](https://www.cve.org/CVERecord?id=CVE-2025-61672) / [GHSA-fh66-fcv5-jjfr](https://github.com/element-hq/synapse/security/advisories/GHSA-fh66-fcv5-jjfr). Lack of validation for device keys in Synapse before 1.139.1 allows an attacker registered on the victim homeserver to degrade federation functionality, unpredictably breaking outbound federation to other homeservers. ([\#17097](https://github.com/element-hq/synapse/issues/17097))
## Deprecations and Removals
- Drop support for unstable field names from the long-accepted [MSC2732](https://github.com/matrix-org/matrix-spec-proposals/pull/2732) (Olm fallback keys) proposal. This change allows unit tests to pass following the security patch above. ([\#18996](https://github.com/element-hq/synapse/issues/18996))
# Synapse 1.139.0 (2025-09-30)
### `/register` requests from old application service implementations may break when using MAS
If you are using Matrix Authentication Service (MAS), as of this release any
Application Services that do not set `inhibit_login=true` when calling `POST
/_matrix/client/v3/register` will receive the error
`IO.ELEMENT.MSC4190.M_APPSERVICE_LOGIN_UNSUPPORTED` in response. Please see [the
upgrade
notes](https://element-hq.github.io/synapse/develop/upgrade.html#register-requests-from-old-application-service-implementations-may-break-when-using-mas)
for more information.
No significant changes since 1.139.0rc3.
# Synapse 1.139.0rc3 (2025-09-25)
## Bugfixes

View File

@@ -2,13 +2,13 @@
import itertools
import os
-from typing import Any, Dict
+from typing import Any
from packaging.specifiers import SpecifierSet
from setuptools_rust import Binding, RustExtension
-def build(setup_kwargs: Dict[str, Any]) -> None:
+def build(setup_kwargs: dict[str, Any]) -> None:
original_project_dir = os.path.dirname(os.path.realpath(__file__))
cargo_toml_path = os.path.join(original_project_dir, "rust", "Cargo.toml")

View File

@@ -1 +0,0 @@
Fix room upgrade `room_config` argument and documentation for `user_may_create_room` spam-checker callback.

View File

@@ -1 +0,0 @@
Update OEmbed providers to use 'X' instead of 'Twitter' in URL previews, following a rebrand. Contributed by @HammyHavoc.

View File

@@ -1 +0,0 @@
Fix `server_name` in logging context for multiple Synapse instances in one process.

View File

@@ -1,2 +0,0 @@
Add an Admin API that allows server admins to query and investigate the metadata of local or cached remote media via
the `origin/media_id` identifier found in a [Matrix Content URI](https://spec.matrix.org/v1.14/client-server-api/#matrix-content-mxc-uris).

View File

@@ -1 +0,0 @@
Fix the GitHub Actions workflow that moves issues labeled "X-Needs-Info" to the "Needs info" column on the team's internal triage board.

View File

@@ -1 +0,0 @@
Explain how Deferred callbacks interact with logcontexts.

View File

@@ -1 +0,0 @@
Disconnect background process work from request trace.

View File

@@ -1 +0,0 @@
Update [MSC4284: Policy Servers](https://github.com/matrix-org/matrix-spec-proposals/pull/4284) implementation to support signatures when available.

View File

@@ -1 +0,0 @@
Reduce overall number of calls to `_get_e2e_cross_signing_signatures_for_devices` by increasing the batch size of devices the query is called with, reducing DB load.

View File

@@ -1 +0,0 @@
Update error code used when an appservice tries to masquerade as an unknown device using [MSC4326](https://github.com/matrix-org/matrix-spec-proposals/pull/4326). Contributed by @tulir @ Beeper.

View File

@@ -1 +0,0 @@
Compute a user's last seen timestamp from their devices' last seen timestamps instead of IPs, because the latter are automatically cleared according to `user_ips_max_age`.

View File

@@ -1 +0,0 @@
Fix `no active span when trying to log` tracing error on startup (when OpenTracing is enabled).

View File

@@ -1 +0,0 @@
Fix `run_coroutine_in_background(...)` incorrectly handling logcontext.

View File

@@ -1 +0,0 @@
Update dockerfile metadata to fix broken link; point to documentation website.

View File

@@ -1 +0,0 @@
Note that the code is additionally licensed under the [Element Commercial license](https://github.com/element-hq/synapse/blob/develop/LICENSE-COMMERCIAL) in SPDX expression field configs.

View File

@@ -1 +0,0 @@
Fix logcontext handling in `timeout_deferred` tests.

View File

@@ -0,0 +1,2 @@
Add an [Admin API](https://element-hq.github.io/synapse/latest/usage/administration/admin_api/index.html)
to allow an admin to fetch the space/room hierarchy for a given space.

changelog.d/19046.misc (new file)
View File

@@ -0,0 +1 @@
Use type hinting generics in standard collections, as per PEP 585, added in Python 3.9.

changelog.d/19047.doc (new file)
View File

@@ -0,0 +1 @@
Update the link to the Debian oldstable package for SQLite.

changelog.d/19047.misc (new file)
View File

@@ -0,0 +1 @@
Always treat `RETURNING` as supported by SQL engines, now that the minimum-supported versions of both SQLite and PostgreSQL support it.

View File

@@ -0,0 +1 @@
Remove support for SQLite < 3.37.2.

changelog.d/19073.doc (new file)
View File

@@ -0,0 +1 @@
Point out additional Redis configuration options available in the worker docs. Contributed by @servisbryce.

changelog.d/19079.bugfix (new file)
View File

@@ -0,0 +1 @@
Fix the `oidc_session_no_samesite` cookie to have the `Secure` attribute, so the only difference between it and the paired `oidc_session` cookie is the configuration of the `SameSite` attribute, as described in the comments / cookie names. Contributed by @kieranlane.

changelog.d/19080.misc (new file)
View File

@@ -0,0 +1 @@
Update deprecated code in the release script to prevent a warning message from being printed.

changelog.d/19081.misc (new file)
View File

@@ -0,0 +1 @@
Update the deprecated poetry development dependencies group name in `pyproject.toml`.

changelog.d/19085.misc (new file)
View File

@@ -0,0 +1 @@
Remove `pp38*` skip selector from cibuildwheel to silence warning.

changelog.d/19088.misc (new file)
View File

@@ -0,0 +1 @@
Don't immediately exit the release script if the checkout is dirty. Instead, allow the user to clear the dirty changes and retry.

changelog.d/19089.misc (new file)
View File

@@ -0,0 +1 @@
Update the release script's generated announcement text to include a title and extra text for RCs.

changelog.d/19092.misc (new file)
View File

@@ -0,0 +1 @@
Fix lints on main branch.

View File

@@ -24,7 +24,6 @@ import datetime
import html
import json
import urllib.request
-from typing import List
import pydot
@@ -33,7 +32,7 @@ def make_name(pdu_id: str, origin: str) -> str:
return f"{pdu_id}@{origin}"
-def make_graph(pdus: List[dict], filename_prefix: str) -> None:
+def make_graph(pdus: list[dict], filename_prefix: str) -> None:
"""
Generate a dot and SVG file for a graph of events in the room based on the
topological ordering by querying a homeserver.
@@ -127,7 +126,7 @@ def make_graph(pdus: List[dict], filename_prefix: str) -> None:
graph.write_svg("%s.svg" % filename_prefix, prog="dot")
-def get_pdus(host: str, room: str) -> List[dict]:
+def get_pdus(host: str, room: str) -> list[dict]:
transaction = json.loads(
urllib.request.urlopen(
f"http://{host}/_matrix/federation/v1/context/{room}/"

debian/changelog (vendored)
View File

@@ -1,4 +1,58 @@
-matrix-synapse-py3 (1.139.0~rc3+nmu1) UNRELEASED; urgency=medium
+matrix-synapse-py3 (1.141.0~rc1) stable; urgency=medium
* New Synapse release 1.141.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 21 Oct 2025 11:01:44 +0100
matrix-synapse-py3 (1.140.0) stable; urgency=medium
* New Synapse release 1.140.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 14 Oct 2025 15:22:36 +0100
matrix-synapse-py3 (1.140.0~rc1) stable; urgency=medium
* New Synapse release 1.140.0rc1.
-- Synapse Packaging team <packages@matrix.org> Fri, 10 Oct 2025 10:56:51 +0100
matrix-synapse-py3 (1.139.2) stable; urgency=medium
* New Synapse release 1.139.2.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Oct 2025 16:29:47 +0100
matrix-synapse-py3 (1.139.1) stable; urgency=medium
* New Synapse release 1.139.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Oct 2025 11:46:51 +0100
matrix-synapse-py3 (1.138.4) stable; urgency=medium
* New Synapse release 1.138.4.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Oct 2025 16:28:38 +0100
matrix-synapse-py3 (1.138.3) stable; urgency=medium
* New Synapse release 1.138.3.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Oct 2025 12:54:18 +0100
matrix-synapse-py3 (1.139.0) stable; urgency=medium
* New Synapse release 1.139.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 30 Sep 2025 11:58:55 +0100
matrix-synapse-py3 (1.139.0~rc3) stable; urgency=medium
* New Synapse release 1.139.0rc3.
-- Synapse Packaging team <packages@matrix.org> Thu, 25 Sep 2025 12:13:23 +0100
matrix-synapse-py3 (1.138.2) stable; urgency=medium
* The licensing specifier has been updated to add an optional
`LicenseRef-Element-Commercial` license. The code was already licensed in
@@ -6,11 +60,11 @@ matrix-synapse-py3 (1.139.0~rc3+nmu1) UNRELEASED; urgency=medium
-- Synapse Packaging team <packages@matrix.org> Thu, 25 Sep 2025 12:17:17 +0100
matrix-synapse-py3 (1.139.0~rc3) stable; urgency=medium
matrix-synapse-py3 (1.138.1) stable; urgency=medium
* New Synapse release 1.139.0rc3.
* New Synapse release 1.138.1.
-- Synapse Packaging team <packages@matrix.org> Thu, 25 Sep 2025 12:13:23 +0100
-- Synapse Packaging team <packages@matrix.org> Wed, 24 Sep 2025 11:32:38 +0100
matrix-synapse-py3 (1.139.0~rc2) stable; urgency=medium
@@ -24,24 +78,6 @@ matrix-synapse-py3 (1.139.0~rc1) stable; urgency=medium
-- Synapse Packaging team <packages@matrix.org> Tue, 23 Sep 2025 13:24:50 +0100
matrix-synapse-py3 (1.138.2) stable; urgency=medium
* New Synapse release 1.138.2.
-- Synapse Packaging team <packages@matrix.org> Wed, 24 Sep 2025 12:26:16 +0100
matrix-synapse-py3 (1.138.1) stable; urgency=medium
* New Synapse release 1.138.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 24 Sep 2025 11:32:38 +0100
matrix-synapse-py3 (1.138.0) stable; urgency=medium
* New Synapse release 1.138.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 09 Sep 2025 11:21:25 +0100
matrix-synapse-py3 (1.138.0~rc1) stable; urgency=medium
* New synapse release 1.138.0rc1.

View File

@@ -20,8 +20,8 @@
# `poetry export | pip install -r /dev/stdin`, but beware: we have experienced bugs in
# in `poetry export` in the past.
-ARG DEBIAN_VERSION=bookworm
-ARG PYTHON_VERSION=3.12
+ARG DEBIAN_VERSION=trixie
+ARG PYTHON_VERSION=3.13
ARG POETRY_VERSION=2.1.1
###
@@ -142,10 +142,10 @@ RUN \
libwebp7 \
xmlsec1 \
libjemalloc2 \
libicu \
| grep '^\w' > /tmp/pkg-list && \
for arch in arm64 amd64; do \
mkdir -p /tmp/debs-${arch} && \
chown _apt:root /tmp/debs-${arch} && \
cd /tmp/debs-${arch} && \
apt-get -o APT::Architecture="${arch}" download $(cat /tmp/pkg-list); \
done
@@ -176,11 +176,6 @@ LABEL org.opencontainers.image.documentation='https://element-hq.github.io/synap
LABEL org.opencontainers.image.source='https://github.com/element-hq/synapse.git'
LABEL org.opencontainers.image.licenses='AGPL-3.0-or-later OR LicenseRef-Element-Commercial'
# On the runtime image, /lib is a symlink to /usr/lib, so we need to copy the
# libraries to the right place, else the `COPY` won't work.
# On amd64, we'll also have a /lib64 folder with ld-linux-x86-64.so.2, which is
# already present in the runtime image.
COPY --from=runtime-deps /install-${TARGETARCH}/lib /usr/lib
COPY --from=runtime-deps /install-${TARGETARCH}/etc /etc
COPY --from=runtime-deps /install-${TARGETARCH}/usr /usr
COPY --from=runtime-deps /install-${TARGETARCH}/var /var

View File

@@ -1,9 +1,10 @@
-# syntax=docker/dockerfile:1
+# syntax=docker/dockerfile:1-labs
ARG SYNAPSE_VERSION=latest
ARG FROM=matrixdotorg/synapse:$SYNAPSE_VERSION
-ARG DEBIAN_VERSION=bookworm
-ARG PYTHON_VERSION=3.12
+ARG DEBIAN_VERSION=trixie
+ARG PYTHON_VERSION=3.13
ARG REDIS_VERSION=7.2
# first of all, we create a base image with dependencies which we can copy into the
# target image. For repeated rebuilds, this is much faster than apt installing
@@ -11,15 +12,27 @@ ARG PYTHON_VERSION=3.12
FROM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-${DEBIAN_VERSION} AS deps_base
ARG DEBIAN_VERSION
ARG REDIS_VERSION
# Tell apt to keep downloaded package files, as we're using cache mounts.
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
# The upstream redis-server deb has fewer dynamic libraries than Debian's package which makes it easier to copy later on
RUN \
curl -fsSL https://packages.redis.io/gpg | gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg && \
chmod 644 /usr/share/keyrings/redis-archive-keyring.gpg && \
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb ${DEBIAN_VERSION} main" | tee /etc/apt/sources.list.d/redis.list
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && \
DEBIAN_FRONTEND=noninteractive apt-get install -yqq --no-install-recommends \
-nginx-light
+nginx-light \
+redis-server="6:${REDIS_VERSION}.*" redis-tools="6:${REDIS_VERSION}.*" \
+# libicu is required by postgres, see `docker/complement/Dockerfile`
+libicu76
RUN \
# remove default page
@@ -35,19 +48,12 @@ FROM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-${DEBIAN_VERSION} AS deps_base
RUN mkdir -p /uv/etc/supervisor/conf.d
# Similarly, a base to copy the redis server from.
#
# The redis docker image has fewer dynamic libraries than the debian package,
# which makes it much easier to copy (but we need to make sure we use an image
# based on the same debian version as the synapse image, to make sure we get
# the expected version of libc.
FROM docker.io/library/redis:7-${DEBIAN_VERSION} AS redis_base
# now build the final image, based on the the regular Synapse docker image
FROM $FROM
# Copy over dependencies
COPY --from=redis_base /usr/local/bin/redis-server /usr/local/bin
COPY --from=deps_base --parents /usr/lib/*-linux-gnu/libicu* /
COPY --from=deps_base /usr/bin/redis-server /usr/local/bin
COPY --from=deps_base /uv /
COPY --from=deps_base /usr/sbin/nginx /usr/sbin
COPY --from=deps_base /usr/share/nginx /usr/share/nginx

View File

@@ -9,7 +9,7 @@
ARG SYNAPSE_VERSION=latest
# This is an intermediate image, to be built locally (not pulled from a registry).
ARG FROM=matrixdotorg/synapse-workers:$SYNAPSE_VERSION
-ARG DEBIAN_VERSION=bookworm
+ARG DEBIAN_VERSION=trixie
FROM docker.io/library/postgres:13-${DEBIAN_VERSION} AS postgres_base
@@ -18,10 +18,10 @@ FROM $FROM
# since for repeated rebuilds, this is much faster than apt installing
# postgres each time.
-# This trick only works because (a) the Synapse image happens to have all the
-# shared libraries that postgres wants, (b) we use a postgres image based on
-# the same debian version as Synapse's docker image (so the versions of the
-# shared libraries match).
+# This trick only works because we use a postgres image based on the same
+# debian version as Synapse's docker image (so the versions of the shared
+# libraries match). Any missing libraries need to be added to either the
+# Synapse image or docker/Dockerfile-workers.
RUN adduser --system --uid 999 postgres --home /var/lib/postgresql
COPY --from=postgres_base /usr/lib/postgresql /usr/lib/postgresql
COPY --from=postgres_base /usr/share/postgresql /usr/share/postgresql

View File

@@ -65,13 +65,10 @@ from itertools import chain
from pathlib import Path
from typing import (
Any,
Dict,
List,
Mapping,
MutableMapping,
NoReturn,
Optional,
Set,
SupportsIndex,
)
@@ -96,7 +93,7 @@ WORKER_PLACEHOLDER_NAME = "placeholder_name"
# Watching /_matrix/media and related needs a "media" listener
# Stream Writers require "client" and "replication" listeners because they
# have to attach by instance_map to the master process and have client endpoints.
WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
WORKERS_CONFIG: dict[str, dict[str, Any]] = {
"pusher": {
"app": "synapse.app.generic_worker",
"listener_resources": [],
@@ -408,7 +405,7 @@ def convert(src: str, dst: str, **template_vars: object) -> None:
def add_worker_roles_to_shared_config(
shared_config: dict,
worker_types_set: Set[str],
worker_types_set: set[str],
worker_name: str,
worker_port: int,
) -> None:
@@ -471,9 +468,9 @@ def add_worker_roles_to_shared_config(
def merge_worker_template_configs(
existing_dict: Optional[Dict[str, Any]],
to_be_merged_dict: Dict[str, Any],
) -> Dict[str, Any]:
existing_dict: Optional[dict[str, Any]],
to_be_merged_dict: dict[str, Any],
) -> dict[str, Any]:
"""When given an existing dict of worker template configuration consisting with both
dicts and lists, merge new template data from WORKERS_CONFIG(or create) and
return new dict.
@@ -484,7 +481,7 @@ def merge_worker_template_configs(
existing_dict.
Returns: The newly merged together dict values.
"""
new_dict: Dict[str, Any] = {}
new_dict: dict[str, Any] = {}
if not existing_dict:
# It doesn't exist yet, just use the new dict (but take a copy, not a reference)
new_dict = to_be_merged_dict.copy()
@@ -509,8 +506,8 @@ def merge_worker_template_configs(
def insert_worker_name_for_worker_config(
existing_dict: Dict[str, Any], worker_name: str
) -> Dict[str, Any]:
existing_dict: dict[str, Any], worker_name: str
) -> dict[str, Any]:
"""Insert a given worker name into the worker's configuration dict.
Args:
@@ -526,7 +523,7 @@ def insert_worker_name_for_worker_config(
return dict_to_edit
def apply_requested_multiplier_for_worker(worker_types: List[str]) -> List[str]:
def apply_requested_multiplier_for_worker(worker_types: list[str]) -> list[str]:
"""
Apply a multiplier (if found) by returning a new expanded list, with some basic
error checking.
@@ -587,7 +584,7 @@ def is_sharding_allowed_for_worker_type(worker_type: str) -> bool:
def split_and_strip_string(
given_string: str, split_char: str, max_split: SupportsIndex = -1
) -> List[str]:
) -> list[str]:
"""
Helper to split a string on split_char and strip whitespace from each end of each
element.
@@ -616,8 +613,8 @@ def generate_base_homeserver_config() -> None:
def parse_worker_types(
requested_worker_types: List[str],
) -> Dict[str, Set[str]]:
requested_worker_types: list[str],
) -> dict[str, set[str]]:
"""Read the desired list of requested workers and prepare the data for use in
generating worker config files while also checking for potential gotchas.
@@ -633,14 +630,14 @@ def parse_worker_types(
# A counter of worker_base_name -> int. Used for determining the name for a given
# worker when generating its config file, as each worker's name is just
# worker_base_name followed by instance number
worker_base_name_counter: Dict[str, int] = defaultdict(int)
worker_base_name_counter: dict[str, int] = defaultdict(int)
# Similar to above, but more finely grained. This is used to ensure we don't have
# more than a single worker for cases where multiples would be bad (e.g. presence).
worker_type_shard_counter: Dict[str, int] = defaultdict(int)
worker_type_shard_counter: dict[str, int] = defaultdict(int)
# The final result of all this processing
dict_to_return: Dict[str, Set[str]] = {}
dict_to_return: dict[str, set[str]] = {}
# Handle any multipliers requested for given workers.
multiple_processed_worker_types = apply_requested_multiplier_for_worker(
@@ -684,7 +681,7 @@ def parse_worker_types(
# Split the worker_type_string on "+", remove whitespace from ends then make
# the list a set so it's deduplicated.
worker_types_set: Set[str] = set(
worker_types_set: set[str] = set(
split_and_strip_string(worker_type_string, "+")
)
@@ -743,7 +740,7 @@ def generate_worker_files(
environ: Mapping[str, str],
config_path: str,
data_dir: str,
requested_worker_types: Dict[str, Set[str]],
requested_worker_types: dict[str, set[str]],
) -> None:
"""Read the desired workers(if any) that is passed in and generate shared
homeserver, nginx and supervisord configs.
@@ -764,7 +761,7 @@ def generate_worker_files(
# First read the original config file and extract the listeners block. Then we'll
# add another listener for replication. Later we'll write out the result to the
# shared config file.
listeners: List[Any]
listeners: list[Any]
if using_unix_sockets:
listeners = [
{
@@ -792,12 +789,12 @@ def generate_worker_files(
# base shared worker jinja2 template. This config file will be passed to all
# workers, including Synapse's main process. It is intended mainly for disabling
# functionality when certain workers are spun up, and adding a replication listener.
shared_config: Dict[str, Any] = {"listeners": listeners}
shared_config: dict[str, Any] = {"listeners": listeners}
# List of dicts that describe workers.
# We pass this to the Supervisor template later to generate the appropriate
# program blocks.
worker_descriptors: List[Dict[str, Any]] = []
worker_descriptors: list[dict[str, Any]] = []
# Upstreams for load-balancing purposes. This dict takes the form of the worker
# type to the ports of each worker. For example:
@@ -805,14 +802,14 @@ def generate_worker_files(
# worker_type: {1234, 1235, ...}}
# }
# and will be used to construct 'upstream' nginx directives.
nginx_upstreams: Dict[str, Set[int]] = {}
nginx_upstreams: dict[str, set[int]] = {}
# A map of: {"endpoint": "upstream"}, where "upstream" is a str representing what
# will be placed after the proxy_pass directive. The main benefit to representing
# this data as a dict over a str is that we can easily deduplicate endpoints
# across multiple instances of the same worker. The final rendering will be combined
# with nginx_upstreams and placed in /etc/nginx/conf.d.
nginx_locations: Dict[str, str] = {}
nginx_locations: dict[str, str] = {}
# Create the worker configuration directory if it doesn't already exist
os.makedirs("/conf/workers", exist_ok=True)
@@ -846,7 +843,7 @@ def generate_worker_files(
# yaml config file
for worker_name, worker_types_set in requested_worker_types.items():
# The collected and processed data will live here.
worker_config: Dict[str, Any] = {}
worker_config: dict[str, Any] = {}
# Merge all worker config templates for this worker into a single config
for worker_type in worker_types_set:
@@ -1029,7 +1026,7 @@ def generate_worker_log_config(
Returns: the path to the generated file
"""
# Check whether we should write worker logs to disk, in addition to the console
extra_log_template_args: Dict[str, Optional[str]] = {}
extra_log_template_args: dict[str, Optional[str]] = {}
if environ.get("SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK"):
extra_log_template_args["LOG_FILE_PATH"] = f"{data_dir}/logs/{worker_name}.log"
@@ -1053,7 +1050,7 @@ def generate_worker_log_config(
return log_config_filepath
def main(args: List[str], environ: MutableMapping[str, str]) -> None:
def main(args: list[str], environ: MutableMapping[str, str]) -> None:
parser = ArgumentParser()
parser.add_argument(
"--generate-only",
@@ -1087,7 +1084,7 @@ def main(args: List[str], environ: MutableMapping[str, str]) -> None:
if not worker_types_env:
# No workers, just the main process
worker_types = []
requested_worker_types: Dict[str, Any] = {}
requested_worker_types: dict[str, Any] = {}
else:
# Split type names by comma, ignoring whitespace.
worker_types = split_and_strip_string(worker_types_env, ",")

View File

@@ -8,9 +8,9 @@ ARG PYTHON_VERSION=3.9
###
### Stage 0: generate requirements.txt
###
# We hardcode the use of Debian bookworm here because this could change upstream
# and other Dockerfiles used for testing are expecting bookworm.
FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm
# We hardcode the use of Debian trixie here because this could change upstream
# and other Dockerfiles used for testing are expecting trixie.
FROM docker.io/library/python:${PYTHON_VERSION}-slim-trixie
# Install Rust and other dependencies (stolen from normal Dockerfile)
# install the OS build deps

View File

@@ -6,7 +6,7 @@ import os
import platform
import subprocess
import sys
from typing import Any, Dict, List, Mapping, MutableMapping, NoReturn, Optional
from typing import Any, Mapping, MutableMapping, NoReturn, Optional
import jinja2
@@ -69,7 +69,7 @@ def generate_config_from_template(
)
# populate some params from data files (if they exist, else create new ones)
environ: Dict[str, Any] = dict(os_environ)
environ: dict[str, Any] = dict(os_environ)
secrets = {
"registration": "SYNAPSE_REGISTRATION_SHARED_SECRET",
"macaroon": "SYNAPSE_MACAROON_SECRET_KEY",
@@ -200,7 +200,7 @@ def run_generate_config(environ: Mapping[str, str], ownership: Optional[str]) ->
subprocess.run(args, check=True)
def main(args: List[str], environ: MutableMapping[str, str]) -> None:
def main(args: list[str], environ: MutableMapping[str, str]) -> None:
mode = args[1] if len(args) > 1 else "run"
# if we were given an explicit user to switch to, do so

View File

@@ -60,6 +60,7 @@
- [Admin API](usage/administration/admin_api/README.md)
- [Account Validity](admin_api/account_validity.md)
- [Background Updates](usage/administration/admin_api/background_updates.md)
- [Fetch Event](admin_api/fetch_event.md)
- [Event Reports](admin_api/event_reports.md)
- [Experimental Features](admin_api/experimental_features.md)
- [Media](admin_api/media_admin_api.md)

View File

@@ -0,0 +1,53 @@
# Fetch Event API
The fetch event API allows server admins to fetch an event regardless of whether they are a
member of the room it originated in.
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api/).
Request:
```http
GET /_synapse/admin/v1/fetch_event/<event_id>
```
The API returns a JSON body like the following:
Response:
```json
{
"event": {
"auth_events": [
"$WhLChbYg6atHuFRP7cUd95naUtc8L0f7fqeizlsUVvc",
"$9Wj8dt02lrNEWweeq-KjRABUYKba0K9DL2liRvsAdtQ",
"$qJxBFxBt8_ODd9b3pgOL_jXP98S_igc1_kizuPSZFi4"
],
"content": {
"body": "Hey now",
"msgtype": "m.text"
},
"depth": 6,
"event_id": "$hJ_kcXbVMcI82JDrbqfUJIHu61tJD86uIFJ_8hNHi7s",
"hashes": {
"sha256": "LiNw8DtrRVf55EgAH8R42Wz7WCJUqGsPt2We6qZO5Rg"
},
"origin_server_ts": 799,
"prev_events": [
"$cnSUrNMnC3Ywh9_W7EquFxYQjC_sT3BAAVzcUVxZq1g"
],
"room_id": "!aIhKToCqgPTBloWMpf:test",
"sender": "@user:test",
"signatures": {
"test": {
"ed25519:a_lPym": "7mqSDwK1k7rnw34Dd8Fahu0rhPW7jPmcWPRtRDoEN9Yuv+BCM2+Rfdpv2MjxNKy3AYDEBwUwYEuaKMBaEMiKAQ"
}
},
"type": "m.room.message",
"unsigned": {
"age_ts": 799
}
}
}
```
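For example, the event above could be fetched with `curl` as a sketch — the homeserver URL and `$TOKEN` are placeholders, and note that the `$` sigil in event IDs must be URL-encoded as `%24`:

```sh
# Placeholder homeserver URL and admin access token; adjust for your deployment.
curl --header "Authorization: Bearer $TOKEN" \
    "https://synapse.example.com/_synapse/admin/v1/fetch_event/%24hJ_kcXbVMcI82JDrbqfUJIHu61tJD86uIFJ_8hNHi7s"
```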

View File

@@ -1115,3 +1115,76 @@ Example response:
]
}
```
# Admin Space Hierarchy Endpoint
This API allows an admin to fetch the space/room hierarchy for a given space,
returning details about that room and any children the room may have, paginating
over the space tree in a depth-first manner to locate child rooms. This is
functionally similar to the [CS Hierarchy](https://spec.matrix.org/v1.16/client-server-api/#get_matrixclientv1roomsroomidhierarchy) endpoint but does not check for
room membership when returning room summaries.
The endpoint does not query other servers over federation about remote rooms
that the server has not joined. This is a deliberate trade-off: it can leave
holes in the hierarchy that could otherwise sometimes be filled in, but it
significantly improves the endpoint's response time, and the admin endpoint is
designed for managing rooms local to the homeserver anyway.
**Parameters**
The following query parameters are available:
* `from` - An optional pagination token, provided when there are more rooms to
return than the limit.
* `limit` - The maximum number of rooms to return. Must be a non-negative
integer; defaults to `50`.
* `max_depth` - The maximum depth in the tree to explore. Must be a non-negative
integer: `0` corresponds to just the root room, `1` additionally includes the
root room's direct children, and so on. If not provided, the endpoint recurses
into the space tree without limit.
Request:
```http
GET /_synapse/admin/v1/rooms/<room_id>/hierarchy
```
Response:
```json
{
  "rooms": [
    {
      "children_state": [
        {
          "content": {
            "via": ["local_test_server"]
          },
          "origin_server_ts": 1500,
          "sender": "@user:test",
          "state_key": "!QrMkkqBSwYRIFNFCso:test",
          "type": "m.space.child"
        }
      ],
      "name": "space room",
      "guest_can_join": false,
      "join_rule": "public",
      "num_joined_members": 1,
      "room_id": "!sPOpNyMHbZAoAOsOFL:test",
      "room_type": "m.space",
      "world_readable": false
    },
    {
      "children_state": [],
      "guest_can_join": true,
      "join_rule": "invite",
      "name": "nefarious",
      "num_joined_members": 1,
      "room_id": "!QrMkkqBSwYRIFNFCso:test",
      "topic": "being bad",
      "world_readable": false
    }
  ],
  "next_batch": "KUYmRbeSpAoaAIgOKGgyaCEn"
}
```
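As a usage sketch (the homeserver URL and `$TOKEN` are placeholders), pagination works by feeding each response's `next_batch` back in as `from`; note that the `!` sigil in room IDs must be URL-encoded as `%21`:

```sh
# First page: at most 10 rooms, exploring two levels below the root (placeholder values).
curl --header "Authorization: Bearer $TOKEN" \
    "https://synapse.example.com/_synapse/admin/v1/rooms/%21sPOpNyMHbZAoAOsOFL%3Atest/hierarchy?limit=10&max_depth=2"

# Next page: pass the next_batch token from the previous response as `from`.
curl --header "Authorization: Bearer $TOKEN" \
    "https://synapse.example.com/_synapse/admin/v1/rooms/%21sPOpNyMHbZAoAOsOFL%3Atest/hierarchy?limit=10&max_depth=2&from=KUYmRbeSpAoaAIgOKGgyaCEn"
```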

View File

@@ -21,7 +21,7 @@ people building from source should ensure they can fetch recent versions of Rust
(e.g. by using [rustup](https://rustup.rs/)).
The oldest supported version of SQLite is the version
[provided](https://packages.debian.org/bullseye/libsqlite3-0) by
[provided](https://packages.debian.org/oldstable/libsqlite3-0) by
[Debian oldstable](https://wiki.debian.org/DebianOldStable).

View File

@@ -320,7 +320,7 @@ The following command will let you run the integration test with the most common
configuration:
```sh
$ docker run --rm -it -v /path/where/you/have/cloned/the/repository\:/src:ro -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:bullseye
$ docker run --rm -it -v /path/where/you/have/cloned/the/repository\:/src:ro -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:bookworm
```
(Note that the paths must be full paths! You could also write `$(realpath relative/path)` if needed.)

View File

@@ -548,3 +548,19 @@ chain are dropped. Dropping the reference to an awaitable you're
supposed to be awaiting is bad practice, so this doesn't
actually happen too much. Unfortunately, when it does happen, it will
lead to leaked logcontexts which are incredibly hard to track down.
## Debugging logcontext issues
Debugging logcontext issues can be tricky, as a leaked or lost logcontext surfaces
downstream and can point to an unrelated part of the codebase. It's best to enable debug
logging for `synapse.logging.context.debug` (which needs to be explicitly configured) and
work backwards in the logs from the point where the issue is observed to find the root cause.
`log.config.yaml`
```yaml
loggers:
# Unlike other loggers, this one needs to be explicitly configured to see debug logs.
synapse.logging.context.debug:
level: DEBUG
```
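Once enabled, one way to work backwards (a sketch — the log file path, and the assumption that the warning text contains "Expected logging context", are specific to your deployment) is to locate the most recent logcontext complaint and then inspect the lines leading up to it:

```sh
# Hypothetical log file path; find the line number of the latest logcontext
# warning, then read the preceding log lines to trace where the context was lost.
grep -n "Expected logging context" /var/log/synapse/homeserver.log | tail -n 1
```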

View File

@@ -117,6 +117,24 @@ each upgrade are complete before moving on to the next upgrade, to avoid
stacking them up. You can monitor the currently running background updates with
[the Admin API](usage/administration/admin_api/background_updates.html#status).
# Upgrading to v1.141.0
## Docker images now based on Debian `trixie` with Python 3.13
The Docker images are now based on Debian `trixie` and use Python 3.13. If you
are using the Docker images as a base image, you may need to, for example, adjust
the paths at which you mount any additional Python packages.
# Upgrading to v1.140.0
## Users of `synapse-s3-storage-provider` must update the module to `v1.6.0`
Deployments that make use of the
[synapse-s3-storage-provider](https://github.com/matrix-org/synapse-s3-storage-provider/)
module must update it to
[v1.6.0](https://github.com/matrix-org/synapse-s3-storage-provider/releases/tag/v1.6.0),
otherwise users will be unable to upload or download media.
# Upgrading to v1.139.0
## `/register` requests from old application service implementations may break when using MAS

View File

@@ -2006,9 +2006,8 @@ This setting has the following sub-options:
Default configuration:
```yaml
rc_reports:
per_user:
per_second: 1.0
burst_count: 5.0
per_second: 1.0
burst_count: 5.0
```
Example configuration:
@@ -2031,9 +2030,8 @@ This setting has the following sub-options:
Default configuration:
```yaml
rc_room_creation:
per_user:
per_second: 0.016
burst_count: 10.0
per_second: 0.016
burst_count: 10.0
```
Example configuration:
@@ -2575,6 +2573,28 @@ Example configuration:
turn_allow_guests: false
```
---
### `matrix_rtc`
*(object)* Options related to MatrixRTC. Defaults to `{}`.
This setting has the following sub-options:
* `transports` (array): A list of transport types and arguments to use for MatrixRTC connections. Defaults to `[]`.
Options for each entry include:
* `type` (string): The type of transport to use to connect to the selective forwarding unit (SFU).
* `livekit_service_url` (string): The base URL of the LiveKit service. Should only be used with LiveKit-based transports.
Example configuration:
```yaml
matrix_rtc:
transports:
- type: livekit
livekit_service_url: https://matrix-rtc.example.com/livekit/jwt
```
---
## Registration
Registration can be rate-limited using the parameters in the [Ratelimiting](#ratelimiting) section of this manual.

View File

@@ -120,6 +120,9 @@ worker_replication_secret: ""
redis:
enabled: true
# For additional Redis configuration options (TLS, authentication, etc.),
# see the Synapse configuration documentation:
# https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#redis
instance_map:
main:

View File

@@ -69,7 +69,7 @@ warn_unused_ignores = False
;; https://github.com/python/typeshed/tree/master/stubs
;; and for each package `foo` there's a corresponding `types-foo` package on PyPI,
;; which we can pull in as a dev dependency by adding to `pyproject.toml`'s
;; `[tool.poetry.dev-dependencies]` list.
;; `[tool.poetry.group.dev.dependencies]` list.
# https://github.com/lepture/authlib/issues/460
[mypy-authlib.*]

poetry.lock (generated)
View File

@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.2.0 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.2.1 and should not be changed by hand.
[[package]]
name = "annotated-types"
@@ -34,15 +34,15 @@ tests-mypy = ["mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" a
[[package]]
name = "authlib"
version = "1.6.4"
version = "1.6.5"
description = "The ultimate Python library in building OAuth and OpenID Connect servers and clients."
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"all\" or extra == \"jwt\" or extra == \"oidc\""
markers = "extra == \"oidc\" or extra == \"jwt\" or extra == \"all\""
files = [
{file = "authlib-1.6.4-py2.py3-none-any.whl", hash = "sha256:39313d2a2caac3ecf6d8f95fbebdfd30ae6ea6ae6a6db794d976405fdd9aa796"},
{file = "authlib-1.6.4.tar.gz", hash = "sha256:104b0442a43061dc8bc23b133d1d06a2b0a9c2e3e33f34c4338929e816287649"},
{file = "authlib-1.6.5-py2.py3-none-any.whl", hash = "sha256:3e0e0507807f842b02175507bdee8957a1d5707fd4afb17c32fb43fee90b6e3a"},
{file = "authlib-1.6.5.tar.gz", hash = "sha256:6aaf9c79b7cc96c900f0b284061691c5d4e61221640a948fe690b556a6d6d10b"},
]
[package.dependencies]
@@ -447,7 +447,7 @@ description = "XML bomb protection for Python stdlib modules"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
@@ -472,7 +472,7 @@ description = "XPath 1.0/2.0/3.0/3.1 parsers and selectors for ElementTree and l
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "elementpath-4.1.5-py3-none-any.whl", hash = "sha256:2ac1a2fb31eb22bbbf817f8cf6752f844513216263f0e3892c8e79782fe4bb55"},
{file = "elementpath-4.1.5.tar.gz", hash = "sha256:c2d6dc524b29ef751ecfc416b0627668119d8812441c555d7471da41d4bacb8d"},
@@ -523,7 +523,7 @@ description = "Python wrapper for hiredis"
optional = true
python-versions = ">=3.8"
groups = ["main"]
markers = "extra == \"all\" or extra == \"redis\""
markers = "extra == \"redis\" or extra == \"all\""
files = [
{file = "hiredis-3.2.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:add17efcbae46c5a6a13b244ff0b4a8fa079602ceb62290095c941b42e9d5dec"},
{file = "hiredis-3.2.1-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:5fe955cc4f66c57df1ae8e5caf4de2925d43b5efab4e40859662311d1bcc5f54"},
@@ -673,14 +673,14 @@ test = ["coverage[toml]", "pretend", "pytest", "pytest-cov"]
[[package]]
name = "idna"
version = "3.10"
version = "3.11"
description = "Internationalized Domain Names in Applications (IDNA)"
optional = false
python-versions = ">=3.6"
python-versions = ">=3.8"
groups = ["main", "dev"]
files = [
{file = "idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3"},
{file = "idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9"},
{file = "idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea"},
{file = "idna-3.11.tar.gz", hash = "sha256:795dafcc9c04ed0c1fb032c2aa73654d8e8c5023a7df64a53f39190ada629902"},
]
[package.extras]
@@ -688,97 +688,107 @@ all = ["flake8 (>=7.1.1)", "mypy (>=1.11.2)", "pytest (>=8.3.2)", "ruff (>=0.6.2
[[package]]
name = "ijson"
version = "3.4.0"
version = "3.4.0.post0"
description = "Iterative JSON parser with standard Python iterator interfaces"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "ijson-3.4.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e27e50f6dcdee648f704abc5d31b976cd2f90b4642ed447cf03296d138433d09"},
{file = "ijson-3.4.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2a753be681ac930740a4af9c93cfb4edc49a167faed48061ea650dc5b0f406f1"},
{file = "ijson-3.4.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a07c47aed534e0ec198e6a2d4360b259d32ac654af59c015afc517ad7973b7fb"},
{file = "ijson-3.4.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9c55f48181e11c597cd7146fb31edc8058391201ead69f8f40d2ecbb0b3e4fc6"},
{file = "ijson-3.4.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abd5669f96f79d8a2dd5ae81cbd06770a4d42c435fd4a75c74ef28d9913b697d"},
{file = "ijson-3.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3e3ddd46d16b8542c63b1b8af7006c758d4e21cc1b86122c15f8530fae773461"},
{file = "ijson-3.4.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:1504cec7fe04be2bb0cc33b50c9dd3f83f98c0540ad4991d4017373b7853cfe6"},
{file = "ijson-3.4.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:2f2ff456adeb216603e25d7915f10584c1b958b6eafa60038d76d08fc8a5fb06"},
{file = "ijson-3.4.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:0ab00d75d61613a125fbbb524551658b1ad6919a52271ca16563ca5bc2737bb1"},
{file = "ijson-3.4.0-cp310-cp310-win32.whl", hash = "sha256:ada421fd59fe2bfa4cfa64ba39aeba3f0753696cdcd4d50396a85f38b1d12b01"},
{file = "ijson-3.4.0-cp310-cp310-win_amd64.whl", hash = "sha256:8c75e82cec05d00ed3a4af5f4edf08f59d536ed1a86ac7e84044870872d82a33"},
{file = "ijson-3.4.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9e369bf5a173ca51846c243002ad8025d32032532523b06510881ecc8723ee54"},
{file = "ijson-3.4.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:26e7da0a3cd2a56a1fde1b34231867693f21c528b683856f6691e95f9f39caec"},
{file = "ijson-3.4.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1c28c7f604729be22aa453e604e9617b665fa0c24cd25f9f47a970e8130c571a"},
{file = "ijson-3.4.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0bed8bcb84d3468940f97869da323ba09ae3e6b950df11dea9b62e2b231ca1e3"},
{file = "ijson-3.4.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:296bc824f4088f2af814aaf973b0435bc887ce3d9f517b1577cc4e7d1afb1cb7"},
{file = "ijson-3.4.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8145f8f40617b6a8aa24e28559d0adc8b889e56a203725226a8a60fa3501073f"},
{file = "ijson-3.4.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:b674a97bd503ea21bc85103e06b6493b1b2a12da3372950f53e1c664566a33a4"},
{file = "ijson-3.4.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:8bc731cf1c3282b021d3407a601a5a327613da9ad3c4cecb1123232623ae1826"},
{file = "ijson-3.4.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:42ace5e940e0cf58c9de72f688d6829ddd815096d07927ee7e77df2648006365"},
{file = "ijson-3.4.0-cp311-cp311-win32.whl", hash = "sha256:5be39a0df4cd3f02b304382ea8885391900ac62e95888af47525a287c50005e9"},
{file = "ijson-3.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:0b1be1781792291e70d2e177acf564ec672a7907ba74f313583bdf39fe81f9b7"},
{file = "ijson-3.4.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:956b148f88259a80a9027ffbe2d91705fae0c004fbfba3e5a24028fbe72311a9"},
{file = "ijson-3.4.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:06b89960f5c721106394c7fba5760b3f67c515b8eb7d80f612388f5eca2f4621"},
{file = "ijson-3.4.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:9a0bb591cf250dd7e9dfab69d634745a7f3272d31cfe879f9156e0a081fd97ee"},
{file = "ijson-3.4.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:72e92de999977f4c6b660ffcf2b8d59604ccd531edcbfde05b642baf283e0de8"},
{file = "ijson-3.4.0-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9e9602157a5b869d44b6896e64f502c712a312fcde044c2e586fccb85d3e316e"},
{file = "ijson-3.4.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b1e83660edb931a425b7ff662eb49db1f10d30ca6d4d350e5630edbed098bc01"},
{file = "ijson-3.4.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:49bf8eac1c7b7913073865a859c215488461f7591b4fa6a33c14b51cb73659d0"},
{file = "ijson-3.4.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:160b09273cb42019f1811469508b0a057d19f26434d44752bde6f281da6d3f32"},
{file = "ijson-3.4.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:2019ff4e6f354aa00c76c8591bd450899111c61f2354ad55cc127e2ce2492c44"},
{file = "ijson-3.4.0-cp312-cp312-win32.whl", hash = "sha256:931c007bf6bb8330705429989b2deed6838c22b63358a330bf362b6e458ba0bf"},
{file = "ijson-3.4.0-cp312-cp312-win_amd64.whl", hash = "sha256:71523f2b64cb856a820223e94d23e88369f193017ecc789bb4de198cc9d349eb"},
{file = "ijson-3.4.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:e8d96f88d75196a61c9d9443de2b72c2d4a7ba9456ff117b57ae3bba23a54256"},
{file = "ijson-3.4.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:c45906ce2c1d3b62f15645476fc3a6ca279549127f01662a39ca5ed334a00cf9"},
{file = "ijson-3.4.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:4ab4bc2119b35c4363ea49f29563612237cae9413d2fbe54b223be098b97bc9e"},
{file = "ijson-3.4.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:97b0a9b5a15e61dfb1f14921ea4e0dba39f3a650df6d8f444ddbc2b19b479ff1"},
{file = "ijson-3.4.0-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e3047bb994dabedf11de11076ed1147a307924b6e5e2df6784fb2599c4ad8c60"},
{file = "ijson-3.4.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:68c83161b052e9f5dc8191acbc862bb1e63f8a35344cb5cd0db1afd3afd487a6"},
{file = "ijson-3.4.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:1eebd9b6c20eb1dffde0ae1f0fbb4aeacec2eb7b89adb5c7c0449fc9fd742760"},
{file = "ijson-3.4.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:13fb6d5c35192c541421f3ee81239d91fc15a8d8f26c869250f941f4b346a86c"},
{file = "ijson-3.4.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:28b7196ff7b37c4897c547a28fa4876919696739fc91c1f347651c9736877c69"},
{file = "ijson-3.4.0-cp313-cp313-win32.whl", hash = "sha256:3c2691d2da42629522140f77b99587d6f5010440d58d36616f33bc7bdc830cc3"},
{file = "ijson-3.4.0-cp313-cp313-win_amd64.whl", hash = "sha256:c4554718c275a044c47eb3874f78f2c939f300215d9031e785a6711cc51b83fc"},
{file = "ijson-3.4.0-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:915a65e3f3c0eee2ea937bc62aaedb6c14cc1e8f0bb9f3f4fb5a9e2bbfa4b480"},
{file = "ijson-3.4.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:afbe9748707684b6c5adc295c4fdcf27765b300aec4d484e14a13dca4e5c0afa"},
{file = "ijson-3.4.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:d823f8f321b4d8d5fa020d0a84f089fec5d52b7c0762430476d9f8bf95bbc1a9"},
{file = "ijson-3.4.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b8a0a2c54f3becf76881188beefd98b484b1d3bd005769a740d5b433b089fa23"},
{file = "ijson-3.4.0-cp313-cp313t-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ced19a83ab09afa16257a0b15bc1aa888dbc555cb754be09d375c7f8d41051f2"},
{file = "ijson-3.4.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8100f9885eff1f38d35cef80ef759a1bbf5fc946349afa681bd7d0e681b7f1a0"},
{file = "ijson-3.4.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:d7bcc3f7f21b0f703031ecd15209b1284ea51b2a329d66074b5261de3916c1eb"},
{file = "ijson-3.4.0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:2dcb190227b09dd171bdcbfe4720fddd574933c66314818dfb3960c8a6246a77"},
{file = "ijson-3.4.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:eda4cfb1d49c6073a901735aaa62e39cb7ab47f3ad7bb184862562f776f1fa8a"},
{file = "ijson-3.4.0-cp313-cp313t-win32.whl", hash = "sha256:0772638efa1f3b72b51736833404f1cbd2f5beeb9c1a3d392e7d385b9160cba7"},
{file = "ijson-3.4.0-cp313-cp313t-win_amd64.whl", hash = "sha256:3d8a0d67f36e4fb97c61a724456ef0791504b16ce6f74917a31c2e92309bbeb9"},
{file = "ijson-3.4.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8a990401dc7350c1739f42187823e68d2ef6964b55040c6e9f3a29461f9929e2"},
{file = "ijson-3.4.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:80f50e0f5da4cd6b65e2d8ff38cb61b26559608a05dd3a3f9cfa6f19848e6f22"},
{file = "ijson-3.4.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:2d9ca52f5650d820a2e7aa672dea1c560f609e165337e5b3ed7cf56d696bf309"},
{file = "ijson-3.4.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:940c8c5fd20fb89b56dde9194a4f1c7b779149f1ab26af6d8dc1da51a95d26dd"},
{file = "ijson-3.4.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:41dbb525666017ad856ac9b4f0f4b87d3e56b7dfde680d5f6d123556b22e2172"},
{file = "ijson-3.4.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a9f84f5e2eea5c2d271c97221c382db005534294d1175ddd046a12369617c41c"},
{file = "ijson-3.4.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:c0cd126c11835839bba8ac0baaba568f67d701fc4f717791cf37b10b74a2ebd7"},
{file = "ijson-3.4.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:f9a9d3bbc6d91c24a2524a189d2aca703cb5f7e8eb34ad0aff3c91702404a983"},
{file = "ijson-3.4.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:56679ee133470d0f1f598a8ad109d760fcfebeef4819531e29335aefb7e4cb1a"},
{file = "ijson-3.4.0-cp39-cp39-win32.whl", hash = "sha256:583c15ded42ba80104fa1d0fa0dfdd89bb47922f3bb893a931bb843aeb55a3f3"},
{file = "ijson-3.4.0-cp39-cp39-win_amd64.whl", hash = "sha256:4563e603e56f4451572d96b47311dffef5b933d825f3417881d4d3630c6edac2"},
{file = "ijson-3.4.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:54e989c35dba9cf163d532c14bcf0c260897d5f465643f0cd1fba9c908bed7ef"},
{file = "ijson-3.4.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:494eeb8e87afef22fbb969a4cb81ac2c535f30406f334fb6136e9117b0bb5380"},
{file = "ijson-3.4.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:81603de95de1688958af65cd2294881a4790edae7de540b70c65c8253c5dc44a"},
{file = "ijson-3.4.0-pp310-pypy310_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8524be12c1773e1be466034cc49c1ecbe3d5b47bb86217bd2a57f73f970a6c19"},
{file = "ijson-3.4.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:17994696ec895d05e0cfa21b11c68c920c82634b4a3d8b8a1455d6fe9fdee8f7"},
{file = "ijson-3.4.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:0b67727aaee55d43b2e82b6a866c3cbcb2b66a5e9894212190cbd8773d0d9857"},
{file = "ijson-3.4.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:cdc8c5ca0eec789ed99db29c68012dda05027af0860bb360afd28d825238d69d"},
{file = "ijson-3.4.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:8e6b44b6ec45d5b1a0ee9d97e0e65ab7f62258727004cbbe202bf5f198bc21f7"},
{file = "ijson-3.4.0-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b51e239e4cb537929796e840d349fc731fdc0d58b1a0683ce5465ad725321e0f"},
{file = "ijson-3.4.0-pp311-pypy311_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed05d43ec02be8ddb1ab59579761f6656b25d241a77fd74f4f0f7ec09074318a"},
{file = "ijson-3.4.0-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cfeca1aaa59d93fd0a3718cbe5f7ef0effff85cf837e0bceb71831a47f39cc14"},
{file = "ijson-3.4.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:7ca72ca12e9a1dd4252c97d952be34282907f263f7e28fcdff3a01b83981e837"},
{file = "ijson-3.4.0-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0f79b2cd52bd220fff83b3ee4ef89b54fd897f57cc8564a6d8ab7ac669de3930"},
{file = "ijson-3.4.0-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:d16eed737610ad5ad8989b5864fbe09c64133129734e840c29085bb0d497fb03"},
{file = "ijson-3.4.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6b3aac1d7a27e1e3bdec5bd0689afe55c34aa499baa06a80852eda31f1ffa6dc"},
{file = "ijson-3.4.0-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:784ae654aa9851851e87f323e9429b20b58a5399f83e6a7e348e080f2892081f"},
{file = "ijson-3.4.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d05bd8fa6a8adefb32bbf7b993d2a2f4507db08453dd1a444c281413a6d9685"},
{file = "ijson-3.4.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:b5a05fd935cc28786b88c16976313086cd96414c6a3eb0a3822c47ab48b1793e"},
{file = "ijson-3.4.0.tar.gz", hash = "sha256:5f74dcbad9d592c428d3ca3957f7115a42689ee7ee941458860900236ae9bb13"},
{file = "ijson-3.4.0.post0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:8f904a405b58a04b6ef0425f1babbc5c65feb66b0a4cc7f214d4ad7de106f77d"},
{file = "ijson-3.4.0.post0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:a07dcc1a8a1ddd76131a7c7528cbd12951c2e34eb3c3d63697b905069a2d65b1"},
{file = "ijson-3.4.0.post0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ab3be841b8c430c1883b8c0775eb551f21b5500c102c7ee828afa35ddd701bdd"},
{file = "ijson-3.4.0.post0-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:43059ae0d657b11c5ddb11d149bc400c44f9e514fb8663057e9b2ea4d8d44c1f"},
{file = "ijson-3.4.0.post0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0d3e82963096579d1385c06b2559570d7191e225664b7fa049617da838e1a4a4"},
{file = "ijson-3.4.0.post0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:461ce4e87a21a261b60c0a68a2ad17c7dd214f0b90a0bec7e559a66b6ae3bd7e"},
{file = "ijson-3.4.0.post0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:890cf6610c9554efcb9765a93e368efeb5bb6135f59ce0828d92eaefff07fde5"},
{file = "ijson-3.4.0.post0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:6793c29a5728e7751a7df01be58ba7da9b9690c12bf79d32094c70a908fa02b9"},
{file = "ijson-3.4.0.post0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:a56b6674d7feec0401c91f86c376f4e3d8ff8129128a8ad21ca43ec0b1242f79"},
{file = "ijson-3.4.0.post0-cp310-cp310-win32.whl", hash = "sha256:01767fcbd75a5fa5a626069787b41f04681216b798510d5f63bcf66884386368"},
{file = "ijson-3.4.0.post0-cp310-cp310-win_amd64.whl", hash = "sha256:09127c06e5dec753feb9e4b8c5f6a23603d1cd672d098159a17e53a73b898eec"},
{file = "ijson-3.4.0.post0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:0b473112e72c0c506da425da3278367b6680f340ecc093084693a1e819d28435"},
{file = "ijson-3.4.0.post0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:043f9b7cf9cc744263a78175e769947733710d2412d25180df44b1086b23ebd5"},
{file = "ijson-3.4.0.post0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b55e49045f4c8031f3673f56662fd828dc9e8d65bd3b03a9420dda0d370e64ba"},
{file = "ijson-3.4.0.post0-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:11f13b73194ea2a5a8b4a2863f25b0b4624311f10db3a75747b510c4958179b0"},
{file = "ijson-3.4.0.post0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:659acb2843433e080c271ecedf7d19c71adde1ee5274fc7faa2fec0a793f9f1c"},
{file = "ijson-3.4.0.post0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:deda4cfcaafa72ca3fa845350045b1d0fef9364ec9f413241bb46988afbe6ee6"},
{file = "ijson-3.4.0.post0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:47352563e8c594360bacee2e0753e97025f0861234722d02faace62b1b6d2b2a"},
{file = "ijson-3.4.0.post0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:5a48b9486242d1295abe7fd0fbb6308867da5ca3f69b55c77922a93c2b6847aa"},
{file = "ijson-3.4.0.post0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:9c0886234d1fae15cf4581a430bdba03d79251c1ab3b07e30aa31b13ef28d01c"},
{file = "ijson-3.4.0.post0-cp311-cp311-win32.whl", hash = "sha256:fecae19b5187d92900c73debb3a979b0b3290a53f85df1f8f3c5ba7d1e9fb9cb"},
{file = "ijson-3.4.0.post0-cp311-cp311-win_amd64.whl", hash = "sha256:b39dbf87071f23a23c8077eea2ae7cfeeca9ff9ffec722dfc8b5f352e4dd729c"},
{file = "ijson-3.4.0.post0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:b607a500fca26101be47d2baf7cddb457b819ab60a75ce51ed1092a40da8b2f9"},
{file = "ijson-3.4.0.post0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:4827d9874a6a81625412c59f7ca979a84d01f7f6bfb3c6d4dc4c46d0382b14e0"},
{file = "ijson-3.4.0.post0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:d4d4afec780881edb2a0d2dd40b1cdbe246e630022d5192f266172a0307986a7"},
{file = "ijson-3.4.0.post0-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:432fb60ffb952926f9438e0539011e2dfcd108f8426ee826ccc6173308c3ff2c"},
{file = "ijson-3.4.0.post0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:54a0e3e05d9a0c95ecba73d9579f146cf6d5c5874116c849dba2d39a5f30380e"},
{file = "ijson-3.4.0.post0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:05807edc0bcbd222dc6ea32a2b897f0c81dc7f12c8580148bc82f6d7f5e7ec7b"},
{file = "ijson-3.4.0.post0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a5269af16f715855d9864937f9dd5c348ca1ac49cee6a2c7a1b7091c159e874f"},
{file = "ijson-3.4.0.post0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:b200df83c901f5bfa416d069ac71077aa1608f854a4c50df1b84ced560e9c9ec"},
{file = "ijson-3.4.0.post0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:6458bd8e679cdff459a0a5e555b107c3bbacb1f382da3fe0f40e392871eb518d"},
{file = "ijson-3.4.0.post0-cp312-cp312-win32.whl", hash = "sha256:55f7f656b5986326c978cbb3a9eea9e33f3ef6ecc4535b38f1d452c731da39ab"},
{file = "ijson-3.4.0.post0-cp312-cp312-win_amd64.whl", hash = "sha256:e15833dcf6f6d188fdc624a31cd0520c3ba21b6855dc304bc7c1a8aeca02d4ac"},
{file = "ijson-3.4.0.post0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:114ed248166ac06377e87a245a158d6b98019d2bdd3bb93995718e0bd996154f"},
{file = "ijson-3.4.0.post0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ffb21203736b08fe27cb30df6a4f802fafb9ef7646c5ff7ef79569b63ea76c57"},
{file = "ijson-3.4.0.post0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:07f20ecd748602ac7f18c617637e53bd73ded7f3b22260bba3abe401a7fc284e"},
{file = "ijson-3.4.0.post0-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:27aa193d47ffc6bc4e45453896ad98fb089a367e8283b973f1fe5c0198b60b4e"},
{file = "ijson-3.4.0.post0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ccddb2894eb7af162ba43b9475ac5825d15d568832f82eb8783036e5d2aebd42"},
{file = "ijson-3.4.0.post0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:61ab0b8c5bf707201dc67e02c116f4b6545c4afd7feb2264b989d242d9c4348a"},
{file = "ijson-3.4.0.post0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:254cfb8c124af68327a0e7a49b50bbdacafd87c4690a3d62c96eb01020a685ef"},
{file = "ijson-3.4.0.post0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:04ac9ca54db20f82aeda6379b5f4f6112fdb150d09ebce04affeab98a17b4ed3"},
{file = "ijson-3.4.0.post0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a603d7474bf35e7b3a8e49c8dabfc4751841931301adff3f3318171c4e407f32"},
{file = "ijson-3.4.0.post0-cp313-cp313-win32.whl", hash = "sha256:ec5bb1520cb212ebead7dba048bb9b70552c3440584f83b01b0abc96862e2a09"},
{file = "ijson-3.4.0.post0-cp313-cp313-win_amd64.whl", hash = "sha256:3505dff18bdeb8b171eb28af6df34857e2be80dc01e2e3b624e77215ad58897f"},
{file = "ijson-3.4.0.post0-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:45a0b1c833ed2620eaf8da958f06ac8351c59e5e470e078400d23814670ed708"},
{file = "ijson-3.4.0.post0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:7809ec8c8f40228edaaa089f33e811dff4c5b8509702652870d3f286c9682e27"},
{file = "ijson-3.4.0.post0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:cf4a34c2cfe852aee75c89c05b0a4531c49dc0be27eeed221afd6fbf9c3e149c"},
{file = "ijson-3.4.0.post0-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:a39d5d36067604b26b78de70b8951c90e9272450642661fe531a8f7a6936a7fa"},
{file = "ijson-3.4.0.post0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:83fc738d81c9ea686b452996110b8a6678296c481e0546857db24785bff8da92"},
{file = "ijson-3.4.0.post0-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b2a81aee91633868f5b40280e2523f7c5392e920a5082f47c5e991e516b483f6"},
{file = "ijson-3.4.0.post0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:56169e298c5a2e7196aaa55da78ddc2415876a74fe6304f81b1eb0d3273346f7"},
{file = "ijson-3.4.0.post0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:eeb9540f0b1a575cbb5968166706946458f98c16e7accc6f2fe71efa29864241"},
{file = "ijson-3.4.0.post0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:ba3478ff0bb49d7ba88783f491a99b6e3fa929c930ab062d2bb7837e6a38fe88"},
{file = "ijson-3.4.0.post0-cp313-cp313t-win32.whl", hash = "sha256:b005ce84e82f28b00bf777a464833465dfe3efa43a0a26c77b5ac40723e1a728"},
{file = "ijson-3.4.0.post0-cp313-cp313t-win_amd64.whl", hash = "sha256:fe9c84c9b1c8798afa407be1cea1603401d99bfc7c34497e19f4f5e5ddc9b441"},
{file = "ijson-3.4.0.post0-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:da6a21b88cbf5ecbc53371283988d22c9643aa71ae2873bbeaefd2dea3b6160b"},
{file = "ijson-3.4.0.post0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:cf24a48a1c3ca9d44a04feb59ccefeb9aa52bb49b9cb70ad30518c25cce74bb7"},
{file = "ijson-3.4.0.post0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:d14427d366f95f21adcb97d0ed1f6d30f6fdc04d0aa1e4de839152c50c2b8d65"},
{file = "ijson-3.4.0.post0-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:339d49f6c5d24051c85d9226be96d2d56e633cb8b7d09dd8099de8d8b51a97e2"},
{file = "ijson-3.4.0.post0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7206afcb396aaef66c2b066997b4e9d9042c4b7d777f4d994e9cec6d322c2fe6"},
{file = "ijson-3.4.0.post0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c8dd327da225887194fe8b93f2b3c9c256353e14a6b9eefc940ed17fde38f5b8"},
{file = "ijson-3.4.0.post0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:4810546e66128af51fd4a0c9a640e84e8508e9c15c4f247d8a3e3253b20e1465"},
{file = "ijson-3.4.0.post0-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:103a0838061297d063bca81d724b0958b616f372bd893bbc278320152252c652"},
{file = "ijson-3.4.0.post0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:40007c977e230e04118b27322f25a72ae342a3d61464b2057fcd9b21eeb7427a"},
{file = "ijson-3.4.0.post0-cp314-cp314-win32.whl", hash = "sha256:f932969fc1fd4449ca141cf5f47ff357656a154a361f28d9ebca0badc5b02297"},
{file = "ijson-3.4.0.post0-cp314-cp314-win_amd64.whl", hash = "sha256:3ed19b1e4349240773a8ce4a4bfa450892d4a57949c02c515cd6be5a46b7696a"},
{file = "ijson-3.4.0.post0-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:226447e40ca9340a39ed07d68ea02ee14b52cb4fe649425b256c1f0073531c83"},
{file = "ijson-3.4.0.post0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:2c88f0669d45d4b1aa017c9b68d378e7cd15d188dfb6f0209adc78b7f45590a7"},
{file = "ijson-3.4.0.post0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:56b3089dc28c12492d92cc4896d2be585a89ecae34e25d08c1df88f21815cb50"},
{file = "ijson-3.4.0.post0-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:c117321cfa7b749cc1213f9b4c80dc958f0a206df98ec038ae4bcbbdb8463a15"},
{file = "ijson-3.4.0.post0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8311f48db6a33116db5c81682f08b6e2405501a4b4e460193ae69fec3cd1f87a"},
{file = "ijson-3.4.0.post0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:91c61a3e63e04da648737e6b4abd537df1b46fb8cdf3219b072e790bb3c1a46b"},
{file = "ijson-3.4.0.post0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:1709171023ce82651b2f132575c2e6282e47f64ad67bd3260da476418d0e7895"},
{file = "ijson-3.4.0.post0-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:5f0a72b1e3c0f78551670c12b2fdc1bf05f2796254d9c2055ba319bec2216020"},
{file = "ijson-3.4.0.post0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:b982a3597b0439ce9c8f4cfc929d86c6ed43907908be1e8463a34dc35fe5b258"},
{file = "ijson-3.4.0.post0-cp314-cp314t-win32.whl", hash = "sha256:4e39bfdc36b0b460ef15a06550a6a385c64c81f7ac205ccff39bd45147918912"},
{file = "ijson-3.4.0.post0-cp314-cp314t-win_amd64.whl", hash = "sha256:17e45262a5ddef39894013fb1548ee7094e444c8389eb1a97f86708b19bea03e"},
{file = "ijson-3.4.0.post0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:35eb2760a42fd9461358b4be131287587b49ff504fc37fa3014dca6c27c343f4"},
{file = "ijson-3.4.0.post0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f82ca7abfb3ef3cf2194c71dad634572bcccd62a5dd466649f78fe73d492c860"},
{file = "ijson-3.4.0.post0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:97f5ef3d839fc24b0ad47e8b31b4751ae72c5d83606e3ee4c92bb25965c03a4f"},
{file = "ijson-3.4.0.post0-cp39-cp39-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:a2c873742e9f7e21378516217d81d6fa11d34bae860ed364832c00ab1dbf37ed"},
{file = "ijson-3.4.0.post0-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:2f8b9ffa2c2dfe3289da9aec4e5ab52684fa2b2da2c853c7891b360ec46fba07"},
{file = "ijson-3.4.0.post0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0634b21188c67e5cf471cc1d30d193d19f521d89e2125ab1fb602aa8ae61e050"},
{file = "ijson-3.4.0.post0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:3752dd6f51ef58a71799de745649deff293e959700f1b7f5b1989618da366f24"},
{file = "ijson-3.4.0.post0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:57db77f4ea3eca09f519f627d9f9c76eb862b30edef5d899f031feeed94f05a1"},
{file = "ijson-3.4.0.post0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:435270a4b75667305f6df3226e5224e83cd6906022d7fdcc9df05caae725f796"},
{file = "ijson-3.4.0.post0-cp39-cp39-win32.whl", hash = "sha256:742c211b004ab51ccad2b301525d8a6eb2cf68a5fb82d78836f3a351eec44d4e"},
{file = "ijson-3.4.0.post0-cp39-cp39-win_amd64.whl", hash = "sha256:35aaa979da875fa92bea5dc5969b1541b4912b165091761785459a43f0c20946"},
{file = "ijson-3.4.0.post0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:add9242f886eae844a7410b84aee2bbb8bdc83c624f227cb1fdb2d0476a96cb1"},
{file = "ijson-3.4.0.post0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:69718ed41710dfcaa7564b0af42abc05875d4f7aaa24627c808867ef32634bc7"},
{file = "ijson-3.4.0.post0-pp311-pypy311_pp73-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:636b6eca96c6c43c04629c6b37fad0181662eaacf9877c71c698485637f752f9"},
{file = "ijson-3.4.0.post0-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:eb5e73028f6e63d27b3d286069fe350ed80a4ccc493b022b590fea4bb086710d"},
{file = "ijson-3.4.0.post0-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:461acf4320219459dabe5ed90a45cb86c9ba8cc6d6db9dad0d9427d42f57794c"},
{file = "ijson-3.4.0.post0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:a0fedf09c0f6ffa2a99e7e7fd9c5f3caf74e655c1ee015a0797383e99382ebc3"},
{file = "ijson-3.4.0.post0.tar.gz", hash = "sha256:9aa02dc70bb245670a6ca7fba737b992aeeb4895360980622f7e568dbf23e41e"},
]
[[package]]
@@ -860,7 +870,7 @@ description = "Jaeger Python OpenTracing Tracer implementation"
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"all\" or extra == \"opentracing\""
markers = "extra == \"opentracing\" or extra == \"all\""
files = [
{file = "jaeger-client-4.8.0.tar.gz", hash = "sha256:3157836edab8e2c209bd2d6ae61113db36f7ee399e66b1dcbb715d87ab49bfe0"},
]
@@ -998,7 +1008,7 @@ description = "A strictly RFC 4510 conforming LDAP V3 pure Python client library
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"matrix-synapse-ldap3\""
markers = "extra == \"matrix-synapse-ldap3\" or extra == \"all\""
files = [
{file = "ldap3-2.9.1-py2.py3-none-any.whl", hash = "sha256:5869596fc4948797020d3f03b7939da938778a0f9e2009f7a072ccf92b8e8d70"},
{file = "ldap3-2.9.1.tar.gz", hash = "sha256:f3e7fc4718e3f09dda568b57100095e0ce58633bcabbed8667ce3f8fbaa4229f"},
@@ -1014,7 +1024,7 @@ description = "Powerful and Pythonic XML processing library combining libxml2/li
optional = true
python-versions = ">=3.8"
groups = ["main"]
markers = "extra == \"all\" or extra == \"url-preview\""
markers = "extra == \"url-preview\" or extra == \"all\""
files = [
{file = "lxml-6.0.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e77dd455b9a16bbd2a5036a63ddbd479c19572af81b624e79ef422f929eef388"},
{file = "lxml-6.0.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:5d444858b9f07cefff6455b983aea9a67f7462ba1f6cbe4a21e8bf6791bf2153"},
@@ -1301,7 +1311,7 @@ description = "An LDAP3 auth provider for Synapse"
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"all\" or extra == \"matrix-synapse-ldap3\""
markers = "extra == \"matrix-synapse-ldap3\" or extra == \"all\""
files = [
{file = "matrix-synapse-ldap3-0.3.0.tar.gz", hash = "sha256:8bb6517173164d4b9cc44f49de411d8cebdb2e705d5dd1ea1f38733c4a009e1d"},
{file = "matrix_synapse_ldap3-0.3.0-py3-none-any.whl", hash = "sha256:8b4d701f8702551e98cc1d8c20dbed532de5613584c08d0df22de376ba99159d"},
@@ -1342,71 +1352,74 @@ files = [
[[package]]
name = "msgpack"
version = "1.1.1"
version = "1.1.2"
description = "MessagePack serializer"
optional = false
python-versions = ">=3.8"
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "msgpack-1.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:353b6fc0c36fde68b661a12949d7d49f8f51ff5fa019c1e47c87c4ff34b080ed"},
{file = "msgpack-1.1.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:79c408fcf76a958491b4e3b103d1c417044544b68e96d06432a189b43d1215c8"},
{file = "msgpack-1.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:78426096939c2c7482bf31ef15ca219a9e24460289c00dd0b94411040bb73ad2"},
{file = "msgpack-1.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8b17ba27727a36cb73aabacaa44b13090feb88a01d012c0f4be70c00f75048b4"},
{file = "msgpack-1.1.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7a17ac1ea6ec3c7687d70201cfda3b1e8061466f28f686c24f627cae4ea8efd0"},
{file = "msgpack-1.1.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:88d1e966c9235c1d4e2afac21ca83933ba59537e2e2727a999bf3f515ca2af26"},
{file = "msgpack-1.1.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:f6d58656842e1b2ddbe07f43f56b10a60f2ba5826164910968f5933e5178af75"},
{file = "msgpack-1.1.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:96decdfc4adcbc087f5ea7ebdcfd3dee9a13358cae6e81d54be962efc38f6338"},
{file = "msgpack-1.1.1-cp310-cp310-win32.whl", hash = "sha256:6640fd979ca9a212e4bcdf6eb74051ade2c690b862b679bfcb60ae46e6dc4bfd"},
{file = "msgpack-1.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:8b65b53204fe1bd037c40c4148d00ef918eb2108d24c9aaa20bc31f9810ce0a8"},
{file = "msgpack-1.1.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:71ef05c1726884e44f8b1d1773604ab5d4d17729d8491403a705e649116c9558"},
{file = "msgpack-1.1.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:36043272c6aede309d29d56851f8841ba907a1a3d04435e43e8a19928e243c1d"},
{file = "msgpack-1.1.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a32747b1b39c3ac27d0670122b57e6e57f28eefb725e0b625618d1b59bf9d1e0"},
{file = "msgpack-1.1.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8a8b10fdb84a43e50d38057b06901ec9da52baac6983d3f709d8507f3889d43f"},
{file = "msgpack-1.1.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ba0c325c3f485dc54ec298d8b024e134acf07c10d494ffa24373bea729acf704"},
{file = "msgpack-1.1.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:88daaf7d146e48ec71212ce21109b66e06a98e5e44dca47d853cbfe171d6c8d2"},
{file = "msgpack-1.1.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:d8b55ea20dc59b181d3f47103f113e6f28a5e1c89fd5b67b9140edb442ab67f2"},
{file = "msgpack-1.1.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:4a28e8072ae9779f20427af07f53bbb8b4aa81151054e882aee333b158da8752"},
{file = "msgpack-1.1.1-cp311-cp311-win32.whl", hash = "sha256:7da8831f9a0fdb526621ba09a281fadc58ea12701bc709e7b8cbc362feabc295"},
{file = "msgpack-1.1.1-cp311-cp311-win_amd64.whl", hash = "sha256:5fd1b58e1431008a57247d6e7cc4faa41c3607e8e7d4aaf81f7c29ea013cb458"},
{file = "msgpack-1.1.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:ae497b11f4c21558d95de9f64fff7053544f4d1a17731c866143ed6bb4591238"},
{file = "msgpack-1.1.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:33be9ab121df9b6b461ff91baac6f2731f83d9b27ed948c5b9d1978ae28bf157"},
{file = "msgpack-1.1.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6f64ae8fe7ffba251fecb8408540c34ee9df1c26674c50c4544d72dbf792e5ce"},
{file = "msgpack-1.1.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a494554874691720ba5891c9b0b39474ba43ffb1aaf32a5dac874effb1619e1a"},
{file = "msgpack-1.1.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cb643284ab0ed26f6957d969fe0dd8bb17beb567beb8998140b5e38a90974f6c"},
{file = "msgpack-1.1.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d275a9e3c81b1093c060c3837e580c37f47c51eca031f7b5fb76f7b8470f5f9b"},
{file = "msgpack-1.1.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:4fd6b577e4541676e0cc9ddc1709d25014d3ad9a66caa19962c4f5de30fc09ef"},
{file = "msgpack-1.1.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:bb29aaa613c0a1c40d1af111abf025f1732cab333f96f285d6a93b934738a68a"},
{file = "msgpack-1.1.1-cp312-cp312-win32.whl", hash = "sha256:870b9a626280c86cff9c576ec0d9cbcc54a1e5ebda9cd26dab12baf41fee218c"},
{file = "msgpack-1.1.1-cp312-cp312-win_amd64.whl", hash = "sha256:5692095123007180dca3e788bb4c399cc26626da51629a31d40207cb262e67f4"},
{file = "msgpack-1.1.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:3765afa6bd4832fc11c3749be4ba4b69a0e8d7b728f78e68120a157a4c5d41f0"},
{file = "msgpack-1.1.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:8ddb2bcfd1a8b9e431c8d6f4f7db0773084e107730ecf3472f1dfe9ad583f3d9"},
{file = "msgpack-1.1.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:196a736f0526a03653d829d7d4c5500a97eea3648aebfd4b6743875f28aa2af8"},
{file = "msgpack-1.1.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9d592d06e3cc2f537ceeeb23d38799c6ad83255289bb84c2e5792e5a8dea268a"},
{file = "msgpack-1.1.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4df2311b0ce24f06ba253fda361f938dfecd7b961576f9be3f3fbd60e87130ac"},
{file = "msgpack-1.1.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:e4141c5a32b5e37905b5940aacbc59739f036930367d7acce7a64e4dec1f5e0b"},
{file = "msgpack-1.1.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:b1ce7f41670c5a69e1389420436f41385b1aa2504c3b0c30620764b15dded2e7"},
{file = "msgpack-1.1.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:4147151acabb9caed4e474c3344181e91ff7a388b888f1e19ea04f7e73dc7ad5"},
{file = "msgpack-1.1.1-cp313-cp313-win32.whl", hash = "sha256:500e85823a27d6d9bba1d057c871b4210c1dd6fb01fbb764e37e4e8847376323"},
{file = "msgpack-1.1.1-cp313-cp313-win_amd64.whl", hash = "sha256:6d489fba546295983abd142812bda76b57e33d0b9f5d5b71c09a583285506f69"},
{file = "msgpack-1.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bba1be28247e68994355e028dcd668316db30c1f758d3241a7b903ac78dcd285"},
{file = "msgpack-1.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b8f93dcddb243159c9e4109c9750ba5b335ab8d48d9522c5308cd05d7e3ce600"},
{file = "msgpack-1.1.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fbbc0b906a24038c9958a1ba7ae0918ad35b06cb449d398b76a7d08470b0ed9"},
{file = "msgpack-1.1.1-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:61e35a55a546a1690d9d09effaa436c25ae6130573b6ee9829c37ef0f18d5e78"},
{file = "msgpack-1.1.1-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:1abfc6e949b352dadf4bce0eb78023212ec5ac42f6abfd469ce91d783c149c2a"},
{file = "msgpack-1.1.1-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:996f2609ddf0142daba4cefd767d6db26958aac8439ee41db9cc0db9f4c4c3a6"},
{file = "msgpack-1.1.1-cp38-cp38-win32.whl", hash = "sha256:4d3237b224b930d58e9d83c81c0dba7aacc20fcc2f89c1e5423aa0529a4cd142"},
{file = "msgpack-1.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:da8f41e602574ece93dbbda1fab24650d6bf2a24089f9e9dbb4f5730ec1e58ad"},
{file = "msgpack-1.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f5be6b6bc52fad84d010cb45433720327ce886009d862f46b26d4d154001994b"},
{file = "msgpack-1.1.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3a89cd8c087ea67e64844287ea52888239cbd2940884eafd2dcd25754fb72232"},
{file = "msgpack-1.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1d75f3807a9900a7d575d8d6674a3a47e9f227e8716256f35bc6f03fc597ffbf"},
{file = "msgpack-1.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d182dac0221eb8faef2e6f44701812b467c02674a322c739355c39e94730cdbf"},
{file = "msgpack-1.1.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1b13fe0fb4aac1aa5320cd693b297fe6fdef0e7bea5518cbc2dd5299f873ae90"},
{file = "msgpack-1.1.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:435807eeb1bc791ceb3247d13c79868deb22184e1fc4224808750f0d7d1affc1"},
{file = "msgpack-1.1.1-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:4835d17af722609a45e16037bb1d4d78b7bdf19d6c0128116d178956618c4e88"},
{file = "msgpack-1.1.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:a8ef6e342c137888ebbfb233e02b8fbd689bb5b5fcc59b34711ac47ebd504478"},
{file = "msgpack-1.1.1-cp39-cp39-win32.whl", hash = "sha256:61abccf9de335d9efd149e2fff97ed5974f2481b3353772e8e2dd3402ba2bd57"},
{file = "msgpack-1.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:40eae974c873b2992fd36424a5d9407f93e97656d999f43fca9d29f820899084"},
{file = "msgpack-1.1.1.tar.gz", hash = "sha256:77b79ce34a2bdab2594f490c8e80dd62a02d650b91a75159a63ec413b8d104cd"},
{file = "msgpack-1.1.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0051fffef5a37ca2cd16978ae4f0aef92f164df86823871b5162812bebecd8e2"},
{file = "msgpack-1.1.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a605409040f2da88676e9c9e5853b3449ba8011973616189ea5ee55ddbc5bc87"},
{file = "msgpack-1.1.2-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8b696e83c9f1532b4af884045ba7f3aa741a63b2bc22617293a2c6a7c645f251"},
{file = "msgpack-1.1.2-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:365c0bbe981a27d8932da71af63ef86acc59ed5c01ad929e09a0b88c6294e28a"},
{file = "msgpack-1.1.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:41d1a5d875680166d3ac5c38573896453bbbea7092936d2e107214daf43b1d4f"},
{file = "msgpack-1.1.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:354e81bcdebaab427c3df4281187edc765d5d76bfb3a7c125af9da7a27e8458f"},
{file = "msgpack-1.1.2-cp310-cp310-win32.whl", hash = "sha256:e64c8d2f5e5d5fda7b842f55dec6133260ea8f53c4257d64494c534f306bf7a9"},
{file = "msgpack-1.1.2-cp310-cp310-win_amd64.whl", hash = "sha256:db6192777d943bdaaafb6ba66d44bf65aa0e9c5616fa1d2da9bb08828c6b39aa"},
{file = "msgpack-1.1.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:2e86a607e558d22985d856948c12a3fa7b42efad264dca8a3ebbcfa2735d786c"},
{file = "msgpack-1.1.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:283ae72fc89da59aa004ba147e8fc2f766647b1251500182fac0350d8af299c0"},
{file = "msgpack-1.1.2-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:61c8aa3bd513d87c72ed0b37b53dd5c5a0f58f2ff9f26e1555d3bd7948fb7296"},
{file = "msgpack-1.1.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:454e29e186285d2ebe65be34629fa0e8605202c60fbc7c4c650ccd41870896ef"},
{file = "msgpack-1.1.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:7bc8813f88417599564fafa59fd6f95be417179f76b40325b500b3c98409757c"},
{file = "msgpack-1.1.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:bafca952dc13907bdfdedfc6a5f579bf4f292bdd506fadb38389afa3ac5b208e"},
{file = "msgpack-1.1.2-cp311-cp311-win32.whl", hash = "sha256:602b6740e95ffc55bfb078172d279de3773d7b7db1f703b2f1323566b878b90e"},
{file = "msgpack-1.1.2-cp311-cp311-win_amd64.whl", hash = "sha256:d198d275222dc54244bf3327eb8cbe00307d220241d9cec4d306d49a44e85f68"},
{file = "msgpack-1.1.2-cp311-cp311-win_arm64.whl", hash = "sha256:86f8136dfa5c116365a8a651a7d7484b65b13339731dd6faebb9a0242151c406"},
{file = "msgpack-1.1.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:70a0dff9d1f8da25179ffcf880e10cf1aad55fdb63cd59c9a49a1b82290062aa"},
{file = "msgpack-1.1.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:446abdd8b94b55c800ac34b102dffd2f6aa0ce643c55dfc017ad89347db3dbdb"},
{file = "msgpack-1.1.2-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c63eea553c69ab05b6747901b97d620bb2a690633c77f23feb0c6a947a8a7b8f"},
{file = "msgpack-1.1.2-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:372839311ccf6bdaf39b00b61288e0557916c3729529b301c52c2d88842add42"},
{file = "msgpack-1.1.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:2929af52106ca73fcb28576218476ffbb531a036c2adbcf54a3664de124303e9"},
{file = "msgpack-1.1.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:be52a8fc79e45b0364210eef5234a7cf8d330836d0a64dfbb878efa903d84620"},
{file = "msgpack-1.1.2-cp312-cp312-win32.whl", hash = "sha256:1fff3d825d7859ac888b0fbda39a42d59193543920eda9d9bea44d958a878029"},
{file = "msgpack-1.1.2-cp312-cp312-win_amd64.whl", hash = "sha256:1de460f0403172cff81169a30b9a92b260cb809c4cb7e2fc79ae8d0510c78b6b"},
{file = "msgpack-1.1.2-cp312-cp312-win_arm64.whl", hash = "sha256:be5980f3ee0e6bd44f3a9e9dea01054f175b50c3e6cdb692bc9424c0bbb8bf69"},
{file = "msgpack-1.1.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:4efd7b5979ccb539c221a4c4e16aac1a533efc97f3b759bb5a5ac9f6d10383bf"},
{file = "msgpack-1.1.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:42eefe2c3e2af97ed470eec850facbe1b5ad1d6eacdbadc42ec98e7dcf68b4b7"},
{file = "msgpack-1.1.2-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1fdf7d83102bf09e7ce3357de96c59b627395352a4024f6e2458501f158bf999"},
{file = "msgpack-1.1.2-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fac4be746328f90caa3cd4bc67e6fe36ca2bf61d5c6eb6d895b6527e3f05071e"},
{file = "msgpack-1.1.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:fffee09044073e69f2bad787071aeec727183e7580443dfeb8556cbf1978d162"},
{file = "msgpack-1.1.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:5928604de9b032bc17f5099496417f113c45bc6bc21b5c6920caf34b3c428794"},
{file = "msgpack-1.1.2-cp313-cp313-win32.whl", hash = "sha256:a7787d353595c7c7e145e2331abf8b7ff1e6673a6b974ded96e6d4ec09f00c8c"},
{file = "msgpack-1.1.2-cp313-cp313-win_amd64.whl", hash = "sha256:a465f0dceb8e13a487e54c07d04ae3ba131c7c5b95e2612596eafde1dccf64a9"},
{file = "msgpack-1.1.2-cp313-cp313-win_arm64.whl", hash = "sha256:e69b39f8c0aa5ec24b57737ebee40be647035158f14ed4b40e6f150077e21a84"},
{file = "msgpack-1.1.2-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:e23ce8d5f7aa6ea6d2a2b326b4ba46c985dbb204523759984430db7114f8aa00"},
{file = "msgpack-1.1.2-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:6c15b7d74c939ebe620dd8e559384be806204d73b4f9356320632d783d1f7939"},
{file = "msgpack-1.1.2-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:99e2cb7b9031568a2a5c73aa077180f93dd2e95b4f8d3b8e14a73ae94a9e667e"},
{file = "msgpack-1.1.2-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:180759d89a057eab503cf62eeec0aa61c4ea1200dee709f3a8e9397dbb3b6931"},
{file = "msgpack-1.1.2-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:04fb995247a6e83830b62f0b07bf36540c213f6eac8e851166d8d86d83cbd014"},
{file = "msgpack-1.1.2-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:8e22ab046fa7ede9e36eeb4cfad44d46450f37bb05d5ec482b02868f451c95e2"},
{file = "msgpack-1.1.2-cp314-cp314-win32.whl", hash = "sha256:80a0ff7d4abf5fecb995fcf235d4064b9a9a8a40a3ab80999e6ac1e30b702717"},
{file = "msgpack-1.1.2-cp314-cp314-win_amd64.whl", hash = "sha256:9ade919fac6a3e7260b7f64cea89df6bec59104987cbea34d34a2fa15d74310b"},
{file = "msgpack-1.1.2-cp314-cp314-win_arm64.whl", hash = "sha256:59415c6076b1e30e563eb732e23b994a61c159cec44deaf584e5cc1dd662f2af"},
{file = "msgpack-1.1.2-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:897c478140877e5307760b0ea66e0932738879e7aa68144d9b78ea4c8302a84a"},
{file = "msgpack-1.1.2-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:a668204fa43e6d02f89dbe79a30b0d67238d9ec4c5bd8a940fc3a004a47b721b"},
{file = "msgpack-1.1.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5559d03930d3aa0f3aacb4c42c776af1a2ace2611871c84a75afe436695e6245"},
{file = "msgpack-1.1.2-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:70c5a7a9fea7f036b716191c29047374c10721c389c21e9ffafad04df8c52c90"},
{file = "msgpack-1.1.2-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:f2cb069d8b981abc72b41aea1c580ce92d57c673ec61af4c500153a626cb9e20"},
{file = "msgpack-1.1.2-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:d62ce1f483f355f61adb5433ebfd8868c5f078d1a52d042b0a998682b4fa8c27"},
{file = "msgpack-1.1.2-cp314-cp314t-win32.whl", hash = "sha256:1d1418482b1ee984625d88aa9585db570180c286d942da463533b238b98b812b"},
{file = "msgpack-1.1.2-cp314-cp314t-win_amd64.whl", hash = "sha256:5a46bf7e831d09470ad92dff02b8b1ac92175ca36b087f904a0519857c6be3ff"},
{file = "msgpack-1.1.2-cp314-cp314t-win_arm64.whl", hash = "sha256:d99ef64f349d5ec3293688e91486c5fdb925ed03807f64d98d205d2713c60b46"},
{file = "msgpack-1.1.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ea5405c46e690122a76531ab97a079e184c0daf491e588592d6a23d3e32af99e"},
{file = "msgpack-1.1.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:9fba231af7a933400238cb357ecccf8ab5d51535ea95d94fc35b7806218ff844"},
{file = "msgpack-1.1.2-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a8f6e7d30253714751aa0b0c84ae28948e852ee7fb0524082e6716769124bc23"},
{file = "msgpack-1.1.2-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:94fd7dc7d8cb0a54432f296f2246bc39474e017204ca6f4ff345941d4ed285a7"},
{file = "msgpack-1.1.2-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:350ad5353a467d9e3b126d8d1b90fe05ad081e2e1cef5753f8c345217c37e7b8"},
{file = "msgpack-1.1.2-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:6bde749afe671dc44893f8d08e83bf475a1a14570d67c4bb5cec5573463c8833"},
{file = "msgpack-1.1.2-cp39-cp39-win32.whl", hash = "sha256:ad09b984828d6b7bb52d1d1d0c9be68ad781fa004ca39216c8a1e63c0f34ba3c"},
{file = "msgpack-1.1.2-cp39-cp39-win_amd64.whl", hash = "sha256:67016ae8c8965124fdede9d3769528ad8284f14d635337ffa6a713a580f6c030"},
{file = "msgpack-1.1.2.tar.gz", hash = "sha256:3b60763c1373dd60f398488069bcdc703cd08a711477b5d480eecc9f9626f47e"},
]
[[package]]
@@ -1540,7 +1553,7 @@ description = "OpenTracing API for Python. See documentation at http://opentraci
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"opentracing\""
markers = "extra == \"opentracing\" or extra == \"all\""
files = [
{file = "opentracing-2.4.0.tar.gz", hash = "sha256:a173117e6ef580d55874734d1fa7ecb6f3655160b8b8974a2a1e98e5ec9c840d"},
]
@@ -1589,14 +1602,14 @@ files = [
[[package]]
name = "phonenumbers"
version = "9.0.14"
version = "9.0.15"
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "phonenumbers-9.0.14-py2.py3-none-any.whl", hash = "sha256:6bdf5c46dbfefa1d941d122432d1958418d1dfe3f8c8c81d4c8e80f5442ea41f"},
{file = "phonenumbers-9.0.14.tar.gz", hash = "sha256:98afb3e86bf9ae02cc7c98ca44fa8827babb72842f90da9884c5d998937572ae"},
{file = "phonenumbers-9.0.15-py2.py3-none-any.whl", hash = "sha256:269b73bc05258e8fd57582770b9559307099ea677c8f1dc5272476f661344776"},
{file = "phonenumbers-9.0.15.tar.gz", hash = "sha256:345ff7f23768332d866f37732f815cdf1d33c7f0961246562a5c5b78c12c3ff3"},
]
[[package]]
@@ -1609,8 +1622,6 @@ groups = ["main"]
files = [
{file = "pillow-11.3.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:1b9c17fd4ace828b3003dfd1e30bff24863e0eb59b535e8f80194d9cc7ecf860"},
{file = "pillow-11.3.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:65dc69160114cdd0ca0f35cb434633c75e8e7fad4cf855177a05bf38678f73ad"},
{file = "pillow-11.3.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7107195ddc914f656c7fc8e4a5e1c25f32e9236ea3ea860f257b0436011fddd0"},
{file = "pillow-11.3.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:cc3e831b563b3114baac7ec2ee86819eb03caa1a2cef0b481a5675b59c4fe23b"},
{file = "pillow-11.3.0-cp310-cp310-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f1f182ebd2303acf8c380a54f615ec883322593320a9b00438eb842c1f37ae50"},
{file = "pillow-11.3.0-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4445fa62e15936a028672fd48c4c11a66d641d2c05726c7ec1f8ba6a572036ae"},
{file = "pillow-11.3.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:71f511f6b3b91dd543282477be45a033e4845a40278fa8dcdbfdb07109bf18f9"},
@@ -1620,8 +1631,6 @@ files = [
{file = "pillow-11.3.0-cp310-cp310-win_arm64.whl", hash = "sha256:819931d25e57b513242859ce1876c58c59dc31587847bf74cfe06b2e0cb22d2f"},
{file = "pillow-11.3.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:1cd110edf822773368b396281a2293aeb91c90a2db00d78ea43e7e861631b722"},
{file = "pillow-11.3.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:9c412fddd1b77a75aa904615ebaa6001f169b26fd467b4be93aded278266b288"},
{file = "pillow-11.3.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7d1aa4de119a0ecac0a34a9c8bde33f34022e2e8f99104e47a3ca392fd60e37d"},
{file = "pillow-11.3.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:91da1d88226663594e3f6b4b8c3c8d85bd504117d043740a8e0ec449087cc494"},
{file = "pillow-11.3.0-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:643f189248837533073c405ec2f0bb250ba54598cf80e8c1e043381a60632f58"},
{file = "pillow-11.3.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:106064daa23a745510dabce1d84f29137a37224831d88eb4ce94bb187b1d7e5f"},
{file = "pillow-11.3.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:cd8ff254faf15591e724dc7c4ddb6bf4793efcbe13802a4ae3e863cd300b493e"},
@@ -1631,8 +1640,6 @@ files = [
{file = "pillow-11.3.0-cp311-cp311-win_arm64.whl", hash = "sha256:30807c931ff7c095620fe04448e2c2fc673fcbb1ffe2a7da3fb39613489b1ddd"},
{file = "pillow-11.3.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:fdae223722da47b024b867c1ea0be64e0df702c5e0a60e27daad39bf960dd1e4"},
{file = "pillow-11.3.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:921bd305b10e82b4d1f5e802b6850677f965d8394203d182f078873851dada69"},
{file = "pillow-11.3.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:eb76541cba2f958032d79d143b98a3a6b3ea87f0959bbe256c0b5e416599fd5d"},
{file = "pillow-11.3.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:67172f2944ebba3d4a7b54f2e95c786a3a50c21b88456329314caaa28cda70f6"},
{file = "pillow-11.3.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:97f07ed9f56a3b9b5f49d3661dc9607484e85c67e27f3e8be2c7d28ca032fec7"},
{file = "pillow-11.3.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:676b2815362456b5b3216b4fd5bd89d362100dc6f4945154ff172e206a22c024"},
{file = "pillow-11.3.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3e184b2f26ff146363dd07bde8b711833d7b0202e27d13540bfe2e35a323a809"},
@@ -1645,8 +1652,6 @@ files = [
{file = "pillow-11.3.0-cp313-cp313-ios_13_0_x86_64_iphonesimulator.whl", hash = "sha256:7859a4cc7c9295f5838015d8cc0a9c215b77e43d07a25e460f35cf516df8626f"},
{file = "pillow-11.3.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ec1ee50470b0d050984394423d96325b744d55c701a439d2bd66089bff963d3c"},
{file = "pillow-11.3.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:7db51d222548ccfd274e4572fdbf3e810a5e66b00608862f947b163e613b67dd"},
{file = "pillow-11.3.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:2d6fcc902a24ac74495df63faad1884282239265c6839a0a6416d33faedfae7e"},
{file = "pillow-11.3.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f0f5d8f4a08090c6d6d578351a2b91acf519a54986c055af27e7a93feae6d3f1"},
{file = "pillow-11.3.0-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c37d8ba9411d6003bba9e518db0db0c58a680ab9fe5179f040b0463644bc9805"},
{file = "pillow-11.3.0-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:13f87d581e71d9189ab21fe0efb5a23e9f28552d5be6979e84001d3b8505abe8"},
{file = "pillow-11.3.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:023f6d2d11784a465f09fd09a34b150ea4672e85fb3d05931d89f373ab14abb2"},
@@ -1656,8 +1661,6 @@ files = [
{file = "pillow-11.3.0-cp313-cp313-win_arm64.whl", hash = "sha256:1904e1264881f682f02b7f8167935cce37bc97db457f8e7849dc3a6a52b99580"},
{file = "pillow-11.3.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:4c834a3921375c48ee6b9624061076bc0a32a60b5532b322cc0ea64e639dd50e"},
{file = "pillow-11.3.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:5e05688ccef30ea69b9317a9ead994b93975104a677a36a8ed8106be9260aa6d"},
{file = "pillow-11.3.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:1019b04af07fc0163e2810167918cb5add8d74674b6267616021ab558dc98ced"},
{file = "pillow-11.3.0-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f944255db153ebb2b19c51fe85dd99ef0ce494123f21b9db4877ffdfc5590c7c"},
{file = "pillow-11.3.0-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1f85acb69adf2aaee8b7da124efebbdb959a104db34d3a2cb0f3793dbae422a8"},
{file = "pillow-11.3.0-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:05f6ecbeff5005399bb48d198f098a9b4b6bdf27b8487c7f38ca16eeb070cd59"},
{file = "pillow-11.3.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:a7bc6e6fd0395bc052f16b1a8670859964dbd7003bd0af2ff08342eb6e442cfe"},
@@ -1667,8 +1670,6 @@ files = [
{file = "pillow-11.3.0-cp313-cp313t-win_arm64.whl", hash = "sha256:8797edc41f3e8536ae4b10897ee2f637235c94f27404cac7297f7b607dd0716e"},
{file = "pillow-11.3.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:d9da3df5f9ea2a89b81bb6087177fb1f4d1c7146d583a3fe5c672c0d94e55e12"},
{file = "pillow-11.3.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:0b275ff9b04df7b640c59ec5a3cb113eefd3795a8df80bac69646ef699c6981a"},
{file = "pillow-11.3.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0743841cabd3dba6a83f38a92672cccbd69af56e3e91777b0ee7f4dba4385632"},
{file = "pillow-11.3.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:2465a69cf967b8b49ee1b96d76718cd98c4e925414ead59fdf75cf0fd07df673"},
{file = "pillow-11.3.0-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:41742638139424703b4d01665b807c6468e23e699e8e90cffefe291c5832b027"},
{file = "pillow-11.3.0-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:93efb0b4de7e340d99057415c749175e24c8864302369e05914682ba642e5d77"},
{file = "pillow-11.3.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7966e38dcd0fa11ca390aed7c6f20454443581d758242023cf36fcb319b1a874"},
@@ -1678,8 +1679,6 @@ files = [
{file = "pillow-11.3.0-cp314-cp314-win_arm64.whl", hash = "sha256:155658efb5e044669c08896c0c44231c5e9abcaadbc5cd3648df2f7c0b96b9a6"},
{file = "pillow-11.3.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:59a03cdf019efbfeeed910bf79c7c93255c3d54bc45898ac2a4140071b02b4ae"},
{file = "pillow-11.3.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:f8a5827f84d973d8636e9dc5764af4f0cf2318d26744b3d902931701b0d46653"},
{file = "pillow-11.3.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ee92f2fd10f4adc4b43d07ec5e779932b4eb3dbfbc34790ada5a6669bc095aa6"},
{file = "pillow-11.3.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c96d333dcf42d01f47b37e0979b6bd73ec91eae18614864622d9b87bbd5bbf36"},
{file = "pillow-11.3.0-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4c96f993ab8c98460cd0c001447bff6194403e8b1d7e149ade5f00594918128b"},
{file = "pillow-11.3.0-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:41342b64afeba938edb034d122b2dda5db2139b9a4af999729ba8818e0056477"},
{file = "pillow-11.3.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:068d9c39a2d1b358eb9f245ce7ab1b5c3246c7c8c7d9ba58cfa5b43146c06e50"},
@@ -1689,8 +1688,6 @@ files = [
{file = "pillow-11.3.0-cp314-cp314t-win_arm64.whl", hash = "sha256:79ea0d14d3ebad43ec77ad5272e6ff9bba5b679ef73375ea760261207fa8e0aa"},
{file = "pillow-11.3.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:48d254f8a4c776de343051023eb61ffe818299eeac478da55227d96e241de53f"},
{file = "pillow-11.3.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:7aee118e30a4cf54fdd873bd3a29de51e29105ab11f9aad8c32123f58c8f8081"},
{file = "pillow-11.3.0-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:23cff760a9049c502721bdb743a7cb3e03365fafcdfc2ef9784610714166e5a4"},
{file = "pillow-11.3.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:6359a3bc43f57d5b375d1ad54a0074318a0844d11b76abccf478c37c986d3cfc"},
{file = "pillow-11.3.0-cp39-cp39-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:092c80c76635f5ecb10f3f83d76716165c96f5229addbd1ec2bdbbda7d496e06"},
{file = "pillow-11.3.0-cp39-cp39-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:cadc9e0ea0a2431124cde7e1697106471fc4c1da01530e679b2391c37d3fbb3a"},
{file = "pillow-11.3.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:6a418691000f2a418c9135a7cf0d797c1bb7d9a485e61fe8e7722845b95ef978"},
@@ -1700,15 +1697,11 @@ files = [
{file = "pillow-11.3.0-cp39-cp39-win_arm64.whl", hash = "sha256:6abdbfd3aea42be05702a8dd98832329c167ee84400a1d1f61ab11437f1717eb"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:3cee80663f29e3843b68199b9d6f4f54bd1d4a6b59bdd91bceefc51238bcb967"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:b5f56c3f344f2ccaf0dd875d3e180f631dc60a51b314295a3e681fe8cf851fbe"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:e67d793d180c9df62f1f40aee3accca4829d3794c95098887edc18af4b8b780c"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d000f46e2917c705e9fb93a3606ee4a819d1e3aa7a9b442f6444f07e77cf5e25"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:527b37216b6ac3a12d7838dc3bd75208ec57c1c6d11ef01902266a5a0c14fc27"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:be5463ac478b623b9dd3937afd7fb7ab3d79dd290a28e2b6df292dc75063eb8a"},
{file = "pillow-11.3.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:8dc70ca24c110503e16918a658b869019126ecfe03109b754c402daff12b3d9f"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:7c8ec7a017ad1bd562f93dbd8505763e688d388cde6e4a010ae1486916e713e6"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:9ab6ae226de48019caa8074894544af5b53a117ccb9d3b3dcb2871464c829438"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:fe27fb049cdcca11f11a7bfda64043c37b30e6b91f10cb5bab275806c32f6ab3"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:465b9e8844e3c3519a983d58b80be3f668e2a7a5db97f2784e7079fbc9f9822c"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5418b53c0d59b3824d05e029669efa023bbef0f3e92e75ec8428f3799487f361"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:504b6f59505f08ae014f724b6207ff6222662aab5cc9542577fb084ed0676ac7"},
{file = "pillow-11.3.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:c84d689db21a1c397d001aa08241044aa2069e7587b398c8cc63020390b1c1b8"},
@@ -1726,14 +1719,14 @@ xmp = ["defusedxml"]
[[package]]
name = "prometheus-client"
version = "0.22.1"
version = "0.23.1"
description = "Python client for the Prometheus monitoring system."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "prometheus_client-0.22.1-py3-none-any.whl", hash = "sha256:cca895342e308174341b2cbf99a56bef291fbc0ef7b9e5412a0f26d653ba7094"},
{file = "prometheus_client-0.22.1.tar.gz", hash = "sha256:190f1331e783cf21eb60bca559354e0a4d4378facecf78f5428c39b675d20d28"},
{file = "prometheus_client-0.23.1-py3-none-any.whl", hash = "sha256:dd1913e6e76b59cfe44e7a4b83e01afc9873c1bdfd2ed8739f1e76aeca115f99"},
{file = "prometheus_client-0.23.1.tar.gz", hash = "sha256:6ae8f9081eaaaf153a2e959d2e6c4f4fb57b12ef76c8c7980202f1e57b48b2ce"},
]
[package.extras]
@@ -1746,7 +1739,7 @@ description = "psycopg2 - Python-PostgreSQL Database Adapter"
optional = true
python-versions = ">=3.8"
groups = ["main"]
markers = "extra == \"all\" or extra == \"postgres\""
markers = "extra == \"postgres\" or extra == \"all\""
files = [
{file = "psycopg2-2.9.10-cp310-cp310-win32.whl", hash = "sha256:5df2b672140f95adb453af93a7d669d7a7bf0a56bcd26f1502329166f4a61716"},
{file = "psycopg2-2.9.10-cp310-cp310-win_amd64.whl", hash = "sha256:c6f7b8561225f9e711a9c47087388a97fdc948211c10a4bccbf0ba68ab7b3b5a"},
@@ -1754,7 +1747,6 @@ files = [
{file = "psycopg2-2.9.10-cp311-cp311-win_amd64.whl", hash = "sha256:0435034157049f6846e95103bd8f5a668788dd913a7c30162ca9503fdf542cb4"},
{file = "psycopg2-2.9.10-cp312-cp312-win32.whl", hash = "sha256:65a63d7ab0e067e2cdb3cf266de39663203d38d6a8ed97f5ca0cb315c73fe067"},
{file = "psycopg2-2.9.10-cp312-cp312-win_amd64.whl", hash = "sha256:4a579d6243da40a7b3182e0430493dbd55950c493d8c68f4eec0b302f6bbf20e"},
{file = "psycopg2-2.9.10-cp313-cp313-win_amd64.whl", hash = "sha256:91fd603a2155da8d0cfcdbf8ab24a2d54bca72795b90d2a3ed2b6da8d979dee2"},
{file = "psycopg2-2.9.10-cp39-cp39-win32.whl", hash = "sha256:9d5b3b94b79a844a986d029eee38998232451119ad653aea42bb9220a8c5066b"},
{file = "psycopg2-2.9.10-cp39-cp39-win_amd64.whl", hash = "sha256:88138c8dedcbfa96408023ea2b0c369eda40fe5d75002c0964c78f46f11fa442"},
{file = "psycopg2-2.9.10.tar.gz", hash = "sha256:12ec0b40b0273f95296233e8750441339298e6a572f7039da5b260e3c8b60e11"},
@@ -1767,7 +1759,7 @@ description = ".. image:: https://travis-ci.org/chtd/psycopg2cffi.svg?branch=mas
optional = true
python-versions = "*"
groups = ["main"]
markers = "platform_python_implementation == \"PyPy\" and (extra == \"all\" or extra == \"postgres\")"
markers = "platform_python_implementation == \"PyPy\" and (extra == \"postgres\" or extra == \"all\")"
files = [
{file = "psycopg2cffi-2.9.0.tar.gz", hash = "sha256:7e272edcd837de3a1d12b62185eb85c45a19feda9e62fa1b120c54f9e8d35c52"},
]
@@ -1783,7 +1775,7 @@ description = "A Simple library to enable psycopg2 compatability"
optional = true
python-versions = "*"
groups = ["main"]
markers = "platform_python_implementation == \"PyPy\" and (extra == \"all\" or extra == \"postgres\")"
markers = "platform_python_implementation == \"PyPy\" and (extra == \"postgres\" or extra == \"all\")"
files = [
{file = "psycopg2cffi-compat-1.1.tar.gz", hash = "sha256:d25e921748475522b33d13420aad5c2831c743227dc1f1f2585e0fdb5c914e05"},
]
@@ -1832,14 +1824,14 @@ files = [
[[package]]
name = "pydantic"
version = "2.11.9"
version = "2.11.10"
description = "Data validation using Python type hints"
optional = false
python-versions = ">=3.9"
groups = ["main", "dev"]
files = [
{file = "pydantic-2.11.9-py3-none-any.whl", hash = "sha256:c42dd626f5cfc1c6950ce6205ea58c93efa406da65f479dcb4029d5934857da2"},
{file = "pydantic-2.11.9.tar.gz", hash = "sha256:6b8ffda597a14812a7975c90b82a8a2e777d9257aba3453f973acd3c032a18e2"},
{file = "pydantic-2.11.10-py3-none-any.whl", hash = "sha256:802a655709d49bd004c31e865ef37da30b540786a46bfce02333e0e24b5fe29a"},
{file = "pydantic-2.11.10.tar.gz", hash = "sha256:dc280f0982fbda6c38fada4e476dc0a4f3aeaf9c6ad4c28df68a666ec3c61423"},
]
[package.dependencies]
@@ -2042,7 +2034,7 @@ description = "A development tool to measure, monitor and analyze the memory beh
optional = true
python-versions = ">=3.6"
groups = ["main"]
markers = "extra == \"all\" or extra == \"cache-memory\""
markers = "extra == \"cache-memory\" or extra == \"all\""
files = [
{file = "Pympler-1.0.1-py3-none-any.whl", hash = "sha256:d260dda9ae781e1eab6ea15bacb84015849833ba5555f141d2d9b7b7473b307d"},
{file = "Pympler-1.0.1.tar.gz", hash = "sha256:993f1a3599ca3f4fcd7160c7545ad06310c9e12f70174ae7ae8d4e25f6c5d3fa"},
@@ -2102,7 +2094,7 @@ description = "Python implementation of SAML Version 2 Standard"
optional = true
python-versions = ">=3.9,<4.0"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "pysaml2-7.5.0-py3-none-any.whl", hash = "sha256:bc6627cc344476a83c757f440a73fda1369f13b6fda1b4e16bca63ffbabb5318"},
{file = "pysaml2-7.5.0.tar.gz", hash = "sha256:f36871d4e5ee857c6b85532e942550d2cf90ea4ee943d75eb681044bbc4f54f7"},
@@ -2127,7 +2119,7 @@ description = "Extensions to the standard Python datetime module"
optional = true
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
@@ -2155,7 +2147,7 @@ description = "World timezone definitions, modern and historical"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "pytz-2022.7.1-py2.py3-none-any.whl", hash = "sha256:78f4f37d8198e0627c5f1143240bb0206b8691d8d7ac6d78fee88b78733f8c4a"},
{file = "pytz-2022.7.1.tar.gz", hash = "sha256:01a0681c4b9684a28304615eba55d1ab31ae00bf68ec157ec3708a8182dbbcd0"},
@@ -2521,7 +2513,7 @@ description = "Python client for Sentry (https://sentry.io)"
optional = true
python-versions = ">=3.6"
groups = ["main"]
markers = "extra == \"all\" or extra == \"sentry\""
markers = "extra == \"sentry\" or extra == \"all\""
files = [
{file = "sentry_sdk-2.34.1-py2.py3-none-any.whl", hash = "sha256:b7a072e1cdc5abc48101d5146e1ae680fa81fe886d8d95aaa25a0b450c818d32"},
{file = "sentry_sdk-2.34.1.tar.gz", hash = "sha256:69274eb8c5c38562a544c3e9f68b5be0a43be4b697f5fd385bf98e4fbe672687"},
@@ -2709,7 +2701,7 @@ description = "Tornado IOLoop Backed Concurrent Futures"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"opentracing\""
markers = "extra == \"opentracing\" or extra == \"all\""
files = [
{file = "threadloop-1.0.2-py2-none-any.whl", hash = "sha256:5c90dbefab6ffbdba26afb4829d2a9df8275d13ac7dc58dccb0e279992679599"},
{file = "threadloop-1.0.2.tar.gz", hash = "sha256:8b180aac31013de13c2ad5c834819771992d350267bddb854613ae77ef571944"},
@@ -2725,7 +2717,7 @@ description = "Python bindings for the Apache Thrift RPC system"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"opentracing\""
markers = "extra == \"opentracing\" or extra == \"all\""
files = [
{file = "thrift-0.16.0.tar.gz", hash = "sha256:2b5b6488fcded21f9d312aa23c9ff6a0195d0f6ae26ddbd5ad9e3e25dfc14408"},
]
@@ -2787,7 +2779,7 @@ description = "Tornado is a Python web framework and asynchronous networking lib
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"all\" or extra == \"opentracing\""
markers = "extra == \"opentracing\" or extra == \"all\""
files = [
{file = "tornado-6.5-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:f81067dad2e4443b015368b24e802d0083fecada4f0a4572fdb72fc06e54a9a6"},
{file = "tornado-6.5-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:9ac1cbe1db860b3cbb251e795c701c41d343f06a96049d6274e7c77559117e41"},
@@ -2924,7 +2916,7 @@ description = "non-blocking redis client for python"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"redis\""
markers = "extra == \"redis\" or extra == \"all\""
files = [
{file = "txredisapi-1.4.11-py3-none-any.whl", hash = "sha256:ac64d7a9342b58edca13ef267d4fa7637c1aa63f8595e066801c1e8b56b22d0b"},
{file = "txredisapi-1.4.11.tar.gz", hash = "sha256:3eb1af99aefdefb59eb877b1dd08861efad60915e30ad5bf3d5bf6c5cedcdbc6"},
@@ -3057,14 +3049,14 @@ types-cffi = "*"
[[package]]
name = "types-pyyaml"
version = "6.0.12.20250809"
version = "6.0.12.20250915"
description = "Typing stubs for PyYAML"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "types_pyyaml-6.0.12.20250809-py3-none-any.whl", hash = "sha256:032b6003b798e7de1a1ddfeefee32fac6486bdfe4845e0ae0e7fb3ee4512b52f"},
{file = "types_pyyaml-6.0.12.20250809.tar.gz", hash = "sha256:af4a1aca028f18e75297da2ee0da465f799627370d74073e96fee876524f61b5"},
{file = "types_pyyaml-6.0.12.20250915-py3-none-any.whl", hash = "sha256:e7d4d9e064e89a3b3cae120b4990cd370874d2bf12fa5f46c97018dd5d3c9ab6"},
{file = "types_pyyaml-6.0.12.20250915.tar.gz", hash = "sha256:0f8b54a528c303f0e6f7165687dd33fafa81c807fcac23f632b63aa624ced1d3"},
]
[[package]]
@@ -3170,7 +3162,7 @@ description = "An XML Schema validator and decoder"
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "xmlschema-2.4.0-py3-none-any.whl", hash = "sha256:dc87be0caaa61f42649899189aab2fd8e0d567f2cf548433ba7b79278d231a4a"},
{file = "xmlschema-2.4.0.tar.gz", hash = "sha256:d74cd0c10866ac609e1ef94a5a69b018ad16e39077bc6393408b40c6babee793"},
@@ -3314,4 +3306,4 @@ url-preview = ["lxml"]
[metadata]
lock-version = "2.1"
python-versions = "^3.9.0"
content-hash = "2e8ea085e1a0c6f0ac051d4bc457a96827d01f621b1827086de01a5ffa98cf79"
content-hash = "5d71c862b924bc2af936cb6fef264a023213153543f738af31357deaf6de19b8"

View File

@@ -78,6 +78,12 @@ select = [
"LOG",
# flake8-logging-format
"G",
# pyupgrade
"UP006",
]
extend-safe-fixes = [
# pyupgrade
"UP006"
]
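The newly selected UP006 rule is Ruff's pyupgrade check for non-PEP 585 annotations; listing it under `extend-safe-fixes` lets `ruff check --fix` apply the mechanical `Dict`/`List`/`Set`/`Tuple` → `dict`/`list`/`set`/`tuple` rewrites seen in the later files of this diff. A minimal sketch of the rewrite (illustrative names, Python 3.9+ assumed):

# Before (flagged by UP006):
from typing import Dict, List

def count_rows(rows: List[Dict[str, int]]) -> Dict[str, int]: ...

# After (PEP 585 builtin generics; the typing imports become unnecessary):
def count_rows(rows: list[dict[str, int]]) -> dict[str, int]: ...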
[tool.ruff.lint.isort]
@@ -101,7 +107,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.139.0rc3"
version = "1.141.0rc1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later OR LicenseRef-Element-Commercial"
@@ -319,7 +325,7 @@ all = [
# - systemd: this is a system-based requirement
]
[tool.poetry.dev-dependencies]
[tool.poetry.group.dev.dependencies]
# We pin development dependencies in poetry.lock so that our tests don't start
# failing on new releases. Keeping lower bounds loose here means that dependabot
# can bump versions without having to update the content-hash in the lockfile.
@@ -356,7 +362,7 @@ click = ">=8.1.3"
# GitPython was == 3.1.14; bumped to 3.1.20, the first release with type hints.
GitPython = ">=3.1.20"
markdown-it-py = ">=3.0.0"
pygithub = ">=1.55"
pygithub = ">=1.59"
# The following are executed as commands by the release script.
twine = "*"
# Towncrier min version comes from https://github.com/matrix-org/synapse/pull/3425. Rationale unclear.
@@ -381,10 +387,10 @@ build-backend = "poetry.core.masonry.api"
# Skip unsupported platforms (by us or by Rust).
# See https://cibuildwheel.readthedocs.io/en/stable/options/#build-skip for the list of build targets.
# We skip:
# - CPython and PyPy 3.8: EOLed
# - CPython 3.8: EOLed
# - musllinux i686: excluded to reduce number of wheels we build.
# c.f. https://github.com/matrix-org/synapse/pull/12595#discussion_r963107677
skip = "cp38* pp38* *-musllinux_i686"
skip = "cp38* *-musllinux_i686"
# Enable non-default builds.
# "pypy" used to be included by default up until cibuildwheel 3.
enable = "pypy"

View File

@@ -12,7 +12,7 @@
* <https://www.gnu.org/licenses/agpl-3.0.html>.
*/
use std::{collections::HashMap, future::Future};
use std::{collections::HashMap, future::Future, sync::OnceLock};
use anyhow::Context;
use futures::TryStreamExt;
@@ -299,5 +299,22 @@ where
});
});
Ok(deferred)
// Make the deferred follow the Synapse logcontext rules
make_deferred_yieldable(py, &deferred)
}
static MAKE_DEFERRED_YIELDABLE: OnceLock<pyo3::Py<pyo3::PyAny>> = OnceLock::new();
/// Given a deferred, make it follow the Synapse logcontext rules
fn make_deferred_yieldable<'py>(
py: Python<'py>,
deferred: &Bound<'py, PyAny>,
) -> PyResult<Bound<'py, PyAny>> {
let make_deferred_yieldable = MAKE_DEFERRED_YIELDABLE.get_or_init(|| {
let sys = PyModule::import(py, "synapse.logging.context").unwrap();
let func = sys.getattr("make_deferred_yieldable").unwrap().unbind();
func
});
make_deferred_yieldable.call1(py, (deferred,))?.extract(py)
}
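The `OnceLock` above simply caches the Python-side callable so that `synapse.logging.context` is only imported once. For context, a hedged sketch (illustrative, not code from this diff) of what the wrapper buys a Python caller:

from synapse.logging.context import make_deferred_yieldable

async def use_rust_result(deferred):
    # make_deferred_yieldable resets the logcontext while the deferred is
    # pending and restores it on completion, so the await is context-safe.
    return await make_deferred_yieldable(deferred)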

View File

@@ -1,5 +1,5 @@
$schema: https://element-hq.github.io/synapse/latest/schema/v1/meta.schema.json
$id: https://element-hq.github.io/synapse/schema/synapse/v1.139/synapse-config.schema.json
$id: https://element-hq.github.io/synapse/schema/synapse/v1.141/synapse-config.schema.json
type: object
properties:
modules:
@@ -2259,9 +2259,8 @@ properties:
Setting this to a high value allows users to report content quickly, possibly in
duplicate. This can result in higher database usage.
default:
per_user:
per_second: 1.0
burst_count: 5.0
per_second: 1.0
burst_count: 5.0
examples:
- per_second: 2.0
burst_count: 20.0
@@ -2270,9 +2269,8 @@ properties:
description: >-
Sets rate limits for how often users are able to create rooms.
default:
per_user:
per_second: 0.016
burst_count: 10.0
per_second: 0.016
burst_count: 10.0
examples:
- per_second: 1.0
burst_count: 5.0
@@ -2886,6 +2884,35 @@ properties:
default: true
examples:
- false
matrix_rtc:
type: object
description: >-
Options related to MatrixRTC.
properties:
transports:
type: array
items:
type: object
required:
- type
properties:
type:
type: string
description: The type of transport to use to connect to the selective forwarding unit (SFU).
example: livekit
livekit_service_url:
type: string
description: >-
The base URL of the LiveKit service. Should only be used with LiveKit-based transports.
example: https://matrix-rtc.example.com/livekit/jwt
description:
A list of transport types and arguments to use for MatrixRTC connections.
default: []
default: {}
examples:
- transports:
- type: livekit
livekit_service_url: https://matrix-rtc.example.com/livekit/jwt
enable_registration:
type: boolean
description: >-

View File

@@ -18,7 +18,7 @@ import sys
import threading
from concurrent.futures import ThreadPoolExecutor
from types import FrameType
from typing import Collection, Optional, Sequence, Set
from typing import Collection, Optional, Sequence
# These are expanded inside the dockerfile to be a fully qualified image name.
# e.g. docker.io/library/debian:bullseye
@@ -54,7 +54,7 @@ class Builder:
):
self.redirect_stdout = redirect_stdout
self._docker_build_args = tuple(docker_build_args or ())
self.active_containers: Set[str] = set()
self.active_containers: set[str] = set()
self._lock = threading.Lock()
self._failed = False

View File

@@ -21,7 +21,6 @@
#
import sys
from pathlib import Path
from typing import Dict, List
import tomli
@@ -33,7 +32,7 @@ def main() -> None:
# Poetry 1.3+ lockfile format:
# There's a `files` inline table in each [[package]]
packages_to_assets: Dict[str, List[Dict[str, str]]] = {
packages_to_assets: dict[str, list[dict[str, str]]] = {
package["name"]: package["files"] for package in lockfile_content["package"]
}

View File

@@ -47,11 +47,7 @@ from contextlib import contextmanager
from typing import (
Any,
Callable,
Dict,
Generator,
List,
Set,
Type,
TypeVar,
)
@@ -69,7 +65,7 @@ from synapse._pydantic_compat import (
logger = logging.getLogger(__name__)
CONSTRAINED_TYPE_FACTORIES_WITH_STRICT_FLAG: List[Callable] = [
CONSTRAINED_TYPE_FACTORIES_WITH_STRICT_FLAG: list[Callable] = [
constr,
conbytes,
conint,
@@ -145,7 +141,7 @@ class PatchedBaseModel(PydanticBaseModel):
"""
@classmethod
def __init_subclass__(cls: Type[PydanticBaseModel], **kwargs: object):
def __init_subclass__(cls: type[PydanticBaseModel], **kwargs: object):
for field in cls.__fields__.values():
# Note that field.type_ and field.outer_type are computed based on the
# annotation type, see pydantic.fields.ModelField._type_analysis
@@ -212,7 +208,7 @@ def lint() -> int:
return os.EX_DATAERR if failures else os.EX_OK
def do_lint() -> Set[str]:
def do_lint() -> set[str]:
"""Try to import all of Synapse and see if we spot any Pydantic type coercions."""
failures = set()
@@ -258,8 +254,8 @@ def run_test_snippet(source: str) -> None:
# > Remember that at the module level, globals and locals are the same dictionary.
# > If exec gets two separate objects as globals and locals, the code will be
# > executed as if it were embedded in a class definition.
globals_: Dict[str, object]
locals_: Dict[str, object]
globals_: dict[str, object]
locals_: dict[str, object]
globals_ = locals_ = {}
exec(textwrap.dedent(source), globals_, locals_)
@@ -394,10 +390,10 @@ class TestFieldTypeInspection(unittest.TestCase):
("bool"),
("Optional[str]",),
("Union[None, str]",),
("List[str]",),
("List[List[str]]",),
("Dict[StrictStr, str]",),
("Dict[str, StrictStr]",),
("list[str]",),
("list[list[str]]",),
("dict[StrictStr, str]",),
("dict[str, StrictStr]",),
("TypedDict('D', x=int)",),
]
)
@@ -425,9 +421,9 @@ class TestFieldTypeInspection(unittest.TestCase):
("constr(strict=True, min_length=10)",),
("Optional[StrictStr]",),
("Union[None, StrictStr]",),
("List[StrictStr]",),
("List[List[StrictStr]]",),
("Dict[StrictStr, StrictStr]",),
("list[StrictStr]",),
("list[list[StrictStr]]",),
("dict[StrictStr, StrictStr]",),
("TypedDict('D', x=StrictInt)",),
]
)

View File

@@ -5,7 +5,7 @@
# Also checks that schema deltas do not try and create or drop indices.
import re
from typing import Any, Dict, List
from typing import Any
import click
import git
@@ -48,16 +48,16 @@ def main(force_colors: bool) -> None:
r = repo.git.show(f"origin/{DEVELOP_BRANCH}:synapse/storage/schema/__init__.py")
locals: Dict[str, Any] = {}
locals: dict[str, Any] = {}
exec(r, locals)
current_schema_version = locals["SCHEMA_VERSION"]
diffs: List[git.Diff] = repo.remote().refs[DEVELOP_BRANCH].commit.diff(None)
diffs: list[git.Diff] = repo.remote().refs[DEVELOP_BRANCH].commit.diff(None)
# Get the schema version of the local file to check against current schema on develop
with open("synapse/storage/schema/__init__.py") as file:
local_schema = file.read()
new_locals: Dict[str, Any] = {}
new_locals: dict[str, Any] = {}
exec(local_schema, new_locals)
local_schema_version = new_locals["SCHEMA_VERSION"]

View File

@@ -43,7 +43,7 @@ import argparse
import base64
import json
import sys
from typing import Any, Dict, Mapping, Optional, Tuple, Union
from typing import Any, Mapping, Optional, Union
from urllib import parse as urlparse
import requests
@@ -147,7 +147,7 @@ def request(
s = requests.Session()
s.mount("matrix-federation://", MatrixConnectionAdapter())
headers: Dict[str, str] = {
headers: dict[str, str] = {
"Authorization": authorization_headers[0],
}
@@ -303,7 +303,7 @@ class MatrixConnectionAdapter(HTTPAdapter):
request: PreparedRequest,
verify: Optional[Union[bool, str]],
proxies: Optional[Mapping[str, str]] = None,
cert: Optional[Union[Tuple[str, str], str]] = None,
cert: Optional[Union[tuple[str, str], str]] = None,
) -> HTTPConnectionPool:
# overrides the get_connection_with_tls_context() method in the base class
parsed = urlparse.urlsplit(request.url)
@@ -326,7 +326,7 @@ class MatrixConnectionAdapter(HTTPAdapter):
)
@staticmethod
def _lookup(server_name: str) -> Tuple[str, int, str]:
def _lookup(server_name: str) -> tuple[str, int, str]:
"""
Do an SRV lookup on a server name and return the host:port to connect to
Given the server_name (after any .well-known lookup), return the host, port and

View File

@@ -24,7 +24,7 @@ can crop up, e.g the cache descriptors.
"""
import enum
from typing import Callable, Mapping, Optional, Tuple, Type, Union
from typing import Callable, Mapping, Optional, Union
import attr
import mypy.types
@@ -68,18 +68,42 @@ PROMETHEUS_METRIC_MISSING_FROM_LIST_TO_CHECK = ErrorCode(
category="per-homeserver-tenant-metrics",
)
PREFER_SYNAPSE_CLOCK_CALL_LATER = ErrorCode(
"call-later-not-tracked",
"Prefer using `synapse.util.Clock.call_later` instead of `reactor.callLater`",
category="synapse-reactor-clock",
)
PREFER_SYNAPSE_CLOCK_LOOPING_CALL = ErrorCode(
"prefer-synapse-clock-looping-call",
"Prefer using `synapse.util.Clock.looping_call` instead of `task.LoopingCall`",
category="synapse-reactor-clock",
)
PREFER_SYNAPSE_CLOCK_CALL_WHEN_RUNNING = ErrorCode(
"prefer-synapse-clock-call-when-running",
"`synapse.util.Clock.call_when_running` should be used instead of `reactor.callWhenRunning`",
"Prefer using `synapse.util.Clock.call_when_running` instead of `reactor.callWhenRunning`",
category="synapse-reactor-clock",
)
PREFER_SYNAPSE_CLOCK_ADD_SYSTEM_EVENT_TRIGGER = ErrorCode(
"prefer-synapse-clock-add-system-event-trigger",
"`synapse.util.Clock.add_system_event_trigger` should be used instead of `reactor.addSystemEventTrigger`",
"Prefer using `synapse.util.Clock.add_system_event_trigger` instead of `reactor.addSystemEventTrigger`",
category="synapse-reactor-clock",
)
MULTIPLE_INTERNAL_CLOCKS_CREATED = ErrorCode(
"multiple-internal-clocks",
"Only one instance of `clock.Clock` should be created",
category="synapse-reactor-clock",
)
UNTRACKED_BACKGROUND_PROCESS = ErrorCode(
"untracked-background-process",
"Prefer using `HomeServer.run_as_background_process` method over the bare `run_as_background_process`",
category="synapse-tracked-calls",
)
class Sentinel(enum.Enum):
# defining a sentinel in this way allows mypy to correctly handle the
@@ -160,8 +184,8 @@ should be in the source code.
# Unbound at this point because we don't know the mypy version yet.
# This is set in the `plugin(...)` function below.
MypyPydanticPluginClass: Type[Plugin]
MypyZopePluginClass: Type[Plugin]
MypyPydanticPluginClass: type[Plugin]
MypyZopePluginClass: type[Plugin]
class SynapsePlugin(Plugin):
@@ -222,6 +246,18 @@ class SynapsePlugin(Plugin):
# callback, let's just pass it in while we have it.
return lambda ctx: check_prometheus_metric_instantiation(ctx, fullname)
if fullname == "twisted.internet.task.LoopingCall":
return check_looping_call
if fullname == "synapse.util.clock.Clock":
return check_clock_creation
if (
fullname
== "synapse.metrics.background_process_metrics.run_as_background_process"
):
return check_background_process
return None
def get_method_signature_hook(
@@ -241,6 +277,13 @@ class SynapsePlugin(Plugin):
):
return check_is_cacheable_wrapper
if fullname in (
"twisted.internet.interfaces.IReactorTime.callLater",
"synapse.types.ISynapseThreadlessReactor.callLater",
"synapse.types.ISynapseReactor.callLater",
):
return check_call_later
if fullname in (
"twisted.internet.interfaces.IReactorCore.callWhenRunning",
"synapse.types.ISynapseThreadlessReactor.callWhenRunning",
@@ -258,6 +301,78 @@ class SynapsePlugin(Plugin):
return None
def check_clock_creation(ctx: FunctionSigContext) -> CallableType:
"""
Ensure that the only `clock.Clock` instance is the one used by the `HomeServer`.
This is so that the `HomeServer` can cancel any tracked delayed or looping calls
during server shutdown.
Args:
ctx: The `FunctionSigContext` from mypy.
"""
signature: CallableType = ctx.default_signature
ctx.api.fail(
"Expected the only `clock.Clock` instance to be the one used by the `HomeServer`. "
"This is so that the `HomeServer` can cancel any tracked delayed or looping calls "
"during server shutdown",
ctx.context,
code=MULTIPLE_INTERNAL_CLOCKS_CREATED,
)
return signature
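In practice the rule steers call sites towards reusing the server's single tracked instance instead of constructing their own. A hedged sketch, assuming Synapse's usual `hs.get_clock()` accessor:

# Flagged by this check: a second, untracked Clock.
clock = Clock(reactor)

# Fine: reuse the HomeServer's one tracked instance.
clock = hs.get_clock()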
def check_call_later(ctx: MethodSigContext) -> CallableType:
"""
Ensure that `reactor.callLater` isn't called directly.
`synapse.util.Clock.call_later` should always be used instead of `reactor.callLater`,
because `synapse.util.Clock` tracks delayed calls in order to cancel any still
outstanding during server shutdown. Delayed calls which are either short-lived (<~60s),
or frequent and trackable by other means, are candidates for
`synapse.util.Clock.call_later` with `call_later_cancel_on_shutdown` set to
`False`. There shouldn't be a need to use `reactor.callLater` outside of tests or the
`Clock` class itself. If a need arises, you can use a type ignore comment to disable the
check, e.g. `# type: ignore[call-later-not-tracked]`.
Args:
ctx: The `MethodSigContext` from mypy.
"""
signature: CallableType = ctx.default_signature
ctx.api.fail(
"Expected all `reactor.callLater` calls to use `synapse.util.Clock.call_later` "
"instead. This is so that long lived calls can be tracked for cancellation during "
"server shutdown",
ctx.context,
code=PREFER_SYNAPSE_CLOCK_CALL_LATER,
)
return signature
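A before/after sketch of the pattern this check enforces (hedged: `hs.get_clock()` is assumed, and the `call_later_cancel_on_shutdown` keyword is taken from the docstring above):

# Flagged: an untracked delayed call.
reactor.callLater(30, do_cleanup)

# Preferred: tracked, so the HomeServer can cancel it at shutdown.
hs.get_clock().call_later(30, do_cleanup)

# Short-lived or otherwise-tracked calls can opt out of tracking.
hs.get_clock().call_later(30, do_cleanup, call_later_cancel_on_shutdown=False)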
def check_looping_call(ctx: FunctionSigContext) -> CallableType:
"""
Ensure that `task.LoopingCall` isn't instantiated directly.
`synapse.util.Clock.looping_call` should always be used instead of `task.LoopingCall`.
`synapse.util.Clock` tracks looping calls in order to cancel any outstanding calls
during server shutdown.
Args:
ctx: The `FunctionSigContext` from mypy.
"""
signature: CallableType = ctx.default_signature
ctx.api.fail(
"Expected all `task.LoopingCall` instances to use `synapse.util.Clock.looping_call` "
"instead. This is so that long lived calls can be tracked for cancellation during "
"server shutdown",
ctx.context,
code=PREFER_SYNAPSE_CLOCK_LOOPING_CALL,
)
return signature
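And the looping-call analogue (hedged; Synapse's `Clock.looping_call` takes its interval in milliseconds):

# Flagged: an untracked LoopingCall.
task.LoopingCall(prune_cache).start(60)

# Preferred: tracked and cancellable at shutdown; note the ms interval.
hs.get_clock().looping_call(prune_cache, 60_000)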
def check_call_when_running(ctx: MethodSigContext) -> CallableType:
"""
Ensure that `reactor.callWhenRunning` isn't called directly.
@@ -312,6 +427,27 @@ def check_add_system_event_trigger(ctx: MethodSigContext) -> CallableType:
return signature
def check_background_process(ctx: FunctionSigContext) -> CallableType:
"""
Ensure that calls to `run_as_background_process` use the `HomeServer` method.
This is so that the `HomeServer` can cancel any running background processes during
server shutdown.
Args:
ctx: The `FunctionSigContext` from mypy.
"""
signature: CallableType = ctx.default_signature
ctx.api.fail(
"Prefer using `HomeServer.run_as_background_process` method over the bare "
"`run_as_background_process`. This is so that the `HomeServer` can cancel "
"any background processes during server shutdown",
ctx.context,
code=UNTRACKED_BACKGROUND_PROCESS,
)
return signature
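A sketch of the corresponding fix for background processes (hedged: only the method name is stated by the check; the call shape is assumed to mirror the bare helper's):

# Flagged: the bare helper, invisible to shutdown handling.
run_as_background_process("prune_old_events", prune_old_events)

# Preferred: the HomeServer can cancel it during shutdown.
hs.run_as_background_process("prune_old_events", prune_old_events)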
def analyze_prometheus_metric_classes(ctx: ClassDefContext) -> None:
"""
Cross-check the list of Prometheus metric classes against the
@@ -659,7 +795,7 @@ AT_CACHED_MUTABLE_RETURN = ErrorCode(
def is_cacheable(
rt: mypy.types.Type, signature: CallableType, verbose: bool
) -> Tuple[bool, Optional[str]]:
) -> tuple[bool, Optional[str]]:
"""
Check if a particular type is cacheable.
@@ -769,7 +905,7 @@ def is_cacheable(
return False, f"Don't know how to handle {type(rt).__qualname__} return type"
def plugin(version: str) -> Type[SynapsePlugin]:
def plugin(version: str) -> type[SynapsePlugin]:
global MypyPydanticPluginClass, MypyZopePluginClass
# This is the entry point of the plugin, and lets us deal with the fact
# that the mypy plugin interface is *not* stable by looking at the version

View File

@@ -32,11 +32,13 @@ import time
import urllib.request
from os import path
from tempfile import TemporaryDirectory
from typing import Any, List, Match, Optional, Union
from typing import Any, Match, Optional, Union
import attr
import click
import git
import github
import github.Auth
from click.exceptions import ClickException
from git import GitCommandError, Repo
from github import BadCredentialsException, Github
@@ -397,7 +399,7 @@ def _tag(gh_token: Optional[str]) -> None:
return
# Create a new draft release
gh = Github(gh_token)
gh = Github(auth=github.Auth.Token(token=gh_token))
gh_repo = gh.get_repo("element-hq/synapse")
release = gh_repo.create_git_release(
tag=tag_name,
@@ -428,7 +430,7 @@ def _publish(gh_token: str) -> None:
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh = Github(auth=github.Auth.Token(token=gh_token))
gh.get_user()
# Make sure we're in a git repo.
@@ -441,7 +443,7 @@ def _publish(gh_token: str) -> None:
return
# Publish the draft release
gh = Github(gh_token)
gh = Github(auth=github.Auth.Token(token=gh_token))
gh_repo = gh.get_repo("element-hq/synapse")
for release in gh_repo.get_releases():
if release.title == tag_name:
@@ -486,8 +488,13 @@ def _upload(gh_token: Optional[str]) -> None:
click.echo(f"Tag {tag_name} ({tag.commit}) is not currently checked out!")
click.get_current_context().abort()
if gh_token:
gh = Github(auth=github.Auth.Token(token=gh_token))
else:
# Use github anonymously.
gh = Github()
# Query all the assets corresponding to this release.
gh = Github(gh_token)
gh_repo = gh.get_repo("element-hq/synapse")
gh_release = gh_repo.get_release(tag_name)
@@ -639,7 +646,16 @@ def _notify(message: str) -> None:
@cli.command()
def merge_back() -> None:
# Although this option is not used, allow it anyway. Otherwise the user will
# receive an error when providing it, which is annoying, as other commands accept
# it.
@click.option(
"--gh-token",
"_gh_token",
envvar=["GH_TOKEN", "GITHUB_TOKEN"],
required=False,
)
def merge_back(_gh_token: Optional[str]) -> None:
_merge_back()
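
A small sketch of the click pattern above: an option sourced from a list of environment variables, where the first variable that is set wins.

from typing import Optional

import click


@click.command()
@click.option("--gh-token", "gh_token", envvar=["GH_TOKEN", "GITHUB_TOKEN"], required=False)
def demo(gh_token: Optional[str]) -> None:
    click.echo(f"token provided: {gh_token is not None}")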
@@ -687,7 +703,16 @@ def _merge_back() -> None:
@cli.command()
def announce() -> None:
# Although this option is not used, allow it anyway. Otherwise the user will
# receive an error when providing it, which is annoying as other commands accept
# it.
@click.option(
"--gh-token",
"_gh_token",
envvar=["GH_TOKEN", "GITHUB_TOKEN"],
required=False,
)
def announce(_gh_token: Optional[str]) -> None:
_announce()
@@ -696,18 +721,31 @@ def _announce() -> None:
current_version = get_package_version()
tag_name = f"v{current_version}"
is_rc = "rc" in tag_name
release_text = f"""
### Synapse {current_version} {"🧪" if is_rc else "🚀"}
click.echo(
f"""
Hi everyone. Synapse {current_version} has just been released.
"""
if "rc" in tag_name:
release_text += (
"\nThis is a release candidate. Please help us test it out "
"before the final release by deploying it to non-production environments, "
"and reporting any issues you find to "
"[the issue tracker](https://github.com/element-hq/synapse/issues). Thanks!\n"
)
release_text += f"""
[notes](https://github.com/element-hq/synapse/releases/tag/{tag_name}) | \
[docker](https://hub.docker.com/r/matrixdotorg/synapse/tags?name={tag_name}) | \
[debs](https://packages.matrix.org/debian/) | \
[pypi](https://pypi.org/project/matrix-synapse/{current_version}/)"""
)
if "rc" in tag_name:
click.echo(release_text)
if is_rc:
click.echo(
"""
Announce the RC in
@@ -732,7 +770,7 @@ Ask the designated people to do the blog and tweets."""
def full(gh_token: str) -> None:
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh = Github(auth=github.Auth.Token(token=gh_token))
gh.get_user()
click.echo("1. If this is a security release, read the security wiki page.")
@@ -801,8 +839,12 @@ def get_repo_and_check_clean_checkout(
raise click.ClickException(
f"{path} is not a git repository (expecting a {name} repository)."
)
if repo.is_dirty():
raise click.ClickException(f"Uncommitted changes exist in {path}.")
while repo.is_dirty():
if not click.confirm(
f"Uncommitted changes exist in {path}. Commit or stash them. Ready to continue?"
):
raise click.ClickException("Aborted.")
return repo
@@ -814,7 +856,7 @@ def check_valid_gh_token(gh_token: Optional[str]) -> None:
return
try:
gh = Github(gh_token)
gh = Github(auth=github.Auth.Token(token=gh_token))
# We need to lookup name to trigger a request.
_name = gh.get_user().name
@@ -861,7 +903,7 @@ def get_changes_for_version(wanted_version: version.Version) -> str:
start_line: int
end_line: Optional[int] = None  # Is None if it's the last entry
headings: List[VersionSection] = []
headings: list[VersionSection] = []
for i, token in enumerate(tokens):
# We look for level 1 headings (h1 tags).
if token.type != "heading_open" or token.tag != "h1":

View File

@@ -38,7 +38,7 @@ import io
import json
import sys
from collections import defaultdict
from typing import Any, Dict, Iterator, Optional, Tuple
from typing import Any, Iterator, Optional
import git
from packaging import version
@@ -57,7 +57,7 @@ SCHEMA_VERSION_FILES = (
OLDEST_SHOWN_VERSION = version.parse("v1.0")
def get_schema_versions(tag: git.Tag) -> Tuple[Optional[int], Optional[int]]:
def get_schema_versions(tag: git.Tag) -> tuple[Optional[int], Optional[int]]:
"""Get the schema and schema compat versions for a tag."""
schema_version = None
schema_compat_version = None
@@ -81,7 +81,7 @@ def get_schema_versions(tag: git.Tag) -> Tuple[Optional[int], Optional[int]]:
# SCHEMA_COMPAT_VERSION sometimes spans multiple lines, so the easiest
# thing to do is exec the code. Luckily it has only ever existed in
# a file which imports nothing else from Synapse.
locals: Dict[str, Any] = {}
locals: dict[str, Any] = {}
exec(schema_file.data_stream.read().decode("utf-8"), {}, locals)
schema_version = locals["SCHEMA_VERSION"]
schema_compat_version = locals.get("SCHEMA_COMPAT_VERSION")

View File

@@ -7,18 +7,14 @@ from __future__ import annotations
from typing import (
Any,
Callable,
Dict,
Hashable,
ItemsView,
Iterable,
Iterator,
KeysView,
List,
Mapping,
Optional,
Sequence,
Tuple,
Type,
TypeVar,
Union,
ValuesView,
@@ -35,14 +31,14 @@ _VT_co = TypeVar("_VT_co", covariant=True)
_SD = TypeVar("_SD", bound=SortedDict)
_Key = Callable[[_T], Any]
class SortedDict(Dict[_KT, _VT]):
class SortedDict(dict[_KT, _VT]):
@overload
def __init__(self, **kwargs: _VT) -> None: ...
@overload
def __init__(self, __map: Mapping[_KT, _VT], **kwargs: _VT) -> None: ...
@overload
def __init__(
self, __iterable: Iterable[Tuple[_KT, _VT]], **kwargs: _VT
self, __iterable: Iterable[tuple[_KT, _VT]], **kwargs: _VT
) -> None: ...
@overload
def __init__(self, __key: _Key[_KT], **kwargs: _VT) -> None: ...
@@ -52,7 +48,7 @@ class SortedDict(Dict[_KT, _VT]):
) -> None: ...
@overload
def __init__(
self, __key: _Key[_KT], __iterable: Iterable[Tuple[_KT, _VT]], **kwargs: _VT
self, __key: _Key[_KT], __iterable: Iterable[tuple[_KT, _VT]], **kwargs: _VT
) -> None: ...
@property
def key(self) -> Optional[_Key[_KT]]: ...
@@ -84,8 +80,8 @@ class SortedDict(Dict[_KT, _VT]):
def pop(self, key: _KT) -> _VT: ...
@overload
def pop(self, key: _KT, default: _T = ...) -> Union[_VT, _T]: ...
def popitem(self, index: int = ...) -> Tuple[_KT, _VT]: ...
def peekitem(self, index: int = ...) -> Tuple[_KT, _VT]: ...
def popitem(self, index: int = ...) -> tuple[_KT, _VT]: ...
def peekitem(self, index: int = ...) -> tuple[_KT, _VT]: ...
def setdefault(self, key: _KT, default: Optional[_VT] = ...) -> _VT: ...
# Mypy now reports the first overload as an error, because typeshed widened the type
# of `__map` to its internal `_typeshed.SupportsKeysAndGetItem` type in
@@ -102,9 +98,9 @@ class SortedDict(Dict[_KT, _VT]):
# def update(self, **kwargs: _VT) -> None: ...
def __reduce__(
self,
) -> Tuple[
Type[SortedDict[_KT, _VT]],
Tuple[Callable[[_KT], Any], List[Tuple[_KT, _VT]]],
) -> tuple[
type[SortedDict[_KT, _VT]],
tuple[Callable[[_KT], Any], list[tuple[_KT, _VT]]],
]: ...
def __repr__(self) -> str: ...
def _check(self) -> None: ...
@@ -121,20 +117,20 @@ class SortedKeysView(KeysView[_KT_co], Sequence[_KT_co]):
@overload
def __getitem__(self, index: int) -> _KT_co: ...
@overload
def __getitem__(self, index: slice) -> List[_KT_co]: ...
def __getitem__(self, index: slice) -> list[_KT_co]: ...
def __delitem__(self, index: Union[int, slice]) -> None: ...
class SortedItemsView(ItemsView[_KT_co, _VT_co], Sequence[Tuple[_KT_co, _VT_co]]):
def __iter__(self) -> Iterator[Tuple[_KT_co, _VT_co]]: ...
class SortedItemsView(ItemsView[_KT_co, _VT_co], Sequence[tuple[_KT_co, _VT_co]]):
def __iter__(self) -> Iterator[tuple[_KT_co, _VT_co]]: ...
@overload
def __getitem__(self, index: int) -> Tuple[_KT_co, _VT_co]: ...
def __getitem__(self, index: int) -> tuple[_KT_co, _VT_co]: ...
@overload
def __getitem__(self, index: slice) -> List[Tuple[_KT_co, _VT_co]]: ...
def __getitem__(self, index: slice) -> list[tuple[_KT_co, _VT_co]]: ...
def __delitem__(self, index: Union[int, slice]) -> None: ...
class SortedValuesView(ValuesView[_VT_co], Sequence[_VT_co]):
@overload
def __getitem__(self, index: int) -> _VT_co: ...
@overload
def __getitem__(self, index: slice) -> List[_VT_co]: ...
def __getitem__(self, index: slice) -> list[_VT_co]: ...
def __delitem__(self, index: Union[int, slice]) -> None: ...

View File

@@ -9,12 +9,9 @@ from typing import (
Callable,
Iterable,
Iterator,
List,
MutableSequence,
Optional,
Sequence,
Tuple,
Type,
TypeVar,
Union,
overload,
@@ -37,11 +34,11 @@ class SortedList(MutableSequence[_T]):
): ...
# NB: currently mypy does not honour return type, see mypy #3307
@overload
def __new__(cls: Type[_SL], iterable: None, key: None) -> _SL: ...
def __new__(cls: type[_SL], iterable: None, key: None) -> _SL: ...
@overload
def __new__(cls: Type[_SL], iterable: None, key: _Key[_T]) -> SortedKeyList[_T]: ...
def __new__(cls: type[_SL], iterable: None, key: _Key[_T]) -> SortedKeyList[_T]: ...
@overload
def __new__(cls: Type[_SL], iterable: Iterable[_T], key: None) -> _SL: ...
def __new__(cls: type[_SL], iterable: Iterable[_T], key: None) -> _SL: ...
@overload
def __new__(cls, iterable: Iterable[_T], key: _Key[_T]) -> SortedKeyList[_T]: ...
@property
@@ -64,11 +61,11 @@ class SortedList(MutableSequence[_T]):
@overload
def __getitem__(self, index: int) -> _T: ...
@overload
def __getitem__(self, index: slice) -> List[_T]: ...
def __getitem__(self, index: slice) -> list[_T]: ...
@overload
def _getitem(self, index: int) -> _T: ...
@overload
def _getitem(self, index: slice) -> List[_T]: ...
def _getitem(self, index: slice) -> list[_T]: ...
@overload
def __setitem__(self, index: int, value: _T) -> None: ...
@overload
@@ -95,7 +92,7 @@ class SortedList(MutableSequence[_T]):
self,
minimum: Optional[int] = ...,
maximum: Optional[int] = ...,
inclusive: Tuple[bool, bool] = ...,
inclusive: tuple[bool, bool] = ...,
reverse: bool = ...,
) -> Iterator[_T]: ...
def bisect_left(self, value: _T) -> int: ...
@@ -151,14 +148,14 @@ class SortedKeyList(SortedList[_T]):
self,
minimum: Optional[int] = ...,
maximum: Optional[int] = ...,
inclusive: Tuple[bool, bool] = ...,
inclusive: tuple[bool, bool] = ...,
reverse: bool = ...,
) -> Iterator[_T]: ...
def irange_key(
self,
min_key: Optional[Any] = ...,
max_key: Optional[Any] = ...,
inclusive: Tuple[bool, bool] = ...,
inclusive: tuple[bool, bool] = ...,
reverse: bool = ...,
) -> Iterator[_T]: ...
def bisect_left(self, value: _T) -> int: ...

View File

@@ -10,13 +10,9 @@ from typing import (
Hashable,
Iterable,
Iterator,
List,
MutableSet,
Optional,
Sequence,
Set,
Tuple,
Type,
TypeVar,
Union,
overload,
@@ -37,7 +33,7 @@ class SortedSet(MutableSet[_T], Sequence[_T]):
) -> None: ...
@classmethod
def _fromset(
cls, values: Set[_T], key: Optional[_Key[_T]] = ...
cls, values: set[_T], key: Optional[_Key[_T]] = ...
) -> SortedSet[_T]: ...
@property
def key(self) -> Optional[_Key[_T]]: ...
@@ -45,7 +41,7 @@ class SortedSet(MutableSet[_T], Sequence[_T]):
@overload
def __getitem__(self, index: int) -> _T: ...
@overload
def __getitem__(self, index: slice) -> List[_T]: ...
def __getitem__(self, index: slice) -> list[_T]: ...
def __delitem__(self, index: Union[int, slice]) -> None: ...
def __eq__(self, other: Any) -> bool: ...
def __ne__(self, other: Any) -> bool: ...
@@ -94,7 +90,7 @@ class SortedSet(MutableSet[_T], Sequence[_T]):
def _update(self, *iterables: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def __reduce__(
self,
) -> Tuple[Type[SortedSet[_T]], Set[_T], Callable[[_T], Any]]: ...
) -> tuple[type[SortedSet[_T]], set[_T], Callable[[_T], Any]]: ...
def __repr__(self) -> str: ...
def _check(self) -> None: ...
def bisect_left(self, value: _T) -> int: ...
@@ -109,7 +105,7 @@ class SortedSet(MutableSet[_T], Sequence[_T]):
self,
minimum: Optional[_T] = ...,
maximum: Optional[_T] = ...,
inclusive: Tuple[bool, bool] = ...,
inclusive: tuple[bool, bool] = ...,
reverse: bool = ...,
) -> Iterator[_T]: ...
def index(

View File

@@ -15,7 +15,7 @@
"""Contains *incomplete* type hints for txredisapi."""
from typing import Any, List, Optional, Type, Union
from typing import Any, Optional, Union
from twisted.internet import protocol
from twisted.internet.defer import Deferred
@@ -39,7 +39,7 @@ class RedisProtocol(protocol.Protocol):
class SubscriberProtocol(RedisProtocol):
def __init__(self, *args: object, **kwargs: object): ...
password: Optional[str]
def subscribe(self, channels: Union[str, List[str]]) -> "Deferred[None]": ...
def subscribe(self, channels: Union[str, list[str]]) -> "Deferred[None]": ...
def connectionMade(self) -> None: ...
# type-ignore: twisted.internet.protocol.Protocol provides a default argument for
# `reason`. txredisapi's LineReceiver Protocol doesn't. But that's fine: it's what's
@@ -69,7 +69,7 @@ class UnixConnectionHandler(ConnectionHandler): ...
class RedisFactory(protocol.ReconnectingClientFactory):
continueTrying: bool
handler: ConnectionHandler
pool: List[RedisProtocol]
pool: list[RedisProtocol]
replyTimeout: Optional[int]
def __init__(
self,
@@ -77,7 +77,7 @@ class RedisFactory(protocol.ReconnectingClientFactory):
dbid: Optional[int],
poolsize: int,
isLazy: bool = False,
handler: Type = ConnectionHandler,
handler: type = ConnectionHandler,
charset: str = "utf-8",
password: Optional[str] = None,
replyTimeout: Optional[int] = None,

View File

@@ -24,7 +24,7 @@
import os
import sys
from typing import Any, Dict
from typing import Any
from PIL import ImageFile
@@ -70,7 +70,7 @@ try:
from canonicaljson import register_preserialisation_callback
from immutabledict import immutabledict
def _immutabledict_cb(d: immutabledict) -> Dict[str, Any]:
def _immutabledict_cb(d: immutabledict) -> dict[str, Any]:
try:
return d._dict
except Exception:

View File

@@ -25,7 +25,7 @@ import logging
import re
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, Iterable, Optional, Pattern, Set, Tuple
from typing import Iterable, Optional, Pattern
import yaml
@@ -81,7 +81,7 @@ class EnumerationResource(HttpServer):
"""
def __init__(self, is_worker: bool) -> None:
self.registrations: Dict[Tuple[str, str], EndpointDescription] = {}
self.registrations: dict[tuple[str, str], EndpointDescription] = {}
self._is_worker = is_worker
def register_paths(
@@ -115,7 +115,7 @@ class EnumerationResource(HttpServer):
def get_registered_paths_for_hs(
hs: HomeServer,
) -> Dict[Tuple[str, str], EndpointDescription]:
) -> dict[tuple[str, str], EndpointDescription]:
"""
Given a homeserver, get all registered endpoints and their descriptions.
"""
@@ -142,7 +142,7 @@ def get_registered_paths_for_hs(
def get_registered_paths_for_default(
worker_app: Optional[str], base_config: HomeServerConfig
) -> Dict[Tuple[str, str], EndpointDescription]:
) -> dict[tuple[str, str], EndpointDescription]:
"""
Given the name of a worker application and a base homeserver configuration,
returns:
@@ -157,15 +157,20 @@ def get_registered_paths_for_default(
# TODO We only do this to avoid an error, but don't need the database etc
hs.setup()
registered_paths = get_registered_paths_for_hs(hs)
hs.cleanup()
# NOTE: a more robust implementation would properly shut down/clean up each server
# to avoid resource buildup.
# However, the call to `shutdown` is `async` so it would require additional complexity here.
# We are intentionally skipping this cleanup because this is a short-lived, one-off
# utility script where the simpler approach is sufficient and we shouldn't run into
# any resource buildup issues.
return registered_paths
def elide_http_methods_if_unconflicting(
registrations: Dict[Tuple[str, str], EndpointDescription],
all_possible_registrations: Dict[Tuple[str, str], EndpointDescription],
) -> Dict[Tuple[str, str], EndpointDescription]:
registrations: dict[tuple[str, str], EndpointDescription],
all_possible_registrations: dict[tuple[str, str], EndpointDescription],
) -> dict[tuple[str, str], EndpointDescription]:
"""
Elides HTTP methods (by replacing them with `*`) if all possible registered methods
can be handled by the worker whose registration map is `registrations`.
@@ -175,13 +180,13 @@ def elide_http_methods_if_unconflicting(
"""
def paths_to_methods_dict(
methods_and_paths: Iterable[Tuple[str, str]],
) -> Dict[str, Set[str]]:
methods_and_paths: Iterable[tuple[str, str]],
) -> dict[str, set[str]]:
"""
Given (method, path) pairs, produces a dict from path to set of methods
available at that path.
"""
result: Dict[str, Set[str]] = {}
result: dict[str, set[str]] = {}
for method, path in methods_and_paths:
result.setdefault(path, set()).add(method)
return result
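
A worked example of `paths_to_methods_dict` with illustrative pairs:

pairs = [("GET", "/a"), ("PUT", "/a"), ("GET", "/b")]
result: dict[str, set[str]] = {}
for method, path in pairs:
    result.setdefault(path, set()).add(method)
assert result == {"/a": {"GET", "PUT"}, "/b": {"GET"}}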
@@ -205,8 +210,8 @@ def elide_http_methods_if_unconflicting(
def simplify_path_regexes(
registrations: Dict[Tuple[str, str], EndpointDescription],
) -> Dict[Tuple[str, str], EndpointDescription]:
registrations: dict[tuple[str, str], EndpointDescription],
) -> dict[tuple[str, str], EndpointDescription]:
"""
Simplify all the path regexes for the dict of endpoint descriptions,
so that we don't use the Python-specific regex extensions
@@ -265,8 +270,8 @@ def main() -> None:
# TODO SSO endpoints (pick_idp etc) NOT REGISTERED BY THIS SCRIPT
categories_to_methods_and_paths: Dict[
Optional[str], Dict[Tuple[str, str], EndpointDescription]
categories_to_methods_and_paths: dict[
Optional[str], dict[tuple[str, str], EndpointDescription]
] = defaultdict(dict)
for (method, path), desc in elided_worker_paths.items():
@@ -278,7 +283,7 @@ def main() -> None:
def print_category(
category_name: Optional[str],
elided_worker_paths: Dict[Tuple[str, str], EndpointDescription],
elided_worker_paths: dict[tuple[str, str], EndpointDescription],
) -> None:
"""
Prints out a category, in documentation page style.

View File

@@ -73,8 +73,18 @@ def main() -> None:
pw = unicodedata.normalize("NFKC", password)
bytes_to_hash = pw.encode("utf8") + password_pepper.encode("utf8")
if len(bytes_to_hash) > 72:
# bcrypt only looks at the first 72 bytes
print(
f"Password is too long ({len(bytes_to_hash)} bytes); truncating to 72 bytes for bcrypt. "
"This is expected behaviour and will not affect a user's ability to log in. 72 bytes is "
"sufficient entropy for a password."
)
bytes_to_hash = bytes_to_hash[:72]
hashed = bcrypt.hashpw(
pw.encode("utf8") + password_pepper.encode("utf8"),
bytes_to_hash,
bcrypt.gensalt(bcrypt_rounds),
).decode("ascii")

View File

@@ -26,7 +26,7 @@ import hashlib
import hmac
import logging
import sys
from typing import Any, Callable, Dict, Optional
from typing import Any, Callable, Optional
import requests
import yaml
@@ -262,7 +262,7 @@ def main() -> None:
args = parser.parse_args()
config: Optional[Dict[str, Any]] = None
config: Optional[dict[str, Any]] = None
if "config" in args and args.config:
config = yaml.safe_load(args.config)
@@ -350,7 +350,7 @@ def _read_file(file_path: Any, config_path: str) -> str:
sys.exit(1)
def _find_client_listener(config: Dict[str, Any]) -> Optional[str]:
def _find_client_listener(config: dict[str, Any]) -> Optional[str]:
# try to find a listener in the config. Returns a host:port pair
for listener in config.get("listeners", []):
if listener.get("type") != "http" or listener.get("tls", False):

View File

@@ -23,7 +23,6 @@ import argparse
import sys
import time
from datetime import datetime
from typing import List
import attr
@@ -50,15 +49,15 @@ class ReviewConfig(RootConfig):
class UserInfo:
user_id: str
creation_ts: int
emails: List[str] = attr.Factory(list)
private_rooms: List[str] = attr.Factory(list)
public_rooms: List[str] = attr.Factory(list)
ips: List[str] = attr.Factory(list)
emails: list[str] = attr.Factory(list)
private_rooms: list[str] = attr.Factory(list)
public_rooms: list[str] = attr.Factory(list)
ips: list[str] = attr.Factory(list)
def get_recent_users(
txn: LoggingTransaction, since_ms: int, exclude_app_service: bool
) -> List[UserInfo]:
) -> list[UserInfo]:
"""Fetches recently registered users and some info on them."""
sql = """

View File

@@ -33,15 +33,10 @@ from typing import (
Any,
Awaitable,
Callable,
Dict,
Generator,
Iterable,
List,
NoReturn,
Optional,
Set,
Tuple,
Type,
TypedDict,
TypeVar,
cast,
@@ -98,7 +93,6 @@ from synapse.storage.databases.state.bg_updates import StateBackgroundUpdateStor
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
from synapse.types import ISynapseReactor
from synapse.util import SYNAPSE_VERSION
# Cast safety: Twisted does some naughty magic which replaces the
# twisted.internet.reactor module with a Reactor instance at runtime.
@@ -245,7 +239,7 @@ end_error: Optional[str] = None
# not the error then the script will show nothing outside of what's printed in the run
# function. If both are defined, the script will print both the error and the stacktrace.
end_error_exec_info: Optional[
Tuple[Type[BaseException], BaseException, TracebackType]
tuple[type[BaseException], BaseException, TracebackType]
] = None
R = TypeVar("R")
@@ -282,8 +276,8 @@ class Store(
def execute(self, f: Callable[..., R], *args: Any, **kwargs: Any) -> Awaitable[R]:
return self.db_pool.runInteraction(f.__name__, f, *args, **kwargs)
def execute_sql(self, sql: str, *args: object) -> Awaitable[List[Tuple]]:
def r(txn: LoggingTransaction) -> List[Tuple]:
def execute_sql(self, sql: str, *args: object) -> Awaitable[list[tuple]]:
def r(txn: LoggingTransaction) -> list[tuple]:
txn.execute(sql, args)
return txn.fetchall()
@@ -293,8 +287,8 @@ class Store(
self,
txn: LoggingTransaction,
table: str,
headers: List[str],
rows: List[Tuple],
headers: list[str],
rows: list[tuple],
override_system_value: bool = False,
) -> None:
sql = "INSERT INTO %s (%s) %s VALUES (%s)" % (
@@ -325,14 +319,13 @@ class MockHomeserver(HomeServer):
hostname=config.server.server_name,
config=config,
reactor=reactor,
version_string=f"Synapse/{SYNAPSE_VERSION}",
)
class Porter:
def __init__(
self,
sqlite_config: Dict[str, Any],
sqlite_config: dict[str, Any],
progress: "Progress",
batch_size: int,
hs: HomeServer,
@@ -342,7 +335,7 @@ class Porter:
self.batch_size = batch_size
self.hs = hs
async def setup_table(self, table: str) -> Tuple[str, int, int, int, int]:
async def setup_table(self, table: str) -> tuple[str, int, int, int, int]:
if table in APPEND_ONLY_TABLES:
# It's safe to just carry on inserting.
row = await self.postgres_store.db_pool.simple_select_one(
@@ -405,10 +398,10 @@ class Porter:
return table, already_ported, total_to_port, forward_chunk, backward_chunk
async def get_table_constraints(self) -> Dict[str, Set[str]]:
async def get_table_constraints(self) -> dict[str, set[str]]:
"""Returns a map of tables that have foreign key constraints to tables they depend on."""
def _get_constraints(txn: LoggingTransaction) -> Dict[str, Set[str]]:
def _get_constraints(txn: LoggingTransaction) -> dict[str, set[str]]:
# We can pull the information about foreign key constraints out from
# the postgres schema tables.
sql = """
@@ -424,7 +417,7 @@ class Porter:
"""
txn.execute(sql)
results: Dict[str, Set[str]] = {}
results: dict[str, set[str]] = {}
for table, foreign_table in txn:
results.setdefault(table, set()).add(foreign_table)
return results
@@ -492,7 +485,7 @@ class Porter:
def r(
txn: LoggingTransaction,
) -> Tuple[Optional[List[str]], List[Tuple], List[Tuple]]:
) -> tuple[Optional[list[str]], list[tuple], list[tuple]]:
forward_rows = []
backward_rows = []
if do_forward[0]:
@@ -509,7 +502,7 @@ class Porter:
if forward_rows or backward_rows:
assert txn.description is not None
headers: Optional[List[str]] = [
headers: Optional[list[str]] = [
column[0] for column in txn.description
]
else:
@@ -576,7 +569,7 @@ class Porter:
while True:
def r(txn: LoggingTransaction) -> Tuple[List[str], List[Tuple]]:
def r(txn: LoggingTransaction) -> tuple[list[str], list[tuple]]:
txn.execute(select, (forward_chunk, self.batch_size))
rows = txn.fetchall()
assert txn.description is not None
@@ -958,7 +951,7 @@ class Porter:
self.progress.set_state("Copying to postgres")
constraints = await self.get_table_constraints()
tables_ported = set() # type: Set[str]
tables_ported = set() # type: set[str]
while tables_to_port_info_map:
# Pulls out all tables that are still to be ported and which
@@ -997,8 +990,8 @@ class Porter:
reactor.stop()
def _convert_rows(
self, table: str, headers: List[str], rows: List[Tuple]
) -> List[Tuple]:
self, table: str, headers: list[str], rows: list[tuple]
) -> list[tuple]:
bool_col_names = BOOLEAN_COLUMNS.get(table, [])
bool_cols = [i for i, h in enumerate(headers) if h in bool_col_names]
@@ -1032,7 +1025,7 @@ class Porter:
return outrows
async def _setup_sent_transactions(self) -> Tuple[int, int, int]:
async def _setup_sent_transactions(self) -> tuple[int, int, int]:
# Only save things from the last day
yesterday = int(time.time() * 1000) - 86400000
@@ -1044,7 +1037,7 @@ class Porter:
")"
)
def r(txn: LoggingTransaction) -> Tuple[List[str], List[Tuple]]:
def r(txn: LoggingTransaction) -> tuple[list[str], list[tuple]]:
txn.execute(select)
rows = txn.fetchall()
assert txn.description is not None
@@ -1114,14 +1107,14 @@ class Porter:
self, table: str, forward_chunk: int, backward_chunk: int
) -> int:
frows = cast(
List[Tuple[int]],
list[tuple[int]],
await self.sqlite_store.execute_sql(
"SELECT count(*) FROM %s WHERE rowid >= ?" % (table,), forward_chunk
),
)
brows = cast(
List[Tuple[int]],
list[tuple[int]],
await self.sqlite_store.execute_sql(
"SELECT count(*) FROM %s WHERE rowid <= ?" % (table,), backward_chunk
),
@@ -1138,7 +1131,7 @@ class Porter:
async def _get_total_count_to_port(
self, table: str, forward_chunk: int, backward_chunk: int
) -> Tuple[int, int]:
) -> tuple[int, int]:
remaining, done = await make_deferred_yieldable(
defer.gatherResults(
[
@@ -1223,7 +1216,7 @@ class Porter:
async def _setup_sequence(
self,
sequence_name: str,
stream_id_tables: Iterable[Tuple[str, str]],
stream_id_tables: Iterable[tuple[str, str]],
) -> None:
"""Set a sequence to the correct value."""
current_stream_ids = []
@@ -1333,7 +1326,7 @@ class Progress:
"""Used to report progress of the port"""
def __init__(self) -> None:
self.tables: Dict[str, TableProgress] = {}
self.tables: dict[str, TableProgress] = {}
self.start_time = int(time.time())

View File

@@ -28,11 +28,9 @@ import yaml
from twisted.internet import defer, reactor as reactor_
from synapse.config.homeserver import HomeServerConfig
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.types import ISynapseReactor
from synapse.util import SYNAPSE_VERSION
# Cast safety: Twisted does some naughty magic which replaces the
# twisted.internet.reactor module with a Reactor instance at runtime.
@@ -48,12 +46,10 @@ class MockHomeserver(HomeServer):
hostname=config.server.server_name,
config=config,
reactor=reactor,
version_string=f"Synapse/{SYNAPSE_VERSION}",
)
def run_background_updates(hs: HomeServer) -> None:
server_name = hs.hostname
main = hs.get_datastores().main
state = hs.get_datastores().state
@@ -67,9 +63,8 @@ def run_background_updates(hs: HomeServer) -> None:
def run() -> None:
# Apply all background updates on the database.
defer.ensureDeferred(
run_as_background_process(
hs.run_as_background_process(
"background_updates",
server_name,
run_background_updates,
)
)

View File

@@ -18,7 +18,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
from typing import TYPE_CHECKING, Optional, Protocol, Tuple
from typing import TYPE_CHECKING, Optional, Protocol
from prometheus_client import Histogram
@@ -51,7 +51,7 @@ class Auth(Protocol):
room_id: str,
requester: Requester,
allow_departed_users: bool = False,
) -> Tuple[str, Optional[str]]:
) -> tuple[str, Optional[str]]:
"""Check if the user is in the room, or was at some point.
Args:
room_id: The room to check.
@@ -190,7 +190,7 @@ class Auth(Protocol):
async def check_user_in_room_or_world_readable(
self, room_id: str, requester: Requester, allow_departed_users: bool = False
) -> Tuple[str, Optional[str]]:
) -> tuple[str, Optional[str]]:
"""Checks that the user is or was in the room or the room is world
readable. If it isn't then an exception is raised.

View File

@@ -19,7 +19,7 @@
#
#
import logging
from typing import TYPE_CHECKING, Optional, Tuple
from typing import TYPE_CHECKING, Optional
from netaddr import IPAddress
@@ -64,7 +64,7 @@ class BaseAuth:
room_id: str,
requester: Requester,
allow_departed_users: bool = False,
) -> Tuple[str, Optional[str]]:
) -> tuple[str, Optional[str]]:
"""Check if the user is in the room, or was at some point.
Args:
room_id: The room to check.
@@ -114,7 +114,7 @@ class BaseAuth:
@trace
async def check_user_in_room_or_world_readable(
self, room_id: str, requester: Requester, allow_departed_users: bool = False
) -> Tuple[str, Optional[str]]:
) -> tuple[str, Optional[str]]:
"""Checks that the user is or was in the room or the room is world
readable. If it isn't then an exception is raised.
@@ -302,12 +302,9 @@ class BaseAuth:
(the user_id URI parameter allows an application service to masquerade
any applicable user in its namespace)
- what device the application service should be treated as controlling
(the device_id[^1] URI parameter allows an application service to masquerade
(the device_id URI parameter allows an application service to masquerade
as any device that exists for the relevant user)
[^1] Unstable and provided by MSC3202.
Must use `org.matrix.msc3202.device_id` in place of `device_id` for now.
Returns:
the application service `Requester` of that request
@@ -319,7 +316,8 @@ class BaseAuth:
- The returned device ID, if present, has been checked to be a valid device ID
for the returned user ID.
"""
DEVICE_ID_ARG_NAME = b"org.matrix.msc3202.device_id"
# TODO: We can drop unstable support after 2026-01-01 (a couple of months after stable support)
UNSTABLE_DEVICE_ID_ARG_NAME = b"org.matrix.msc3202.device_id"
app_service = self.store.get_app_service_by_token(access_token)
if app_service is None:
@@ -341,13 +339,11 @@ class BaseAuth:
else:
effective_user_id = app_service.sender
effective_device_id: Optional[str] = None
if (
self.hs.config.experimental.msc3202_device_masquerading_enabled
and DEVICE_ID_ARG_NAME in request.args
):
effective_device_id = request.args[DEVICE_ID_ARG_NAME][0].decode("utf8")
effective_device_id_args = request.args.get(
b"device_id", request.args.get(UNSTABLE_DEVICE_ID_ARG_NAME)
)
if effective_device_id_args:
effective_device_id = effective_device_id_args[0].decode("utf8")
# We only just set this so it can't be None!
assert effective_device_id is not None
device_opt = await self.store.get_device(
@@ -359,6 +355,8 @@ class BaseAuth:
f"Application service trying to use a device that doesn't exist ('{effective_device_id}' for {effective_user_id})",
Codes.UNKNOWN_DEVICE,
)
else:
effective_device_id = None
return create_requester(
effective_user_id, app_service=app_service, device_id=effective_device_id
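
A sketch of the stable-then-unstable query-parameter fallback used above (values illustrative):

args: dict[bytes, list[bytes]] = {b"device_id": [b"DEV1"]}
value = args.get(b"device_id", args.get(b"org.matrix.msc3202.device_id"))
device_id = value[0].decode("utf8") if value else None
assert device_id == "DEV1"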

View File

@@ -13,7 +13,7 @@
#
#
import logging
from typing import TYPE_CHECKING, Optional, Set
from typing import TYPE_CHECKING, Optional
from urllib.parse import urlencode
from synapse._pydantic_compat import (
@@ -33,7 +33,6 @@ from synapse.api.errors import (
UnrecognizedRequestError,
)
from synapse.http.site import SynapseRequest
from synapse.logging.context import PreserveLoggingContext
from synapse.logging.opentracing import (
active_span,
force_tracing,
@@ -229,13 +228,12 @@ class MasDelegatedAuth(BaseAuth):
try:
with start_active_span("mas-introspect-token"):
inject_request_headers(raw_headers)
with PreserveLoggingContext():
resp_body = await self._rust_http_client.post(
url=self._introspection_endpoint,
response_limit=1 * 1024 * 1024,
headers=raw_headers,
request_body=body,
)
resp_body = await self._rust_http_client.post(
url=self._introspection_endpoint,
response_limit=1 * 1024 * 1024,
headers=raw_headers,
request_body=body,
)
except HttpResponseException as e:
end_time = self._clock.time()
introspection_response_timer.labels(
@@ -371,7 +369,7 @@ class MasDelegatedAuth(BaseAuth):
# We only allow a single device_id in the scope, so we find them all in the
# scope list, and raise if there are more than one. The OIDC server should be
# the one enforcing valid scopes, so we raise a 500 if we find an invalid scope.
device_ids: Set[str] = set()
device_ids: set[str] = set()
for tok in scope:
if tok.startswith(UNSTABLE_SCOPE_MATRIX_DEVICE_PREFIX):
device_ids.add(tok[len(UNSTABLE_SCOPE_MATRIX_DEVICE_PREFIX) :])

View File

@@ -20,7 +20,7 @@
#
import logging
from dataclasses import dataclass
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Set
from typing import TYPE_CHECKING, Any, Callable, Optional
from urllib.parse import urlencode
from authlib.oauth2 import ClientAuth
@@ -38,7 +38,6 @@ from synapse.api.errors import (
UnrecognizedRequestError,
)
from synapse.http.site import SynapseRequest
from synapse.logging.context import PreserveLoggingContext
from synapse.logging.opentracing import (
active_span,
force_tracing,
@@ -71,7 +70,7 @@ STABLE_SCOPE_MATRIX_DEVICE_PREFIX = "urn:matrix:client:device:"
SCOPE_SYNAPSE_ADMIN = "urn:synapse:admin:*"
def scope_to_list(scope: str) -> List[str]:
def scope_to_list(scope: str) -> list[str]:
"""Convert a scope string to a list of scope tokens"""
return scope.strip().split(" ")
@@ -97,7 +96,7 @@ class IntrospectionResult:
absolute_expiry_ms = expires_in * 1000 + self.retrieved_at_ms
return now_ms < absolute_expiry_ms
def get_scope_list(self) -> List[str]:
def get_scope_list(self) -> list[str]:
value = self._inner.get("scope")
if not isinstance(value, str):
return []
@@ -265,7 +264,7 @@ class MSC3861DelegatedAuth(BaseAuth):
logger.warning("Failed to load metadata:", exc_info=True)
return None
async def auth_metadata(self) -> Dict[str, Any]:
async def auth_metadata(self) -> dict[str, Any]:
"""
Returns the auth metadata dict
"""
@@ -304,7 +303,7 @@ class MSC3861DelegatedAuth(BaseAuth):
# By default, we shouldn't cache the result unless we know it's valid
cache_context.should_cache = False
introspection_endpoint = await self._introspection_endpoint()
raw_headers: Dict[str, str] = {
raw_headers: dict[str, str] = {
"Content-Type": "application/x-www-form-urlencoded",
"Accept": "application/json",
# Tell MAS that we support reading the device ID as an explicit
@@ -327,13 +326,12 @@ class MSC3861DelegatedAuth(BaseAuth):
try:
with start_active_span("mas-introspect-token"):
inject_request_headers(raw_headers)
with PreserveLoggingContext():
resp_body = await self._rust_http_client.post(
url=uri,
response_limit=1 * 1024 * 1024,
headers=raw_headers,
request_body=body,
)
resp_body = await self._rust_http_client.post(
url=uri,
response_limit=1 * 1024 * 1024,
headers=raw_headers,
request_body=body,
)
except HttpResponseException as e:
end_time = self._clock.time()
introspection_response_timer.labels(
@@ -522,7 +520,7 @@ class MSC3861DelegatedAuth(BaseAuth):
raise InvalidClientTokenError("Token is not active")
# Let's look at the scope
scope: List[str] = introspection_result.get_scope_list()
scope: list[str] = introspection_result.get_scope_list()
# Determine type of user based on presence of particular scopes
has_user_scope = (
@@ -577,7 +575,7 @@ class MSC3861DelegatedAuth(BaseAuth):
# We only allow a single device_id in the scope, so we find them all in the
# scope list, and raise if there are more than one. The OIDC server should be
# the one enforcing valid scopes, so we raise a 500 if we find an invalid scope.
device_ids: Set[str] = set()
device_ids: set[str] = set()
for tok in scope:
if tok.startswith(UNSTABLE_SCOPE_MATRIX_DEVICE_PREFIX):
device_ids.add(tok[len(UNSTABLE_SCOPE_MATRIX_DEVICE_PREFIX) :])

View File

@@ -26,7 +26,7 @@ import math
import typing
from enum import Enum
from http import HTTPStatus
from typing import Any, Dict, List, Optional, Union
from typing import Any, Optional, Union
from twisted.web import http
@@ -166,7 +166,7 @@ class CodeMessageException(RuntimeError):
self,
code: Union[int, HTTPStatus],
msg: str,
headers: Optional[Dict[str, str]] = None,
headers: Optional[dict[str, str]] = None,
):
super().__init__("%d: %s" % (code, msg))
@@ -201,7 +201,7 @@ class RedirectException(CodeMessageException):
super().__init__(code=http_code, msg=msg)
self.location = location
self.cookies: List[bytes] = []
self.cookies: list[bytes] = []
class SynapseError(CodeMessageException):
@@ -223,8 +223,8 @@ class SynapseError(CodeMessageException):
code: int,
msg: str,
errcode: str = Codes.UNKNOWN,
additional_fields: Optional[Dict] = None,
headers: Optional[Dict[str, str]] = None,
additional_fields: Optional[dict] = None,
headers: Optional[dict[str, str]] = None,
):
"""Constructs a synapse error.
@@ -236,7 +236,7 @@ class SynapseError(CodeMessageException):
super().__init__(code, msg, headers)
self.errcode = errcode
if additional_fields is None:
self._additional_fields: Dict = {}
self._additional_fields: dict = {}
else:
self._additional_fields = dict(additional_fields)
@@ -276,7 +276,7 @@ class ProxiedRequestError(SynapseError):
code: int,
msg: str,
errcode: str = Codes.UNKNOWN,
additional_fields: Optional[Dict] = None,
additional_fields: Optional[dict] = None,
):
super().__init__(code, msg, errcode, additional_fields)
@@ -409,7 +409,7 @@ class OAuthInsufficientScopeError(SynapseError):
def __init__(
self,
required_scopes: List[str],
required_scopes: list[str],
):
headers = {
"WWW-Authenticate": 'Bearer error="insufficient_scope", scope="%s"'

View File

@@ -26,12 +26,9 @@ from typing import (
Awaitable,
Callable,
Collection,
Dict,
Iterable,
List,
Mapping,
Optional,
Set,
TypeVar,
Union,
)
@@ -248,34 +245,34 @@ class FilterCollection:
async def filter_presence(
self, presence_states: Iterable[UserPresenceState]
) -> List[UserPresenceState]:
) -> list[UserPresenceState]:
return await self._presence_filter.filter(presence_states)
async def filter_global_account_data(
self, events: Iterable[JsonDict]
) -> List[JsonDict]:
) -> list[JsonDict]:
return await self._global_account_data_filter.filter(events)
async def filter_room_state(self, events: Iterable[EventBase]) -> List[EventBase]:
async def filter_room_state(self, events: Iterable[EventBase]) -> list[EventBase]:
return await self._room_state_filter.filter(
await self._room_filter.filter(events)
)
async def filter_room_timeline(
self, events: Iterable[EventBase]
) -> List[EventBase]:
) -> list[EventBase]:
return await self._room_timeline_filter.filter(
await self._room_filter.filter(events)
)
async def filter_room_ephemeral(self, events: Iterable[JsonDict]) -> List[JsonDict]:
async def filter_room_ephemeral(self, events: Iterable[JsonDict]) -> list[JsonDict]:
return await self._room_ephemeral_filter.filter(
await self._room_filter.filter(events)
)
async def filter_room_account_data(
self, events: Iterable[JsonDict]
) -> List[JsonDict]:
) -> list[JsonDict]:
return await self._room_account_data_filter.filter(
await self._room_filter.filter(events)
)
@@ -440,7 +437,7 @@ class Filter:
return True
def _check_fields(self, field_matchers: Dict[str, Callable[[str], bool]]) -> bool:
def _check_fields(self, field_matchers: dict[str, Callable[[str], bool]]) -> bool:
"""Checks whether the filter matches the given event fields.
Args:
@@ -474,7 +471,7 @@ class Filter:
# Otherwise, accept it.
return True
def filter_rooms(self, room_ids: Iterable[str]) -> Set[str]:
def filter_rooms(self, room_ids: Iterable[str]) -> set[str]:
"""Apply the 'rooms' filter to a given list of rooms.
Args:
@@ -496,7 +493,7 @@ class Filter:
async def _check_event_relations(
self, events: Collection[FilterEvent]
) -> List[FilterEvent]:
) -> list[FilterEvent]:
# The event IDs to check, mypy doesn't understand the isinstance check.
event_ids = [event.event_id for event in events if isinstance(event, EventBase)] # type: ignore[attr-defined]
event_ids_to_keep = set(
@@ -511,7 +508,7 @@ class Filter:
if not isinstance(event, EventBase) or event.event_id in event_ids_to_keep
]
async def filter(self, events: Iterable[FilterEvent]) -> List[FilterEvent]:
async def filter(self, events: Iterable[FilterEvent]) -> list[FilterEvent]:
result = [event for event in events if self._check(event)]
if self.related_by_senders or self.related_by_rel_types:

View File

@@ -20,7 +20,7 @@
#
#
from typing import TYPE_CHECKING, Dict, Hashable, Optional, Tuple
from typing import TYPE_CHECKING, Hashable, Optional
from synapse.api.errors import LimitExceededError
from synapse.config.ratelimiting import RatelimitSettings
@@ -92,7 +92,7 @@ class Ratelimiter:
# * The number of tokens currently in the bucket,
# * The time point when the bucket was last completely empty, and
# * The rate_hz (leak rate) of this particular bucket.
self.actions: Dict[Hashable, Tuple[float, float, float]] = {}
self.actions: dict[Hashable, tuple[float, float, float]] = {}
self.clock.looping_call(self._prune_message_counts, 60 * 1000)
@@ -109,7 +109,7 @@ class Ratelimiter:
def _get_action_counts(
self, key: Hashable, time_now_s: float
) -> Tuple[float, float, float]:
) -> tuple[float, float, float]:
"""Retrieve the action counts, with a fallback representing an empty bucket."""
return self.actions.get(key, (0.0, time_now_s, 0.0))
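
An illustrative reading of the bucket tuple above: an unseen key falls back to an empty bucket at the current time with no leak rate configured.

actions: dict[str, tuple[float, float, float]] = {}
now_s = 1000.0
tokens, last_empty_s, rate_hz = actions.get("@alice:example.org", (0.0, now_s, 0.0))
assert (tokens, last_empty_s, rate_hz) == (0.0, 1000.0, 0.0)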
@@ -122,7 +122,7 @@ class Ratelimiter:
update: bool = True,
n_actions: int = 1,
_time_now_s: Optional[float] = None,
) -> Tuple[bool, float]:
) -> tuple[bool, float]:
"""Can the entity (e.g. user or IP address) perform the action?
Checks if the user has ratelimiting disabled in the database by looking

View File

@@ -18,7 +18,7 @@
#
#
from typing import Callable, Dict, Optional, Tuple
from typing import Callable, Optional
import attr
@@ -109,7 +109,7 @@ class RoomVersion:
# is not enough to mark it "supported": the push rule evaluator also needs to
# support the flag. Unknown flags are ignored by the evaluator, making conditions
# fail if used.
msc3931_push_features: Tuple[str, ...] # values from PushRuleRoomFlag
msc3931_push_features: tuple[str, ...] # values from PushRuleRoomFlag
# MSC3757: Restricting who can overwrite a state event
msc3757_enabled: bool
# MSC4289: Creator power enabled
@@ -476,7 +476,7 @@ class RoomVersions:
)
KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
KNOWN_ROOM_VERSIONS: dict[str, RoomVersion] = {
v.identifier: v
for v in (
RoomVersions.V1,

View File

@@ -28,18 +28,17 @@ import sys
import traceback
import warnings
from textwrap import indent
from threading import Thread
from typing import (
TYPE_CHECKING,
Any,
Awaitable,
Callable,
Dict,
List,
NoReturn,
Optional,
Tuple,
cast,
)
from wsgiref.simple_server import WSGIServer
from cryptography.utils import CryptographyDeprecationWarning
from typing_extensions import ParamSpec
@@ -62,7 +61,6 @@ from twisted.web.resource import Resource
import synapse.util.caches
from synapse.api.constants import MAX_PDU_SIZE
from synapse.app import check_bind_error
from synapse.app.phone_stats_home import start_phone_stats_home
from synapse.config import ConfigError
from synapse.config._base import format_config_error
from synapse.config.homeserver import HomeServerConfig
@@ -97,22 +95,47 @@ reactor = cast(ISynapseReactor, _reactor)
logger = logging.getLogger(__name__)
# list of tuples of function, args list, kwargs dict
_sighup_callbacks: List[
Tuple[Callable[..., None], Tuple[object, ...], Dict[str, object]]
] = []
_instance_id_to_sighup_callbacks_map: dict[
str, list[tuple[Callable[..., None], tuple[object, ...], dict[str, object]]]
] = {}
"""
Map from homeserver instance_id to a list of callbacks.
We use `instance_id` instead of `server_name` because it's possible to have multiple
workers running in the same process with the same `server_name`.
"""
P = ParamSpec("P")
def register_sighup(func: Callable[P, None], *args: P.args, **kwargs: P.kwargs) -> None:
def register_sighup(
homeserver_instance_id: str,
func: Callable[P, None],
*args: P.args,
**kwargs: P.kwargs,
) -> None:
"""
Register a function to be called when a SIGHUP occurs.
Args:
homeserver_instance_id: The unique ID for this Synapse process instance
(`hs.get_instance_id()`) that this hook is associated with.
func: Function to be called when sent a SIGHUP signal.
*args, **kwargs: args and kwargs to be passed to the target function.
"""
_sighup_callbacks.append((func, args, kwargs))
_instance_id_to_sighup_callbacks_map.setdefault(homeserver_instance_id, []).append(
(func, args, kwargs)
)
def unregister_sighups(instance_id: str) -> None:
"""
Unregister all sighup functions associated with this Synapse instance.
Args:
instance_id: Unique ID for this Synapse process instance.
"""
_instance_id_to_sighup_callbacks_map.pop(instance_id, [])
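
A hedged usage sketch of the per-instance registry above, given a running `HomeServer` instance `hs` (an assumption for this sketch); `reload_certs` is a hypothetical callback:

def reload_certs(hs: "HomeServer") -> None:  # hypothetical callback
    ...

register_sighup(hs.get_instance_id(), reload_certs, hs)
# When this instance is torn down, drop everything it registered:
unregister_sighups(hs.get_instance_id())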
def start_worker_reactor(
@@ -150,7 +173,7 @@ def start_worker_reactor(
def start_reactor(
appname: str,
soft_file_limit: int,
gc_thresholds: Optional[Tuple[int, int, int]],
gc_thresholds: Optional[tuple[int, int, int]],
pid_file: Optional[str],
daemonize: bool,
print_pidfile: bool,
@@ -281,7 +304,9 @@ def register_start(
clock.call_when_running(lambda: defer.ensureDeferred(wrapper()))
def listen_metrics(bind_addresses: StrCollection, port: int) -> None:
def listen_metrics(
bind_addresses: StrCollection, port: int
) -> list[tuple[WSGIServer, Thread]]:
"""
Start Prometheus metrics server.
@@ -294,14 +319,22 @@ def listen_metrics(bind_addresses: StrCollection, port: int) -> None:
bytecode at a time), this still works because the metrics thread can preempt the
Twisted reactor thread between bytecode boundaries and the metrics thread gets
scheduled with roughly equal priority to the Twisted reactor thread.
Returns:
List of WSGIServer with the thread they are running on.
"""
from prometheus_client import start_http_server as start_http_server_prometheus
from synapse.metrics import RegistryProxy
servers: list[tuple[WSGIServer, Thread]] = []
for host in bind_addresses:
logger.info("Starting metrics listener on %s:%d", host, port)
start_http_server_prometheus(port, addr=host, registry=RegistryProxy)
server, thread = start_http_server_prometheus(
port, addr=host, registry=RegistryProxy
)
servers.append((server, thread))
return servers
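
This relies on prometheus_client >= 0.17, where `start_http_server` returns the WSGI server and its daemon thread, letting the caller shut the listener down later; a sketch:

from prometheus_client import start_http_server

server, thread = start_http_server(9100, addr="127.0.0.1")
# ... later, during shutdown:
server.shutdown()
thread.join()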
def listen_manhole(
@@ -309,7 +342,7 @@ def listen_manhole(
port: int,
manhole_settings: ManholeConfig,
manhole_globals: dict,
) -> None:
) -> list[Port]:
# twisted.conch.manhole 21.1.0 uses "int_from_bytes", which produces a confusing
# warning. It's fixed by https://github.com/twisted/twisted/pull/1522), so
# suppress the warning for now.
@@ -321,7 +354,7 @@ def listen_manhole(
from synapse.util.manhole import manhole
listen_tcp(
return listen_tcp(
bind_addresses,
port,
manhole(settings=manhole_settings, globals=manhole_globals),
@@ -334,7 +367,7 @@ def listen_tcp(
factory: ServerFactory,
reactor: IReactorTCP = reactor,
backlog: int = 50,
) -> List[Port]:
) -> list[Port]:
"""
Create a TCP socket for a port and several addresses
@@ -359,7 +392,7 @@ def listen_unix(
factory: ServerFactory,
reactor: IReactorUNIX = reactor,
backlog: int = 50,
) -> List[Port]:
) -> list[Port]:
"""
Create a UNIX socket for a given path and 'mode' permission
@@ -383,18 +416,27 @@ def listen_http(
max_request_body_size: int,
context_factory: Optional[IOpenSSLContextFactory],
reactor: ISynapseReactor = reactor,
) -> List[Port]:
) -> list[Port]:
"""
Args:
listener_config: TODO
root_resource: TODO
version_string: A string to present for the Server header
max_request_body_size: TODO
context_factory: TODO
reactor: TODO
"""
assert listener_config.http_options is not None
site_tag = listener_config.get_site_tag()
site = SynapseSite(
"synapse.access.%s.%s"
logger_name="synapse.access.%s.%s"
% ("https" if listener_config.is_tls() else "http", site_tag),
site_tag,
listener_config,
root_resource,
version_string,
site_tag=site_tag,
config=listener_config,
resource=root_resource,
server_version_string=version_string,
max_request_body_size=max_request_body_size,
reactor=reactor,
hs=hs,
@@ -444,7 +486,7 @@ def listen_ssl(
context_factory: IOpenSSLContextFactory,
reactor: IReactorSSL = reactor,
backlog: int = 50,
) -> List[Port]:
) -> list[Port]:
"""
Create an TLS-over-TCP socket for a port and several addresses
@@ -498,7 +540,7 @@ def refresh_certificate(hs: "HomeServer") -> None:
logger.info("Context factories updated.")
async def start(hs: "HomeServer") -> None:
async def start(hs: "HomeServer", freeze: bool = True) -> None:
"""
Start a Synapse server or worker.
@@ -509,6 +551,11 @@ async def start(hs: "HomeServer") -> None:
Args:
hs: homeserver instance
freeze: whether to freeze the homeserver base objects in the garbage collector.
May improve garbage collection performance by marking objects with an effectively
static lifetime as frozen so they don't need to be considered for cleanup.
If you ever want to `shutdown` the homeserver, this needs to be False,
otherwise the homeserver cannot be garbage collected after `shutdown`.
"""
server_name = hs.hostname
reactor = hs.get_reactor()
@@ -541,12 +588,17 @@ async def start(hs: "HomeServer") -> None:
# we're not using systemd.
sdnotify(b"RELOADING=1")
for i, args, kwargs in _sighup_callbacks:
i(*args, **kwargs)
for sighup_callbacks in _instance_id_to_sighup_callbacks_map.values():
for func, args, kwargs in sighup_callbacks:
func(*args, **kwargs)
sdnotify(b"READY=1")
return run_as_background_process(
# It's okay to ignore the linter error here and call
# `run_as_background_process` directly because `_handle_sighup` operates
# outside of the scope of a specific `HomeServer` instance and holds no
# references to it which would prevent a clean shutdown.
return run_as_background_process( # type: ignore[untracked-background-process]
"sighup",
server_name,
_handle_sighup,
@@ -564,8 +616,8 @@ async def start(hs: "HomeServer") -> None:
signal.signal(signal.SIGHUP, run_sighup)
register_sighup(refresh_certificate, hs)
register_sighup(reload_cache_config, hs.config)
register_sighup(hs.get_instance_id(), refresh_certificate, hs)
register_sighup(hs.get_instance_id(), reload_cache_config, hs.config)
# Apply the cache config.
hs.config.caches.resize_all_caches()
@@ -603,7 +655,11 @@ async def start(hs: "HomeServer") -> None:
logger.info("Shutting down...")
# Log when we start the shut down process.
hs.get_clock().add_system_event_trigger("before", "shutdown", log_shutdown)
hs.register_sync_shutdown_handler(
phase="before",
eventType="shutdown",
shutdown_func=log_shutdown,
)
setup_sentry(hs)
setup_sdnotify(hs)
@@ -623,27 +679,24 @@ async def start(hs: "HomeServer") -> None:
if hs.config.worker.run_background_tasks:
hs.start_background_tasks()
# TODO: This should be moved to the same pattern we use for other background tasks:
# Add to `REQUIRED_ON_BACKGROUND_TASK_STARTUP` and rely on
# `start_background_tasks` to start it.
await hs.get_common_usage_metrics_manager().setup()
if freeze:
# We now freeze all allocated objects in the hopes that (almost)
# everything currently allocated are things that will be used for the
# rest of time. Doing so means less work each GC (hopefully).
#
# Note that freezing the homeserver object means that it won't be able to be
# garbage collected in the case of attempting an in-memory `shutdown`. This only
# needs to be considered if such a case is desirable. Exiting the entire Python
# process will function expectedly either way.
#
# PyPy does not (yet?) implement gc.freeze()
if hasattr(gc, "freeze"):
gc.collect()
gc.freeze()
# TODO: This feels like another pattern that should be refactored as one of the
# `REQUIRED_ON_BACKGROUND_TASK_STARTUP`
start_phone_stats_home(hs)
# We now freeze all allocated objects in the hopes that (almost)
# everything currently allocated are things that will be used for the
# rest of time. Doing so means less work each GC (hopefully).
#
# PyPy does not (yet?) implement gc.freeze()
if hasattr(gc, "freeze"):
gc.collect()
gc.freeze()
# Speed up shutdowns by freezing all allocated objects. This moves everything
# into the permanent generation and excludes them from the final GC.
atexit.register(gc.freeze)
# Speed up process exit by freezing all allocated objects. This moves everything
# into the permanent generation and excludes them from the final GC.
atexit.register(gc.freeze)
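
A condensed sketch of the freeze pattern above (CPython only; PyPy has no `gc.freeze`):

import atexit
import gc

if hasattr(gc, "freeze"):
    gc.collect()  # collect startup garbage first
    gc.freeze()   # move survivors into the permanent generation
# Also exclude the frozen objects from the final exit-time collection:
atexit.register(gc.freeze)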
def reload_cache_config(config: HomeServerConfig) -> None:

View File

@@ -24,7 +24,7 @@ import logging
import os
import sys
import tempfile
from typing import List, Mapping, Optional, Sequence, Tuple
from typing import Mapping, Optional, Sequence
from twisted.internet import defer, task
@@ -65,7 +65,6 @@ from synapse.storage.databases.main.stream import StreamWorkerStore
from synapse.storage.databases.main.tags import TagsWorkerStore
from synapse.storage.databases.main.user_erasure_store import UserErasureWorkerStore
from synapse.types import JsonMapping, StateMap
from synapse.util import SYNAPSE_VERSION
from synapse.util.logcontext import LoggingContext
logger = logging.getLogger("synapse.app.admin_cmd")
@@ -151,7 +150,7 @@ class FileExfiltrationWriter(ExfiltrationWriter):
if list(os.listdir(self.base_directory)):
raise Exception("Directory must be empty")
def write_events(self, room_id: str, events: List[EventBase]) -> None:
def write_events(self, room_id: str, events: list[EventBase]) -> None:
room_directory = os.path.join(self.base_directory, "rooms", room_id)
os.makedirs(room_directory, exist_ok=True)
events_file = os.path.join(room_directory, "events")
@@ -256,7 +255,7 @@ class FileExfiltrationWriter(ExfiltrationWriter):
return self.base_directory
def load_config(argv_options: List[str]) -> Tuple[HomeServerConfig, argparse.Namespace]:
def load_config(argv_options: list[str]) -> tuple[HomeServerConfig, argparse.Namespace]:
parser = argparse.ArgumentParser(description="Synapse Admin Command")
HomeServerConfig.add_arguments_to_parser(parser)
@@ -316,7 +315,6 @@ def start(config: HomeServerConfig, args: argparse.Namespace) -> None:
ss = AdminCmdServer(
config.server.server_name,
config=config,
version_string=f"Synapse/{SYNAPSE_VERSION}",
)
setup_logging(ss, config, use_worker_options=True)

View File

@@ -26,13 +26,13 @@ import os
import signal
import sys
from types import FrameType
from typing import Any, Callable, Dict, List, Optional
from typing import Any, Callable, Optional
from twisted.internet.main import installReactor
# a list of the original signal handlers, before we installed our custom ones.
# We restore these in our child processes.
_original_signal_handlers: Dict[int, Any] = {}
_original_signal_handlers: dict[int, Any] = {}
class ProxiedReactor:
@@ -72,7 +72,7 @@ class ProxiedReactor:
def _worker_entrypoint(
func: Callable[[], None], proxy_reactor: ProxiedReactor, args: List[str]
func: Callable[[], None], proxy_reactor: ProxiedReactor, args: list[str]
) -> None:
"""
Entrypoint for a forked worker process.
@@ -128,7 +128,7 @@ def main() -> None:
# Split up the subsequent arguments into each workers' arguments;
# `--` is our delimiter of choice.
args_by_worker: List[List[str]] = [
args_by_worker: list[list[str]] = [
list(args)
for cond, args in itertools.groupby(ns.args, lambda ele: ele != "--")
if cond and args
@@ -167,7 +167,7 @@ def main() -> None:
update_proc.join()
print("===== PREPARED DATABASE =====", file=sys.stderr)
processes: List[multiprocessing.Process] = []
processes: list[multiprocessing.Process] = []
# Install signal handlers to propagate signals to all our children, so that they
# shut down cleanly. This also inhibits our own exit, but that's good: we want to

View File

@@ -21,7 +21,6 @@
#
import logging
import sys
from typing import Dict, List
from twisted.web.resource import Resource
@@ -112,7 +111,6 @@ from synapse.storage.databases.main.transactions import TransactionWorkerStore
from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore
from synapse.storage.databases.main.user_directory import UserDirectoryStore
from synapse.storage.databases.main.user_erasure_store import UserErasureWorkerStore
from synapse.util import SYNAPSE_VERSION
from synapse.util.httpresourcetree import create_resource_tree
logger = logging.getLogger("synapse.app.generic_worker")
@@ -182,7 +180,7 @@ class GenericWorkerServer(HomeServer):
# We always include an admin resource that we populate with servlets as needed
admin_resource = JsonResource(self, canonical_json=False)
resources: Dict[str, Resource] = {
resources: dict[str, Resource] = {
# We always include a health resource.
"/health": HealthResource(),
"/_synapse/admin": admin_resource,
@@ -278,11 +276,13 @@ class GenericWorkerServer(HomeServer):
self._listen_http(listener)
elif listener.type == "manhole":
if isinstance(listener, TCPListenerConfig):
_base.listen_manhole(
listener.bind_addresses,
listener.port,
manhole_settings=self.config.server.manhole_settings,
manhole_globals={"hs": self},
self._listening_services.extend(
_base.listen_manhole(
listener.bind_addresses,
listener.port,
manhole_settings=self.config.server.manhole_settings,
manhole_globals={"hs": self},
)
)
else:
raise ConfigError(
@@ -296,9 +296,11 @@ class GenericWorkerServer(HomeServer):
)
else:
if isinstance(listener, TCPListenerConfig):
_base.listen_metrics(
listener.bind_addresses,
listener.port,
self._metrics_listeners.extend(
_base.listen_metrics(
listener.bind_addresses,
listener.port,
)
)
else:
raise ConfigError(
@@ -311,7 +313,7 @@ class GenericWorkerServer(HomeServer):
self.get_replication_command_handler().start_replication(self)
def load_config(argv_options: List[str]) -> HomeServerConfig:
def load_config(argv_options: list[str]) -> HomeServerConfig:
"""
Parse the commandline and config files (does not generate config)
@@ -355,7 +357,6 @@ def start(config: HomeServerConfig) -> None:
hs = GenericWorkerServer(
config.server.server_name,
config=config,
version_string=f"Synapse/{SYNAPSE_VERSION}",
)
setup_logging(hs, config, use_worker_options=True)

View File

@@ -22,7 +22,7 @@
import logging
import os
import sys
from typing import Dict, Iterable, List
from typing import Iterable, Optional
from twisted.internet.tcp import Port
from twisted.web.resource import EncodingResourceWrapper, Resource
@@ -70,7 +70,8 @@ from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.rest.well_known import well_known_resource
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.util.check_dependencies import VERSION, check_requirements
from synapse.types import ISynapseReactor
from synapse.util.check_dependencies import check_requirements
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.module_loader import load_module
@@ -82,6 +83,10 @@ def gz_wrap(r: Resource) -> Resource:
class SynapseHomeServer(HomeServer):
"""
Homeserver class for the main Synapse process.
"""
DATASTORE_CLASS = DataStore
def _listener_http(
@@ -94,7 +99,7 @@ class SynapseHomeServer(HomeServer):
site_tag = listener_config.get_site_tag()
# We always include a health resource.
resources: Dict[str, Resource] = {"/health": HealthResource()}
resources: dict[str, Resource] = {"/health": HealthResource()}
for res in listener_config.http_options.resources:
for name in res.names:
@@ -165,7 +170,7 @@ class SynapseHomeServer(HomeServer):
def _configure_named_resource(
self, name: str, compress: bool = False
) -> Dict[str, Resource]:
) -> dict[str, Resource]:
"""Build a resource map for a named resource
Args:
@@ -175,7 +180,7 @@ class SynapseHomeServer(HomeServer):
Returns:
map from path to HTTP resource
"""
resources: Dict[str, Resource] = {}
resources: dict[str, Resource] = {}
if name == "client":
client_resource: Resource = ClientRestResource(self)
if compress:
@@ -277,11 +282,13 @@ class SynapseHomeServer(HomeServer):
)
elif listener.type == "manhole":
if isinstance(listener, TCPListenerConfig):
_base.listen_manhole(
listener.bind_addresses,
listener.port,
manhole_settings=self.config.server.manhole_settings,
manhole_globals={"hs": self},
self._listening_services.extend(
_base.listen_manhole(
listener.bind_addresses,
listener.port,
manhole_settings=self.config.server.manhole_settings,
manhole_globals={"hs": self},
)
)
else:
raise ConfigError(
@@ -294,9 +301,11 @@ class SynapseHomeServer(HomeServer):
)
else:
if isinstance(listener, TCPListenerConfig):
_base.listen_metrics(
listener.bind_addresses,
listener.port,
self._metrics_listeners.extend(
_base.listen_metrics(
listener.bind_addresses,
listener.port,
)
)
else:
raise ConfigError(
@@ -309,7 +318,7 @@ class SynapseHomeServer(HomeServer):
logger.warning("Unrecognized listener type: %s", listener.type)
def load_or_generate_config(argv_options: List[str]) -> HomeServerConfig:
def load_or_generate_config(argv_options: list[str]) -> HomeServerConfig:
"""
Parse the commandline and config files
@@ -340,12 +349,17 @@ def load_or_generate_config(argv_options: List[str]) -> HomeServerConfig:
return config
def setup(config: HomeServerConfig) -> SynapseHomeServer:
def create_homeserver(
config: HomeServerConfig,
reactor: Optional[ISynapseReactor] = None,
) -> SynapseHomeServer:
"""
Create and setup a Synapse homeserver instance given a configuration.
Create a homeserver instance for the Synapse main process.
Args:
config: The configuration for the homeserver.
reactor: Optionally provide a reactor to use. Can be useful in scenarios
where you want control over the reactor, such as tests.
Returns:
A homeserver instance.
@@ -353,10 +367,9 @@ def setup(config: HomeServerConfig) -> SynapseHomeServer:
if config.worker.worker_app:
raise ConfigError(
"You have specified `worker_app` in the config but are attempting to start a non-worker "
"You have specified `worker_app` in the config but are attempting to setup a non-worker "
"instance. Please use `python -m synapse.app.generic_worker` instead (or remove the option if this is the main process)."
)
sys.exit(1)
events.USE_FROZEN_DICTS = config.server.use_frozen_dicts
synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage
@@ -364,64 +377,82 @@ def setup(config: HomeServerConfig) -> SynapseHomeServer:
if config.server.gc_seconds:
synapse.metrics.MIN_TIME_BETWEEN_GCS = config.server.gc_seconds
if (
config.registration.enable_registration
and not config.registration.enable_registration_without_verification
):
if (
not config.captcha.enable_registration_captcha
and not config.registration.registrations_require_3pid
and not config.registration.registration_requires_token
):
raise ConfigError(
"You have enabled open registration without any verification. This is a known vector for "
"spam and abuse. If you would like to allow public registration, please consider adding email, "
"captcha, or token-based verification. Otherwise this check can be removed by setting the "
"`enable_registration_without_verification` config option to `true`."
)
hs = SynapseHomeServer(
config.server.server_name,
hostname=config.server.server_name,
config=config,
version_string=f"Synapse/{VERSION}",
reactor=reactor,
)
setup_logging(hs, config, use_worker_options=False)
return hs
def setup(
hs: SynapseHomeServer,
*,
freeze: bool = True,
) -> None:
"""
Set up a Synapse homeserver instance.
Args:
hs: The homeserver to setup.
freeze: whether to freeze the homeserver base objects in the garbage collector.
May improve garbage collection performance by marking objects with an effectively
static lifetime as frozen so they don't need to be considered for cleanup.
If you ever want to `shutdown` the homeserver, this needs to be `False`,
otherwise the homeserver cannot be garbage collected after `shutdown`.
"""
setup_logging(hs, hs.config, use_worker_options=False)
# Log after we've configured logging.
logger.info("Setting up server")
# Start the tracer
init_tracer(hs) # noqa
logger.info("Setting up server")
try:
hs.setup()
except Exception as e:
handle_startup_exception(e)
async def start() -> None:
async def _start_when_reactor_running() -> None:
# TODO: Feels like this should be moved somewhere else.
#
# Load the OIDC provider metadatas, if OIDC is enabled.
if hs.config.oidc.oidc_enabled:
oidc = hs.get_oidc_handler()
# Loading the provider metadata also ensures the provider config is valid.
await oidc.load_metadata()
await _base.start(hs)
await _base.start(hs, freeze)
# TODO: Feels like this should be moved somewhere else.
hs.get_datastores().main.db_pool.updates.start_doing_background_updates()
register_start(hs, start)
return hs
# Register a callback to be invoked once the reactor is running
register_start(hs, _start_when_reactor_running)
def run(hs: HomeServer) -> None:
def start_reactor(
config: HomeServerConfig,
) -> None:
"""
Start the reactor (Twisted event-loop).
Args:
config: The configuration for the homeserver.
"""
_base.start_reactor(
"synapse-homeserver",
soft_file_limit=hs.config.server.soft_file_limit,
gc_thresholds=hs.config.server.gc_thresholds,
pid_file=hs.config.server.pid_file,
daemonize=hs.config.server.daemonize,
print_pidfile=hs.config.server.print_pidfile,
soft_file_limit=config.server.soft_file_limit,
gc_thresholds=config.server.gc_thresholds,
pid_file=config.server.pid_file,
daemonize=config.server.daemonize,
print_pidfile=config.server.print_pidfile,
logger=logger,
)
@@ -432,13 +463,14 @@ def main() -> None:
with LoggingContext(name="main", server_name=homeserver_config.server.server_name):
# check base requirements
check_requirements()
hs = setup(homeserver_config)
hs = create_homeserver(homeserver_config)
setup(hs)
# redirect stdio to the logs, if configured.
if not hs.config.logging.no_redirect_stdio:
redirect_stdio_to_logs()
run(hs)
start_reactor(homeserver_config)
if __name__ == "__main__":
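Taken together, the hunks above split the old `setup()` into three phases: build, set up, run. A hedged sketch of the resulting startup flow (the `LoggingContext`, requirement checks, and stdio redirection from `main()` are elided here):

```python
import sys

from synapse.app.homeserver import (
    create_homeserver,
    load_or_generate_config,
    setup,
    start_reactor,
)

config = load_or_generate_config(sys.argv[1:])
hs = create_homeserver(config)  # build the SynapseHomeServer, optionally with a custom reactor
setup(hs)                       # configure logging/tracing, register startup callbacks
start_reactor(config)           # hand control to the Twisted event loop
```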

View File

@@ -22,26 +22,25 @@ import logging
import math
import resource
import sys
from typing import TYPE_CHECKING, List, Mapping, Sized, Tuple
from typing import TYPE_CHECKING, Mapping, Sized
from prometheus_client import Gauge
from twisted.internet import defer
from synapse.metrics import SERVER_NAME_LABEL
from synapse.metrics.background_process_metrics import (
run_as_background_process,
)
from synapse.types import JsonDict
from synapse.util.constants import ONE_HOUR_SECONDS, ONE_MINUTE_SECONDS
from synapse.util.constants import (
MILLISECONDS_PER_SECOND,
ONE_HOUR_SECONDS,
ONE_MINUTE_SECONDS,
)
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger("synapse.app.homeserver")
MILLISECONDS_PER_SECOND = 1000
INITIAL_DELAY_BEFORE_FIRST_PHONE_HOME_SECONDS = 5 * ONE_MINUTE_SECONDS
"""
We wait 5 minutes to send the first set of stats as the server can be quite busy the
@@ -55,7 +54,7 @@ Phone home stats are sent every 3 hours
# Contains the list of processes we will be monitoring
# currently either 0 or 1
_stats_process: List[Tuple[int, "resource.struct_rusage"]] = []
_stats_process: list[tuple[int, "resource.struct_rusage"]] = []
# Gauges to expose monthly active user control metrics
current_mau_gauge = Gauge(
@@ -83,14 +82,12 @@ registered_reserved_users_mau_gauge = Gauge(
def phone_stats_home(
hs: "HomeServer",
stats: JsonDict,
stats_process: List[Tuple[int, "resource.struct_rusage"]] = _stats_process,
stats_process: list[tuple[int, "resource.struct_rusage"]] = _stats_process,
) -> "defer.Deferred[None]":
server_name = hs.hostname
async def _phone_stats_home(
hs: "HomeServer",
stats: JsonDict,
stats_process: List[Tuple[int, "resource.struct_rusage"]] = _stats_process,
stats_process: list[tuple[int, "resource.struct_rusage"]] = _stats_process,
) -> None:
"""Collect usage statistics and send them to the configured endpoint.
@@ -200,8 +197,8 @@ def phone_stats_home(
except Exception as e:
logger.warning("Error reporting stats: %s", e)
return run_as_background_process(
"phone_stats_home", server_name, _phone_stats_home, hs, stats, stats_process
return hs.run_as_background_process(
"phone_stats_home", _phone_stats_home, hs, stats, stats_process
)
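The same mechanical change appears in several files in this diff: the free function `run_as_background_process`, which needed `server_name` passed explicitly, becomes a method on the homeserver, which supplies it. A self-contained toy of that refactor (an illustration of the shape, not Synapse's real implementation):

```python
from typing import Any, Callable


def run_as_background_process(
    desc: str, server_name: str, func: Callable[..., Any], *args: Any
) -> Any:
    # Stand-in for the real helper; it just tags the work with the server name.
    print(f"[{server_name}] background process {desc!r}")
    return func(*args)


class FakeHomeServer:
    def __init__(self, hostname: str) -> None:
        self.hostname = hostname

    def run_as_background_process(
        self, desc: str, func: Callable[..., Any], *args: Any
    ) -> Any:
        # Binds self.hostname, so call sites drop the server_name argument.
        return run_as_background_process(desc, self.hostname, func, *args)


hs = FakeHomeServer("example.com")
hs.run_as_background_process("phone_stats_home", lambda: None)
```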
@@ -263,9 +260,8 @@ def start_phone_stats_home(hs: "HomeServer") -> None:
float(hs.config.server.max_mau_value)
)
return run_as_background_process(
return hs.run_as_background_process(
"generate_monthly_active_users",
server_name,
_generate_monthly_active_users,
)
@@ -285,10 +281,16 @@ def start_phone_stats_home(hs: "HomeServer") -> None:
# We need to defer this init for the cases that we daemonize
# otherwise the process ID we get is that of the non-daemon process
clock.call_later(0, performance_stats_init)
clock.call_later(
0,
performance_stats_init,
)
# We wait 5 minutes to send the first set of stats as the server can
# be quite busy the first few minutes
clock.call_later(
INITIAL_DELAY_BEFORE_FIRST_PHONE_HOME_SECONDS, phone_stats_home, hs, stats
INITIAL_DELAY_BEFORE_FIRST_PHONE_HOME_SECONDS,
phone_stats_home,
hs,
stats,
)

View File

@@ -23,15 +23,31 @@
import logging
import re
from enum import Enum
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Pattern, Sequence
from typing import (
TYPE_CHECKING,
Iterable,
Optional,
Pattern,
Sequence,
cast,
)
import attr
from netaddr import IPSet
from twisted.internet import reactor
from synapse.api.constants import EventTypes
from synapse.events import EventBase
from synapse.types import DeviceListUpdates, JsonDict, JsonMapping, UserID
from synapse.types import (
DeviceListUpdates,
ISynapseThreadlessReactor,
JsonDict,
JsonMapping,
UserID,
)
from synapse.util.caches.descriptors import _CacheContext, cached
from synapse.util.clock import Clock
if TYPE_CHECKING:
from synapse.appservice.api import ApplicationServiceApi
@@ -41,11 +57,11 @@ logger = logging.getLogger(__name__)
# Type for the `device_one_time_keys_count` field in an appservice transaction
# user ID -> {device ID -> {algorithm -> count}}
TransactionOneTimeKeysCount = Dict[str, Dict[str, Dict[str, int]]]
TransactionOneTimeKeysCount = dict[str, dict[str, dict[str, int]]]
# Type for the `device_unused_fallback_key_types` field in an appservice transaction
# user ID -> {device ID -> [algorithm]}
TransactionUnusedFallbackKeys = Dict[str, Dict[str, List[str]]]
TransactionUnusedFallbackKeys = dict[str, dict[str, list[str]]]
class ApplicationServiceState(Enum):
@@ -98,6 +114,15 @@ class ApplicationService:
self.sender = sender
# The application service user should be part of the server's domain.
self.server_name = sender.domain # nb must be called this for @cached
# Ideally we would require passing in the `HomeServer` `Clock` instance.
# However this is not currently possible as there are places which use
# `@cached` that aren't aware of the `HomeServer` instance.
# nb must be called this for @cached
self.clock = Clock(
cast(ISynapseThreadlessReactor, reactor), server_name=self.server_name
) # type: ignore[multiple-internal-clocks]
self.namespaces = self._check_namespaces(namespaces)
self.id = id
self.ip_range_whitelist = ip_range_whitelist
@@ -118,7 +143,7 @@ class ApplicationService:
def _check_namespaces(
self, namespaces: Optional[JsonDict]
) -> Dict[str, List[Namespace]]:
) -> dict[str, list[Namespace]]:
# Sanity check that it is of the form:
# {
# users: [ {regex: "[A-z]+.*", exclusive: true}, ...],
@@ -128,7 +153,7 @@ class ApplicationService:
if namespaces is None:
namespaces = {}
result: Dict[str, List[Namespace]] = {}
result: dict[str, list[Namespace]] = {}
for ns in ApplicationService.NS_LIST:
result[ns] = []
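For reference, a namespaces mapping of the shape the sanity check above expects (the regexes and exclusivity flags are illustrative, not from this diff):

```python
namespaces = {
    "users": [{"regex": "@_irc_.*:example\\.com", "exclusive": True}],
    "aliases": [{"regex": "#_irc_.*:example\\.com", "exclusive": True}],
    "rooms": [],
}
```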
@@ -361,7 +386,7 @@ class ApplicationService:
def is_exclusive_room(self, room_id: str) -> bool:
return self._is_exclusive(ApplicationService.NS_ROOMS, room_id)
def get_exclusive_user_regexes(self) -> List[Pattern[str]]:
def get_exclusive_user_regexes(self) -> list[Pattern[str]]:
"""Get the list of regexes used to determine if a user is exclusively
registered by the AS
"""
@@ -390,8 +415,8 @@ class AppServiceTransaction:
service: ApplicationService,
id: int,
events: Sequence[EventBase],
ephemeral: List[JsonMapping],
to_device_messages: List[JsonMapping],
ephemeral: list[JsonMapping],
to_device_messages: list[JsonMapping],
one_time_keys_count: TransactionOneTimeKeysCount,
unused_fallback_keys: TransactionUnusedFallbackKeys,
device_list_summary: DeviceListUpdates,

View File

@@ -23,13 +23,10 @@ import logging
import urllib.parse
from typing import (
TYPE_CHECKING,
Dict,
Iterable,
List,
Mapping,
Optional,
Sequence,
Tuple,
TypeVar,
Union,
)
@@ -133,14 +130,14 @@ class ApplicationServiceApi(SimpleHttpClient):
self.clock = hs.get_clock()
self.config = hs.config.appservice
self.protocol_meta_cache: ResponseCache[Tuple[str, str]] = ResponseCache(
self.protocol_meta_cache: ResponseCache[tuple[str, str]] = ResponseCache(
clock=hs.get_clock(),
name="as_protocol_meta",
server_name=self.server_name,
timeout_ms=HOUR_IN_MS,
)
def _get_headers(self, service: "ApplicationService") -> Dict[bytes, List[bytes]]:
def _get_headers(self, service: "ApplicationService") -> dict[bytes, list[bytes]]:
"""This makes sure we have always the auth header and opentracing headers set."""
# This is also ensured before in the functions. However this is needed to please
@@ -210,8 +207,8 @@ class ApplicationServiceApi(SimpleHttpClient):
service: "ApplicationService",
kind: str,
protocol: str,
fields: Dict[bytes, List[bytes]],
) -> List[JsonDict]:
fields: dict[bytes, list[bytes]],
) -> list[JsonDict]:
if kind == ThirdPartyEntityKind.USER:
required_field = "userid"
elif kind == ThirdPartyEntityKind.LOCATION:
@@ -225,7 +222,7 @@ class ApplicationServiceApi(SimpleHttpClient):
assert service.hs_token is not None
try:
args: Mapping[bytes, Union[List[bytes], str]] = fields
args: Mapping[bytes, Union[list[bytes], str]] = fields
if self.config.use_appservice_legacy_authorization:
args = {
**fields,
@@ -320,8 +317,8 @@ class ApplicationServiceApi(SimpleHttpClient):
self,
service: "ApplicationService",
events: Sequence[EventBase],
ephemeral: List[JsonMapping],
to_device_messages: List[JsonMapping],
ephemeral: list[JsonMapping],
to_device_messages: list[JsonMapping],
one_time_keys_count: TransactionOneTimeKeysCount,
unused_fallback_keys: TransactionUnusedFallbackKeys,
device_list_summary: DeviceListUpdates,
@@ -429,9 +426,9 @@ class ApplicationServiceApi(SimpleHttpClient):
return False
async def claim_client_keys(
self, service: "ApplicationService", query: List[Tuple[str, str, str, int]]
) -> Tuple[
Dict[str, Dict[str, Dict[str, JsonDict]]], List[Tuple[str, str, str, int]]
self, service: "ApplicationService", query: list[tuple[str, str, str, int]]
) -> tuple[
dict[str, dict[str, dict[str, JsonDict]]], list[tuple[str, str, str, int]]
]:
"""Claim one time keys from an application service.
@@ -457,7 +454,7 @@ class ApplicationServiceApi(SimpleHttpClient):
assert service.hs_token is not None
# Create the expected payload shape.
body: Dict[str, Dict[str, List[str]]] = {}
body: dict[str, dict[str, list[str]]] = {}
for user_id, device, algorithm, count in query:
body.setdefault(user_id, {}).setdefault(device, []).extend(
[algorithm] * count
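A tiny runnable rendering of that payload-shaping loop, with made-up one-time-key claim tuples of the `(user_id, device_id, algorithm, count)` form used above:

```python
query = [
    ("@alice:example.com", "DEVICE1", "signed_curve25519", 2),
    ("@alice:example.com", "DEVICE2", "signed_curve25519", 1),
]

body: dict[str, dict[str, list[str]]] = {}
for user_id, device, algorithm, count in query:
    body.setdefault(user_id, {}).setdefault(device, []).extend([algorithm] * count)

assert body == {
    "@alice:example.com": {
        "DEVICE1": ["signed_curve25519", "signed_curve25519"],
        "DEVICE2": ["signed_curve25519"],
    }
}
```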
@@ -502,8 +499,8 @@ class ApplicationServiceApi(SimpleHttpClient):
return response, missing
async def query_keys(
self, service: "ApplicationService", query: Dict[str, List[str]]
) -> Dict[str, Dict[str, Dict[str, JsonDict]]]:
self, service: "ApplicationService", query: dict[str, list[str]]
) -> dict[str, dict[str, dict[str, JsonDict]]]:
"""Query the application service for keys.
Note that any error (including a timeout) is treated as the application
@@ -545,7 +542,7 @@ class ApplicationServiceApi(SimpleHttpClient):
def _serialize(
self, service: "ApplicationService", events: Iterable[EventBase]
) -> List[JsonDict]:
) -> list[JsonDict]:
time_now = self.clock.time_msec()
return [
serialize_event(

View File

@@ -61,13 +61,9 @@ from typing import (
Awaitable,
Callable,
Collection,
Dict,
Iterable,
List,
Optional,
Sequence,
Set,
Tuple,
)
from twisted.internet.interfaces import IDelayedCall
@@ -81,7 +77,6 @@ from synapse.appservice import (
from synapse.appservice.api import ApplicationServiceApi
from synapse.events import EventBase
from synapse.logging.context import run_in_background
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.databases.main import DataStore
from synapse.types import DeviceListUpdates, JsonMapping
from synapse.util.clock import Clock
@@ -184,22 +179,23 @@ class _ServiceQueuer:
def __init__(self, txn_ctrl: "_TransactionController", hs: "HomeServer"):
# dict of {service_id: [events]}
self.queued_events: Dict[str, List[EventBase]] = {}
self.queued_events: dict[str, list[EventBase]] = {}
# dict of {service_id: [events]}
self.queued_ephemeral: Dict[str, List[JsonMapping]] = {}
self.queued_ephemeral: dict[str, list[JsonMapping]] = {}
# dict of {service_id: [to_device_message_json]}
self.queued_to_device_messages: Dict[str, List[JsonMapping]] = {}
self.queued_to_device_messages: dict[str, list[JsonMapping]] = {}
# dict of {service_id: [device_list_summary]}
self.queued_device_list_summaries: Dict[str, List[DeviceListUpdates]] = {}
self.queued_device_list_summaries: dict[str, list[DeviceListUpdates]] = {}
# the appservices which currently have a transaction in flight
self.requests_in_flight: Set[str] = set()
self.requests_in_flight: set[str] = set()
self.txn_ctrl = txn_ctrl
self._msc3202_transaction_extensions_enabled: bool = (
hs.config.experimental.msc3202_transaction_extensions
)
self.server_name = hs.hostname
self.clock = hs.get_clock()
self.hs = hs
self._store = hs.get_datastores().main
def start_background_request(self, service: ApplicationService) -> None:
@@ -207,9 +203,7 @@ class _ServiceQueuer:
if service.id in self.requests_in_flight:
return
run_as_background_process(
"as-sender", self.server_name, self._send_request, service
)
self.hs.run_as_background_process("as-sender", self._send_request, service)
async def _send_request(self, service: ApplicationService) -> None:
# sanity-check: we shouldn't get here if this service already has a sender
@@ -304,7 +298,7 @@ class _ServiceQueuer:
events: Iterable[EventBase],
ephemerals: Iterable[JsonMapping],
to_device_messages: Iterable[JsonMapping],
) -> Tuple[TransactionOneTimeKeysCount, TransactionUnusedFallbackKeys]:
) -> tuple[TransactionOneTimeKeysCount, TransactionUnusedFallbackKeys]:
"""
Given a list of the events, ephemeral messages and to-device messages,
- first computes a list of application services users that may have
@@ -315,14 +309,14 @@ class _ServiceQueuer:
"""
# Set of 'interesting' users who may have updates
users: Set[str] = set()
users: set[str] = set()
# The sender is always included
users.add(service.sender.to_string())
# All AS users that would receive the PDUs or EDUs sent to these rooms
# are classed as 'interesting'.
rooms_of_interesting_users: Set[str] = set()
rooms_of_interesting_users: set[str] = set()
# PDUs
rooms_of_interesting_users.update(event.room_id for event in events)
# EDUs
@@ -361,11 +355,12 @@ class _TransactionController:
def __init__(self, hs: "HomeServer"):
self.server_name = hs.hostname
self.clock = hs.get_clock()
self.hs = hs
self.store = hs.get_datastores().main
self.as_api = hs.get_application_service_api()
# map from service id to recoverer instance
self.recoverers: Dict[str, "_Recoverer"] = {}
self.recoverers: dict[str, "_Recoverer"] = {}
# for UTs
self.RECOVERER_CLASS = _Recoverer
@@ -374,8 +369,8 @@ class _TransactionController:
self,
service: ApplicationService,
events: Sequence[EventBase],
ephemeral: Optional[List[JsonMapping]] = None,
to_device_messages: Optional[List[JsonMapping]] = None,
ephemeral: Optional[list[JsonMapping]] = None,
to_device_messages: Optional[list[JsonMapping]] = None,
one_time_keys_count: Optional[TransactionOneTimeKeysCount] = None,
unused_fallback_keys: Optional[TransactionUnusedFallbackKeys] = None,
device_list_summary: Optional[DeviceListUpdates] = None,
@@ -448,6 +443,7 @@ class _TransactionController:
recoverer = self.RECOVERER_CLASS(
self.server_name,
self.clock,
self.hs,
self.store,
self.as_api,
service,
@@ -494,6 +490,7 @@ class _Recoverer:
self,
server_name: str,
clock: Clock,
hs: "HomeServer",
store: DataStore,
as_api: ApplicationServiceApi,
service: ApplicationService,
@@ -501,6 +498,7 @@ class _Recoverer:
):
self.server_name = server_name
self.clock = clock
self.hs = hs
self.store = store
self.as_api = as_api
self.service = service
@@ -513,9 +511,8 @@ class _Recoverer:
logger.info("Scheduling retries on %s in %fs", self.service.id, delay)
self.scheduled_recovery = self.clock.call_later(
delay,
run_as_background_process,
self.hs.run_as_background_process,
"as-recoverer",
self.server_name,
self.retry,
)
@@ -535,9 +532,8 @@ class _Recoverer:
if self.scheduled_recovery:
self.clock.cancel_call_later(self.scheduled_recovery)
# Run a retry, which will reschedule a recovery if it fails.
run_as_background_process(
self.hs.run_as_background_process(
"retry",
self.server_name,
self.retry,
)

View File

@@ -20,13 +20,12 @@
#
#
import sys
from typing import List
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
def main(args: List[str]) -> None:
def main(args: list[str]) -> None:
action = args[1] if len(args) > 1 and args[1] == "read" else None
# If we're reading a key in the config file, then `args[1]` will be `read` and `args[2]`
# will be the key to read.

View File

@@ -33,14 +33,10 @@ from textwrap import dedent
from typing import (
Any,
ClassVar,
Dict,
Iterable,
Iterator,
List,
MutableMapping,
Optional,
Tuple,
Type,
TypeVar,
Union,
)
@@ -321,9 +317,9 @@ class Config:
def read_templates(
self,
filenames: List[str],
filenames: list[str],
custom_template_directories: Optional[Iterable[str]] = None,
) -> List[jinja2.Template]:
) -> list[jinja2.Template]:
"""Load a list of template files from disk using the given variables.
This function will attempt to load the given templates from the default Synapse
@@ -402,7 +398,7 @@ class RootConfig:
class, lower-cased and with "Config" removed.
"""
config_classes: List[Type[Config]] = []
config_classes: list[type[Config]] = []
def __init__(self, config_files: StrSequence = ()):
# Capture absolute paths here, so we can reload config after we daemonize.
@@ -471,7 +467,7 @@ class RootConfig:
generate_secrets: bool = False,
report_stats: Optional[bool] = None,
open_private_ports: bool = False,
listeners: Optional[List[dict]] = None,
listeners: Optional[list[dict]] = None,
tls_certificate_path: Optional[str] = None,
tls_private_key_path: Optional[str] = None,
) -> str:
@@ -545,18 +541,22 @@ class RootConfig:
@classmethod
def load_config(
cls: Type[TRootConfig], description: str, argv: List[str]
cls: type[TRootConfig], description: str, argv_options: list[str]
) -> TRootConfig:
"""Parse the commandline and config files
Doesn't support config-file-generation: used by the worker apps.
Args:
description: TODO
argv_options: The options passed to Synapse. Usually `sys.argv[1:]`.
Returns:
Config object.
"""
config_parser = argparse.ArgumentParser(description=description)
cls.add_arguments_to_parser(config_parser)
obj, _ = cls.load_config_with_parser(config_parser, argv)
obj, _ = cls.load_config_with_parser(config_parser, argv_options)
return obj
@@ -601,14 +601,18 @@ class RootConfig:
@classmethod
def load_config_with_parser(
cls: Type[TRootConfig], parser: argparse.ArgumentParser, argv_options: List[str]
) -> Tuple[TRootConfig, argparse.Namespace]:
cls: type[TRootConfig], parser: argparse.ArgumentParser, argv_options: list[str]
) -> tuple[TRootConfig, argparse.Namespace]:
"""Parse the commandline and config files with the given parser
Doesn't support config-file-generation: used by the worker apps.
Used for workers where we want to add extra flags/subcommands.
Note: This is the common denominator for loading config and is also used by
`load_config` and `load_or_generate_config`, which is why we call
`validate_config()` here.
Args:
parser
argv_options: The options passed to Synapse. Usually `sys.argv[1:]`.
@@ -642,11 +646,15 @@ class RootConfig:
obj.invoke_all("read_arguments", config_args)
# Now that we finally have the full config sections parsed, allow subclasses to
# do some extra validation across the entire config.
obj.validate_config()
return obj, config_args
@classmethod
def load_or_generate_config(
cls: Type[TRootConfig], description: str, argv_options: List[str]
cls: type[TRootConfig], description: str, argv_options: list[str]
) -> Optional[TRootConfig]:
"""Parse the commandline and config files
@@ -842,19 +850,11 @@ class RootConfig:
):
return None
obj.parse_config_dict(
config_dict,
config_dir_path=config_dir_path,
data_dir_path=data_dir_path,
allow_secrets_in_config=config_args.secrets_in_config,
)
obj.invoke_all("read_arguments", config_args)
return obj
return cls.load_config(description, argv_options)
def parse_config_dict(
self,
config_dict: Dict[str, Any],
config_dict: dict[str, Any],
config_dir_path: str,
data_dir_path: str,
allow_secrets_in_config: bool = True,
@@ -879,7 +879,7 @@ class RootConfig:
)
def generate_missing_files(
self, config_dict: Dict[str, Any], config_dir_path: str
self, config_dict: dict[str, Any], config_dir_path: str
) -> None:
self.invoke_all("generate_files", config_dict, config_dir_path)
@@ -911,8 +911,22 @@ class RootConfig:
existing_config.root = None
return existing_config
def validate_config(self) -> None:
"""
Additional config validation across all config sections.
Override this in subclasses to add extra validation. This is called once all
config option values have been populated.
XXX: This should only validate, not modify, the configuration, as the final
config state is required for proper validation across all config sections.
Raises:
ConfigError: if the config is invalid.
"""
def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]:
def read_config_files(config_files: Iterable[str]) -> dict[str, Any]:
"""Read the config files and shallowly merge them into a dict.
Successive configurations are shallowly merged into ones provided earlier,
@@ -946,7 +960,7 @@ def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]:
return specified_config
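A hedged sketch of what the new `validate_config` hook enables. The subclass and the cross-section rule are hypothetical; `ConfigError`, `HomeServerConfig`, and the `registration`/`captcha` sections all appear elsewhere in this diff:

```python
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig


class StrictHomeServerConfig(HomeServerConfig):
    def validate_config(self) -> None:
        # Hypothetical rule spanning two config sections, for illustration only.
        if (
            self.registration.enable_registration
            and not self.captcha.enable_registration_captcha
        ):
            raise ConfigError("registration requires CAPTCHA on this deployment")
```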
def find_config_files(search_paths: List[str]) -> List[str]:
def find_config_files(search_paths: list[str]) -> list[str]:
"""Finds config files using a list of search paths. If a path is a file
then that file path is added to the list. If a search path is a directory
then all the "*.yaml" files in that directory are added to the list in
@@ -1000,7 +1014,7 @@ class ShardedWorkerHandlingConfig:
below).
"""
instances: List[str]
instances: list[str]
def should_handle(self, instance_name: str, key: str) -> bool:
"""Whether this instance is responsible for handling the given key."""

View File

@@ -2,15 +2,11 @@ import argparse
from typing import (
Any,
Collection,
Dict,
Iterable,
Iterator,
List,
Literal,
MutableMapping,
Optional,
Tuple,
Type,
TypeVar,
Union,
overload,
@@ -37,6 +33,7 @@ from synapse.config import ( # noqa: F401
key,
logger,
mas,
matrixrtc,
metrics,
modules,
oembed,
@@ -126,9 +123,10 @@ class RootConfig:
auto_accept_invites: auto_accept_invites.AutoAcceptInvitesConfig
user_types: user_types.UserTypesConfig
mas: mas.MasConfig
matrix_rtc: matrixrtc.MatrixRtcConfig
config_classes: List[Type["Config"]] = ...
config_files: List[str]
config_classes: list[type["Config"]] = ...
config_files: list[str]
def __init__(self, config_files: Collection[str] = ...) -> None: ...
def invoke_all(
self, func_name: str, *args: Any, **kwargs: Any
@@ -137,7 +135,7 @@ class RootConfig:
def invoke_all_static(cls, func_name: str, *args: Any, **kwargs: Any) -> None: ...
def parse_config_dict(
self,
config_dict: Dict[str, Any],
config_dict: dict[str, Any],
config_dir_path: str,
data_dir_path: str,
allow_secrets_in_config: bool = ...,
@@ -156,11 +154,11 @@ class RootConfig:
) -> str: ...
@classmethod
def load_or_generate_config(
cls: Type[TRootConfig], description: str, argv: List[str]
cls: type[TRootConfig], description: str, argv_options: list[str]
) -> Optional[TRootConfig]: ...
@classmethod
def load_config(
cls: Type[TRootConfig], description: str, argv: List[str]
cls: type[TRootConfig], description: str, argv_options: list[str]
) -> TRootConfig: ...
@classmethod
def add_arguments_to_parser(
@@ -168,8 +166,8 @@ class RootConfig:
) -> None: ...
@classmethod
def load_config_with_parser(
cls: Type[TRootConfig], parser: argparse.ArgumentParser, argv: List[str]
) -> Tuple[TRootConfig, argparse.Namespace]: ...
cls: type[TRootConfig], parser: argparse.ArgumentParser, argv_options: list[str]
) -> tuple[TRootConfig, argparse.Namespace]: ...
def generate_missing_files(
self, config_dict: dict, config_dir_path: str
) -> None: ...
@@ -201,16 +199,16 @@ class Config:
def read_template(self, filenames: str) -> jinja2.Template: ...
def read_templates(
self,
filenames: List[str],
filenames: list[str],
custom_template_directories: Optional[Iterable[str]] = None,
) -> List[jinja2.Template]: ...
) -> list[jinja2.Template]: ...
def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]: ...
def find_config_files(search_paths: List[str]) -> List[str]: ...
def read_config_files(config_files: Iterable[str]) -> dict[str, Any]: ...
def find_config_files(search_paths: list[str]) -> list[str]: ...
class ShardedWorkerHandlingConfig:
instances: List[str]
def __init__(self, instances: List[str]) -> None: ...
instances: list[str]
def __init__(self, instances: list[str]) -> None: ...
def should_handle(self, instance_name: str, key: str) -> bool: ... # noqa: F811
class RoutableShardedWorkerHandlingConfig(ShardedWorkerHandlingConfig):

View File

@@ -18,7 +18,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
from typing import Any, Dict, Type, TypeVar
from typing import Any, TypeVar
import jsonschema
@@ -79,8 +79,8 @@ Model = TypeVar("Model", bound=BaseModel)
def parse_and_validate_mapping(
config: Any,
model_type: Type[Model],
) -> Dict[str, Model]:
model_type: type[Model],
) -> dict[str, Model]:
"""Parse `config` as a mapping from strings to a given `Model` type.
Args:
config: The configuration data to check
@@ -93,7 +93,7 @@ def parse_and_validate_mapping(
try:
# type-ignore: mypy doesn't like constructing `Dict[str, model_type]` because
# `model_type` is a runtime variable. Pydantic is fine with this.
instances = parse_obj_as(Dict[str, model_type], config) # type: ignore[valid-type]
instances = parse_obj_as(dict[str, model_type], config) # type: ignore[valid-type]
except ValidationError as e:
raise ConfigError(str(e)) from e
return instances
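A sketch of calling `parse_and_validate_mapping`; the `Endpoint` model and the input mapping are invented for illustration, and the import path assumes this file is `synapse/config/_util.py`:

```python
from pydantic import BaseModel

from synapse.config._util import parse_and_validate_mapping


class Endpoint(BaseModel):
    url: str
    timeout_ms: int = 10_000


endpoints = parse_and_validate_mapping(
    {"primary": {"url": "https://example.com"}},
    Endpoint,
)
assert endpoints["primary"].timeout_ms == 10_000
```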

View File

@@ -20,7 +20,7 @@
#
import logging
from typing import Any, Iterable, Optional, Tuple
from typing import Any, Iterable, Optional
from synapse.api.constants import EventTypes
from synapse.config._base import Config, ConfigError
@@ -46,7 +46,7 @@ class ApiConfig(Config):
def _get_prejoin_state_entries(
self, config: JsonDict
) -> Iterable[Tuple[str, Optional[str]]]:
) -> Iterable[tuple[str, Optional[str]]]:
"""Get the event types and state keys to include in the prejoin state."""
room_prejoin_state_config = config.get("room_prejoin_state") or {}

View File

@@ -21,7 +21,7 @@
#
import logging
from typing import Any, Dict, List
from typing import Any
from urllib import parse as urlparse
import yaml
@@ -61,13 +61,13 @@ class AppServiceConfig(Config):
def load_appservices(
hostname: str, config_files: List[str]
) -> List[ApplicationService]:
hostname: str, config_files: list[str]
) -> list[ApplicationService]:
"""Returns a list of Application Services from the config files."""
# Dicts of value -> filename
seen_as_tokens: Dict[str, str] = {}
seen_ids: Dict[str, str] = {}
seen_as_tokens: dict[str, str] = {}
seen_ids: dict[str, str] = {}
appservices = []

View File

@@ -23,7 +23,7 @@ import logging
import os
import re
import threading
from typing import Any, Callable, Dict, Mapping, Optional
from typing import Any, Callable, Mapping, Optional
import attr
@@ -38,7 +38,7 @@ logger = logging.getLogger(__name__)
_CACHE_PREFIX = "SYNAPSE_CACHE_FACTOR"
# Map from canonicalised cache name to cache.
_CACHES: Dict[str, Callable[[float], None]] = {}
_CACHES: dict[str, Callable[[float], None]] = {}
# a lock on the contents of _CACHES
_CACHES_LOCK = threading.Lock()
@@ -104,7 +104,7 @@ class CacheConfig(Config):
_environ: Mapping[str, str] = os.environ
event_cache_size: int
cache_factors: Dict[str, float]
cache_factors: dict[str, float]
global_factor: float
track_memory_usage: bool
expiry_time_msec: Optional[int]
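`_CACHE_PREFIX` above is the environment-variable hook for cache sizing. As I read the prefix, a global factor and per-cache overrides can be set before startup, along the lines of `SYNAPSE_CACHE_FACTOR` and `SYNAPSE_CACHE_FACTOR_<CACHE_NAME>` (names and values below are illustrative):

```python
import os

# Double every cache, then scale one named cache further (hypothetical name).
os.environ["SYNAPSE_CACHE_FACTOR"] = "2.0"
os.environ["SYNAPSE_CACHE_FACTOR_GET_USERS_IN_ROOM"] = "5.0"
```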

Some files were not shown because too many files have changed in this diff.