This broke in #17916, because the signature inspection incorrectly looked at the
wrapper function. We fix this by setting the signature on the wrapper
function to that of the wrapped function via `@functools.wraps`.
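For illustration, a minimal sketch (not the code from the PR) of how `@functools.wraps` lets `inspect.signature` report the wrapped function's signature instead of the wrapper's:

```python
import functools
import inspect

def wrapped(a: int, b: str = "x") -> None:
    """The real function whose signature we care about."""

def plain_wrapper(*args, **kwargs):
    # Without @functools.wraps, inspection only sees (*args, **kwargs).
    return wrapped(*args, **kwargs)

@functools.wraps(wrapped)
def wrapper(*args, **kwargs):
    # @functools.wraps sets __wrapped__ (and copies metadata), and
    # inspect.signature follows it back to the original function.
    return wrapped(*args, **kwargs)

print(inspect.signature(plain_wrapper))  # (*args, **kwargs)
print(inspect.signature(wrapper))        # (a: int, b: str = 'x') -> None
```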
Another PR on my quest to add a `*_path` variant for every secret. This one adds two
config options, `admin_token_path` and `client_secret_path`, to the
experimental config under `experimental_features.msc3861`. Also includes
tests.
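A rough sketch of how the new options look in the homeserver config (other MSC3861 settings omitted; the file paths are only illustrative):

```yaml
experimental_features:
  msc3861:
    # ... existing MSC3861 settings (issuer, client_id, etc.) ...
    admin_token_path: /run/secrets/synapse/msc3861_admin_token
    client_secret_path: /run/secrets/synapse/msc3861_client_secret
```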
I tried to be a good citizen here by following `attrs` conventions and
not rewriting the corresponding non-path variants in the class, but
instead adding methods to retrieve the value.
Reading secrets from files has the security advantage of separating the
secrets from the config. It also simplifies secrets management in
Kubernetes. Also useful to NixOS users.
When purging history, we try to delete any state groups that become
unreferenced (i.e. there are no longer any events that directly
reference them). When we delete a state group that is referenced by
another state group, we "de-delta" the referencing state group so that it no
longer refers to the deleted state group.
There are two bugs with this approach that we fix here:
1. There is a common pattern where we end up storing two state groups
when persisting a state event: the state before and after the new state
event, where the latter is stored as a delta to the former. When
deleting state groups we only deleted the "new" state and left (and
potentially de-deltaed) the old state. This was due to a bug/typo when
trying to find referenced state groups.
2. There are times when we store unreferenced state groups in the DB;
during the purging of history these were not rechecked and were instead
always de-deltaed. Instead, we should check for this case and delete any
unreferenced state groups rather than de-deltaing them.
The effect of the above bugs is that when purging history we'd end up
with lots of unreferenced state groups that had been de-deltaed (i.e.
stored as the full state). This can lead to dramatic increases in
storage space used.
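For reference, a minimal self-contained sketch of what "de-delta" means here (not Synapse's actual storage code; the data layout is simplified):

```python
from typing import Dict, Optional, Tuple

StateKey = Tuple[str, str]  # (event_type, state_key)

class StateGroup:
    def __init__(self, prev_group: Optional[int], delta: Dict[StateKey, str]):
        self.prev_group = prev_group  # group this delta is based on, if any
        self.delta = delta            # {(event_type, state_key): event_id}

def resolve_full_state(groups: Dict[int, StateGroup], group_id: int) -> Dict[StateKey, str]:
    """Walk the prev_group chain and accumulate the full state map."""
    chain = []
    current: Optional[int] = group_id
    while current is not None:
        chain.append(groups[current])
        current = groups[current].prev_group
    state: Dict[StateKey, str] = {}
    for group in reversed(chain):  # oldest first, so newer deltas win
        state.update(group.delta)
    return state

def de_delta_referencing_groups(groups: Dict[int, StateGroup], target: int) -> None:
    """Before deleting `target`, rewrite any group that deltas against it
    to hold its full state, so it no longer references the deleted group."""
    for group_id, group in groups.items():
        if group.prev_group == target:
            group.delta = resolve_full_state(groups, group_id)
            group.prev_group = None
```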
Currently we don't really have anything that stops us from deleting a
state group while an in-flight event references it. This is a fairly
rare race currently, but we want to be able to more aggressively delete
state groups, so it is important to address this to ensure that the
database remains valid.
This implements the locking, but doesn't actually use it.
See the class docstring of the new data store for an explanation of how
this works.
---------
Co-authored-by: Devon Hudson <devon.dmytro@gmail.com>
This is so workers can call these functions.
This was preventing the [Delete Room Admin
API](https://element-hq.github.io/synapse/latest/admin_api/rooms.html#version-2-new-version)
from succeeding when `block: true` was specified. This was because we
had `run_background_tasks_on` configured to run on a separate worker.
As workers weren't able to call the `block_room` storage function before
this PR, the (delete room) task failed when taken off the queue by the
worker.
Previously, a value like `5q` would be interpreted as 5 milliseconds. We
should just raise an error instead of letting someone run with a
misconfiguration.
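A hedged sketch of the stricter behaviour (Synapse's real duration parser lives elsewhere; the unit table below is assumed from the documented duration suffixes):

```python
import re

UNITS_MS = {"ms": 1, "s": 1000, "m": 60_000, "h": 3_600_000,
            "d": 86_400_000, "w": 604_800_000, "y": 31_536_000_000}

def parse_duration_ms(value: str) -> int:
    match = re.fullmatch(r"(\d+)\s*([a-z]*)", value.strip())
    if not match:
        raise ValueError(f"Invalid duration: {value!r}")
    number, unit = match.groups()
    if unit and unit not in UNITS_MS:
        # Previously something like "5q" fell through and was read as 5 ms.
        raise ValueError(f"Unknown duration unit {unit!r} in {value!r}")
    return int(number) * UNITS_MS.get(unit, 1)

assert parse_duration_ms("5s") == 5_000
# parse_duration_ms("5q")  -> now raises ValueError instead of meaning 5 ms
```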
This PR changes the logic so that deactivated users are always ignored.
Suspended users were already effectively ignored as Synapse forbids a
join while suspended.
---------
Co-authored-by: Devon Hudson <devon.dmytro@gmail.com>
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.135 to
1.0.137.
Bumps [types-bleach](https://github.com/python/typeshed) from
6.1.0.20240331 to 6.2.0.20241123.
This also happens for rejecting an invite. Basically, any out-of-band membership transition where we first get the membership as an `outlier` and then rely on federation filling us in to de-outlier it.
This PR mainly addresses automated test flakiness, bots/scripts, and options within Synapse like [`auto_accept_invites`](https://element-hq.github.io/synapse/v1.122/usage/configuration/config_documentation.html#auto_accept_invites) that are able to react quickly (before federation is able to push us events), but also helps in generic scenarios where federation is lagging.
I initially thought this might be a Synapse consistency issue (see issues labeled with [`Z-Read-After-Write`](https://github.com/matrix-org/synapse/labels/Z-Read-After-Write)) but it seems to be an event auth logic problem. Workers probably do increase the number of possible race condition scenarios that make this visible though (replication and cache invalidation lag).
Fix https://github.com/element-hq/synapse/issues/15012 (previously filed as https://github.com/matrix-org/synapse/issues/15012)
Related to https://github.com/matrix-org/matrix-spec/issues/2062
Problems:
1. We don't consider [out-of-band membership](https://github.com/element-hq/synapse/blob/develop/docs/development/room-dag-concepts.md#out-of-band-membership-events) (outliers) in our `event_auth` logic even though we expose them in `/sync`.
1. (This PR doesn't address this point) Perhaps we should consider authing events in the persistence queue as events already in the queue could allow subsequent events to be allowed (events come through many channels: federation transaction, remote invite, remote join, local send). But this doesn't save us in the case where the event is more delayed over federation.
### What happened before?
I wrote some Complement test that stresses this exact scenario and reproduces the problem: https://github.com/matrix-org/complement/pull/757
```
COMPLEMENT_ALWAYS_PRINT_SERVER_LOGS=1 COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestSynapseConsistency
```
We have `hs1` and `hs2` running in monolith mode (no workers):
1. `@charlie1:hs2` is invited and joins the room:
1. `hs1` invites `@charlie1:hs2` to a room which we receive on `hs2` as `PUT /_matrix/federation/v1/invite/{roomId}/{eventId}` (`on_invite_request(...)`) and the invite membership is persisted as an outlier. The `room_memberships` and `local_current_membership` database tables are also updated which means they are visible down `/sync` at this point.
1. `@charlie1:hs2` decides to join because it saw the invite down `/sync`. Because `hs2` is not yet in the room, this happens as a remote join `make_join`/`send_join` which comes back with all of the auth events needed to auth successfully and now `@charlie1:hs2` is successfully joined to the room.
1. `@charlie2:hs2` is invited and tries to join the room:
1. `hs1` invites `@charlie2:hs2` to the room which we receive on `hs2` as `PUT /_matrix/federation/v1/invite/{roomId}/{eventId}` (`on_invite_request(...)`) and the invite membership is persisted as an outlier. The `room_memberships` and `local_current_membership` database tables are also updated which means they are visible down `/sync` at this point.
1. Because `hs2` is already participating in the room, we also see the invite come over federation in a transaction and we start processing it (not done yet, see below)
1. `@charlie2:hs2` decides to join because it saw the invite down `/sync`. Because `hs2` is already in the room, this happens as a local join but we deny the event because our `event_auth` logic thinks that we have no membership in the room ❌ (expected to be able to join because we saw the invite down `/sync`)
1. We finally finish processing the `@charlie2:hs2` invite event from federation and de-outlier it.
- If this had finished before we tried to join, we would have been fine, but this is the race condition that makes the situation visible.
Logs for `hs2`:
```
🗳️ on_invite_request: handling event <FrozenEventV3 event_id=$PRPCvdXdcqyjdUKP_NxGF2CcukmwOaoK0ZR1WiVOZVk, type=m.room.member, state_key=@user-2-charlie1:hs2, membership=invite, outlier=False>
🔦 _store_room_members_txn update room_memberships: <FrozenEventV3 event_id=$PRPCvdXdcqyjdUKP_NxGF2CcukmwOaoK0ZR1WiVOZVk, type=m.room.member, state_key=@user-2-charlie1:hs2, membership=invite, outlier=True>
🔦 _store_room_members_txn update local_current_membership: <FrozenEventV3 event_id=$PRPCvdXdcqyjdUKP_NxGF2CcukmwOaoK0ZR1WiVOZVk, type=m.room.member, state_key=@user-2-charlie1:hs2, membership=invite, outlier=True>
📨 Notifying about new event <FrozenEventV3 event_id=$PRPCvdXdcqyjdUKP_NxGF2CcukmwOaoK0ZR1WiVOZVk, type=m.room.member, state_key=@user-2-charlie1:hs2, membership=invite, outlier=True>
✅ on_invite_request: handled event <FrozenEventV3 event_id=$PRPCvdXdcqyjdUKP_NxGF2CcukmwOaoK0ZR1WiVOZVk, type=m.room.member, state_key=@user-2-charlie1:hs2, membership=invite, outlier=True>
🧲 do_invite_join for @user-2-charlie1:hs2 in !sfZVBdLUezpPWetrol:hs1
🔦 _store_room_members_txn update room_memberships: <FrozenEventV3 event_id=$bwv8LxFnqfpsw_rhR7OrTjtz09gaJ23MqstKOcs7ygA, type=m.room.member, state_key=@user-1-alice:hs1, membership=join, outlier=True>
🔦 _store_room_members_txn update room_memberships: <FrozenEventV3 event_id=$oju1ts3G3pz5O62IesrxX5is4LxAwU3WPr4xvid5ijI, type=m.room.member, state_key=@user-2-charlie1:hs2, membership=join, outlier=False>
📨 Notifying about new event <FrozenEventV3 event_id=$oju1ts3G3pz5O62IesrxX5is4LxAwU3WPr4xvid5ijI, type=m.room.member, state_key=@user-2-charlie1:hs2, membership=join, outlier=False>
...
🗳️ on_invite_request: handling event <FrozenEventV3 event_id=$O_54j7O--6xMsegY5EVZ9SA-mI4_iHJOIoRwYyeWIPY, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=invite, outlier=False>
🔦 _store_room_members_txn update room_memberships: <FrozenEventV3 event_id=$O_54j7O--6xMsegY5EVZ9SA-mI4_iHJOIoRwYyeWIPY, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=invite, outlier=True>
🔦 _store_room_members_txn update local_current_membership: <FrozenEventV3 event_id=$O_54j7O--6xMsegY5EVZ9SA-mI4_iHJOIoRwYyeWIPY, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=invite, outlier=True>
📨 Notifying about new event <FrozenEventV3 event_id=$O_54j7O--6xMsegY5EVZ9SA-mI4_iHJOIoRwYyeWIPY, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=invite, outlier=True>
✅ on_invite_request: handled event <FrozenEventV3 event_id=$O_54j7O--6xMsegY5EVZ9SA-mI4_iHJOIoRwYyeWIPY, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=invite, outlier=True>
📬 handling received PDU in room !sfZVBdLUezpPWetrol:hs1: <FrozenEventV3 event_id=$O_54j7O--6xMsegY5EVZ9SA-mI4_iHJOIoRwYyeWIPY, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=invite, outlier=False>
📮 handle_new_client_event: handling <FrozenEventV3 event_id=$WNVDTQrxy5tCdPQHMyHyIn7tE4NWqKsZ8Bn8R4WbBSA, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=join, outlier=False>
❌ Denying new event <FrozenEventV3 event_id=$WNVDTQrxy5tCdPQHMyHyIn7tE4NWqKsZ8Bn8R4WbBSA, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=join, outlier=False> because 403: You are not invited to this room.
synapse.http.server - 130 - INFO - POST-16 - <SynapseRequest at 0x7f460c91fbf0 method='POST' uri='/_matrix/client/v3/join/%21sfZVBdLUezpPWetrol:hs1?server_name=hs1' clientproto='HTTP/1.0' site='8080'> SynapseError: 403 - You are not invited to this room.
📨 Notifying about new event <FrozenEventV3 event_id=$O_54j7O--6xMsegY5EVZ9SA-mI4_iHJOIoRwYyeWIPY, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=invite, outlier=False>
✅ handled received PDU in room !sfZVBdLUezpPWetrol:hs1: <FrozenEventV3 event_id=$O_54j7O--6xMsegY5EVZ9SA-mI4_iHJOIoRwYyeWIPY, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=invite, outlier=False>
```
Bumps
[dawidd6/action-download-artifact](https://github.com/dawidd6/action-download-artifact)
from 7 to 8.
Otherwise these can race with other long-running queries and lock out
all other queries.
This caused problems in v1.122.0 as we added an index to the `events` table
in #17948, but that got interrupted, so the next time we ran the
background update we needed to delete the half-finished index. However,
that deletion got blocked behind some long-running queries and then locked other
queries out (stopping workers from even starting).
This is particularly a problem in a state reset scenario where the membership
might change without a corresponding event.
This PR is targeting a scenario where a state reset happens which causes
room membership to change. Previously, the cache would just hold onto
stale data; now we properly bust the cache in this scenario.
We have a few tests for these scenarios which you can see are now fixed
because we can remove the `FIXME` where we were previously manually
busting the cache in the test itself.
This is a general Synapse thing, so by its nature it helps out Sliding
Sync.
Fix https://github.com/element-hq/synapse/issues/17368
Prerequisite for https://github.com/element-hq/synapse/issues/17929
---
Match when we are busting `_curr_state_delta_stream_cache`
Adds a query param `type` to `/_synapse/admin/v1/rooms/{room_id}/state`
that filters the state event query by state event type.
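For example, to return only membership state events (the endpoint and param name are from this PR; the value is just an illustrative choice):

```
GET /_synapse/admin/v1/rooms/{room_id}/state?type=m.room.member
```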
---------
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
Refactor `get_profile` to avoid returning "empty" (`None` / `null`)
fields. Currently this is not very important, but will be more useful
once #17488 lands. It does update the servlet to use this now, which has
a minor change in behavior: additional fields served over federation
will now be properly sent back to clients.
It also adds constants for `avatar_url` / `displayname`, although I did
not attempt to use them everywhere possible.
`defer.returnValue` was only needed in Python 2; in Python 3, a simple
`return` is fine.
`twisted.internet.defer.returnValue` is deprecated as of Twisted 24.7.0.
Most uses of `returnValue` in synapse were removed a while back; this
cleans up some remaining bits.
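An illustrative before/after (a sketch, not code from the PR; `make_request` is a stand-in for any function returning a Deferred):

```python
from twisted.internet import defer

def make_request():
    return defer.succeed(42)

@defer.inlineCallbacks
def old_style():
    result = yield make_request()
    defer.returnValue(result)  # required on Python 2, deprecated now

@defer.inlineCallbacks
def new_style():
    result = yield make_request()
    return result  # a plain return works on Python 3
```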
This is essentially matrix-org/synapse#14392. I didn't see anything in
there about updating sytest or complement.
The main driver of this is so that I can use `jsonb_path_exists` in
#17488. 😄
Fixes various `mypy` errors associated with Twisted `24.11.0`.
Hopefully addresses https://github.com/element-hq/synapse/issues/17075,
though I've yet to test against `trunk`.
Changes should be compatible with our currently pinned Twisted version
of `24.7.0`.
Supersedes https://github.com/element-hq/synapse/pull/17958.
Awkwardly, the changes made to fix the mypy errors in 1.12.1 cause
errors in 1.11.2, so you'll need to update your mypy version to 1.12.1
to eliminate typechecking errors during development.
The existing `email.smtp_host` config option is used for two distinct
purposes: it is resolved into the IP address to connect to, and used to
(request via SNI and) validate the server's certificate if TLS is
enabled. This new option allows specifying a different name for the
second purpose.
This is especially helpful if `email.smtp_host` isn't a global FQDN
but something that resolves only locally (e.g. "localhost" to connect
through the loopback interface, or some other internally routed name)
that one cannot get a valid certificate for.
Alternatives would of course be to specify a global FQDN as
`email.smtp_host`, or to disable TLS entirely, both of which might be
undesirable, depending on the SMTP server configuration.
Another config option on my quest to add a `*_path` variant for every
secret. This time it's `macaroon_secret_key_path`.
Reading secrets from files has the security advantage of separating the secrets from the config. It also simplifies secrets management in Kubernetes. Also useful to NixOS users.
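A minimal sketch of how this looks in the homeserver config (the path is only illustrative):

```yaml
# Instead of inlining the secret:
# macaroon_secret_key: "<secret>"
macaroon_secret_key_path: /run/secrets/synapse/macaroon_secret_key
```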
- Fetch the number of invites the provided user has sent after a given
timestamp
- Fetch the number of rooms the provided user has joined after a given
timestamp, regardless if they have left/been banned from the rooms
subsequently
- Get report IDs of event reports where the provided user was the sender
of the reported event
When rejecting a withdrawn invite through federation, an out-of-band
event needs to be created.
When doing so with a third_party_rules module installed,
`get_prev_state_ids` [is
called](e0fdb862cb/synapse/module_api/callbacks/third_party_event_rules_callbacks.py (L285))
on the context to calculate the state to pass to the `check_event_allowed`
callbacks.
The context for outliers is defined
[here](e0fdb862cb/synapse/events/snapshot.py (L168)),
and `state_group_before_event` is None.
This change makes the behavior of `get_prev_state_ids` and
`get_current_state_ids` match that described in the docstring
regarding a null state_group.
POST requests for account data, receipts and presence require the worker
to be configured as a stream writer. The regular expressions in the
default list don't assume any HTTP method, so if the worker is not a
stream writer, the request fails.
The stream writer section of the documentation lists the same regexps as
the ones I'm removing, so people configuring stream writers can still
configure their routing properly.
More context:
https://github.com/element-hq/synapse/issues/17243#issuecomment-2493621645
This is an implementation of MSC4190, which allows appservices to manage
their users' devices without /login & /logout.
---------
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
Be able to test `/login/sso/redirect` in Complement
Spawning from
https://github.com/element-hq/sbg/pull/421#discussion_r1854926218 where
we have a proxy that intercepts responses to
`/_matrix/client/v3/login/sso/redirect(/{idpId})` in order to upgrade
them to use OAuth 2.0 Pushed Authorization Requests (PAR). We have some
Complement tests in that codebase that go over this flow and these
changes are required [in order for the URLs to line
up](d648c8ce3f/synapse/rest/client/login.py (L652-L673)).
Currently, when a new scheduled task is added and its scheduled time has
already passed, we set it to ACTIVE. This is problematic, because it
means it will jump the queue ahead of all other SCHEDULED tasks;
furthermore, if the Synapse process gets restarted, it will jump ahead
of any ACTIVE tasks which have been started but are taking a while to
run.
Instead, we leave it set to SCHEDULED, but kick off a call to
`_launch_scheduled_tasks`, which will decide if we actually have
capacity to start a new task, and start the newly-added task if so.
This is a workaround for some proxy setups, where the ETag header gets
stripped from the response headers unless there is a Content-Type header
set.
In particular, we saw this bug when putting Cloudflare in front of
Synapse.
I'm pretty sure this is a Cloudflare bug, as this behaviour isn't
documented anywhere, and doesn't make sense whatsoever.
---------
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
In a worker-mode deployment, the `E2eKeysHandler` is not necessarily
loaded, which means the handler for the `delete_old_otks` task will not
be registered. Make sure we load the handler.
Introduced in https://github.com/element-hq/synapse/pull/17934
For context of why we delay read receipts, see
https://github.com/matrix-org/synapse/issues/4730.
Element Web often sends read receipts in quick succession; if it reloads
the timeline it'll send one for the last message in the old timeline and
again for the last message in the new timeline. This caused remote users
to see a read receipt for older messages come through quickly, but then
the second read receipt for the most recent message would take a while
to arrive.
There are two things going on in this PR:
1. There was a mismatch between seconds and milliseconds, and so we
ended up delaying for far longer than intended.
2. Changing the logic to reuse the `DestinationWakeupQueue` (used for
presence)
The changes in logic are:
- Treat the first receipt and subsequent receipts in a room in the same
way
- Whitelist certain classes of receipts to never delay being sent, i.e.
receipts in small rooms, receipts for events that were sent within the
last 60s, and sending receipts to the event sender's server.
- The maximum delay a receipt can have before being sent to a server is
30s, and we'll send out receipts to remotes at least at 50Hz (by
default)
The upshot is that this should make receipts feel more snappy over
federation.
This new logic should send roughly 10%–20% of transactions
immediately on matrix.org.
To work around the fact that,
pre-https://github.com/element-hq/synapse/pull/17903, our database may
have old one-time-keys that the clients have long thrown away the
private keys for, we want to delete OTKs that look like they came from
libolm.
To spread the load a bit, without holding up other background database
updates, we use a scheduled task to do the work.
We were pinned to an old version that had deprecation warnings.
In new versions of the action, leaving off properties (i.e. `draft` and
`prerelease`) tells the action not to modify those properties of the
release.
There was a bug that meant we would return the full state of the room on
incremental syncs when using lazy-loaded members and there were no
entries in the timeline.
This was due to trying to use `state_filter or state_filter.all()` as
shorthand for handling the `None` case; however, `state_filter` implements
`__bool__`, so if the state filter was empty it would be replaced with the full filter.
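A minimal sketch of the pitfall (not the real `StateFilter`): `x or default` falls back to the default whenever `x` is falsy, which an "empty but valid" filter is once it defines `__bool__`.

```python
class FakeStateFilter:
    def __init__(self, types):
        self.types = types

    def __bool__(self):
        return bool(self.types)

    @staticmethod
    def all():
        return FakeStateFilter({"*": None})

empty_filter = FakeStateFilter({})

buggy = empty_filter or FakeStateFilter.all()  # silently becomes "all state"
assert buggy.types == {"*": None}

correct = empty_filter if empty_filter is not None else FakeStateFilter.all()
assert correct is empty_filter
```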
c.f. MSC4222 and #17888
The latest Twisted release changed how they implemented `__await__` on
deferreds, which broke the machinery we used to test cancellation.
This PR changes things a bit to instead patch the `__await__` method,
which is a stable API. This mostly doesn't change the core logic, except
for fixing two bugs:
- We previously did not intercept all await points
- After cancellation we now need to not only unblock currently blocked
await points, but also make sure we don't block any future await points.
c.f. https://github.com/twisted/twisted/pull/12226
---------
Co-authored-by: Devon Hudson <devon.dmytro@gmail.com>
Currently, one-time-keys are issued in a somewhat random order. (In
practice, they are issued according to the lexicographical order of
their key IDs.) That can lead to a situation where a client gives up
hope of a given OTK ever being used, whilst it is still on the server.
Related: https://github.com/element-hq/element-meta/issues/2356
Update version constraint to allow the latest `poetry-core` `1.9.1`
Context:
> I am working on updating poetry-core in Fedora and synapse is one of
affected packages. Please run a CI to see if it works properly. Thank
you.
Mergeable version of https://github.com/element-hq/synapse/pull/17848
When an entry is inserted at the end of the timer queue, an unnecessary
extra entry (with a duplicated key) is inserted.
This can lead to some timeouts expiring early and consumes memory.
Basically, if the client sets a special query param on `/sync` v2, then
instead of responding with `state` at the *start* of the timeline, we
respond with `state_after` at the *end* of the timeline.
We do this by using the `current_state_delta_stream` table, which is
actually reliable, rather than messing around with "state at" points on
the timeline.
c.f. MSC4222
The main change here is to add a helper function
`gather_optional_coroutines`, which works in a similar way to
`yieldable_gather_results` but takes a set of coroutines rather than a
function.
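A rough, asyncio-based sketch of the idea (the real helper is Twisted-aware and lives in Synapse's async utilities; the None-handling shown here is an assumption):

```python
import asyncio
from typing import Awaitable, Optional, Tuple, TypeVar

T = TypeVar("T")

async def gather_optional_coroutines_sketch(
    *coroutines: Optional[Awaitable[T]],
) -> Tuple[Optional[T], ...]:
    """Run the non-None coroutines concurrently; None slots stay None."""
    results: list = [None] * len(coroutines)
    pending = {i: c for i, c in enumerate(coroutines) if c is not None}
    values = await asyncio.gather(*pending.values())
    for index, value in zip(pending.keys(), values):
        results[index] = value
    return tuple(results)

async def main() -> None:
    async def work(x: int) -> int:
        return x * 2

    print(await gather_optional_coroutines_sketch(work(1), None, work(3)))
    # (2, None, 6)

asyncio.run(main())
```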
Fixes #17823
While we're at it, makes a change where the redactions are sent as the
admin if the user is not a member of the server (otherwise these fail
with a "User must be our own" message).
Reset `sliding_sync_membership_snapshots` -> `forgotten` status when
membership changes (like rejoining a room).
Fix https://github.com/element-hq/synapse/issues/17781
### What was the problem before?
Previously, if someone used `/forget` on one of their rooms, it would
update `sliding_sync_membership_snapshots` as expected but when someone
rejoined the room (or had any membership change), the upsert didn't
overwrite and reset the `forgotten` status so it remained `forgotten`
and invisible down the Sliding Sync endpoint.
This adds Python 3.13.0 to the trial test matrix
Also updates `cffi` and `zope.interface` in the locked dependencies to
make sure we have versions compatible with Python 3.13. For some
reason, they are not being picked up by dependabot.
Bumps [mypy](https://github.com/python/mypy) from 1.10.1 to 1.11.2.
Bumps [types-requests](https://github.com/python/typeshed) from
2.32.0.20240914 to 2.32.0.20241016.
Bumps [psycopg2](https://github.com/psycopg/psycopg2) from 2.9.9 to
2.9.10.
Spawning from @kegsay [pointing
out](https://matrix.to/#/!cnVVNLKqgUzNTOFQkz:matrix.org/$ExOO7J8uPUQSyH-9Uxc_QCa8jlXX9uK4VRtkSC0EI3o?via=element.io&via=matrix.org&via=jki.re)
that the Sliding Sync endpoint doesn't handle a large room with a lot of
state well on initial sync (requesting all state via `required_state: [
["*","*"] ]`) (it just takes forever).
After investigating further, the slow part is just
`get_events_as_list(...)` fetching all of the current state IDs out for
the room (which can be 100k+ events for rooms with a lot of membership).
This is just a slow thing in Synapse in general and the same thing
happens in Sync v2 or the `/state` endpoint.
---
The only idea I had to improve things was to use `batch_iter` to only
try fetching a fixed amount at a time instead of working with large
maps, lists, and sets. This doesn't seem to have much effect though.
There is already a `batch_iter(event_ids, 200)` in
`_fetch_event_rows(...)` for when we actually have to touch the database
and that's inside a queue to deduplicate work.
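For context, a self-contained rough equivalent of what `batch_iter` does (Synapse has its own helper; this is only to illustrate the chunking):

```python
from itertools import islice
from typing import Iterable, Iterator, Tuple, TypeVar

T = TypeVar("T")

def batch_iter(iterable: Iterable[T], size: int) -> Iterator[Tuple[T, ...]]:
    """Yield tuples of at most `size` items from `iterable`."""
    iterator = iter(iterable)
    while chunk := tuple(islice(iterator, size)):
        yield chunk

event_ids = [f"$event{i}" for i in range(5)]
assert list(batch_iter(event_ids, 2)) == [
    ("$event0", "$event1"), ("$event2", "$event3"), ("$event4",)
]
```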
I did notice one slight optimization to use `get_events_as_list(...)`
directly instead of `get_events(...)`. `get_events(...)` just turns the
result from `get_events_as_list(...)` into a dict and since we're just
iterating over the events, we don't need the dict/map.
Fixes https://github.com/element-hq/synapse/issues/17698
This handles `required_state` changes by checking if new state has been
added to the config, and if so fetching and returning that from the
current state.
This also takes care to ensure that, given a state entry S that is added,
removed and then re-added, we do *not* send S down a second time if
there have been no changes to S in the current state. This is fine for
the Rust SDK (as it just remembers all state), but we might decide not to do
this behaviour in the MSC. If we decide to always send down S then it's
easy enough to rip out all the code.
---------
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
c.f. https://wiki.ubuntu.com/Releases for the currently supported Ubuntu
releases.
Note: this removes support for 23.04 and 23.10, which are EOL.
Fixes #17811
- better validation on user input
- fix an early task completion
- when checking membership in rooms, check for rooms user has been
banned from as well
Two changes: a) use a batch lookup function instead of a loop, b) check
existing data to see if we already have what we need and only fetch what
we don't.
Adds the option to load the Redis password from a file, instead of
giving it in the config directly. The code is similar to how it’s done
for `registration_shared_secret_path`. I changed the example in the
documentation to represent the best practice regarding the handling of
secrets.
Reading secrets from files has the security advantage of separating the
secrets from the config. It also simplifies secrets management in
Kubernetes.
Added a note in the documentation suggesting that users may set
`PYTHONMALLOC=malloc` when using `jemalloc`. This allows jemalloc to
track memory usage more accurately by bypassing Python's internal
small-object allocator (`pymalloc`), helping to ensure that
`cache_autotuning` functions as expected.
This doc change aims to provide more clarity for users configuring
jemalloc with Synapse.
Based on:
4ac783549c/synapse/metrics/jemalloc.py (L198-L201)
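For example, in whatever environment starts Synapse (shell, systemd unit, or container):

```sh
export PYTHONMALLOC=malloc
```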
There is a bug with the `StreamChangeCache` where it would incorrectly
return that all entities had changed if asked for entities changed
*since* the earliest stream position.
Note that for streams we use the inequalities: `$min_stream_id <
stream_id <= $max_stream_id`, i.e. when we ask the stream change cache
for all things that have changed since `$stream_id` we don't care for
events that happened *at* `$stream_id`.
Specifically: `_earliest_known_stream_pos` is the position at which we
know that we'll have entries for all changes since that point, so we can
use the cache for any stream IDs that equal
`_earliest_known_stream_pos`.
`_earliest_known_stream_pos` is set in three places:
- On startup we set it either to:
- the current maximum stream ID, with no prefilled values; or
- the minimum of the latest N values we pulled from the DB
- When we evict items from the bottom, we set it to the stream ID of the
evicted items.
This was changed in https://github.com/matrix-org/synapse/pull/14435,
but I think we were overly conservative there.
---------
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
Based on #17765.
Basically the idea is to reduce the overhead of calling
`ObservableDeferred` in a loop. The two gains are: a) just using a list
of deferreds rather than the machinery of `ObservableDeferred`, and b)
only calling `PreserveLoggingContext` once.
`PreserveLoggingContext` in particular is expensive to call a lot, as
each time it needs to call `get_thread_resource_usage` twice, so that it
can update the CPU metrics of the log context.
The notifier is quite inefficient when it has to wake up many user
streams all at once.
In a silly benchmark, this reduces the time to notify 1M user streams
from ~30s to ~5s.
This works because, instead of passing *all* rooms to `record_sent_rooms`, we
only need to pass rooms that were previously not in the LIVE state.
This came from a py-spy where we were spending ~10% CPU calling these
functions. Note that `record_sent_rooms` is a no-op for rooms that are
already in the `LIVE` state, so we only need to call them for
`PREVIOUSLY` or `INITIAL` rooms.
This was a note added in the PR to move to AGPL, which we failed to
remove before landing.
(The context for this was that we needed to decide if we were going to
change which debian repository we published to, but decided not to in
the end.)
Bumps [types-setuptools](https://github.com/python/typeshed) from
74.1.0.20240907 to 75.1.0.20240917.
Bumps [types-pyyaml](https://github.com/python/typeshed) from
6.0.12.20240808 to 6.0.12.20240917.
Bumps [pyasn1-modules](https://github.com/pyasn1/pyasn1-modules) from
0.4.0 to 0.4.1.
Bumps [bytes](https://github.com/tokio-rs/bytes) from 1.7.1 to 1.7.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tokio-rs/bytes/releases">bytes's
releases</a>.</em></p>
<blockquote>
<h2>Bytes 1.7.2</h2>
<h1>1.7.2 (September 17, 2024)</h1>
<h3>Fixed</h3>
<ul>
<li>Fix default impl of <code>Buf::{get_int, get_int_le}</code> (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/732">#732</a>)</li>
</ul>
<h3>Documented</h3>
<ul>
<li>Fix double spaces in comments and doc comments (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/731">#731</a>)</li>
</ul>
<h3>Internal changes</h3>
<ul>
<li>Ensure BytesMut::advance reduces capacity (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/728">#728</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md">bytes's
changelog</a>.</em></p>
<blockquote>
<h1>1.7.2 (September 17, 2024)</h1>
<h3>Fixed</h3>
<ul>
<li>Fix default impl of <code>Buf::{get_int, get_int_le}</code> (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/732">#732</a>)</li>
</ul>
<h3>Documented</h3>
<ul>
<li>Fix double spaces in comments and doc comments (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/731">#731</a>)</li>
</ul>
<h3>Internal changes</h3>
<ul>
<li>Ensure BytesMut::advance reduces capacity (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/728">#728</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d7c1d658d9"><code>d7c1d65</code></a>
chore: prepare bytes v1.7.2 (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/736">#736</a>)</li>
<li><a
href="ac46ebdd46"><code>ac46ebd</code></a>
ci: update nightly to nightly-2024-09-15 (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/734">#734</a>)</li>
<li><a
href="79fb85323c"><code>79fb853</code></a>
fix: apply sign extension when decoding int (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/732">#732</a>)</li>
<li><a
href="291df5acc9"><code>291df5a</code></a>
Fix double spaces in comments and doc comments (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/731">#731</a>)</li>
<li><a
href="ed7d5ff39e"><code>ed7d5ff</code></a>
test: ensure BytesMut::advance reduces capacity (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/728">#728</a>)</li>
<li>See full diff in <a
href="https://github.com/tokio-rs/bytes/compare/v1.7.1...v1.7.2">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This is basically exactly the same logic as for receipts. Essentially we
just need to track which room account data we have and haven't sent down
to clients, and use that when we pull stuff out.
I think this just needs a couple of extra tests written
---------
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
Performance optimization: We can avoid fetching rooms that the user has
left themselves (which could be a significant amount), then only add
back rooms that the user has `newly_left` (left in the token range of an
incremental sync). It's a lot faster to fetch fewer rooms than to fetch them
all and throw them away in most cases. Since the user only leaves a room
(or is state reset out) once in a blue moon, we can avoid a lot of work.
Based on @erikjohnston's branch, erikj/ss_perf
---------
Co-authored-by: Erik Johnston <erik@matrix.org>
Add cache to `get_tags_for_room(...)`
This helps Sliding Sync because `get_tags_for_room(...)` is going to be
used in https://github.com/element-hq/synapse/pull/17695
Essentially, we're just trying to match `get_account_data_for_room(...)`
which already has a tree cache.
No need to sort if the range is large enough to cover all of the rooms
in the list. Previously, we would only do this optimization if the range
was exactly large enough.
Follow-up to https://github.com/element-hq/synapse/pull/17672
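As a rough illustration of the relaxed check (the names and signature are illustrative, not Synapse's actual code), the idea is to skip sorting whenever the requested range already covers every room, rather than only when it matches the list length exactly:

```python
def range_covers_all_rooms(range_start: int, range_end: int, num_rooms: int) -> bool:
    # The range is inclusive, so [0, 99] covers 100 rooms. Any range at least
    # as large as the room list means sorting is pointless.
    return range_start == 0 and (range_end - range_start + 1) >= num_rooms

# Previously the equivalent check was effectively an exact match:
#     (range_end - range_start + 1) == num_rooms
```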
This appears to be enough to make Element Web work (or at least move it
on to the next hurdle)
---------
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
Move filters tests to rest layer in order to test the new (with sliding
sync tables) and fallback paths that Sliding Sync can use.
Also found a bug in the new path (which wasn't being tested); that is
also fixed in this PR. We now take into account `has_known_state` when
filtering.
Spawning from
https://github.com/element-hq/synapse/pull/17662#discussion_r1755574791.
This should have been done when we started using the new sliding sync
tables in https://github.com/element-hq/synapse/pull/17630
This PR changes `from pydantic import BaseModel` to `from
synapse._pydantic_compat import BaseModel` (as well as `constr`,
`conbytes`, `conint`, `confloat`).
It allows `check_pydantic_models.py` to mock those pydantic objects only
in the synapse module, and not interfere with pydantic objects in
external dependencies.
This should solve the CI problems for #17144, which breaks because
`check_pydantic_models.py` patches pydantic models from
[scim2-models](https://scim2-models.readthedocs.io/).
/cc @DMRobertson @gotmax23
Fixes #17659
### Pull Request Checklist
<!-- Please read
https://element-hq.github.io/synapse/latest/development/contributing_guide.html
before submitting your pull request -->
* [x] Pull request is based on the develop branch
* [x] Pull request includes a [changelog
file](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#changelog).
The entry should:
- Be a short description of your change which makes sense to users.
"Fixed a bug that prevented receiving messages from other servers."
instead of "Moved X method from `EventStore` to `EventWorkerStore`.".
- Use markdown where necessary, mostly for `code blocks`.
- End with either a period (.) or an exclamation mark (!).
- Start with a capital letter.
- Feel free to credit yourself, by adding a sentence "Contributed by
@github_username." or "Contributed by [Your Name]." to the end of the
entry.
* [x] [Code
style](https://element-hq.github.io/synapse/latest/code_style.html) is
correct
(run the
[linters](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))
We need to bust the `get_sliding_sync_rooms_for_user`
cache when the room encryption is updated and when any
other field that is used in the query changes.
Follow-up to https://github.com/element-hq/synapse/pull/17630
- Bust cache for membership change (cross-reference
`get_rooms_for_user`)
- Bust cache for room `encryption` (cross-reference
`get_room_encryption`)
- Bust cache for `forgotten` (cross-reference
`did_forget`/`get_forgotten_rooms_for_user`)
For rooms with a name we can skip fetching a full room summary, as we
don't need to calculate heroes, and instead just fetch the room counts
directly.
This also changes things to not return counts and heroes for non-joined
rooms. For left/banned rooms we were returning zero values anyway, and
for invite/knock rooms we don't really want to leak such information
(even if some of it is included in the stripped state).
Speed up incremental sync by avoiding extra work. We first look at the
state delta changes and only fetch and calculate further derived things
if they have changed.
Instead of having a large cache of `room_id -> bool` about whether a
room is partially stated, replace it with a query that fetches the rooms
the user is in which are partially stated. This is a lot faster as the set of partially
stated rooms at any point across the whole server is small, and so such
a query is fast.
The main issue with the bulk cache lookup is the CPU time looking all
the rooms up in the cache.
We ended up spending ~10% CPU creating a new dictionary and
`_RoomMembershipForUser`, so let's avoid creating new dicts and copying
by returning `newly_joined`, `newly_left` and `is_dm` as sets directly.
---------
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
I thought ruff check would also format, but it doesn't.
This runs ruff format in CI and dev scripts. The first commit is just a
run of `ruff format .` in the root directory.
This is to make it easier to reuse the logic when adding support for the
new tables
---------
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
Regressed in #17543.
The `max_download_size` config is not available on workers that don't
load the media repo.
Besides, we should honour the max_size param that was passed into the
function.
This will help mitigate any discrepancies between the issuer
configured and the one returned by the OIDC provider.
This also removes the need for configuring the `account_management_url`
explicitly, as it will now be loaded from the OIDC discovery, as per
MSC2965.
Because we may now fetch stuff for the .well-known/matrix/client
endpoint, this also transforms the client well-known resource to be
asynchronous.
This is so that we can cache it.
We also move the sliding sync types to
`synapse/types/handlers/sliding_sync.py`. This is mainly in-prep for
#17599 to avoid circular imports.
The only change in behaviour is that
`RoomSyncConfig.combine_sync_config(..)` now returns a new room sync
config rather than mutating in-place.
Reviewable commit-by-commit.
---------
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
This will help mitigate any discrepancies between the issuer
configured and the one returned by the OIDC provider.
This also removes the need for configuring the `account_management_url`
explicitly, as it will now be loaded from the OIDC discovery, as per
MSC2965.
Because we may now fetch stuff for the .well-known/matrix/client
endpoint, this also transforms the client well-known resource to be
asynchronous.
Fix outlier re-persisting causing problems with sliding sync tables
Follow-up to https://github.com/element-hq/synapse/pull/17512
When running on `matrix.org`, we discovered that a remote invite is
first persisted as an `outlier` and then re-persisted again where it is
de-outliered. The first the time, the `outlier` is persisted with one
`stream_ordering` but when persisted again and de-outliered, it is
assigned a different `stream_ordering` that won't end up being used.
Since we call `_calculate_sliding_sync_table_changes()` before
`_update_outliers_txn()` which fixes this discrepancy (always use the
`stream_ordering` from the first time it was persisted), we're working
with an unreliable `stream_ordering` value that will possibly be unused
and not make it into the `events` table.
This is so that we can cache it.
We also move the sliding sync types to
`synapse/types/handlers/sliding_sync.py`. This is mainly in-prep for
#17599 to avoid circular imports.
The only change in behaviour is that
`RoomSyncConfig.combine_sync_config(..)` now returns a new room sync
config rather than mutating in-place.
Reviewable commit-by-commit.
---------
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
Pre-populate room data for quick filtering/sorting in the Sliding Sync
API
Spawning from
https://github.com/element-hq/synapse/pull/17450#discussion_r1697335578
This PR is acting as the Synapse version `N+1` step in the gradual
migration being tracked by
https://github.com/element-hq/synapse/issues/17623
Adding two new database tables:
- `sliding_sync_joined_rooms`: A table for storing room meta data that
the local server is still participating in. The info here can be shared
across all `Membership.JOIN`. Keyed on `(room_id)` and updated when the
relevant room current state changes or a new event is sent in the room.
- `sliding_sync_membership_snapshots`: A table for storing a snapshot of
room meta data at the time of the local user's membership. Keyed on
`(room_id, user_id)` and only updated when a user's membership in a room
changes.
Also adds background updates to populate these tables with all of the
existing data.
We want to have the guarantee that if a row exists in the sliding sync
tables, we are able to rely on it (accurate data). And if a row doesn't
exist, we use a fallback to get the same info until the background
updates fill in the rows or a new event comes in triggering it to be
fully inserted. This means we need a couple extra things in place until
we bump `SCHEMA_COMPAT_VERSION` and run the foreground update in the
`N+2` part of the gradual migration. For context on why we can't rely on
the tables without these things see [1].
1. On start-up, block until we clear out any rows for the rooms that
have had events since the max-`stream_ordering` of the
`sliding_sync_joined_rooms` table (compare to max-`stream_ordering` of
the `events` table). For `sliding_sync_membership_snapshots`, we can
compare to the max-`stream_ordering` of `local_current_membership`
- This accounts for when someone downgrades their Synapse version and
then upgrades it again. This will ensure that we don't have any
stale/out-of-date data in the
`sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` tables
since any new events sent in rooms would have also needed to be written
to the sliding sync tables. For example a new event needs to bump
`event_stream_ordering` in `sliding_sync_joined_rooms` table or some
state in the room changing (like the room name). Or another example of
someone's membership changing in a room affecting
`sliding_sync_membership_snapshots`.
1. Add another background update that will catch up with any rows that
were just deleted from the sliding sync tables (based on the activity in
the `events`/`local_current_membership`). The rooms that need
recalculating are added to the
`sliding_sync_joined_rooms_to_recalculate` table.
1. Making sure rows are fully inserted. Instead of partially inserting,
we need to check if the row already exists and fully insert all data if
not.
All of this extra functionality can be removed once the
`SCHEMA_COMPAT_VERSION` is bumped with support for the new sliding sync
tables so people can no longer downgrade (the `N+2` part of the gradual
migration).
<details>
<summary><sup>[1]</sup></summary>
For `sliding_sync_joined_rooms`, since we partially insert rows as state
comes in, we can't rely on the existence of the row for a given
`room_id`. We can't even rely on looking at whether the background
update has finished. There could still be partial rows from when someone
reverted their Synapse version after the background update finished, had
some state changes (or new rooms), then upgraded again and more state
changes happen leaving a partial row.
For `sliding_sync_membership_snapshots`, we insert items as a whole
except for the `forgotten` column ~~so we can rely on rows existing and
just need to always use a fallback for the `forgotten` data. We can't
use the `forgotten` column in the table for the same reasons above about
`sliding_sync_joined_rooms`.~~ We could have an out-of-date membership
from when someone reverted their Synapse version. (same problems as
outlined for `sliding_sync_joined_rooms` above)
Discussed in an [internal
meeting](https://docs.google.com/document/d/1MnuvPkaCkT_wviSQZ6YKBjiWciCBFMd-7hxyCO-OCbQ/edit#bookmark=id.dz5x6ef4mxz7)
</details>
### TODO
- [x] Update `stream_ordering`/`bump_stamp`
- [x] Handle remote invites
- [x] Handle state resets
- [x] Consider adding `sender` so we can filter `LEAVE` memberships and
distinguish from kicks.
- [x] We should add it to be able to tell leaves from kicks
- [x] Consider adding `tombstone` state to help address
https://github.com/element-hq/synapse/issues/17540
- [x] We should add it: `tombstone_successor_room_id`
- [x] Consider adding `forgotten` status to avoid extra
lookup/table-join on `room_memberships`
- [x] We should add it
- [x] Background update to fill in values for all joined rooms and
non-join membership
- [x] Clean-up tables when room is deleted
- [ ] Make sure tables are useful to our use case
- First explored in
https://github.com/element-hq/synapse/compare/erikj/ss_use_new_tables
- Also explored in
76b5a576eb
- [x] Plan for how can we use this with a fallback
- See plan discussed above in main area of the issue description
- Discussed in an [internal
meeting](https://docs.google.com/document/d/1MnuvPkaCkT_wviSQZ6YKBjiWciCBFMd-7hxyCO-OCbQ/edit#bookmark=id.dz5x6ef4mxz7)
- [x] Plan for how we can rely on this new table without a fallback
- Synapse version `N+1`: (this PR) Bump `SCHEMA_VERSION` to `87`. Add
new tables and background update to backfill all rows. Since this is a
new table, we don't have to add any `NOT VALID` constraints and validate
them when the background update completes. Read from new tables with a
fallback in cases where the rows aren't filled in yet.
- Synapse version `N+2`: Bump `SCHEMA_VERSION` to `88` and bump
`SCHEMA_COMPAT_VERSION` to `87` because we don't want people to
downgrade and miss writes while they are on an older version. Add a
foreground update to finish off the backfill so we can read from new
tables without the fallback. Application code can now rely on the new
tables being populated.
- Discussed in an [internal
meeting](https://docs.google.com/document/d/1MnuvPkaCkT_wviSQZ6YKBjiWciCBFMd-7hxyCO-OCbQ/edit#bookmark=id.hh7shg4cxdhj)
### Dev notes
```
SYNAPSE_TEST_LOG_LEVEL=INFO poetry run trial tests.storage.test_events.SlidingSyncPrePopulatedTablesTestCase
SYNAPSE_POSTGRES=1 SYNAPSE_POSTGRES_USER=postgres SYNAPSE_TEST_LOG_LEVEL=INFO poetry run trial tests.storage.test_events.SlidingSyncPrePopulatedTablesTestCase
```
```
SYNAPSE_TEST_LOG_LEVEL=INFO poetry run trial tests.handlers.test_sliding_sync.FilterRoomsTestCase
```
Reference:
- [Development docs on background updates and worked examples of gradual
migrations
](1dfa59b238/docs/development/database_schema.md (background-updates))
- A real example of a gradual migration:
https://github.com/matrix-org/synapse/pull/15649#discussion_r1213779514
- Adding `rooms.creator` field that needed a background update to
backfill data, https://github.com/matrix-org/synapse/pull/10697
- Adding `rooms.room_version` that needed a background update to
backfill data, https://github.com/matrix-org/synapse/pull/6729
- Adding `room_stats_state.room_type` that needed a background update to
backfill data, https://github.com/matrix-org/synapse/pull/13031
- Tables from MSC2716: `insertion_events`, `insertion_event_edges`,
`insertion_event_extremities`, `batch_events`
- `current_state_events` updated in
`synapse/storage/databases/main/events.py`
---
```
persist_event (adds to queue)
_persist_event_batch
_persist_events_and_state_updates (assigns `stream_ordering` to events)
_persist_events_txn
_store_event_txn
_update_metadata_tables_txn
_store_room_members_txn
_update_current_state_txn
```
---
> Concatenated Indexes [...] (also known as multi-column, composite or
combined index)
>
> [...] key consists of multiple columns.
>
> We can take advantage of the fact that the first index column is
always usable for searching
>
> *--
https://use-the-index-luke.com/sql/where-clause/the-equals-operator/concatenated-keys*
---
Dealing with `portdb` (`synapse/_scripts/synapse_port_db.py`),
https://github.com/element-hq/synapse/pull/17512#discussion_r1725998219
---
<details>
<summary>SQL queries:</summary>
Both of these are equivalent and work in SQLite and Postgres
Options 1:
```sql
WITH data_table (room_id, user_id, membership_event_id, membership, event_stream_ordering, {", ".join(insert_keys)}) AS (
VALUES (
?, ?, ?,
(SELECT membership FROM room_memberships WHERE event_id = ?),
(SELECT stream_ordering FROM events WHERE event_id = ?),
{", ".join("?" for _ in insert_values)}
)
)
INSERT INTO sliding_sync_non_join_memberships
(room_id, user_id, membership_event_id, membership, event_stream_ordering, {", ".join(insert_keys)})
SELECT * FROM data_table
WHERE membership != ?
ON CONFLICT (room_id, user_id)
DO UPDATE SET
membership_event_id = EXCLUDED.membership_event_id,
membership = EXCLUDED.membership,
event_stream_ordering = EXCLUDED.event_stream_ordering,
{", ".join(f"{key} = EXCLUDED.{key}" for key in insert_keys)}
```
Option 2:
```sql
INSERT INTO sliding_sync_non_join_memberships
(room_id, user_id, membership_event_id, membership, event_stream_ordering, {", ".join(insert_keys)})
SELECT
column1 as room_id,
column2 as user_id,
column3 as membership_event_id,
column4 as membership,
column5 as event_stream_ordering,
{", ".join("column" + str(i) for i in range(6, 6 + len(insert_keys)))}
FROM (
VALUES (
?, ?, ?,
(SELECT membership FROM room_memberships WHERE event_id = ?),
(SELECT stream_ordering FROM events WHERE event_id = ?),
{", ".join("?" for _ in insert_values)}
)
) as v
WHERE membership != ?
ON CONFLICT (room_id, user_id)
DO UPDATE SET
membership_event_id = EXCLUDED.membership_event_id,
membership = EXCLUDED.membership,
event_stream_ordering = EXCLUDED.event_stream_ordering,
{", ".join(f"{key} = EXCLUDED.{key}" for key in insert_keys)}
```
If we don't need the `membership` condition, we could use:
```sql
INSERT INTO sliding_sync_non_join_memberships
(room_id, membership_event_id, user_id, membership, event_stream_ordering, {", ".join(insert_keys)})
VALUES (
?, ?, ?,
(SELECT membership FROM room_memberships WHERE event_id = ?),
(SELECT stream_ordering FROM events WHERE event_id = ?),
{", ".join("?" for _ in insert_values)}
)
ON CONFLICT (room_id, user_id)
DO UPDATE SET
membership_event_id = EXCLUDED.membership_event_id,
membership = EXCLUDED.membership,
event_stream_ordering = EXCLUDED.event_stream_ordering,
{", ".join(f"{key} = EXCLUDED.{key}" for key in insert_keys)}
```
</details>
---------
Co-authored-by: Erik Johnston <erik@matrix.org>
Regressed in #17543.
The `max_download_size` config is not available on workers that don't
load the media repo.
Besides, we should honour the max_size param that was passed into the
function.
When returning receipts in sliding sync for initial rooms we should
always include our own receipts in the room (even if they don't match
any timeline events).
Reviewable commit-by-commit.
---------
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
Move calculating of the room lists out of the core handler. This should
make it easier to switch things around to start using the tables in
#17512.
This is just moving code between files and methods.
Reviewable commit-by-commit
`hash_password` now actually accepts a password from stdin. `getpass`
reads from the TTY, and does NOT accept stdin in any way.
The manpage has been updated to reflect that.
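A minimal sketch of the distinction being described (this is not the actual `hash_password` script): prompt via `getpass` only when stdin is a terminal, otherwise read the piped-in password from stdin.

```python
import getpass
import sys

def read_password() -> str:
    # getpass.getpass() always prompts on the controlling TTY, so piped input
    # such as `echo secret | hash_password` is ignored unless we explicitly
    # fall back to reading stdin when it is not a terminal.
    if sys.stdin.isatty():
        return getpass.getpass("Password: ")
    return sys.stdin.readline().rstrip("\n")
```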
That file was getting long.
The changes are non functional, and simply split things up into:
- the main class
- the connection store
- the extensions
- the types
This supersedes #17503; given the per-connection state is being heavily
rewritten, it felt easier to recreate the PR on top of that work.
This correctly handles the case of timeline limits going up and down.
This does not handle changes in `required_state`, but that can be done
as a separate PR.
Based on #17575.
---------
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
This is some prep work ahead of correctly tracking receipts, where we
will also want to track the room status in terms of last receipt we had
sent down.
Essentially, we add two classes `PerConnectionState` and a mutable
version, and then operate on those.
---------
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
Results in:
```
AssertionError: null
File "synapse/http/server.py", line 332, in _async_render_wrapper
callback_return = await self._async_render(request)
File "synapse/http/server.py", line 544, in _async_render
callback_return = await raw_callback_return
File "synapse/federation/transport/server/_base.py", line 369, in new_func
response = await func(
File "synapse/federation/transport/server/federation.py", line 826, in on_GET
await self.media_repo.get_local_media(
File "synapse/media/media_repository.py", line 473, in get_local_media
await respond_with_multipart_responder(
File "synapse/media/_base.py", line 353, in respond_with_multipart_responder
assert content_length is not None
```
So that clients can check for support. Note that if the feature is only
enabled for some users, the `/versions` request must be authenticated to
pick up that SSS is enabled for the user
`old_verify_keys` isn't marked as required in
https://spec.matrix.org/v1.11/server-server-api/#get_matrixkeyv2server
and there's no functional difference between an empty object and
omitting the object, so I don't think there's any reason synapse should
explode when the field is omitted.
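A small sketch of the tolerant handling described here (the helper is hypothetical; only the `old_verify_keys` field name comes from the spec): treat an omitted field the same as an empty object.

```python
def get_old_verify_keys(server_keys: dict) -> dict:
    # An omitted old_verify_keys field is functionally identical to {} per the
    # description above, so default to an empty dict instead of raising.
    return server_keys.get("old_verify_keys") or {}
```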
Follow on from #17558
Basically, we want to reduce the number of threads we want to use at a
time, i.e. reduce the number of threads that are paused/blocked. We do
this by returning from the thread when the consumer pauses the producer,
rather than pausing in the thread.
---------
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
Previously, we just had very basic partial room exclusion based on
whether we were lazy-loading room members. Now with this PR, we added
`must_await_full_state(...)` with rules to check whether we're only
requesting `required_state` that is completely satisfied even with
partial state.
Partially-stated rooms should have all state events except for remote
membership events so if we require a remote membership event anywhere,
then we need to return `True`.
We do this by reusing the code from sync v2.
Reviewable commit-by-commit. The function `get_user_ids_changed` has
been rewritten entirely, so I would recommend not looking at the diff.
Spawning from looking at a couple traces and wanting a little more info.
Follow-up to github.com/element-hq/synapse/pull/17501
The changes in this PR allow you to find slow Sliding Sync traces ignoring the
`wait_for_events` time. In Jaeger, you can now filter for the `current_sync_for_user`
operation with `RESULT.result=true` indicating that it actually returned non-empty results.
If you want to find traces for your own user, you can use
`RESULT.result=true ARG.sync_config.user="@madlittlemods:matrix.org"`
This triggers the client to start a new sliding sync connection. If we
don't do this and the client asks for the full range of rooms, we end up
sending down all rooms and their state from scratch (which can be very
slow)
This causes things like
https://github.com/element-hq/element-x-ios/issues/3115 after we restart
the server
---------
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
Update `filters.is_encrypted` and `filters.types`/`filters.not_types` to
be robust when dealing with remote invite rooms in Sliding Sync.
Part of
[MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575):
Sliding Sync
Follow-up to https://github.com/element-hq/synapse/pull/17434
We now take into account current state, fallback to stripped state
for invite/knock rooms, then historical state. If we can't determine
the info needed to filter a room (either from state or stripped state),
it is filtered out.
Rather than always including all rooms in range.
Also adds a pre-filter to rooms that checks the stream change cache to
see if anything might have happened.
Based on #17447
---------
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
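A hedged sketch of the pre-filter idea (the cache method shown is an assumption modelled on a typical stream change cache, not necessarily Synapse's exact API): drop rooms where nothing can have changed since the client's token before doing any heavier work.

```python
from typing import Iterable, List

def prefilter_rooms(change_cache, room_ids: Iterable[str], from_stream_pos: int) -> List[str]:
    # Keep only rooms where something *might* have happened since the token;
    # rooms the cache knows are unchanged are skipped entirely.
    return [
        room_id
        for room_id in room_ids
        if change_cache.has_entity_changed(room_id, from_stream_pos)
    ]
```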
The basic idea is that we introduce a new token for a sliding sync
connection, which stores the mapping of room to room "status" (i.e. have
we sent the room down?). This token allows us to handle duplicate
requests properly. In future it can be used to store more
"per-connection" information safely.
In future this should be migrated into the DB, so its important that we
try to reduce the number of syncs where we need to update the
per-connection information. In this PoC this only happens when we: a)
send down a set of rooms for the first time, or b) we have previously
sent down a room and there are updates but we are not sending the room
down the sync (due to not falling in a list range)
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
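As a rough sketch of the per-connection bookkeeping being described (the status names and data shapes are illustrative assumptions, not the exact Synapse types):

```python
import enum
from dataclasses import dataclass, field
from typing import Dict

class RoomSendStatus(enum.Enum):
    # Illustrative statuses: never sent down, fully sent and live, or sent
    # previously but with updates we have since skipped.
    NEVER = "never"
    LIVE = "live"
    PREVIOUSLY = "previously"

@dataclass
class ConnectionRoomState:
    # Keyed by a per-connection token; maps room_id -> what we have told this
    # sliding sync connection about the room, so duplicate requests can be
    # answered consistently.
    rooms: Dict[str, RoomSendStatus] = field(default_factory=dict)
```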
Backfill events have a negative stream ordering, and so it's not useful
to compare them with other (positive) stream orderings.
Plus, the Rust SDK currently assumes `bump_stamp` is positive.
Introduced in: #17215
This caused us a minor bit of grief as the volume of logs produced was
much higher than normal
---------
Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
This is to address an issue in which `m.presence` results on initial
sync are not returning entries of users who are currently offline.
The original behaviour was from
https://github.com/element-hq/synapse/issues/1535
This change is useful for applications that use the
presence system for tracking user profile information/updates (e.g.
https://github.com/element-hq/synapse/pull/16992 or for profile status
messages).
This is gated behind a new configuration option to avoid performance
impact for applications that don't need this, as a pragmatic solution
for now.
Bumps [pyopenssl](https://github.com/pyca/pyopenssl) from 24.1.0 to
24.2.1.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pyca/pyopenssl/blob/main/CHANGELOG.rst">pyopenssl's
changelog</a>.</em></p>
<blockquote>
<h2>24.2.1 (2024-07-20)</h2>
<p>Backward-incompatible changes:</p>
<p>Deprecations:</p>
<p>Changes:</p>
<ul>
<li>Fixed changelog to remove sphinx specific restructured text
strings.</li>
</ul>
<h2>24.2.0 (2024-07-20)</h2>
<p>Backward-incompatible changes:</p>
<p>Deprecations:</p>
<ul>
<li>Deprecated <code>OpenSSL.crypto.X509Req</code>,
<code>OpenSSL.crypto.load_certificate_request</code>,
<code>OpenSSL.crypto.dump_certificate_request</code>. Instead,
<code>cryptography.x509.CertificateSigningRequest</code>,
<code>cryptography.x509.CertificateSigningRequestBuilder</code>,
<code>cryptography.x509.load_der_x509_csr</code>, or
<code>cryptography.x509.load_pem_x509_csr</code> should be used.</li>
</ul>
<p>Changes:</p>
<ul>
<li>Added type hints for the <code>SSL</code> module.
<code>[#1308](https://github.com/pyca/pyopenssl/issues/1308)
<https://github.com/pyca/pyopenssl/pull/1308></code>_.</li>
<li>Changed <code>OpenSSL.crypto.PKey.from_cryptography_key</code> to
accept public and private EC, ED25519, ED448 keys.
<code>[#1310](https://github.com/pyca/pyopenssl/issues/1310)
<https://github.com/pyca/pyopenssl/pull/1310></code>_.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8dd9457865"><code>8dd9457</code></a>
24.2.1 (<a
href="https://redirect.github.com/pyca/pyopenssl/issues/1320">#1320</a>)</li>
<li><a
href="19f093e0c3"><code>19f093e</code></a>
make changelog vanilla rst (<a
href="https://redirect.github.com/pyca/pyopenssl/issues/1319">#1319</a>)</li>
<li><a
href="e265b2867b"><code>e265b28</code></a>
Prepare for 24.2.0 release (<a
href="https://redirect.github.com/pyca/pyopenssl/issues/1318">#1318</a>)</li>
<li><a
href="6943ee524e"><code>6943ee5</code></a>
Deprecate CSR support in pyOpenSSL (<a
href="https://redirect.github.com/pyca/pyopenssl/issues/1316">#1316</a>)</li>
<li><a
href="01b9b56373"><code>01b9b56</code></a>
Add more type definitions for <code>SSL</code> module, check with mypy
(<a
href="https://redirect.github.com/pyca/pyopenssl/issues/1313">#1313</a>)</li>
<li><a
href="cdcb48baf7"><code>cdcb48b</code></a>
Prune redundant <code>:rtype:</code> from SSL module (<a
href="https://redirect.github.com/pyca/pyopenssl/issues/1315">#1315</a>)</li>
<li><a
href="b86914d37f"><code>b86914d</code></a>
Fix <code>ruff</code> invocation (<a
href="https://redirect.github.com/pyca/pyopenssl/issues/1314">#1314</a>)</li>
<li><a
href="caa1ab3ac5"><code>caa1ab3</code></a>
Update changelog for PR <a
href="https://redirect.github.com/pyca/pyopenssl/issues/1308">#1308</a>
and <a
href="https://redirect.github.com/pyca/pyopenssl/issues/1310">#1310</a>
(<a
href="https://redirect.github.com/pyca/pyopenssl/issues/1311">#1311</a>)</li>
<li><a
href="9a2105501f"><code>9a21055</code></a>
Allow loading EC, ED25519, ED448 public keys from cryptography (<a
href="https://redirect.github.com/pyca/pyopenssl/issues/1310">#1310</a>)</li>
<li><a
href="9eaa107362"><code>9eaa107</code></a>
Add type annotations for the <code>SSL</code> module (<a
href="https://redirect.github.com/pyca/pyopenssl/issues/1308">#1308</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/pyca/pyopenssl/compare/24.1.0...24.2.1">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
As it gets used in sliding sync.
We basically invalidate it in all the same places as
`get_rooms_for_user`. Most of the changes are due to needing the
arguments you pass in to be hashable (which lists aren't)
Reverts element-hq/synapse#17448
We hit a bug when deploying with synctl:
```
Traceback (most recent call last):
File "/home/synapse/env-python311/bin/synctl", line 33, in <module>
sys.exit(load_entry_point('matrix-synapse', 'console_scripts', 'synctl')())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/synapse/env-python311/bin/synctl", line 25, in importlib_load_entry_point
return next(matches).load()
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/metadata/__init__.py", line 202, in load
module = import_module(match.group('module'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/synapse/src/synapse/_scripts/synctl.py", line 37, in <module>
from synapse.config import find_config_files
File "/home/synapse/src/synapse/config/__init__.py", line 22, in <module>
from ._base import ConfigError, find_config_files
File "/home/synapse/src/synapse/config/_base.py", line 49, in <module>
import pkg_resources
File "/home/synapse/env-python311/lib/python3.11/site-packages/pkg_resources/__init__.py", line 3282, in <module>
@_call_aside
^^^^^^^^^^^
File "/home/synapse/env-python311/lib/python3.11/site-packages/pkg_resources/__init__.py", line 3266, in _call_aside
f(*args, **kwargs)
File "/home/synapse/env-python311/lib/python3.11/site-packages/pkg_resources/__init__.py", line 3295, in _initialize_master_working_set
working_set = _declare_state('object', 'working_set', WorkingSet._build_master())
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/synapse/env-python311/lib/python3.11/site-packages/pkg_resources/__init__.py", line 589, in _build_master
ws.require(__requires__)
File "/home/synapse/env-python311/lib/python3.11/site-packages/pkg_resources/__init__.py", line 926, in require
needed = self.resolve(parse_requirements(requirements))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/synapse/env-python311/lib/python3.11/site-packages/pkg_resources/__init__.py", line 787, in resolve
dist = self._resolve_dist(
^^^^^^^^^^^^^^^^^^^
File "/home/synapse/env-python311/lib/python3.11/site-packages/pkg_resources/__init__.py", line 816, in _resolve_dist
env = Environment(self.entries)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/synapse/env-python311/lib/python3.11/site-packages/pkg_resources/__init__.py", line 1014, in __init__
self.scan(search_path)
File "/home/synapse/env-python311/lib/python3.11/site-packages/pkg_resources/__init__.py", line 1046, in scan
for dist in find_distributions(item):
File "/home/synapse/env-python311/lib/python3.11/site-packages/pkg_resources/__init__.py", line 2091, in find_on_path
yield from factory(fullpath)
^^^^^^^^^^^^^^^^^
File "/home/synapse/env-python311/lib/python3.11/site-packages/pkg_resources/__init__.py", line 2183, in resolve_egg_link
return next(dist_groups, ())
^^^^^^^^^^^^^^^^^^^^^
File "/home/synapse/env-python311/lib/python3.11/site-packages/pkg_resources/__init__.py", line 2179, in <genexpr>
resolved_paths = (
^
File "/home/synapse/env-python311/lib/python3.11/site-packages/pkg_resources/__init__.py", line 2167, in non_empty_lines
for line in _read_utf8_with_fallback(path).splitlines():
^^^^^^^^^^^^^^^^^^^^^^^^
NameError: name '_read_utf8_with_fallback' is not defined
```
Prior to this PR, remote downloads which did not provide a
`content-length` were decremented from the remote download ratelimiter
at the max allowable size, leading to excessive ratelimiting - see
https://github.com/element-hq/synapse/issues/17394.
This PR adds a linearizer to limit concurrent remote downloads to 6 per
IP address, and decrements remote downloads without a `content-length`
from the ratelimiter *after* the download is complete and the response
length is known.
Also adds logic to ensure that responses with a known length respect the
`max_download_size`.
This removes the `enable_media_repo` attribute on the server config in
favour of always using the `can_load_media_repo` in the media config.
This should avoid issues like in #17420 in the future
Add room subscriptions to Sliding Sync `/sync`
Based on
[MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575):
Sliding Sync
Currently, you can only subscribe to rooms you have had *any* membership
in before.
In the future, we will allow `world_readable` rooms to be subscribed to
without joining.
We can only fetch room types for rooms the server is in, so we need to
only filter rooms that we're joined to.
Also includes a perf fix to bulk fetch room types.
This looks like a copy/paste error: the function doesn't reject
anything, but instead allows the action count to go through regardless.
The remainder of the function's documentation appears correct.
Added RHEL/Rocky install instructions (PyPI). Instructions cover
versions 8 and 9, which are the only supported ones - except for RHEL7,
which is now in its extended life cycle support phase.
A large part of the guide is for installing Python 3.11 or 3.12. RHEL8
ships with Python 3.6 and RHEL9 ships with 3.9. Newer Python versions
can be installed easily as they don't interfere with OS software that
still relies on the default Python version.
I was first planning to add a prerequisites part to the prerequisites
section and then the install instructions at the top of the page, but that
section is for pre-built packages so it just didn't sound right. So I
just dumped everything into the PyPI section of the page. Suggestions
for changes are welcome.
I also didn't combine these with the Fedora section. I haven't tested those
packages on RHEL, and Fedora ships with Python 3.12 out of the box.
We don't necessarily have `instance_name` for old events (from before we
supported multiple event persisters). We treat those as if the
`instance_name` was "master".
---------
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
`bump_stamp` corresponds to the `stream_ordering` of the latest `DEFAULT_BUMP_EVENT_TYPES` in the room. This helps clients sort more readily without them needing to pull in a bunch of the timeline to determine the last activity. `bump_event_types` is a thing because for example, we don't want display name changes to mark the room as unread and bump it to the top. For encrypted rooms, we just have to consider any activity as a bump because we can't see the content and the client has to figure it out for themselves.
Outside of Synapse, `bump_stamp` is just a free-form counter so other implementations could use `received_ts` or `origin_server_ts` (see the [*Security considerations* section in MSC3575 about the potential pitfalls of using `origin_server_ts`](https://github.com/matrix-org/matrix-spec-proposals/blob/kegan/sync-v3/proposals/3575-sync.md#security-considerations)). It doesn't have any guarantee about always going up. In the Synapse case, it could go down if an event was redacted/removed (or purged in cases of retention policies).
In the future, we could add `bump_event_types` as [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) mentions if people need to customize the event types.
---
In the Sliding Sync proxy, a similar [`timestamp` field was added](https://github.com/matrix-org/sliding-sync/pull/247) for the same purpose but the name is not obvious what it pertains to or what it's for.
The `timestamp` field was also added to Ruma in https://github.com/ruma/ruma/pull/1622
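A simplified sketch of how such a `bump_stamp` could be derived (the listed event types and the helper are illustrative; only `DEFAULT_BUMP_EVENT_TYPES` and the use of `stream_ordering` come from the description above):

```python
from typing import Iterable, Optional

# Illustrative contents; the real constant lives in Synapse and may differ.
DEFAULT_BUMP_EVENT_TYPES = {"m.room.message", "m.room.encrypted"}

def bump_stamp(events: Iterable[dict]) -> Optional[int]:
    # Take the stream_ordering of the latest "activity" event so clients can
    # sort rooms without pulling in the timeline. Returns None if the room
    # has no qualifying events.
    bumps = [
        ev["stream_ordering"]
        for ev in events
        if ev["type"] in DEFAULT_BUMP_EVENT_TYPES
    ]
    return max(bumps) if bumps else None
```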
Follows on from @H-Shay's great work at
https://github.com/matrix-org/synapse/pull/15344 and MSC4026.
Also enables its use for MSC3881, mainly as an easy but concrete example
of how to use it.
This can help ensure that the rooms are eventually purged if the other
local users also forget them. Synapse already clears some of the room
information as part of the `_background_remove_left_rooms` background
task, but this doesn't catch `events`, `event_json`, etc.
Fixes https://github.com/element-hq/synapse/issues/17274, hopefully.
Basically, old versions of Synapse could advance streams without
persisting anything in the DB (fixed in #17229). On restart those
updates would get lost, and so the position of the stream would revert
to an older position. If this happened across an upgrade to a later
Synapse version which included #17215, then sync could get blocked
indefinitely (until the stream advanced to the position in the token).
We fix this by bounding the stream positions we'll wait for to the
maximum position of the underlying stream ID generator.
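The fix can be pictured as a simple clamp (a sketch under the assumptions above, not the actual code): never wait for a stream position beyond what the ID generator has actually handed out.

```python
def position_to_wait_for(requested_pos: int, id_gen_current_max: int) -> int:
    # If an old Synapse advanced the stream without persisting anything, the
    # requested position may be ahead of anything that can ever arrive; clamp
    # it so sync cannot block indefinitely.
    return min(requested_pos, id_gen_current_max)
```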
If we leave the `.so` in place it causes the tests to fail, as it gets
picked up (instead of the newly built .so) and so fails with mismatched
GLIBC errors.
Sid now defaults to python3.12, and our pinned version of cffi (1.5.1)
does not have wheels for 3.12. This installing cffi to fail as we did
not have the correct libs installed to build from source.
Fix a bug where we don't get new to-device messages from a remote server if
it resent a message we've already persisted and recorded in the DB twice.
`device_federation_inbox` table doesn't have a unique index, and so we
can race and store an entry in there twice. If we do so then
`simple_select_one_txn` will throw an error due to the query returning
more than one row. We should add a unique index, but it doesn't really
matter, so let's just handle the case of multiple rows correctly for now.
Fixes up #17333, where we failed to actually send less data (the
`DISTINCT` didn't work due to `stream_id` being different).
We fix this by making it so that every device list outbound poke for a
given user ID has the same stream ID. We can't change the query to only
return e.g. max stream ID as the receivers look up the destinations to
send to by doing `SELECT WHERE stream_id = ?`
This is #17291 (which got reverted), with some added fixups, and change
so that tests actually pick up the error.
The problem was that we were not calculating any new chain IDs due to a
missing `not` in a condition.
Reduce the replication traffic of device lists, by not sending every
destination that needs to be sent the device list update over
replication. Instead, we send a "hosts to send to have been calculated"
notification over replication, and the federation senders then read the
destinations from the DB.
For non federation senders this should heavily reduce the impact of a
user in many large rooms changing a device.
The parse_integer function was previously made to reject negative values by
default in https://github.com/element-hq/synapse/pull/16920, but the
documentation stated otherwise. This fixes the documentation and also:
- Removes explicit negative=False parameters from call sites.
- Brings the negative default of parse_integer_from_args in alignment with
parse_integer.
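A hedged sketch of the documented behaviour (simplified; only the `negative` keyword and the reject-by-default behaviour come from the description, the rest of the signature is illustrative):

```python
from typing import Mapping, Optional

def parse_integer(args: Mapping[str, str], name: str,
                  default: Optional[int] = None,
                  negative: bool = False) -> Optional[int]:
    raw = args.get(name)
    if raw is None:
        return default
    value = int(raw)
    # Negative values are rejected by default; callers opt in with negative=True.
    if value < 0 and not negative:
        raise ValueError(f"Query parameter {name!r} must not be negative")
    return value
```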
This reverts commit bdf82efea5 (#17291)
This seems to have stopped persisting auth chains for new events, and so
is causing state res to fall back to the slow methods
We calculate the auth chain links outside of the main persist event
transaction to ensure that we do not block other event sending during
the calculation.
This changes the release artefacts workflow to use `macos-12` runners
instead of `macos-11`, as the latter will be fully deprecated in a few
days.
This also updates `cibuildwheel` to a newer version, as it would not
'repair' the macOS wheels correctly
The difference is that now instead of outputting a macOS 11+ compatible
wheel, we output a macOS 12+ compatible one. This is fine, as macOS 11
is considered EOL since September 2023.
We can also expect that macOS 12 will be considered EOL in September
2024, as Apple usually supports the last 3 macOS version, and macOS 15
is scheduled to be released around that time.
Sort is no longer configurable and we always sort rooms by the `stream_ordering` of the last event in the room or the point where the user can see up to in cases of leave/ban/invite/knock.
Add `event.internal_metadata.instance_name` (the worker instance that persisted the event) to go alongside the existing `event.internal_metadata.stream_ordering`.
`instance_name` is useful to properly compare and query for events with a token since you need to compare both the `stream_ordering` and `instance_name` against the vector clock/`instance_map` in the `RoomStreamToken`.
This is pre-requisite work and may be used in https://github.com/element-hq/synapse/pull/17293
Adding `event.internal_metadata.instance_name` was first mentioned in the initial Sliding Sync PR while pairing with @erikjohnston, see 09609cb0db (diff-5cd773fb307aa754bd3948871ba118b1ef0303f4d72d42a2d21e38242bf4e096R405-R410)
PR where this was introduced: https://github.com/matrix-org/synapse/pull/14817
### What does this affect?
`get_last_event_in_room_before_stream_ordering(...)` is used in Sync v2 in a lot of different state calculations.
`get_last_event_in_room_before_stream_ordering(...)` is also used in `/rooms/{roomId}/members`
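A rough sketch of why `instance_name` matters for the comparison (the comparison rule shown is an assumption about how the token's vector clock is consulted, not the exact Synapse implementation):

```python
from dataclasses import dataclass
from typing import Mapping

@dataclass(frozen=True)
class Position:
    instance_name: str
    stream_ordering: int

def persisted_at_or_before(pos: Position, token_instance_map: Mapping[str, int],
                           token_default_stream: int) -> bool:
    # Compare against the stream position the token records for the instance
    # that persisted the event, falling back to the token's default position
    # for instances the vector clock does not mention.
    bound = token_instance_map.get(pos.instance_name, token_default_stream)
    return pos.stream_ordering <= bound
```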
https://github.com/matrix-org/matrix-spec-proposals/pull/4151
This is intended to be enabled by default for immediate use. When FCP is
complete, the unstable endpoint will be dropped and stable endpoint
supported instead - no backwards compatibility is expected for the
unstable endpoint.
Spawning from https://github.com/element-hq/synapse/pull/17187#discussion_r1619492779 around wanting to put `SlidingSyncBody` (parse the request in the rest layer), `SlidingSyncConfig` (from the rest layer, pass to the handler), `SlidingSyncResponse` (pass the response from the handler back to the rest layer to respond) somewhere that doesn't contaminate the imports and cause circular import issues.
- Moved Pydantic parsing models to `synapse/types/rest`
- Moved handler types to `synapse/types/handlers`
If the stream ID in the unconverted table is ahead of the device lists
ID gen, then it can break all /sync requests that had an ID from ahead
of the table.
The fix is to make sure we add the unconverted table to the list of
tables we check at start up.
Broke in https://github.com/element-hq/synapse/pull/17229
Fixes: #17013
Add logging for whether room keys are replaced
This is motivated by the Crypto team who need to diagnose crypto issues.
The existing opentracing logging is not enough because it is not enabled
for all users.
Based on [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575): Sliding Sync
This iteration only focuses on returning the list of room IDs in the sliding window API (without sorting/filtering).
Rooms appear in the Sliding sync response based on:
- `invite`, `join`, `knock`, `ban` membership events
- Kicks (`leave` membership events where `sender` is different from the `user_id`/`state_key`)
- `newly_left` (rooms that were left during the given token range, > `from_token` and <= `to_token`)
- In order for bans/kicks to not show up, you need to `/forget` those rooms. This doesn't modify the event itself though and only adds the `forgotten` flag to `room_memberships` in Synapse. There isn't a way to tell when a room was forgotten at the moment so we can't factor it into the from/to range.
### Example request
`POST http://localhost:8008/_matrix/client/unstable/org.matrix.msc3575/sync`
```json
{
"lists": {
"foo-list": {
"ranges": [ [0, 99] ],
"sort": [ "by_notification_level", "by_recency", "by_name" ],
"required_state": [
["m.room.join_rules", ""],
["m.room.history_visibility", ""],
["m.space.child", "*"]
],
"timeline_limit": 100
}
}
}
```
Response:
```json
{
"next_pos": "s58_224_0_13_10_1_1_16_0_1",
"lists": {
"foo-list": {
"count": 1,
"ops": [
{
"op": "SYNC",
"range": [0, 99],
"room_ids": [
"!MmgikIyFzsuvtnbvVG:my.synapse.linux.server"
]
}
]
}
},
"rooms": {},
"extensions": {}
}
```
Use fully-qualified `PersistedEventPosition` (`instance_name` and `stream_ordering`) when returning `RoomsForUser` to facilitate proper comparisons and `RoomStreamToken` generation.
Spawning from https://github.com/element-hq/synapse/pull/17187 where we want to utilize this change
Otherwise things will get confused.
An alternative would be to make sure that for a lagging stream we don't
return anything (and make sure the returned next_batch token doesn't go
backwards). But that is a faff.
We try and deduplicate in two places: 1) really early on, and 2) just
before we persist the event. The first case was broken due to it
occurring before the profile information was added, and so it thought the
event contents were different.
The second case did catch it and handle it correctly, however doing so
creates a redundant state group leading to bloat.
Fixes #3791
Fixes up #17239
We need to keep the spam check within the `try/except` block. This also
makes it so that we don't enter the top span twice, and ensures that we
get the right thumbnail length.
There is a problem with `StreamIdGenerator` where it can go backwards
over restarts when a stream ID is requested but then not inserted into
the DB. This is problematic if we want to land #17215, and is generally
a potential cause for all sorts of nastiness.
Instead of trying to fix `StreamIdGenerator`, we may as well move to
`MultiWriterIdGenerator` that does not suffer from this problem (the
latest positions are stored in `stream_positions` table). This involves
adding SQLite support to the class.
This only changes id generators that were already using
`MultiWriterIdGenerator` under postgres, a separate PR will move the
rest of the uses of `StreamIdGenerator` over.
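A rough sketch (assumed schema and names, not the real `MultiWriterIdGenerator`) of how persisting the latest position per writer avoids the stream going backwards: on restart we resume from the recorded position in `stream_positions`, rather than from the maximum ID actually present in the data table, which may be lower if an allocated ID was never inserted.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE stream_positions (instance_name TEXT PRIMARY KEY, stream_id INTEGER)"
)


def get_next_stream_id(instance_name: str) -> int:
    # Read the last position handed out for this writer (not the last row
    # that happened to be written), bump it, and persist it immediately.
    row = conn.execute(
        "SELECT stream_id FROM stream_positions WHERE instance_name = ?",
        (instance_name,),
    ).fetchone()
    next_id = (row[0] if row else 0) + 1
    conn.execute(
        "INSERT INTO stream_positions (instance_name, stream_id) VALUES (?, ?) "
        "ON CONFLICT(instance_name) DO UPDATE SET stream_id = excluded.stream_id",
        (instance_name, next_id),
    )
    return next_id


print(get_next_stream_id("main"))  # 1
print(get_next_stream_id("main"))  # 2, even if ID 1 was never used in the end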
Currently sending a to-device message to a user ID with a dodgy
destination is accepted, but then ends up spamming the logs when we try
and send to the destination.
An alternative would be to reject the request, but I'm slightly nervous
that could break things.
When a module rejects a piece of media we end up trying to close the
same logging context twice.
Instead of fixing the existing code we refactor to use an async context
manager, which is easier to write correctly.
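A minimal sketch (hypothetical helper names) of the async-context-manager shape the refactor moves to: the cleanup runs exactly once, whether the body succeeds or the module rejects the media, which avoids closing the same logging context twice.

```python
import contextlib


@contextlib.asynccontextmanager
async def media_storage_context(name: str):
    ctx = object()  # stand-in for entering a logging context
    print(f"enter {name}")
    try:
        yield ctx
    finally:
        # Runs exactly once, on both the success and the failure path.
        print(f"exit {name}")


async def store_media(media: bytes, rejected: bool) -> None:
    async with media_storage_context("store_media"):
        if rejected:
            # Module rejected the media: the context is still exited once.
            raise PermissionError("media rejected by module")
        # ... write the media out ...
```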
The log format is the same as the request log format, except:
- fields that are specific to HTTP requests have been removed
- the task's params are included at the end of the log line.
These log lines are emitted:
- when the task function finishes — both completion and failure (and I
suppose it is possible for a task to become schedulable again?)
- every 5 minutes whilst it is running
Closes #17217.
---------
Signed-off-by: Olivier 'reivilibre' <oliverw@matrix.org>
This PR ports the logic from the
[synapse_auto_accept_invite](https://github.com/matrix-org/synapse-auto-accept-invite)
module into synapse.
I went with the naive approach of injecting the "module" next to where
third party modules are currently loaded. If there is a better/preferred
way to handle this, I'm all ears. It wasn't obvious to me if there was a
better location to add this logic that would cleanly apply to all
incoming invite events.
Relies on https://github.com/element-hq/synapse/pull/17166 to fix linter
errors.
Re-introduces #17191, and includes #17197 and #17214
The basic idea is to stop calling `get_rooms_for_user` everywhere, and
instead use the table `device_lists_changes_in_room`.
Commits reviewable one-by-one.
Removed `request_key` from the `SyncConfig` (moved outside as its own function parameter) so it doesn't have to flow into `_generate_sync_entry_for_xxx` methods. This way we can separate the concerns of caching from generating the response and reuse the `_generate_sync_entry_for_xxx` functions as we see fit. Plus caching doesn't really have anything to do with the config of sync.
Split from https://github.com/element-hq/synapse/pull/17167
Spawning from https://github.com/element-hq/synapse/pull/17167#discussion_r1601497279
It's almost always more efficient to query the rooms that have device
list changes, rather than looking at the list of all users whose devices
have changed and then look for shared rooms.
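Sketch only (assumed column names) of the cheaper query shape: ask which rooms had device list changes in the window, rather than computing the joined rooms of every user whose devices changed.

```python
def get_rooms_with_device_list_changes(txn, from_id: int, to_id: int) -> list:
    # One indexed range scan over the changes table, instead of a
    # get_rooms_for_user call per changed user.
    txn.execute(
        "SELECT DISTINCT room_id FROM device_lists_changes_in_room"
        " WHERE stream_id > ? AND stream_id <= ?",
        (from_id, to_id),
    )
    return [room_id for (room_id,) in txn]
```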
Linter errors are showing up in #17147 that are unrelated to that PR.
The errors do not currently show up on develop.
This PR aims to resolve the linter errors separately from #17147.
This version change requires a migration to a new API. See
https://pyo3.rs/v0.21.2/migration#from-020-to-021
This will fix the annoying warnings added when using the recent rust
nightly:
> warning: non-local `impl` definition, they should be avoided as they
go against expectation
Bumps [jsonschema](https://github.com/python-jsonschema/jsonschema) from
4.21.1 to 4.22.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/python-jsonschema/jsonschema/releases">jsonschema's
releases</a>.</em></p>
<blockquote>
<h2>v4.22.0</h2>
<!-- raw HTML omitted -->
<h2>What's Changed</h2>
<ul>
<li>Improve <code>best_match</code> (and thereby error messages from
<code>jsonschema.validate</code>) in cases where there are multiple
<em>sibling</em> errors from applying <code>anyOf</code> /
<code>allOf</code> -- i.e. when multiple elements of a JSON array have
errors, we now do prefer showing errors from earlier elements rather
than simply showing an error for the full array (<a
href="https://redirect.github.com/python-jsonschema/jsonschema/issues/1250">#1250</a>).</li>
<li>(Micro-)optimize equality checks when comparing for JSON Schema
equality by first checking for object identity, as <code>==</code>
would.</li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/shinnar"><code>@shinnar</code></a> made
their first contribution in <a
href="https://redirect.github.com/python-jsonschema/jsonschema/pull/1224">python-jsonschema/jsonschema#1224</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/python-jsonschema/jsonschema/compare/v4.21.1...v4.22.0">https://github.com/python-jsonschema/jsonschema/compare/v4.21.1...v4.22.0</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/python-jsonschema/jsonschema/blob/main/CHANGELOG.rst">jsonschema's
changelog</a>.</em></p>
<blockquote>
<h1>v4.22.0</h1>
<ul>
<li>Improve <code>best_match</code> (and thereby error messages from
<code>jsonschema.validate</code>) in cases where there are multiple
<em>sibling</em> errors from applying <code>anyOf</code> /
<code>allOf</code> -- i.e. when multiple elements of a JSON array have
errors, we now do prefer showing errors from earlier elements rather
than simply showing an error for the full array (<a
href="https://redirect.github.com/python-jsonschema/jsonschema/issues/1250">#1250</a>).</li>
<li>(Micro-)optimize equality checks when comparing for JSON Schema
equality by first checking for object identity, as <code>==</code>
would.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9882dbeb1a"><code>9882dbe</code></a>
Add / ignore the new specification test suite property.</li>
<li><a
href="ebc90bb2df"><code>ebc90bb</code></a>
Merge commit '8fcfc3a674a7188a4fcc822b7a91efb3e0422a20'</li>
<li><a
href="8fcfc3a674"><code>8fcfc3a</code></a>
Squashed 'json/' changes from b41167c74..54f3784a8</li>
<li><a
href="30b7537944"><code>30b7537</code></a>
Pin pyenchant to pre from below until <a
href="https://redirect.github.com/pyenchant/pyenchant/issues/302">pyenchant/pyenchant#302</a>
is released.</li>
<li><a
href="c3729db732"><code>c3729db</code></a>
Enable doctests for the rest of the referencing page.</li>
<li><a
href="70a994ceab"><code>70a994c</code></a>
Remove a now-unneeded noqa since apparently this is fixed in new
ruff.</li>
<li><a
href="e6d0ef1cff"><code>e6d0ef1</code></a>
Fix a minor typo in the referencing example docs.</li>
<li><a
href="bceaf41a7d"><code>bceaf41</code></a>
Another placeholder benchmark for future optimization.</li>
<li><a
href="b20234e86c"><code>b20234e</code></a>
Consider errors from earlier indices (in instances) to be better
matches</li>
<li><a
href="41b49c68e5"><code>41b49c6</code></a>
Minor improvement to test failure message when a best match test
fails.</li>
<li>Additional commits viewable in <a
href="https://github.com/python-jsonschema/jsonschema/compare/v4.21.1...v4.22.0">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
When there have been lots of changes compared with the number of
entities, we can do a fast(er) path.
Locally I ran some benchmarking, and the comparison seems to give the
best determination of which method we use.
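An illustrative sketch (names and the crossover point are made up) of the kind of switch being benchmarked: when the number of changes since the token dwarfs the number of entities we were asked about, it is cheaper to probe each entity directly than to walk the whole change list.

```python
def entities_changed(entities: set, changed_since_token: set) -> set:
    if len(changed_since_token) > len(entities):
        # Fast(er) path: probe each requested entity against the changed set.
        return {e for e in entities if e in changed_since_token}
    # Otherwise walk the (smaller) change list and intersect.
    return {c for c in changed_since_token if c in entities}
```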
This change will apply the `email` and `picture` provided by OIDC to the
new user account when registering a new user via OIDC. If the user is
directed to the account details form, this change makes sure the email
and avatar have been selected on that form before applying them;
otherwise they are omitted. In particular, this change ensures the values
are carried through when Synapse has consent configured and the redirect
to the consent form(s) is followed.
I have tested everything manually, including:
- with/without consent configured
- allowing/not allowing the use of email/avatar (via
`sso_auth_account_details.html`)
- with/without automatic account detail population (by un/commenting the
`localpart_template` option in synapse config).
... when workers are unreachable, etc.
Fixes https://github.com/element-hq/synapse/issues/17117.
The general principle is just to make sure that we propagate any
exceptions to the JsonResource, so that we return an error code to the
sending server. That means that the sending server no longer considers
the message safely sent, so it will retry later.
In the issue, Erik mentions that an alternative solution would be to
persist the to-device messages into a table so that they can be retried.
This might be an improvement for performance, but even if we did that,
we still need this mechanism, since we might be unable to reach the
database. So, if we want to do that, it can be a later follow-up.
---------
Co-authored-by: Erik Johnston <erik@matrix.org>
Bumps [types-setuptools](https://github.com/python/typeshed) from
69.0.0.20240125 to 69.5.0.20240423.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/python/typeshed/commits">compare view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [idna](https://github.com/kjd/idna) from 3.6 to 3.7.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/kjd/idna/releases">idna's
releases</a>.</em></p>
<blockquote>
<h2>v3.7</h2>
<h2>What's Changed</h2>
<ul>
<li>Fix issue where specially crafted inputs to encode() could take
exceptionally long amount of time to process. [CVE-2024-3651]</li>
</ul>
<p>Thanks to Guido Vranken for reporting the issue.</p>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/kjd/idna/compare/v3.6...v3.7">https://github.com/kjd/idna/compare/v3.6...v3.7</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/kjd/idna/blob/master/HISTORY.rst">idna's
changelog</a>.</em></p>
<blockquote>
<p>3.7 (2024-04-11)
++++++++++++++++</p>
<ul>
<li>Fix issue where specially crafted inputs to encode() could
take exceptionally long amount of time to process. [CVE-2024-3651]</li>
</ul>
<p>Thanks to Guido Vranken for reporting the issue.</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1d365e17e1"><code>1d365e1</code></a>
Release v3.7</li>
<li><a
href="c1b3154939"><code>c1b3154</code></a>
Merge pull request <a
href="https://redirect.github.com/kjd/idna/issues/172">#172</a> from
kjd/optimize-contextj</li>
<li><a
href="0394ec76ff"><code>0394ec7</code></a>
Merge branch 'master' into optimize-contextj</li>
<li><a
href="cd58a23173"><code>cd58a23</code></a>
Merge pull request <a
href="https://redirect.github.com/kjd/idna/issues/152">#152</a> from
elliotwutingfeng/dev</li>
<li><a
href="5beb28b9dd"><code>5beb28b</code></a>
More efficient resolution of joiner contexts</li>
<li><a
href="1b121483ed"><code>1b12148</code></a>
Update ossf/scorecard-action to v2.3.1</li>
<li><a
href="d516b874c3"><code>d516b87</code></a>
Update Github actions/checkout to v4</li>
<li><a
href="c095c75943"><code>c095c75</code></a>
Merge branch 'master' into dev</li>
<li><a
href="60a0a4cb61"><code>60a0a4c</code></a>
Fix typo in GitHub Actions workflow key</li>
<li><a
href="5918a0ef80"><code>5918a0e</code></a>
Merge branch 'master' into dev</li>
<li>Additional commits viewable in <a
href="https://github.com/kjd/idna/compare/v3.6...v3.7">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.198 to
1.0.199.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/serde-rs/serde/releases">serde's
releases</a>.</em></p>
<blockquote>
<h2>v1.0.199</h2>
<ul>
<li>Fix ambiguous associated item when
<code>forward_to_deserialize_any!</code> is used on an enum with
<code>Error</code> variant (<a
href="https://redirect.github.com/serde-rs/serde/issues/2732">#2732</a>,
thanks <a
href="https://github.com/aatifsyed"><code>@aatifsyed</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1477028717"><code>1477028</code></a>
Release 1.0.199</li>
<li><a
href="789740be0d"><code>789740b</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2732">#2732</a>
from aatifsyed/master</li>
<li><a
href="8fe7539bb2"><code>8fe7539</code></a>
fix: ambiguous associated type in forward_to_deserialize_any!</li>
<li><a
href="f6623a3654"><code>f6623a3</code></a>
Ignore cast_precision_loss pedantic clippy lint</li>
<li>See full diff in <a
href="https://github.com/serde-rs/serde/compare/v1.0.198...v1.0.199">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [furo](https://github.com/pradyunsg/furo) from 2024.1.29 to
2024.4.27.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pradyunsg/furo/blob/main/docs/changelog.md">furo's
changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
<!-- raw HTML omitted -->
<h2>2024.04.27 -- Bold Burgundy</h2>
<ul>
<li>Add a skip to content link.</li>
<li>Add <code>--font-stack--headings</code>.</li>
<li>Add <code>:visited</code> colour and enforce uniform contrast
between light/dark.</li>
<li>Add an offset of <code>:target</code> to reduce back-to-top
overlap.</li>
<li>Improve dark mode colours.</li>
<li>Fix outstanding colour contrast warnings on Firefox.</li>
<li>Fix bad indent in footnotes.</li>
<li>Tweak handling of default configuration options in a more resilient
manner.</li>
<li>Tweak length and sizing of API <code>source</code> links.</li>
<li>Stop search engine indexing on search page.</li>
</ul>
<h2>2024.01.29 -- Amazing Amethyst</h2>
<ul>
<li>Fix canonical url when building with <code>dirhtml</code>.</li>
<li>Relicense the demo module.</li>
</ul>
<h2>2023.09.10 -- Zesty Zaffre</h2>
<ul>
<li>Make asset hash injection idempotent, fixing Sphinx 6
compatibility.</li>
<li>Fix the check for HTML builders, fixing non-HTML Read the Docs
builds.</li>
</ul>
<h2>2023.08.19 -- Xenolithic Xanadu</h2>
<ul>
<li>Fix missing search context with Sphinx 7.2, for dirhtml builds.</li>
<li>Drop support for Python 3.7.</li>
<li>Present configuration errors in a better format -- thanks <a
href="https://github.com/AA-Turner"><code>@AA-Turner</code></a>!</li>
<li>Bump <code>require_sphinx()</code> to Sphinx 6.0, in line with
dependency changes in Unassuming Ultramarine.</li>
</ul>
<h2>2023.08.17 -- Wonderous White</h2>
<ul>
<li>Fix compatiblity with Sphinx 7.2.0 and 7.2.1.</li>
</ul>
<h2>2023.07.26 -- Vigilant Volt</h2>
<ul>
<li>Fix compatiblity with Sphinx 7.1.</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="750fcd77fd"><code>750fcd7</code></a>
Prepare release: 2024.04.27</li>
<li><a
href="c0cb0200f0"><code>c0cb020</code></a>
Update changelog</li>
<li><a
href="3787a7c1f2"><code>3787a7c</code></a>
Patch <code>app.config</code> in a more resilient manner (<a
href="https://redirect.github.com/pradyunsg/furo/issues/783">#783</a>)</li>
<li><a
href="6a3afaba38"><code>6a3afab</code></a>
Indent all children of aside.footnote (<a
href="https://redirect.github.com/pradyunsg/furo/issues/788">#788</a>)</li>
<li><a
href="035b276516"><code>035b276</code></a>
fix: no index content on search page (<a
href="https://redirect.github.com/pradyunsg/furo/issues/784">#784</a>)</li>
<li><a
href="151f523271"><code>151f523</code></a>
[pre-commit.ci] pre-commit autoupdate (<a
href="https://redirect.github.com/pradyunsg/furo/issues/771">#771</a>)</li>
<li><a
href="2eb75aa20e"><code>2eb75aa</code></a>
Bump the github-actions group with 1 update (<a
href="https://redirect.github.com/pradyunsg/furo/issues/777">#777</a>)</li>
<li><a
href="df6f65c819"><code>df6f65c</code></a>
Bump the npm group with 6 updates (<a
href="https://redirect.github.com/pradyunsg/furo/issues/778">#778</a>)</li>
<li><a
href="0b51a5eebd"><code>0b51a5e</code></a>
Add space after period in ToC warning (<a
href="https://redirect.github.com/pradyunsg/furo/issues/776">#776</a>)</li>
<li><a
href="0188705150"><code>0188705</code></a>
Bump the npm group with 5 updates (<a
href="https://redirect.github.com/pradyunsg/furo/issues/770">#770</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/pradyunsg/furo/compare/2024.01.29...2024.04.27">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This makes it easy to store UNIX sockets with correct permissions. Those
would be located in /run/synapse, which is the directory used in many
examples in the Synapse configuration manual. Additionally, the directory
and sockets are deleted when Synapse is shut down.
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.115 to
1.0.116.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/serde-rs/json/releases">serde_json's
releases</a>.</em></p>
<blockquote>
<h2>v1.0.116</h2>
<ul>
<li>Make module structure comprehensible to static analysis (<a
href="https://redirect.github.com/serde-rs/json/issues/1124">#1124</a>,
thanks <a
href="https://github.com/mleonhard"><code>@mleonhard</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a3f62bb10e"><code>a3f62bb</code></a>
Release 1.0.116</li>
<li><a
href="12c8ee0ce6"><code>12c8ee0</code></a>
Hide "non-exhaustive patterns" errors when crate fails to
compile</li>
<li><a
href="051ce970fe"><code>051ce97</code></a>
Merge pull request 1124 from mleonhard/master</li>
<li><a
href="25dc75050a"><code>25dc750</code></a>
Replace <code>features_check</code> mod with a call to
<code>std::compile_error!</code>. Fixes htt...</li>
<li><a
href="2e15e3d7d5"><code>2e15e3d</code></a>
Revert "Temporarily disable miri on doctests"</li>
<li><a
href="0baba28775"><code>0baba28</code></a>
Resolve legacy_numeric_constants clippy lints</li>
<li>See full diff in <a
href="https://github.com/serde-rs/json/compare/v1.0.115...v1.0.116">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.197 to
1.0.198.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/serde-rs/serde/releases">serde's
releases</a>.</em></p>
<blockquote>
<h2>v1.0.198</h2>
<ul>
<li>Support serializing and deserializing
<code>Saturating<T></code> (<a
href="https://redirect.github.com/serde-rs/serde/issues/2709">#2709</a>,
thanks <a
href="https://github.com/jbethune"><code>@jbethune</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c4fb923335"><code>c4fb923</code></a>
Release 1.0.198</li>
<li><a
href="65b7eea775"><code>65b7eea</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2729">#2729</a>
from dtolnay/saturating</li>
<li><a
href="01cd696fd1"><code>01cd696</code></a>
Integrate Saturating<T> deserialization into impl_deserialize_num
macro</li>
<li><a
href="c13b3f7e68"><code>c13b3f7</code></a>
Format PR 2709</li>
<li><a
href="a6571ee0da"><code>a6571ee</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2709">#2709</a>
from jbethune/master</li>
<li><a
href="6e38afff49"><code>6e38aff</code></a>
Revert "Temporarily disable miri on doctests"</li>
<li><a
href="3d1b19ed90"><code>3d1b19e</code></a>
Implement Ser+De for <code>Saturating\<T></code></li>
<li><a
href="5b24f88e73"><code>5b24f88</code></a>
Resolve legacy_numeric_constants clippy lints</li>
<li><a
href="74d06708dd"><code>74d0670</code></a>
Explicitly install a Rust toolchain for cargo-outdated job</li>
<li><a
href="3bfab6ef7f"><code>3bfab6e</code></a>
Temporarily disable miri on doctests</li>
<li>Additional commits viewable in <a
href="https://github.com/serde-rs/serde/compare/v1.0.197...v1.0.198">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [types-bleach](https://github.com/python/typeshed) from 6.1.0.1 to
6.1.0.20240331.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/python/typeshed/commits">compare view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps
[phonenumbers](https://github.com/daviddrysdale/python-phonenumbers)
from 8.13.29 to 8.13.35.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9369ff4607"><code>9369ff4</code></a>
Prep for 8.13.35 release</li>
<li><a
href="2e1e133890"><code>2e1e133</code></a>
Generated files for metadata</li>
<li><a
href="25a306f670"><code>25a306f</code></a>
Merge metadata changes from upstream 8.13.35</li>
<li><a
href="710529234b"><code>7105292</code></a>
Prep for 8.13.34 release</li>
<li><a
href="e7b328d071"><code>e7b328d</code></a>
Generated files for metadata</li>
<li><a
href="315eb10e00"><code>315eb10</code></a>
Merge metadata changes from upstream 8.13.34</li>
<li><a
href="29dab756ac"><code>29dab75</code></a>
Prep for 8.13.33 release</li>
<li><a
href="f5b9401fdb"><code>f5b9401</code></a>
Generated files for metadata</li>
<li><a
href="aa21158f8d"><code>aa21158</code></a>
Merge metadata changes from upstream 8.13.33</li>
<li><a
href="92c242c2b4"><code>92c242c</code></a>
Prep for 8.13.32 release</li>
<li>Additional commits viewable in <a
href="https://github.com/daviddrysdale/python-phonenumbers/compare/v8.13.29...v8.13.35">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Weakness in auth chain indexing allows DoS from remote room members
through disk fill and high CPU usage.
A remote Matrix user with malicious intent, sharing a room with Synapse
instances before 1.104.1, can dispatch specially crafted events to
exploit a weakness in how the auth chain cover index is calculated. This
can induce high CPU consumption and accumulate excessive data in the
database of such instances, resulting in a denial of service.
Servers in private federations, or those that do not federate, are not
affected.
Bumps [types-pillow](https://github.com/python/typeshed) from
10.2.0.20240406 to 10.2.0.20240415.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/python/typeshed/commits">compare view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [pyasn1-modules](https://github.com/pyasn1/pyasn1-modules) from
0.3.0 to 0.4.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pyasn1/pyasn1-modules/releases">pyasn1-modules's
releases</a>.</em></p>
<blockquote>
<h2>Release 0.4.0</h2>
<p>It's a major release where we drop Python 2 support entirely.
The most significant changes are:</p>
<ul>
<li>Added support for Python 3.11, 3.12</li>
<li>Removed support for EOL Pythons 2.7, 3.6, 3.7</li>
</ul>
<p>A full list of changes can be seen in the <a
href="https://github.com/pyasn1/pyasn1-modules/blob/main/CHANGES.txt">CHANGELOG</a>.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pyasn1/pyasn1-modules/blob/main/CHANGES.txt">pyasn1-modules's
changelog</a>.</em></p>
<blockquote>
<h2>Revision 0.4.0, released 26-03-2024</h2>
<ul>
<li>Added support for Python 3.11, 3.12</li>
<li>Removed support for EOL Pythons 2.7, 3.6, 3.7</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="98b1e268a3"><code>98b1e26</code></a>
Prepare release 0.4.0</li>
<li><a
href="0339532a08"><code>0339532</code></a>
Drop support for EOL Python 3.6 and 3.7 (<a
href="https://redirect.github.com/pyasn1/pyasn1-modules/issues/14">#14</a>)</li>
<li><a
href="9ec5409154"><code>9ec5409</code></a>
Drop support for EOL Python 2.7 (<a
href="https://redirect.github.com/pyasn1/pyasn1-modules/issues/12">#12</a>)</li>
<li><a
href="252ac00bf1"><code>252ac00</code></a>
Add support for Python 3.12 (<a
href="https://redirect.github.com/pyasn1/pyasn1-modules/issues/11">#11</a>)</li>
<li>See full diff in <a
href="https://github.com/pyasn1/pyasn1-modules/compare/v0.3.0...v0.4.0">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.81 to 1.0.82.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/dtolnay/anyhow/releases">anyhow's
releases</a>.</em></p>
<blockquote>
<h2>1.0.82</h2>
<ul>
<li>Documentation improvements</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="074bdea1c7"><code>074bdea</code></a>
Release 1.0.82</li>
<li><a
href="47a4fbfa36"><code>47a4fbf</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/anyhow/issues/360">#360</a>
from dtolnay/docensure</li>
<li><a
href="c5af1db020"><code>c5af1db</code></a>
Make ensure's doc comment apply to the cfg(not(doc)) macro too</li>
<li><a
href="bebc7a2fe4"><code>bebc7a2</code></a>
Revert "Temporarily disable miri on doctests"</li>
<li><a
href="f2c4db9b47"><code>f2c4db9</code></a>
Update ui test suite to nightly-2024-03-31</li>
<li><a
href="028cbeedf5"><code>028cbee</code></a>
Explicitly install a Rust toolchain for cargo-outdated job</li>
<li><a
href="7a4cac5192"><code>7a4cac5</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/anyhow/issues/358">#358</a>
from dtolnay/workspacewrapper</li>
<li><a
href="939db012c2"><code>939db01</code></a>
Apply RUSTC_WORKSPACE_WRAPPER</li>
<li><a
href="9f84a37551"><code>9f84a37</code></a>
Temporarily disable miri on doctests</li>
<li><a
href="45e5a589e9"><code>45e5a58</code></a>
Ignore dead code lint in test</li>
<li>Additional commits viewable in <a
href="https://github.com/dtolnay/anyhow/compare/1.0.81...1.0.82">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This adds functions to transform a Twisted request to the
`http::Request`, and then to send back an `http::Response` through it.
It also imports the SynapseError exception so that we can throw that
from Rust code directly
Example usage of this would be:
```rust
use crate::http::{http_request_from_twisted, http_response_to_twisted, HeaderMapPyExt};

fn handler(twisted_request: &PyAny) -> PyResult<()> {
    let request = http_request_from_twisted(twisted_request)?;

    let ua: headers::UserAgent = request.headers().typed_get_required()?;

    if whatever {
        return Err(crate::errors::SynapseError::new(
            StatusCode::UNAUTHORIZED,
            "Whatever".to_owned(),
            "M_UNAUTHORIZED",
            None,
            None,
        ));
    }

    let response = Response::new("hello".as_bytes());
    http_response_to_twisted(twisted_request, response)?;

    Ok(())
}
```
Resurrecting https://github.com/matrix-org/synapse/pull/13918.
This should reduce IOPs incurred by joining to the events table to
lookup stream ordering, which happens in many receipt handling code
paths. Like the previous PR I believe sufficient time has passed between
the original migration in DB schema 72 and now to merge this as-is. It's
highly unlikely that both the migration is still ongoing AND (active)
users still have any receipts prior to that date.
In the unlikely event there is a receipt without a populated
`event_stream_ordering` synapse will behave just as it does now when
receipts exist for events that don't (yet): for push action calculation
the receipts are just ignored.
I've removed the validation on event IDs as this is already covered
here:
59ceabcb97/synapse/handlers/receipts.py (L189-L192)
PR #16942 removed an invalid optimisation that avoided pulling out state
for non-gappy syncs. This causes a large increase in DB usage. c.f.
#16941 for why that optimisation was wrong.
However, we can still optimise in the simple case where the events in
the timeline are a linear chain without any branching/merging of the
DAG.
cc. @richvdh
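A rough sketch (not the actual Synapse code) of the "linear chain" condition this optimisation relies on: each timeline event's only prev event is the event immediately before it, so there is no branching or merging of the DAG within the timeline and the state can be rolled forward without a state lookup.

```python
def timeline_is_linear(timeline_events) -> bool:
    # Assumes event objects expose `event_id` and `prev_event_ids()`.
    for prev, ev in zip(timeline_events, timeline_events[1:]):
        if ev.prev_event_ids() != [prev.event_id]:
            return False
    return True
```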
Before we were pulling out *all* read receipts for a user for every
event we pushed. Instead let's only pull out the relevant receipts.
This also pulled out the event rows for each receipt, causing load on
the events table.
This PR fixes a very, very niche edge-case, but I've got some more work
coming which will otherwise make the problem worse.
The bug happens when the syncing user leaves a room, and has a sync
filter which includes "left" rooms, but sets the timeline limit to 0. In
that case, the state returned in the `state` section is calculated
incorrectly.
The fix is to pass a token corresponding to the point that the user
leaves the room through to `compute_state_delta`.
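For concreteness, the kind of sync filter that hits this edge case (using standard Matrix filter fields), expressed here as a Python dict:

```python
# Include rooms the user has left, but ask for no timeline events at all.
edge_case_filter = {
    "room": {
        "include_leave": True,
        "timeline": {"limit": 0},
    }
}
```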
Requests may require a User-Agent header, and the change in #16972
accidentally removed it, resulting in requests getting rejected causing
login to fail.
Fixes https://github.com/element-hq/synapse/issues/16680, as well as a
related bug, where servers which we had *never* successfully sent an
event to would not be retried.
In order to fix the case of pending to-device messages, we hook into the
existing `wake_destinations_needing_catchup` process, by extending it to
look for destinations that have pending to-device messages. The
federation transmission loop then attempts to send the pending to-device
messages as normal.
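A sketch (assumed table and column names) of the extra condition folded into the catch-up wake-up: a destination also needs waking if it has to-device messages queued in the federation outbox, in addition to the existing "has PDUs to catch up on" criterion.

```python
def get_destinations_with_pending_to_device(txn, limit: int = 25) -> list:
    # Destinations with queued to-device messages that should be woken up.
    txn.execute(
        "SELECT DISTINCT destination FROM device_federation_outbox LIMIT ?",
        (limit,),
    )
    return [destination for (destination,) in txn]
```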
When running unit tests, we patch the database connection pool so that
it runs queries "synchronously". This is ok, except that if any queries
are launched before we do the patching, those queries get left in limbo
and never complete.
To fix this, let's change the way we do the switcheroo, by patching out
the method which creates the connection pool in the first place.
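A minimal sketch (the patch target and helpers are assumptions, not the exact test code) of the switcheroo: patch the factory that creates the connection pool, so every pool created during test setup is already synchronous and no early query is left in limbo.

```python
from unittest import mock


class FakeSynchronousPool:
    """Test double that would run queries inline instead of on a threadpool."""

    def __init__(self, *args, **kwargs):
        pass


def make_fake_pool(*args, **kwargs):
    return FakeSynchronousPool(*args, **kwargs)


# Patch the creation point rather than swapping the pool on a live instance.
with mock.patch("synapse.storage.database.make_pool", new=make_fake_pool):
    pass  # set up the test homeserver here
```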
I have a use case where I'd like the Synapse image to start up a
postgres instance that I can use, but don't want to force Synapse to use
postgres as well.
This commit prevents postgres from being started when it has already
been explicitly enabled elsewhere.
When a lot of locks are waiting for a single lock, notifying all locks
independently with `call_later` on each release is really costly and
incurs some kind of async contention, where the CPU is spinning a lot
for not much.
The included test is taking around 30s before the change, and 0.5s
after.
It was found following failing tests with
https://github.com/element-hq/synapse/pull/16827.
Background: we have a `matrixdotorg/synapse-workers` docker image, which
is intended for running multiple workers within the same container. That
image includes a `prefix-log` script which, for each line printed to
stdout or stderr by one of the processes, prepends the name of the
process.
This commit disables buffering in that script, so that lines are logged
quickly after they are printed. This makes it much easier to understand
the output, since they then come out in a natural order.
This PR aims to fix #16895, caused by a regression in #7 and not fixed
by #16903. PR #16903 only fixes a starvation issue, where the CPU
isn't released. There is a second issue, where the execution is blocked.
This theory is supported by the flame graphs provided in #16895 and by
the fact that I see CPU usage dropping to far below the limit.
Since the changes in #7, the method `check_state_independent_auth_rules`
is called with the additional parameter `batched_auth_events`:
6fa13b4f92/synapse/handlers/federation_event.py (L1741-L1743)
It makes the execution enter this if clause, introduced with #15195: 6fa13b4f92/synapse/event_auth.py (L178-L189)
There are two issues in the above code snippet.
First, there is the blocking issue. I'm not entirely sure if this is a
deadlock, starvation, or something different. In the beginning, I
thought the copy operation was responsible. It wasn't. Then I
investigated the nested `store.get_events` inside the function `update`.
This was also not causing the blocking issue. Only when I replaced the
set difference operation (`-`) with a list comprehension was the blocking
resolved. Creating and comparing sets with a very large number of
events seems to be problematic.
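As a rough illustration of the change (the variable names and data below are invented, not the actual code in `synapse/event_auth.py`):

```python
# Illustrative only -- names and data are made up.
claimed_auth_event_ids = ["$create", "$power_levels", "$join"]
batched_auth_events = {"$create": object(), "$power_levels": object()}

# Before: a set difference over what can be a very large collection of events.
missing_ids = set(claimed_auth_event_ids) - set(batched_auth_events)

# After: the equivalent list comprehension, which resolved the observed blocking.
missing_ids = [
    eid for eid in claimed_auth_event_ids if eid not in batched_auth_events
]
```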
This is how the flamegraph looks now while persisting outliers. As you
can see, the execution no longer locks up in the above function.

Second, the copying here doesn't serve any purpose, because only a
shallow copy is created. This means the same objects from the original
dict are referenced. This fails the intention of protecting these
objects from mutation. The review of the original PR
https://github.com/matrix-org/synapse/pull/15195 had an extensive
discussion about this matter.
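To make the shallow-copy point concrete (a toy example, not Synapse code):

```python
# A shallow copy creates a new dict, but it still references the very same
# value objects, so mutating them through the copy mutates the originals too.
original = {"$event1": {"content": {"membership": "join"}}}
shallow = dict(original)

shallow["$event1"]["content"]["membership"] = "leave"
print(original["$event1"]["content"]["membership"])  # prints "leave"
```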
Various approaches to copying the auth_events were attempted:
1) Implementing a deepcopy caused issues due to
builtins.EventInternalMetadata not being pickleable.
2) Creating a dict with new objects akin to a deepcopy.
3) Creating a dict with new objects containing only necessary
attributes.
In conclusion, there is no easy way to create an actual copy of the
objects. Opting for a deepcopy can significantly strain memory and CPU
resources, making it an inefficient choice. I don't see why the copy is
necessary in the first place. Therefore I'm proposing to remove it
altogether.
After these changes, I was able to successfully join these rooms,
without the main worker locking up:
- #synapse:matrix.org
- #element-android:matrix.org
- #element-web:matrix.org
- #ecips:matrix.org
- #ipfs-chatter:ipfs.io
- #python:matrix.org
- #matrix:matrix.org
Since Synapse 1.76.0, any module which registers a `on_new_event`
callback would brick the ability to join remote rooms.
This is because this callback tried to get the full state of the room,
which would end up in a deadlock.
Related:
https://github.com/matrix-org/synapse-auto-accept-invite/issues/18
The following module would brick the ability to join remote rooms:
```python
import logging
from typing import Any, Dict

from synapse.module_api import EventBase, ModuleApi

logger = logging.getLogger(__name__)


class MyModule:
    def __init__(self, config: None, api: ModuleApi):
        self._api = api
        self._config = config

        self._api.register_third_party_rules_callbacks(
            on_new_event=self.on_new_event,
        )

    async def on_new_event(self, event: EventBase, _state_map: Any) -> None:
        logger.info(f"Received new event: {event}")

    @staticmethod
    def parse_config(_config: Dict[str, Any]) -> None:
        return None
```
This is technically a breaking change, as we are now passing partial
state on the `on_new_event` callback.
However, this callback was broken for federated rooms since 1.76.0, and
local rooms have full state anyway, so it's unlikely that it would
change anything.
This adds a counter `synapse_emails_sent_total` for emails sent. They
are broken down by `type`, which are `password_reset`, `registration`,
`add_threepid`, `notification` (matching the methods of `Mailer`).
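A minimal sketch of such a counter using `prometheus_client` (the label values are those listed above; the actual wiring inside `Mailer` is not shown here):

```python
from prometheus_client import Counter

# Counter of emails sent, broken down by type.
emails_sent_total = Counter(
    "synapse_emails_sent_total",
    "Number of emails sent, broken down by type",
    labelnames=("type",),
)

# e.g. incremented when a notification email is sent:
emails_sent_total.labels(type="notification").inc()
```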
We do this by adding support to the LRU cache for "extra indices" based
on the cached value. This allows us to efficiently map from room ID to
the cached events and only invalidate those.
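A simplified sketch of the idea (this is not Synapse's actual `LruCache` API, just the shape of the approach): keep a secondary index from room ID to the keys of cached events in that room, so invalidating a room only touches those entries.

```python
from collections import defaultdict


class EventCacheWithRoomIndex:
    """Toy cache with an extra index derived from the cached value."""

    def __init__(self):
        self._cache = {}                     # event_id -> event (a plain dict here)
        self._room_index = defaultdict(set)  # room_id -> {event_id, ...}

    def set(self, event_id, event):
        self._cache[event_id] = event
        self._room_index[event["room_id"]].add(event_id)

    def invalidate_room(self, room_id):
        # Only the events for this room are evicted; no full-cache scan.
        for event_id in self._room_index.pop(room_id, set()):
            self._cache.pop(event_id, None)
```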
List of users not to send out device list updates for when they register
new devices. This is useful to handle bot accounts.
This is undocumented as it's mostly a hack to test on matrix.org.
Note: This will still send out device list updates if the device is
later updated, e.g. if end-to-end keys are added.
Bumps [bcrypt](https://github.com/pyca/bcrypt) from 4.0.1 to 4.1.2.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b9223e61e2"><code>b9223e6</code></a>
Try building py39 wheels to see if that helps with reinitialization
errors (#...</li>
<li><a
href="5049783444"><code>5049783</code></a>
Bump syn from 2.0.40 to 2.0.41 in /src/_bcrypt (<a
href="https://redirect.github.com/pyca/bcrypt/issues/696">#696</a>)</li>
<li><a
href="642d070972"><code>642d070</code></a>
Bump syn from 2.0.39 to 2.0.40 in /src/_bcrypt (<a
href="https://redirect.github.com/pyca/bcrypt/issues/693">#693</a>)</li>
<li><a
href="8b44a1046a"><code>8b44a10</code></a>
Bump libc from 0.2.150 to 0.2.151 in /src/_bcrypt (<a
href="https://redirect.github.com/pyca/bcrypt/issues/692">#692</a>)</li>
<li><a
href="951cc64d0c"><code>951cc64</code></a>
Bump once_cell from 1.18.0 to 1.19.0 in /src/_bcrypt (<a
href="https://redirect.github.com/pyca/bcrypt/issues/690">#690</a>)</li>
<li><a
href="7377c6db3a"><code>7377c6d</code></a>
Bump actions/setup-python from 4.8.0 to 5.0.0 (<a
href="https://redirect.github.com/pyca/bcrypt/issues/689">#689</a>)</li>
<li><a
href="61b32039d4"><code>61b3203</code></a>
Bump actions/setup-python from 4.7.1 to 4.8.0 (<a
href="https://redirect.github.com/pyca/bcrypt/issues/688">#688</a>)</li>
<li><a
href="1c3159a28a"><code>1c3159a</code></a>
Fixed wheels for older versions of macOS (<a
href="https://redirect.github.com/pyca/bcrypt/issues/687">#687</a>)</li>
<li><a
href="1a41437d3a"><code>1a41437</code></a>
Update README.rst (<a
href="https://redirect.github.com/pyca/bcrypt/issues/682">#682</a>)</li>
<li><a
href="7881c5beef"><code>7881c5b</code></a>
Fix building windows abi3 wheels (<a
href="https://redirect.github.com/pyca/bcrypt/issues/681">#681</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/pyca/bcrypt/compare/4.0.1...4.1.2">compare
view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This basically reverts a change that was in
https://github.com/element-hq/synapse/pull/16833, where we reduced the
batching.
The smaller batching can cause performance issues on busy servers and
databases.
**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**, please ask in **[#synapse:matrix.org](https://matrix.to/#/#synapse:matrix.org)** (using a matrix.org account if necessary).
If you want to report a security issue, please see https://matrix.org/security-disclosure-policy/
If you want to report a security issue, please see https://element.io/security/security-disclosure-policy
This is a bug report form. By following the instructions below and completing the sections with your information, you will help us to get all the necessary data to fix your issue.
giving admins the power to easily manage an organization-wide
deployment. It includes advanced identity management, auditing,
moderation and data retention options as well as Long Term Support and
SLAs. ESS can be used to support any Matrix-based frontend client.
.. contents::
Installing and configuration
============================
🛠️ Installing and configuration
===============================
The Synapse documentation describes `how to install Synapse <https://element-hq.github.io/synapse/latest/setup/installation.html>`_. We recommend using
`Docker images <https://element-hq.github.io/synapse/latest/setup/installation.html#docker-images-and-ansible-playbooks>`_ or `Debian packages from Matrix.org
@@ -105,8 +119,8 @@ Following this advice ensures that even if an XSS is found in Synapse, the
impact to other applications will be minimal.
Testing a new installation
==========================
🧪 Testing a new installation
=============================
The easiest way to try out your new Synapse installation is by connecting to it
from a web client.
@@ -145,7 +159,7 @@ it:
We **strongly** recommend using a CAPTCHA, particularly if your homeserver is exposed to
the public internet. Without it, anyone can freely register accounts on your homeserver.
This can be exploited by attackers to create spambots targetting the rest of the Matrix
This can be exploited by attackers to create spambots targeting the rest of the Matrix
federation.
Your new user name will be formed partly from the ``server_name``, and partly
@@ -159,8 +173,20 @@ the form of::
As when logging in, you will need to specify a "Custom server". Specify your
desired ``localpart`` in the 'User name' box.
Troubleshooting and support
===========================
🎯 Troubleshooting and support
==============================
🚀 Professional support
-----------------------
Enterprise quality support for Synapse including SLAs is available as part of an
`Element Server Suite (ESS) <https://element.io/pricing>`_ subscription.
If you are an existing ESS subscriber then you can raise a `support request <https://ems.element.io/support>`_
and access the `knowledge base <https://ems-docs.element.io>`_.
🤝 Community support
--------------------
The `Admin FAQ <https://element-hq.github.io/synapse/latest/usage/administration/admin_faq.html>`_
includes tips on dealing with some common problems. For more details, see
@@ -176,8 +202,8 @@ issues for support requests, only for bug reports and feature requests.
.. |docs| replace:: ``docs``
.. _docs: docs
Identity Servers
================
🪪 Identity Servers
===================
Identity servers have the job of mapping email addresses and other 3rd Party
IDs (3PIDs) to Matrix user IDs, as well as verifying the ownership of 3PIDs
@@ -206,8 +232,8 @@ an email address with your account, or send an invite to another user via their
email address.
Development
===========
🛠️ Development
==============
We welcome contributions to Synapse from the community!
The best place to get started is our
@@ -224,9 +250,23 @@ Developers might be particularly interested in:
Alongside all that, join our developer community on Matrix:
`#synapse-dev:matrix.org <https://matrix.to/#/#synapse-dev:matrix.org>`_, featuring real humans!
This software is dual-licensed by New Vector Ltd (Element). It can be used either:
(1) for free under the terms of the GNU Affero General Public License (as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version); OR
(2) under the terms of a paid-for Element Commercial License agreement between you and Element (the terms of which may vary depending on what you and Element have agreed to).
Unless required by applicable law or agreed to in writing, software distributed under the Licenses is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the Licenses for the specific language governing permissions and limitations under the Licenses.
@@ -8,6 +8,9 @@ All examples and snippets assume that your Synapse service is called `synapse` i
An example Docker Compose file can be found [here](docker-compose.yaml).
**For a more comprehensive Docker Compose example, showcasing a full Matrix 2.0 stack (originally based on this
docker-compose.yaml), please see https://github.com/element-hq/element-docker-demo**
## Worker Service Examples in Docker Compose
In order to start the Synapse container as a worker, you must specify an `entrypoint` that loads both the `homeserver.yaml` and the configuration for the worker (`synapse-generic-worker-1.yaml` in the example below). You must also include the worker type in the environment variable `SYNAPSE_WORKER` (or alternatively pass `-m synapse.app.generic_worker` as part of the `entrypoint` after `"/start.py", "run"`).
Read the password form the command line if [password] is supplied\. If not, prompt the user and read the password form the \fBSTDIN\fR\. It is not recommended to type the password on the command line directly\. Use the STDIN instead\.
Read the password from the command line if [password] is supplied, or from \fBSTDIN\fR\. If not, prompt the user and read the password from the tty prompt\. It is not recommended to type the password on the command line directly\. Use the STDIN instead\.
.TP
\fB\-c\fR, \fB\-\-config\fR
Read the supplied YAML \fIfile\fR containing the options \fBbcrypt_rounds\fR and the \fBpassword_config\fR section containing the \fBpepper\fR value\.
@@ -36,6 +36,10 @@ The following query parameters are available:
- the room's name,
- the local part of the room's canonical alias, or
- the complete (local and server part) room's id (case sensitive).
* `public_rooms` - Optional flag to filter public rooms. If `true`, only public rooms are queried. If `false`, public rooms are excluded from
the query. When the flag is absent (the default), **both** public and non-public rooms are included in the search results.
* `empty_rooms` - Optional flag to filter empty rooms. A room is empty if `joined_members` is zero. If `true`, only empty rooms are queried. If `false`, empty rooms are excluded from
the query. When the flag is absent (the default), **both** empty and non-empty rooms are included in the search results.
Defaults to no filtering.
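For example, an admin could query only public, non-empty rooms with something like the following (the host name and access token are placeholders; the request assumes these flags belong to the List Room admin API, `GET /_synapse/admin/v1/rooms`):

```python
import requests

# Hypothetical example: list only public, non-empty rooms.
resp = requests.get(
    "https://synapse.example.com/_synapse/admin/v1/rooms",
    params={"public_rooms": "true", "empty_rooms": "false"},
    headers={"Authorization": "Bearer <admin_access_token>"},
)
print(resp.json())
```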
@@ -381,6 +385,13 @@ The API is:
GET /_synapse/admin/v1/rooms/<room_id>/state
```
**Parameters**
The following query parameter is available:
* `type` - The type of room state event to filter by, e.g. "m.room.create". If provided, only state events
of this type will be returned (regardless of their `state_key` value).
A response body like the following is returned:
```json
@@ -913,7 +924,7 @@ With all that being said, if you still want to try and recover the room:
them handle rejoining themselves.
4. If `new_room_user_id` was given, a 'Content Violation' will have been
created. Consider whether you want to delete that roomm.
created. Consider whether you want to delete that room.
@@ -40,6 +40,7 @@ It returns a JSON body like the following:
"erased":false,
"shadow_banned":0,
"creation_ts":1560432506,
"last_seen_ts":1732919539393,
"appservice_id":null,
"consent_server_notice_sent":null,
"consent_version":null,
@@ -55,7 +56,8 @@ It returns a JSON body like the following:
}
],
"user_type":null,
"locked":false
"locked":false,
"suspended":false
}
```
@@ -141,8 +143,8 @@ Body parameters:
provider for SSO (Single sign-on). More details are in the configuration manual under the
sections [sso](../usage/configuration/config_documentation.md#sso) and [oidc_providers](../usage/configuration/config_documentation.md#oidc_providers).
- `auth_provider` - **string**, required. The unique, internal ID of the external identity provider.
The same as `idp_id` from the homeserver configuration. Note that no error is raised if the
provided value is not in the homeserver configuration.
The same as `idp_id` from the homeserver configuration. If using OIDC, this value should be prefixed
with `oidc-`. Note that no error is raised if the provided value is not in the homeserver configuration.
- `external_id` - **string**, required. An identifier for the user in the external identity provider.
When the user logs in to the identity provider, this must be the unique ID that they map to.
- `admin` - **bool**, optional, defaults to `false`. Whether the user is a homeserver administrator,
@@ -164,6 +166,7 @@ Body parameters:
Other allowed options are: `bot` and `support`.
## List Accounts
### List Accounts (V2)
This API returns all local user accounts.
By default, the response is ordered by ascending user ID.
@@ -287,6 +290,19 @@ The following fields are returned in the JSON response body:
*Added in Synapse 1.93:* the `locked` query parameter and response field.
### List Accounts (V3)
This API returns all local user accounts (see v2). In contrast to v2, the query parameter `deactivated` is handled differently.
```
GET /_synapse/admin/v3/users
```
**Parameters**
- `deactivated` - Optional flag to filter deactivated users. If `true`, only deactivated users are returned.
If `false`, deactivated users are excluded from the query. When the flag is absent (the default),
users are not filtered by deactivation status.
## Query current sessions for a user
This API returns information about the active sessions for a specific user.
@@ -462,9 +478,9 @@ with a body of:
}
```
## List room memberships of a user
## List joined rooms of a user
Gets a list of all `room_id` that a specific `user_id` is member.
Gets a list of all `room_id` that a specific `user_id` is joined to and is a member of (participating in).
The API is:
@@ -501,6 +517,73 @@ The following fields are returned in the JSON response body:
- `joined_rooms` - An array of `room_id`.
- `total` - Number of rooms.
## Get the number of invites sent by the user
Fetches the number of invites sent by the provided user ID across all rooms
after the given timestamp.
```
GET /_synapse/admin/v1/users/$user_id/sent_invite_count
```
**Parameters**
The following parameters should be set in the URL:
* `user_id`: fully qualified: for example, `@user:server.com`
The following should be set as query parameters in the URL:
* `from_ts`: int, required. A timestamp in ms from the unix epoch. Only
invites sent at or after the provided timestamp will be returned.
This works by comparing the provided timestamp to the `received_ts`
column in the `events` table.
Note: https://currentmillis.com/ is a useful tool for converting dates
into timestamps and vice versa.
A response body like the following is returned:
```json
{
"invite_count":30
}
```
_Added in Synapse 1.122.0_
## Get the cumulative number of rooms a user has joined after a given timestamp
Fetches the number of rooms that the user joined after the given timestamp, even
if they have subsequently left/been banned from those rooms.
```
GET /_synapse/admin/v1/users/$user_id/cumulative_joined_room_count
```
**Parameters**
The following parameters should be set in the URL:
* `user_id`: fully qualified: for example, `@user:server.com`
The following should be set as query parameters in the URL:
* `from_ts`: int, required. A timestamp in ms from the unix epoch. Only
rooms joined at or after the provided timestamp will be counted.
This works by comparing the provided timestamp to the `received_ts`
column in the `events` table.
Note: https://currentmillis.com/ is a useful tool for converting dates
into timestamps and vice versa.
A response body like the following is returned:
```json
{
"cumulative_joined_room_count":30
}
```
_Added in Synapse 1.122.0_
## Account Data
Gets information about account data for a specific `user_id`.
@@ -1347,3 +1430,88 @@ Returns a `404` HTTP status code if no user was found, with a response body like
```
_Added in Synapse 1.72.0._
## Redact all the events of a user
This endpoint allows an admin to redact the events of a given user. There are no restrictions on redactions for a
local user. By default, we puppet the user who sent the message to redact it themselves. Redactions for non-local users are issued using the admin user, and will fail in rooms where the admin user is not an admin or does not have the power level required to issue redactions.
The API is
```
POST /_synapse/admin/v1/user/$user_id/redact
{
"rooms": ["!roomid1", "!roomid2"]
}
```
If an empty list is provided as the key for `rooms`, all events in all the rooms the user is a member of will be redacted;
otherwise, all the events in the rooms provided in the request will be redacted.
The API starts the redaction process running, and returns immediately with a JSON body containing
a redact ID which can be used to query the status of the redaction process:
```json
{
"redact_id": "<opaque id>"
}
```
**Parameters**
The following parameters should be set in the URL:
- `user_id` - The fully qualified MXID of the user: for example, `@user:server.com`.
The following JSON body parameter must be provided:
- `rooms` - A list of rooms to redact the user's events in. If an empty list is provided, all events in all rooms
the user is a member of will be redacted.
_Added in Synapse 1.116.0._
The following JSON body parameters are optional:
- `reason` - Reason the redaction is being requested, e.g. "spam", "abuse", etc. This will be included in each redaction event, and be visible to users.
- `limit` - a limit on the number of the user's events to search for redactable events in each room (events are redacted newest to oldest); defaults to 1000 if not provided.
## Check the status of a redaction process
It is possible to query the status of the background task for redacting a user's events.
The status can be queried up to 24 hours after completion of the task,
or until Synapse is restarted (whichever happens first).
The API is:
```
GET /_synapse/admin/v1/user/redact_status/$redact_id
```
A response body like the following is returned:
```json
{
"status": "active",
"failed_redactions": []
}
```
**Parameters**
The following parameters should be set in the URL:
* `redact_id` - string - The ID for this redaction process, provided when the redaction was requested.
**Response**
The following fields are returned in the JSON response body:
- `status` - string - one of scheduled/active/completed/failed, indicating the status of the redaction job
- `failed_redactions` - dictionary - the keys of the dict are event ids the process was unable to redact, if any, and the values are
the corresponding error that caused the redaction to fail
d="M 56.2856,18.1428 H 44.9998 c 0.1334,1.181 0.5619,2.1238 1.2858,2.8286 0.7238,0.6857 1.6761,1.0286 2.8571,1.0286 0.7809,0 1.4857,-0.1905 2.1143,-0.5715 0.6286,-0.3809 1.0762,-0.8952 1.3428,-1.5428 h 3.4286 c -0.4571,1.5047 -1.3143,2.7238 -2.5714,3.6571 -1.2381,0.9143 -2.7048,1.3715 -4.4,1.3715 -2.2095,0 -4,-0.7334 -5.3714,-2.2 -1.3524,-1.4667 -2.0286,-3.3239 -2.0286,-5.5715 0,-2.1905 0.6857,-4.0285 2.0571,-5.5143 1.3715,-1.4857 3.1429,-2.22853 5.3143,-2.22853 2.1714,0 3.9238,0.73333 5.2572,2.20003 1.3523,1.4476 2.0285,3.2762 2.0285,5.4857 z m -7.2572,-5.9714 c -1.0667,0 -1.9524,0.3143 -2.6571,0.9429 -0.7048,0.6285 -1.1429,1.4666 -1.3143,2.5142 h 7.8857 c -0.1524,-1.0476 -0.5714,-1.8857 -1.2571,-2.5142 -0.6858,-0.6286 -1.5715,-0.9429 -2.6572,-0.9429 z"
fill="#000000"
id="path6"/><path
d="M 58.6539,20.1428 V 3.14282 h 3.4 V 20.2 c 0,0.7619 0.419,1.1428 1.2571,1.1428 l 0.6,-0.0285 v 3.2285 c -0.3238,0.0572 -0.6667,0.0857 -1.0286,0.0857 -1.4666,0 -2.5428,-0.3714 -3.2285,-1.1142 -0.6667,-0.7429 -1,-1.8667 -1,-3.3715 z"
fill="#000000"
id="path7"/><path
d="M 79.7454,18.1428 H 68.4597 c 0.1333,1.181 0.5619,2.1238 1.2857,2.8286 0.7238,0.6857 1.6762,1.0286 2.8571,1.0286 0.781,0 1.4857,-0.1905 2.1143,-0.5715 0.6286,-0.3809 1.0762,-0.8952 1.3429,-1.5428 h 3.4285 c -0.4571,1.5047 -1.3143,2.7238 -2.5714,3.6571 -1.2381,0.9143 -2.7048,1.3715 -4.4,1.3715 -2.2095,0 -4,-0.7334 -5.3714,-2.2 -1.3524,-1.4667 -2.0286,-3.3239 -2.0286,-5.5715 0,-2.1905 0.6857,-4.0285 2.0571,-5.5143 1.3715,-1.4857 3.1429,-2.22853 5.3143,-2.22853 2.1715,0 3.9238,0.73333 5.2572,2.20003 1.3524,1.4476 2.0285,3.2762 2.0285,5.4857 z m -7.2572,-5.9714 c -1.0666,0 -1.9524,0.3143 -2.6571,0.9429 -0.7048,0.6285 -1.1429,1.4666 -1.3143,2.5142 h 7.8857 c -0.1524,-1.0476 -0.5714,-1.8857 -1.2571,-2.5142 -0.6857,-0.6286 -1.5715,-0.9429 -2.6572,-0.9429 z"
fill="#000000"
id="path8"/><path
d="m 95.0851,16.0571 v 8.5143 h -3.4 v -8.8857 c 0,-2.2476 -0.9333,-3.3714 -2.8,-3.3714 -1.0095,0 -1.819,0.3238 -2.4286,0.9714 -0.5904,0.6476 -0.8857,1.5333 -0.8857,2.6571 v 8.6286 h -3.4 V 9.74282 h 3.1429 v 1.97148 c 0.3619,-0.6667 0.9143,-1.2191 1.6571,-1.6572 0.7429,-0.43809 1.6667,-0.65713 2.7714,-0.65713 2.0572,0 3.5429,0.78093 4.4572,2.34283 1.2571,-1.5619 2.9333,-2.34283 5.0286,-2.34283 1.733,0 3.067,0.54285 4,1.62853 0.933,1.0667 1.4,2.4762 1.4,4.2286 v 9.3143 h -3.4 v -8.8857 c 0,-2.2476 -0.933,-3.3714 -2.8,-3.3714 -1.0286,0 -1.8477,0.3333 -2.4572,1 -0.5905,0.6476 -0.8857,1.5619 -0.8857,2.7428 z"
fill="#000000"
id="path9"/><path
d="m 121.537,18.1428 h -11.286 c 0.133,1.181 0.562,2.1238 1.286,2.8286 0.723,0.6857 1.676,1.0286 2.857,1.0286 0.781,0 1.486,-0.1905 2.114,-0.5715 0.629,-0.3809 1.076,-0.8952 1.343,-1.5428 h 3.429 c -0.458,1.5047 -1.315,2.7238 -2.572,3.6571 -1.238,0.9143 -2.705,1.3715 -4.4,1.3715 -2.209,0 -4,-0.7334 -5.371,-2.2 -1.353,-1.4667 -2.029,-3.3239 -2.029,-5.5715 0,-2.1905 0.686,-4.0285 2.057,-5.5143 1.372,-1.4857 3.143,-2.22853 5.315,-2.22853 2.171,0 3.923,0.73333 5.257,2.20003 1.352,1.4476 2.028,3.2762 2.028,5.4857 z m -7.257,-5.9714 c -1.067,0 -1.953,0.3143 -2.658,0.9429 -0.704,0.6285 -1.142,1.4666 -1.314,2.5142 h 7.886 c -0.153,-1.0476 -0.572,-1.8857 -1.257,-2.5142 -0.686,-0.6286 -1.572,-0.9429 -2.657,-0.9429 z"
fill="#000000"
id="path10"/><path
d="m 127.105,9.74282 v 1.97148 c 0.343,-0.6477 0.905,-1.1905 1.686,-1.6286 0.8,-0.45716 1.762,-0.68573 2.885,-0.68573 1.753,0 3.105,0.53333 4.058,1.60003 0.971,1.0666 1.457,2.4857 1.457,4.2571 v 9.3143 h -3.4 v -8.8857 c 0,-1.0476 -0.248,-1.8667 -0.743,-2.4572 -0.476,-0.6095 -1.21,-0.9142 -2.2,-0.9142 -1.086,0 -1.943,0.3238 -2.572,0.9714 -0.609,0.6476 -0.914,1.5428 -0.914,2.6857 v 8.6 h -3.4 V 9.74282 Z"
fill="#000000"
id="path11"/><path
d="m 147.12,21.5428 v 2.9429 c -0.419,0.1143 -1.009,0.1714 -1.771,0.1714 -2.895,0 -4.343,-1.4571 -4.343,-4.3714 v -7.8286 h -2.257 V 9.74282 h 2.257 V 5.88568 h 3.4 v 3.85714 h 2.772 v 2.71428 h -2.772 v 7.4857 c 0,1.1619 0.552,1.7429 1.657,1.7429 z"
6543 maintains [Synapse packages for Alpine Linux](https://pkgs.alpinelinux.org/packages?name=synapse&branch=edge) in the community repository. Install with:
Jahway603 maintains [Synapse packages for Alpine Linux](https://pkgs.alpinelinux.org/packages?name=synapse&branch=edge) in the community repository. Install with:
```sh
sudo apk add synapse
@@ -210,7 +208,7 @@ When following this route please make sure that the [Platform-specific prerequis
System requirements:
- POSIX-compliant system (tested on Linux & OS X)
- Python 3.8 or later, up to Python 3.11.
- Python 3.9 or later, up to Python 3.13.
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
If building on an uncommon architecture for which pre-built wheels are
@@ -259,9 +257,9 @@ users, etc.) to the developers via the `--report-stats` argument.
This command will generate you a config file that you can then customise, but it will
also generate a set of keys for you. These keys will allow your homeserver to
identify itself to other homeserver, so don't lose or delete them. It would be
identify itself to other homeservers, so don't lose or delete them. It would be
wise to back them up somewhere safe. (If, for whatever reason, you do need to
change your homeserver's keys, you may find that other homeserver have the
change your homeserver's keys, you may find that other homeservers have the
old key cached. If you update the signing key, you should change the name of the
key in the `<server name>.signing.key` file (the second word) to something
different. See the [spec](https://matrix.org/docs/spec/server_server/latest.html#retrieving-server-keys) for more information on key management).
*Note: The term "RHEL" below refers to both Red Hat Enterprise Linux and Rocky Linux. The distributions are 1:1 binary compatible.*
It's recommended to use the latest Python versions.
RHEL 8 in particular ships with Python 3.6 by default, which is EOL and therefore no longer supported by Synapse. RHEL 9 ships with Python 3.9, which is still supported by the Python core team as of this writing. However, newer Python versions provide significant performance improvements and they're available in official distributions' repositories. Therefore it's recommended to use them.
Python 3.11 and 3.12 are available for both RHEL 8 and 9.
These commands should be run as the root user.
RHEL 8
```bash
# Enable PowerTools repository
dnf config-manager --set-enabled powertools
```
RHEL 9
```bash
# Enable CodeReady Linux Builder repository
crb enable
```
Install a new version of Python. You only need one of these:
*Note*: You may need to set `PYTHONMALLOC=malloc` to ensure that `jemalloc` can accurately calculate memory usage. By default, Python uses its internal small-object allocator, which may interfere with jemalloc's ability to track memory consumption correctly. This could prevent the [cache_autotuning](../configuration/config_documentation.md#caches-and-associated-values) feature from functioning as expected, as the Python allocator may not reach the memory threshold set by `max_cache_memory_usage`, thus not triggering the cache eviction process.
This made a significant difference on Python 2.7 - it's unclear how
When set to true, all subsequent media uploads will be marked as authenticated, and will not be available over legacy
unauthenticated media endpoints (`/_matrix/media/(r0|v3|v1)/download` and `/_matrix/media/(r0|v3|v1)/thumbnail`) - requests for authenticated media over these endpoints will result in a 404. All media, including authenticated media, will be available over the authenticated media endpoints `_matrix/client/v1/media/download` and `_matrix/client/v1/media/thumbnail`. Media uploaded prior to setting this option to true will still be available over the legacy endpoints. Note if the setting is switched to false
after enabling, media marked as authenticated will be available over legacy endpoints. Defaults to true (previously false). In a future release of Synapse, this option will be removed and become always-on.
In all cases, authenticated requests to download media will succeed, but for unauthenticated requests, this
case-by-case breakdown describes whether media downloads are permitted:
* `enable_authenticated_media = False`:
* unauthenticated client or homeserver requesting local media: allowed
* unauthenticated client or homeserver requesting remote media: allowed as long as the media is in the cache,
or as long as the remote homeserver does not require authentication to retrieve the media
* `enable_authenticated_media = True`:
* unauthenticated client or homeserver requesting local media:
allowed if the media was stored on the server whilst `enable_authenticated_media` was `False` (or in a previous Synapse version where this option did not exist);
otherwise denied.
* unauthenticated client or homeserver requesting remote media: the same as for local media;
allowed if the media was stored on the server whilst `enable_authenticated_media` was `False` (or in a previous Synapse version where this option did not exist);
otherwise denied.
It is especially notable that media downloaded before this option existed (in older Synapse versions), or whilst this option was set to `False`,
will perpetually be available over the legacy, unauthenticated endpoint, even after this option is set to `True`.
This is for backwards compatibility with older clients and homeservers that do not yet support requesting authenticated media;
those older clients or homeservers will not be cut off from media they can already see.
_Changed in Synapse 1.120:_ This option now defaults to `True` when not set, whereas before this version it defaulted to `False`.
Example configuration:
```yaml
enable_authenticated_media: false
```
---
### `enable_media_repo`
@@ -1913,6 +2022,24 @@ Example configuration:
max_image_pixels: 35M
```
---
### `remote_media_download_burst_count`
Remote media downloads are ratelimited using a [leaky bucket algorithm](https://en.wikipedia.org/wiki/Leaky_bucket), where a given "bucket" is keyed to the IP address of the requester when requesting remote media downloads. This configuration option sets the size of the bucket against which the size in bytes of downloads are penalized - if the bucket is full, i.e. a given number of bytes have already been downloaded, further downloads will be denied until the bucket drains. Defaults to 500MiB. See also `remote_media_download_per_second` which determines the rate at which the "bucket" is emptied and thus has available space to authorize new requests.
Example configuration:
```yaml
remote_media_download_burst_count: 200M
```
---
### `remote_media_download_per_second`
Works in conjunction with `remote_media_download_burst_count` to ratelimit remote media downloads - this configuration option determines the rate at which the "bucket" (see above) leaks in bytes per second. As requests are made to download remote media, the size of those requests in bytes is added to the bucket, and once the bucket has reached its capacity, no more requests will be allowed until a number of bytes has "drained" from the bucket. This setting determines the rate at which bytes drain from the bucket, with the practical effect that the larger the number, the faster the bucket leaks, allowing for more bytes downloaded over a shorter period of time. Defaults to 87KiB per second. See also `remote_media_download_burst_count`.
Example configuration:
```yaml
remote_media_download_per_second: 40K
```
---
### `prevent_media_downloads_from`
A list of domains to never download media from. Media from these
@@ -1924,9 +2051,10 @@ This will not prevent the listed domains from accessing media themselves.
It simply prevents users on this server from downloading media originating
from the listed servers.
This will have no effect on media originating from the local server.
This only affects media downloaded from other Matrix servers, to
block domains from URL previews see [`url_preview_url_blacklist`](#url_preview_url_blacklist).
This will have no effect on media originating from the local server. This only
affects media downloaded from other Matrix servers; to control URL previews see
[`url_preview_ip_range_blacklist`](#url_preview_ip_range_blacklist) or
If this is set, users must provide all of the specified types of 3PID when registering an account.
If this is set, users must provide all of the specified types of [3PID](https://spec.matrix.org/latest/appendices/#3pid-types) when registering an account.
Note that [`enable_registration`](#enable_registration) must also be set to allow account registration.
Mandate that users are only allowed to associate certain formats of
3PIDs with accounts on this server, as specified by the `medium` and `pattern` sub-options.
`pattern` is a [Perl-like regular expression](https://docs.python.org/3/library/re.html#module-re).
More information about 3PIDs, allowed `medium` types and their `address` syntax can be found [in the Matrix spec](https://spec.matrix.org/latest/appendices/#3pid-types).
Example configuration:
```yaml
@@ -2348,7 +2497,7 @@ allowed_local_3pids:
- medium: email
pattern: '^[^@]+@vector\.im$'
- medium: msisdn
pattern: '\+44'
pattern: '^44\d{10}$'
```
---
### `enable_3pid_lookup`
@@ -2583,6 +2732,11 @@ Possible values for this option are:
* "trusted_private_chat": an invitation is required to join this room and the invitee is
assigned a power level of 100 upon joining the room.
Each preset will set up a room in the same manner as if it were provided as the `preset` parameter when
@@ -3515,6 +3699,15 @@ Has the following sub-options:
users. This allows the CAS SSO flow to be limited to sign in only, rather than
automatically registering users that have a valid SSO login but do not have
a pre-registered account. Defaults to true.
* `allow_numeric_ids`: set to 'true' to allow numeric user IDs (default false).
This allows CAS SSO flow to provide user IDs composed of numbers only.
These identifiers will be prefixed by the letter "u" by default.
The prefix can be configured using the "numeric_ids_prefix" option.
Be careful to choose the prefix correctly to avoid any possible conflicts
(e.g. user 1234 becomes u1234 when a user u1234 already exists).
* `numeric_ids_prefix`: the prefix you wish to add in front of a numeric user ID
when the "allow_numeric_ids" option is set to "true".
By default, the prefix is the letter "u" and only alphanumeric characters are allowed.
*Added in Synapse 1.93.0.*
@@ -3529,6 +3722,8 @@ cas_config:
userGroup: "staff"
department: None
enable_registration: true
allow_numeric_ids: true
numeric_ids_prefix: "numericuser"
```
---
### `sso`
@@ -3596,6 +3791,8 @@ Additional sub-options for this setting include:
Required if `enabled` is set to true.
* `subject_claim`: Name of the claim containing a unique identifier for the user.
Optional, defaults to `sub`.
* `display_name_claim`: Name of the claim containing the display name for the user. Optional.
If provided, the display name will be set to the value of this claim upon first login.
* `issuer`: The issuer to validate the "iss" claim against. Optional. If provided the
"iss" claim will be required and validated for all JSON web tokens.
* `audiences`: A list of audiences to validate the "aud" claim against. Optional.
@@ -3610,6 +3807,7 @@ jwt_config:
secret: "provided-by-your-issuer"
algorithm: "provided-by-your-issuer"
subject_claim: "name_of_claim"
display_name_claim: "name_of_claim"
issuer: "provided-by-your-issuer"
audiences:
- "provided-by-your-issuer"
@@ -3734,7 +3932,8 @@ This setting defines options related to the user directory.
This option has the following sub-options:
* `enabled`: Defines whether users can search the user directory. If false then
empty responses are returned to all queries. Defaults to true.
* `search_all_users`: Defines whether to search all users visible to your HS at the time the search is performed. If set to true, will return all users who share a room with the user from the homeserver.
* `search_all_users`: Defines whether to search all users visible to your homeserver at the time the search is performed.
If set to true, will return all users known to the homeserver matching the search query.
If false, search results will only contain users
visible in public rooms and users sharing a room with the requester.