Compare commits

...

14 Commits

Author SHA1 Message Date
dependabot[bot]
26562f1ebb Bump gitpython from 3.1.43 to 3.1.44
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.43 to 3.1.44.
- [Release notes](https://github.com/gitpython-developers/GitPython/releases)
- [Changelog](https://github.com/gitpython-developers/GitPython/blob/main/CHANGES)
- [Commits](https://github.com/gitpython-developers/GitPython/compare/3.1.43...3.1.44)

---
updated-dependencies:
- dependency-name: gitpython
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-03 10:55:22 +00:00
Andrew Morgan
ac1bf682ff Allow (un)block_room storage functions to be called on workers (#18119)
This is so workers can call these functions.

This was preventing the [Delete Room Admin
API](https://element-hq.github.io/synapse/latest/admin_api/rooms.html#version-2-new-version)
from succeeding when `block: true` was specified. This was because we
had `run_background_tasks_on` configured to run on a separate worker.

As workers weren't able to call the `block_room` storage function before
this PR, the (delete room) task failed when taken off the queue by the
worker.
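
For context, the failing setup looked roughly like the following. This is a hedged sketch of the relevant homeserver config; the worker name is illustrative, not taken from the commit:

```yaml
# Illustrative Synapse config: background tasks run on a dedicated worker,
# so the delete-room task (and its block_room call) executes there rather
# than on the main process. The worker name is hypothetical.
run_background_tasks_on: background_worker1
```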
2025-01-30 20:48:12 +00:00
Eric Eastwood
a0b70473fc Raise an error if someone is using an incorrect suffix in a config duration string (#18112)
Previously, a value like `5q` would be interpreted as 5 milliseconds. We
should just raise an error instead of letting someone run with a
misconfiguration.
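
The old behavior can be sketched as a suffix lookup that silently fell through to milliseconds; the fix is to reject unknown suffixes. This is a simplified illustration, not Synapse's actual `parse_duration` implementation:

```python
# Sketch of parsing a config duration string like "5s" or "2h" into
# milliseconds. Unknown suffixes (e.g. "5q") now raise an error instead
# of being silently treated as milliseconds. Simplified; illustrative only.
SUFFIX_TO_MS = {
    "s": 1000,
    "m": 60 * 1000,
    "h": 60 * 60 * 1000,
    "d": 24 * 60 * 60 * 1000,
    "w": 7 * 24 * 60 * 60 * 1000,
    "y": 365 * 24 * 60 * 60 * 1000,
}

def parse_duration_ms(value: str) -> int:
    if value.isdigit():
        # Bare numbers are taken as milliseconds.
        return int(value)
    number, suffix = value[:-1], value[-1]
    if suffix not in SUFFIX_TO_MS or not number.isdigit():
        raise ValueError(f"Invalid duration suffix in {value!r}")
    return int(number) * SUFFIX_TO_MS[suffix]
```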
2025-01-29 18:14:02 -06:00
Devon Hudson
95a85b1129 Merge branch 'master' into develop 2025-01-28 09:23:26 -07:00
Will Hunt
628351b98d Never autojoin deactivated & suspended users. (#18073)
This PR changes the logic so that deactivated users are always ignored.
Suspended users were already effectively ignored as Synapse forbids a
join while suspended.
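
The changed check can be sketched as follows; the names here are illustrative, not Synapse's actual API:

```python
from dataclasses import dataclass

@dataclass
class UserStatus:
    # Hypothetical container for the two account flags the commit cares about.
    deactivated: bool = False
    suspended: bool = False

def should_autojoin(status: UserStatus) -> bool:
    """Skip auto-joining users who could not complete the join anyway."""
    return not (status.deactivated or status.suspended)
```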

---------

Co-authored-by: Devon Hudson <devon.dmytro@gmail.com>
2025-01-28 00:37:24 +00:00
dependabot[bot]
8f27b3af07 Bump python-multipart from 0.0.18 to 0.0.20 (#18096)
Bumps [python-multipart](https://github.com/Kludex/python-multipart)
from 0.0.18 to 0.0.20.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/Kludex/python-multipart/releases">python-multipart's
releases</a>.</em></p>
<blockquote>
<h2>Version 0.0.20</h2>
<h2>What's Changed</h2>
<ul>
<li>Handle messages containing only end boundary, fixes <a
href="https://redirect.github.com/Kludex/python-multipart/issues/38">#38</a>
by <a href="https://github.com/jhnstrk"><code>@​jhnstrk</code></a> in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/142">Kludex/python-multipart#142</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/Mr-Sunglasses"><code>@​Mr-Sunglasses</code></a>
made their first contribution in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/185">Kludex/python-multipart#185</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/Kludex/python-multipart/compare/0.0.19...0.0.20">https://github.com/Kludex/python-multipart/compare/0.0.19...0.0.20</a></p>
<h2>Version 0.0.19</h2>
<h2>What's Changed</h2>
<ul>
<li>Don't warn when CRLF is found after last boundary by <a
href="https://github.com/Kludex"><code>@​Kludex</code></a> in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/193">Kludex/python-multipart#193</a></li>
</ul>
<hr />
<p><strong>Full Changelog</strong>: <a
href="https://github.com/Kludex/python-multipart/compare/0.0.18...0.0.19">https://github.com/Kludex/python-multipart/compare/0.0.18...0.0.19</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/Kludex/python-multipart/blob/master/CHANGELOG.md">python-multipart's
changelog</a>.</em></p>
<blockquote>
<h2>0.0.20 (2024-12-16)</h2>
<ul>
<li>Handle messages containing only end boundary <a
href="https://redirect.github.com/Kludex/python-multipart/pull/142">#142</a>.</li>
</ul>
<h2>0.0.19 (2024-11-30)</h2>
<ul>
<li>Don't warn when CRLF is found after last boundary on
<code>MultipartParser</code> <a
href="https://redirect.github.com/Kludex/python-multipart/pull/193">#193</a>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b083cef4d6"><code>b083cef</code></a>
Version 0.0.20 (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/197">#197</a>)</li>
<li><a
href="04d3cf5ef5"><code>04d3cf5</code></a>
Handle messages containing only end boundary, fixes <a
href="https://redirect.github.com/Kludex/python-multipart/issues/38">#38</a>
(<a
href="https://redirect.github.com/Kludex/python-multipart/issues/142">#142</a>)</li>
<li><a
href="f1c5a2821b"><code>f1c5a28</code></a>
feat: Add python 3.13 in CI matrix. (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/185">#185</a>)</li>
<li><a
href="4bffa0c7c6"><code>4bffa0c</code></a>
doc: A file parameter is not a field (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/127">#127</a>)</li>
<li><a
href="6f3295bc79"><code>6f3295b</code></a>
Bump astral-sh/setup-uv from 3 to 4 in the github-actions group (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/194">#194</a>)</li>
<li><a
href="c4fe4d3ceb"><code>c4fe4d3</code></a>
Don't warn when CRLF is found after last boundary (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/193">#193</a>)</li>
<li>See full diff in <a
href="https://github.com/Kludex/python-multipart/compare/0.0.18...0.0.20">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=python-multipart&package-manager=pip&previous-version=0.0.18&new-version=0.0.20)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-27 21:28:00 +00:00
dependabot[bot]
579f4ac1cd Bump serde_json from 1.0.135 to 1.0.137 (#18099)
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.135 to
1.0.137.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/serde-rs/json/releases">serde_json's
releases</a>.</em></p>
<blockquote>
<h2>v1.0.137</h2>
<ul>
<li>Turn on &quot;float_roundtrip&quot; and &quot;unbounded_depth&quot;
features for serde_json in play.rust-lang.org (<a
href="https://redirect.github.com/serde-rs/json/issues/1231">#1231</a>)</li>
</ul>
<h2>v1.0.136</h2>
<ul>
<li>Optimize serde_json::value::Serializer::serialize_map by using
Map::with_capacity (<a
href="https://redirect.github.com/serde-rs/json/issues/1230">#1230</a>,
thanks <a
href="https://github.com/goffrie"><code>@​goffrie</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="eb49e28204"><code>eb49e28</code></a>
Release 1.0.137</li>
<li><a
href="51c48ab3b0"><code>51c48ab</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/json/issues/1231">#1231</a>
from dtolnay/playground</li>
<li><a
href="7d8f15b963"><code>7d8f15b</code></a>
Enable &quot;float_roundtrip&quot; and &quot;unbounded_depth&quot;
features in playground</li>
<li><a
href="a46f14cf2e"><code>a46f14c</code></a>
Release 1.0.136</li>
<li><a
href="eb9f3f6387"><code>eb9f3f6</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/json/issues/1230">#1230</a>
from goffrie/patch-1</li>
<li><a
href="513e5b2f74"><code>513e5b2</code></a>
Use Map::with_capacity in value::Serializer::serialize_map</li>
<li>See full diff in <a
href="https://github.com/serde-rs/json/compare/v1.0.135...v1.0.137">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=serde_json&package-manager=cargo&previous-version=1.0.135&new-version=1.0.137)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-27 21:24:07 +00:00
dependabot[bot]
c53999dab8 Bump types-bleach from 6.1.0.20240331 to 6.2.0.20241123 (#18082)
Bumps [types-bleach](https://github.com/python/typeshed) from
6.1.0.20240331 to 6.2.0.20241123.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/python/typeshed/commits">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=types-bleach&package-manager=pip&previous-version=6.1.0.20240331&new-version=6.2.0.20241123)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-27 21:04:41 +00:00
Andrew Morgan
b41a9ebb38 OIDC: increase length of generated nonce parameter from 30->32 chars (#18109) 2025-01-27 18:39:51 +00:00
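Generating such a nonce can be sketched with the standard library. This is illustrative only; Synapse's actual generator may use a different alphabet or mechanism:

```python
import secrets
import string

def generate_nonce(length: int = 32) -> str:
    # 32 random characters drawn from an alphanumeric alphabet;
    # the commit bumps the length from 30 to 32 characters.
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```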
Eric Eastwood
6ec5e13ec9 Fix join being denied after being invited over federation (#18075)
This also happens when rejecting an invite. Basically, it affects any out-of-band membership transition where we first get the membership as an `outlier` and then rely on federation filling us in to de-outlier it.

This PR mainly addresses automated test flakiness, bots/scripts, and options within Synapse like [`auto_accept_invites`](https://element-hq.github.io/synapse/v1.122/usage/configuration/config_documentation.html#auto_accept_invites) that are able to react quickly (before federation is able to push us events), but also helps in generic scenarios where federation is lagging.

I initially thought this might be a Synapse consistency issue (see issues labeled with [`Z-Read-After-Write`](https://github.com/matrix-org/synapse/labels/Z-Read-After-Write)) but it seems to be an event auth logic problem. Workers probably do increase the number of possible race condition scenarios that make this visible though (replication and cache invalidation lag).

Fix https://github.com/element-hq/synapse/issues/15012
(probably also fixes the original report at https://github.com/matrix-org/synapse/issues/15012)
Related to https://github.com/matrix-org/matrix-spec/issues/2062

Problems:

 1. We don't consider [out-of-band membership](https://github.com/element-hq/synapse/blob/develop/docs/development/room-dag-concepts.md#out-of-band-membership-events) (outliers) in our `event_auth` logic even though we expose them in `/sync`.
 1. (This PR doesn't address this point) Perhaps we should also auth events in the persistence queue, since events already in the queue could allow subsequent events to pass auth (events come through many channels: federation transaction, remote invite, remote join, local send). But this doesn't save us in the case where the event is more delayed over federation.


### What happened before?

I wrote some Complement test that stresses this exact scenario and reproduces the problem: https://github.com/matrix-org/complement/pull/757

```
COMPLEMENT_ALWAYS_PRINT_SERVER_LOGS=1 COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestSynapseConsistency
```


We have `hs1` and `hs2` running in monolith mode (no workers):

 1. `@charlie1:hs2` is invited and joins the room:
     1. `hs1` invites `@charlie1:hs2` to a room which we receive on `hs2` as `PUT /_matrix/federation/v1/invite/{roomId}/{eventId}` (`on_invite_request(...)`) and the invite membership is persisted as an outlier. The `room_memberships` and `local_current_membership` database tables are also updated which means they are visible down `/sync` at this point.
     1. `@charlie1:hs2` decides to join because it saw the invite down `/sync`. Because `hs2` is not yet in the room, this happens as a remote join `make_join`/`send_join` which comes back with all of the auth events needed to auth successfully and now `@charlie1:hs2` is successfully joined to the room.
 1. `@charlie2:hs2` is invited and tries to join the room:
     1. `hs1` invites `@charlie2:hs2` to the room which we receive on `hs2` as `PUT /_matrix/federation/v1/invite/{roomId}/{eventId}` (`on_invite_request(...)`) and the invite membership is persisted as an outlier. The `room_memberships` and `local_current_membership` database tables are also updated which means they are visible down `/sync` at this point.
     1. Because `hs2` is already participating in the room, we also see the invite come over federation in a transaction and we start processing it (not done yet, see below)
     1. `@charlie2:hs2` decides to join because it saw the invite down `/sync`. Because `hs2` is already in the room, this happens as a local join, but we deny the event because our `event_auth` logic thinks that we have no membership in the room (even though we expected to be able to join because we saw the invite down `/sync`)
     1. We finally finish processing the `@charlie2:hs2` invite event from federation and de-outlier it.
         - If this had finished before we tried to join, we would have been fine; this is the race condition that makes the problem visible.
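
The gist of problem 1 above can be sketched as follows: the join auth check must also consult out-of-band (outlier) invite memberships, since those are already exposed down `/sync`. All names and data structures here are hypothetical illustrations, not Synapse's real data model:

```python
# Illustrative model of the race: an invite stored as an outlier is visible
# to clients via local_current_membership, so the join auth check must
# consult it too. All names and structures here are hypothetical.

def current_membership(local_current_membership, auth_state, user_id):
    # Old behavior: only look at the room's auth/state events, which do
    # not include outlier (out-of-band) memberships.
    membership = auth_state.get(("m.room.member", user_id))
    if membership is None:
        # Fixed behavior: fall back to the locally stored membership,
        # which does include the out-of-band invite.
        membership = local_current_membership.get(user_id)
    return membership

def can_join(local_current_membership, auth_state, user_id):
    return current_membership(
        local_current_membership, auth_state, user_id
    ) in ("invite", "join")
```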


Logs for `hs2`:

```
🗳️ on_invite_request: handling event <FrozenEventV3 event_id=$PRPCvdXdcqyjdUKP_NxGF2CcukmwOaoK0ZR1WiVOZVk, type=m.room.member, state_key=@user-2-charlie1:hs2, membership=invite, outlier=False>
🔦 _store_room_members_txn update room_memberships: <FrozenEventV3 event_id=$PRPCvdXdcqyjdUKP_NxGF2CcukmwOaoK0ZR1WiVOZVk, type=m.room.member, state_key=@user-2-charlie1:hs2, membership=invite, outlier=True>
🔦 _store_room_members_txn update local_current_membership: <FrozenEventV3 event_id=$PRPCvdXdcqyjdUKP_NxGF2CcukmwOaoK0ZR1WiVOZVk, type=m.room.member, state_key=@user-2-charlie1:hs2, membership=invite, outlier=True>
📨 Notifying about new event <FrozenEventV3 event_id=$PRPCvdXdcqyjdUKP_NxGF2CcukmwOaoK0ZR1WiVOZVk, type=m.room.member, state_key=@user-2-charlie1:hs2, membership=invite, outlier=True>
 on_invite_request: handled event <FrozenEventV3 event_id=$PRPCvdXdcqyjdUKP_NxGF2CcukmwOaoK0ZR1WiVOZVk, type=m.room.member, state_key=@user-2-charlie1:hs2, membership=invite, outlier=True>
🧲 do_invite_join for @user-2-charlie1:hs2 in !sfZVBdLUezpPWetrol:hs1
🔦 _store_room_members_txn update room_memberships: <FrozenEventV3 event_id=$bwv8LxFnqfpsw_rhR7OrTjtz09gaJ23MqstKOcs7ygA, type=m.room.member, state_key=@user-1-alice:hs1, membership=join, outlier=True>
🔦 _store_room_members_txn update room_memberships: <FrozenEventV3 event_id=$oju1ts3G3pz5O62IesrxX5is4LxAwU3WPr4xvid5ijI, type=m.room.member, state_key=@user-2-charlie1:hs2, membership=join, outlier=False>
📨 Notifying about new event <FrozenEventV3 event_id=$oju1ts3G3pz5O62IesrxX5is4LxAwU3WPr4xvid5ijI, type=m.room.member, state_key=@user-2-charlie1:hs2, membership=join, outlier=False>

...

🗳️ on_invite_request: handling event <FrozenEventV3 event_id=$O_54j7O--6xMsegY5EVZ9SA-mI4_iHJOIoRwYyeWIPY, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=invite, outlier=False>
🔦 _store_room_members_txn update room_memberships: <FrozenEventV3 event_id=$O_54j7O--6xMsegY5EVZ9SA-mI4_iHJOIoRwYyeWIPY, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=invite, outlier=True>
🔦 _store_room_members_txn update local_current_membership: <FrozenEventV3 event_id=$O_54j7O--6xMsegY5EVZ9SA-mI4_iHJOIoRwYyeWIPY, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=invite, outlier=True>
📨 Notifying about new event <FrozenEventV3 event_id=$O_54j7O--6xMsegY5EVZ9SA-mI4_iHJOIoRwYyeWIPY, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=invite, outlier=True>
 on_invite_request: handled event <FrozenEventV3 event_id=$O_54j7O--6xMsegY5EVZ9SA-mI4_iHJOIoRwYyeWIPY, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=invite, outlier=True>
📬 handling received PDU in room !sfZVBdLUezpPWetrol:hs1: <FrozenEventV3 event_id=$O_54j7O--6xMsegY5EVZ9SA-mI4_iHJOIoRwYyeWIPY, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=invite, outlier=False>
📮 handle_new_client_event: handling <FrozenEventV3 event_id=$WNVDTQrxy5tCdPQHMyHyIn7tE4NWqKsZ8Bn8R4WbBSA, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=join, outlier=False>
 Denying new event <FrozenEventV3 event_id=$WNVDTQrxy5tCdPQHMyHyIn7tE4NWqKsZ8Bn8R4WbBSA, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=join, outlier=False> because 403: You are not invited to this room.
synapse.http.server - 130 - INFO - POST-16 - <SynapseRequest at 0x7f460c91fbf0 method='POST' uri='/_matrix/client/v3/join/%21sfZVBdLUezpPWetrol:hs1?server_name=hs1' clientproto='HTTP/1.0' site='8080'> SynapseError: 403 - You are not invited to this room.
📨 Notifying about new event <FrozenEventV3 event_id=$O_54j7O--6xMsegY5EVZ9SA-mI4_iHJOIoRwYyeWIPY, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=invite, outlier=False>
 handled received PDU in room !sfZVBdLUezpPWetrol:hs1: <FrozenEventV3 event_id=$O_54j7O--6xMsegY5EVZ9SA-mI4_iHJOIoRwYyeWIPY, type=m.room.member, state_key=@user-3-charlie2:hs2, membership=invite, outlier=False>
```
2025-01-27 11:21:10 -06:00
dependabot[bot]
148e93576e Bump log from 0.4.22 to 0.4.25 (#18098)
Bumps [log](https://github.com/rust-lang/log) from 0.4.22 to 0.4.25.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/rust-lang/log/releases">log's
releases</a>.</em></p>
<blockquote>
<h2>0.4.25</h2>
<h2>What's Changed</h2>
<ul>
<li>Revert loosening of kv cargo features by <a
href="https://github.com/KodrAus"><code>@​KodrAus</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/662">rust-lang/log#662</a></li>
<li>Prepare for 0.4.25 release by <a
href="https://github.com/KodrAus"><code>@​KodrAus</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/663">rust-lang/log#663</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/rust-lang/log/compare/0.4.24...0.4.25">https://github.com/rust-lang/log/compare/0.4.24...0.4.25</a></p>
<h2>0.4.24 (yanked)</h2>
<h2>What's Changed</h2>
<ul>
<li>Fix up kv feature activation by <a
href="https://github.com/KodrAus"><code>@​KodrAus</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/659">rust-lang/log#659</a></li>
<li>Prepare for 0.4.24 release by <a
href="https://github.com/KodrAus"><code>@​KodrAus</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/660">rust-lang/log#660</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/rust-lang/log/compare/0.4.23...0.4.24">https://github.com/rust-lang/log/compare/0.4.23...0.4.24</a></p>
<h2>0.4.23 (yanked)</h2>
<h2>What's Changed</h2>
<ul>
<li>Fix some typos by <a
href="https://github.com/Kleinmarb"><code>@​Kleinmarb</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/637">rust-lang/log#637</a></li>
<li>Add logforth to implementation by <a
href="https://github.com/tisonkun"><code>@​tisonkun</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/638">rust-lang/log#638</a></li>
<li>Add <code>spdlog-rs</code> link to README by <a
href="https://github.com/SpriteOvO"><code>@​SpriteOvO</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/639">rust-lang/log#639</a></li>
<li>Add correct lifetime to kv::Value::to_borrowed_str by <a
href="https://github.com/stevenroose"><code>@​stevenroose</code></a> in
<a
href="https://redirect.github.com/rust-lang/log/pull/643">rust-lang/log#643</a></li>
<li>docs: Add logforth as an impl by <a
href="https://github.com/tisonkun"><code>@​tisonkun</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/642">rust-lang/log#642</a></li>
<li>Add clang_log implementation by <a
href="https://github.com/DDAN-17"><code>@​DDAN-17</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/646">rust-lang/log#646</a></li>
<li>Bind lifetimes of &amp;str returned from Key by the lifetime of 'k
rather than the lifetime of the Key struct by <a
href="https://github.com/gbbosak"><code>@​gbbosak</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/648">rust-lang/log#648</a>
(reverted)</li>
<li>Fix up key lifetimes and add method to try get a borrowed key by <a
href="https://github.com/KodrAus"><code>@​KodrAus</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/653">rust-lang/log#653</a></li>
<li>Add Ftail implementation by <a
href="https://github.com/tjardoo"><code>@​tjardoo</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/652">rust-lang/log#652</a></li>
<li>Relax feature flag for value's std_support by <a
href="https://github.com/tisonkun"><code>@​tisonkun</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/657">rust-lang/log#657</a></li>
<li>Prepare for 0.4.23 release by <a
href="https://github.com/KodrAus"><code>@​KodrAus</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/656">rust-lang/log#656</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/Kleinmarb"><code>@​Kleinmarb</code></a>
made their first contribution in <a
href="https://redirect.github.com/rust-lang/log/pull/637">rust-lang/log#637</a></li>
<li><a href="https://github.com/tisonkun"><code>@​tisonkun</code></a>
made their first contribution in <a
href="https://redirect.github.com/rust-lang/log/pull/638">rust-lang/log#638</a></li>
<li><a href="https://github.com/SpriteOvO"><code>@​SpriteOvO</code></a>
made their first contribution in <a
href="https://redirect.github.com/rust-lang/log/pull/639">rust-lang/log#639</a></li>
<li><a
href="https://github.com/stevenroose"><code>@​stevenroose</code></a>
made their first contribution in <a
href="https://redirect.github.com/rust-lang/log/pull/643">rust-lang/log#643</a></li>
<li><a href="https://github.com/DDAN-17"><code>@​DDAN-17</code></a> made
their first contribution in <a
href="https://redirect.github.com/rust-lang/log/pull/646">rust-lang/log#646</a></li>
<li><a href="https://github.com/gbbosak"><code>@​gbbosak</code></a> made
their first contribution in <a
href="https://redirect.github.com/rust-lang/log/pull/648">rust-lang/log#648</a></li>
<li><a href="https://github.com/tjardoo"><code>@​tjardoo</code></a> made
their first contribution in <a
href="https://redirect.github.com/rust-lang/log/pull/652">rust-lang/log#652</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/rust-lang/log/compare/0.4.22...0.4.23">https://github.com/rust-lang/log/compare/0.4.22...0.4.23</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/rust-lang/log/blob/master/CHANGELOG.md">log's
changelog</a>.</em></p>
<blockquote>
<h2>[0.4.25] - 2025-01-14</h2>
<h2>What's Changed</h2>
<ul>
<li>Revert loosening of kv cargo features by <a
href="https://github.com/KodrAus"><code>@​KodrAus</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/662">rust-lang/log#662</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/rust-lang/log/compare/0.4.24...0.4.25">https://github.com/rust-lang/log/compare/0.4.24...0.4.25</a></p>
<h2>[0.4.24] - 2025-01-11</h2>
<h2>What's Changed</h2>
<ul>
<li>Fix up kv feature activation by <a
href="https://github.com/KodrAus"><code>@​KodrAus</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/659">rust-lang/log#659</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/rust-lang/log/compare/0.4.23...0.4.24">https://github.com/rust-lang/log/compare/0.4.23...0.4.24</a></p>
<h2>[0.4.23] - 2025-01-10 (yanked)</h2>
<h2>What's Changed</h2>
<ul>
<li>Fix some typos by <a
href="https://github.com/Kleinmarb"><code>@​Kleinmarb</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/637">rust-lang/log#637</a></li>
<li>Add logforth to implementation by <a
href="https://github.com/tisonkun"><code>@​tisonkun</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/638">rust-lang/log#638</a></li>
<li>Add <code>spdlog-rs</code> link to README by <a
href="https://github.com/SpriteOvO"><code>@​SpriteOvO</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/639">rust-lang/log#639</a></li>
<li>Add correct lifetime to kv::Value::to_borrowed_str by <a
href="https://github.com/stevenroose"><code>@​stevenroose</code></a> in
<a
href="https://redirect.github.com/rust-lang/log/pull/643">rust-lang/log#643</a></li>
<li>docs: Add logforth as an impl by <a
href="https://github.com/tisonkun"><code>@​tisonkun</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/642">rust-lang/log#642</a></li>
<li>Add clang_log implementation by <a
href="https://github.com/DDAN-17"><code>@​DDAN-17</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/646">rust-lang/log#646</a></li>
<li>Bind lifetimes of &amp;str returned from Key by the lifetime of 'k
rather than the lifetime of the Key struct by <a
href="https://github.com/gbbosak"><code>@​gbbosak</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/648">rust-lang/log#648</a></li>
<li>Fix up key lifetimes and add method to try get a borrowed key by <a
href="https://github.com/KodrAus"><code>@​KodrAus</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/653">rust-lang/log#653</a></li>
<li>Add Ftail implementation by <a
href="https://github.com/tjardoo"><code>@​tjardoo</code></a> in <a
href="https://redirect.github.com/rust-lang/log/pull/652">rust-lang/log#652</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/Kleinmarb"><code>@​Kleinmarb</code></a>
made their first contribution in <a
href="https://redirect.github.com/rust-lang/log/pull/637">rust-lang/log#637</a></li>
<li><a href="https://github.com/tisonkun"><code>@​tisonkun</code></a>
made their first contribution in <a
href="https://redirect.github.com/rust-lang/log/pull/638">rust-lang/log#638</a></li>
<li><a href="https://github.com/SpriteOvO"><code>@​SpriteOvO</code></a>
made their first contribution in <a
href="https://redirect.github.com/rust-lang/log/pull/639">rust-lang/log#639</a></li>
<li><a
href="https://github.com/stevenroose"><code>@​stevenroose</code></a>
made their first contribution in <a
href="https://redirect.github.com/rust-lang/log/pull/643">rust-lang/log#643</a></li>
<li><a href="https://github.com/DDAN-17"><code>@​DDAN-17</code></a> made
their first contribution in <a
href="https://redirect.github.com/rust-lang/log/pull/646">rust-lang/log#646</a></li>
<li><a href="https://github.com/gbbosak"><code>@​gbbosak</code></a> made
their first contribution in <a
href="https://redirect.github.com/rust-lang/log/pull/648">rust-lang/log#648</a></li>
<li><a href="https://github.com/tjardoo"><code>@​tjardoo</code></a> made
their first contribution in <a
href="https://redirect.github.com/rust-lang/log/pull/652">rust-lang/log#652</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/rust-lang/log/compare/0.4.22...0.4.23">https://github.com/rust-lang/log/compare/0.4.22...0.4.23</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="22be810729"><code>22be810</code></a>
Merge pull request <a
href="https://redirect.github.com/rust-lang/log/issues/663">#663</a>
from rust-lang/cargo/0.4.25</li>
<li><a
href="0279730123"><code>0279730</code></a>
prepare for 0.4.25 release</li>
<li><a
href="4099bcb357"><code>4099bcb</code></a>
Merge pull request <a
href="https://redirect.github.com/rust-lang/log/issues/662">#662</a>
from rust-lang/fix/cargo-features</li>
<li><a
href="36e7e3f696"><code>36e7e3f</code></a>
revert loosening of kv cargo features</li>
<li><a
href="2282191854"><code>2282191</code></a>
Merge pull request <a
href="https://redirect.github.com/rust-lang/log/issues/660">#660</a>
from rust-lang/cargo/0.4.24</li>
<li><a
href="2994f0a62c"><code>2994f0a</code></a>
prepare for 0.4.24 release</li>
<li><a
href="5fcb50eccd"><code>5fcb50e</code></a>
Merge pull request <a
href="https://redirect.github.com/rust-lang/log/issues/659">#659</a>
from rust-lang/fix/feature-builds</li>
<li><a
href="29fe9e60ff"><code>29fe9e6</code></a>
fix up feature activation</li>
<li><a
href="b1824f2c28"><code>b1824f2</code></a>
use cargo hack in CI to test all feature combinations</li>
<li><a
href="e6b643d591"><code>e6b643d</code></a>
Merge pull request <a
href="https://redirect.github.com/rust-lang/log/issues/656">#656</a>
from rust-lang/cargo/0.4.23</li>
<li>Additional commits viewable in <a
href="https://github.com/rust-lang/log/compare/0.4.22...0.4.25">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=log&package-manager=cargo&previous-version=0.4.22&new-version=0.4.25)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-27 15:23:28 +00:00
dependabot[bot]
56ed412839 Bump dawidd6/action-download-artifact from 7 to 8 (#18108)
Bumps
[dawidd6/action-download-artifact](https://github.com/dawidd6/action-download-artifact)
from 7 to 8.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/dawidd6/action-download-artifact/releases">dawidd6/action-download-artifact's
releases</a>.</em></p>
<blockquote>
<h2>v8</h2>
<h2>New features</h2>
<ul>
<li><code>use_unzip</code> boolean input (defaulting to false) - if set
to true, the action will use system provided <code>unzip</code> utility
for unpacking downloaded artifact(s) (note that the action will first
download the .zip artifact file, then unpack it and remove the .zip
file)</li>
</ul>
<h2>What's Changed</h2>
<ul>
<li>README: v7 by <a
href="https://github.com/haines"><code>@​haines</code></a> in <a
href="https://redirect.github.com/dawidd6/action-download-artifact/pull/318">dawidd6/action-download-artifact#318</a></li>
<li>Unzip by <a
href="https://github.com/dawidd6"><code>@​dawidd6</code></a> in <a
href="https://redirect.github.com/dawidd6/action-download-artifact/pull/325">dawidd6/action-download-artifact#325</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/haines"><code>@​haines</code></a> made
their first contribution in <a
href="https://redirect.github.com/dawidd6/action-download-artifact/pull/318">dawidd6/action-download-artifact#318</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/dawidd6/action-download-artifact/compare/v7...v8">https://github.com/dawidd6/action-download-artifact/compare/v7...v8</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="20319c5641"><code>20319c5</code></a>
README: v8</li>
<li><a
href="e58a9e5d14"><code>e58a9e5</code></a>
Unzip (<a
href="https://redirect.github.com/dawidd6/action-download-artifact/issues/325">#325</a>)</li>
<li><a
href="6d05268723"><code>6d05268</code></a>
node_modules: update</li>
<li><a
href="c03fb0c928"><code>c03fb0c</code></a>
README: v7 (<a
href="https://redirect.github.com/dawidd6/action-download-artifact/issues/318">#318</a>)</li>
<li>See full diff in <a
href="80620a5d27...20319c5641">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=dawidd6/action-download-artifact&package-manager=github_actions&previous-version=7&new-version=8)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-27 15:20:41 +00:00
Sven Mäder
9c5d08fff8 Ratelimit presence updates (#18000) 2025-01-24 19:58:01 +00:00
Max Kratz
90a6bd01c2 Contrib: Docker: updates PostgreSQL version in docker-compose.yml (#18089)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-01-21 18:54:31 +00:00
38 changed files with 1902 additions and 548 deletions


@@ -14,7 +14,7 @@ jobs:
# There's a 'download artifact' action, but it hasn't been updated for the workflow_run action
# (https://github.com/actions/download-artifact/issues/60) so instead we get this mess:
- name: 📥 Download artifact
uses: dawidd6/action-download-artifact@80620a5d27ce0ae443b965134db88467fc607b43 # v7
uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
with:
workflow: docs-pr.yaml
run_id: ${{ github.event.workflow_run.id }}

Cargo.lock (generated)

@@ -216,9 +216,9 @@ checksum = "ae743338b92ff9146ce83992f766a31066a91a8c84a45e0e9f21e7cf6de6d346"
[[package]]
name = "log"
version = "0.4.22"
version = "0.4.25"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a7a70ba024b9dc04c27ea2f0c0548feb474ec5c54bba33a7f72f873a39d07b24"
checksum = "04cbf5b083de1c7e0222a7a51dbfdba1cbe1c6ab0b15e29fff3f6c077fd9cd9f"
[[package]]
name = "memchr"
@@ -449,9 +449,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.135"
version = "1.0.137"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2b0d7ba2887406110130a978386c4e1befb98c674b4fba677954e4db976630d9"
checksum = "930cfb6e6abf99298aaad7d29abbef7a9999a9a8806a40088f55f0dcec03146b"
dependencies = [
"itoa",
"memchr",

changelog.d/18000.bugfix (new file)

@@ -0,0 +1 @@
Add rate limit `rc_presence.per_user`. This prevents load from excessive presence updates sent by clients via the sync API. Also rate limit `/_matrix/client/v3/presence` as per the spec. Contributed by @rda0.

changelog.d/18073.bugfix (new file)

@@ -0,0 +1 @@
Deactivated users will no longer automatically accept an invite when `auto_accept_invites` is enabled.

changelog.d/18075.bugfix (new file)

@@ -0,0 +1 @@
Fix join being denied after being invited over federation. Also fixes other out-of-band membership transitions.

changelog.d/18089.bugfix (new file)

@@ -0,0 +1,2 @@
Updates the contributed `docker-compose.yml` file to PostgreSQL v15, as v12 is no longer supported by Synapse.
Contributed by @maxkratz.

changelog.d/18109.misc (new file)

@@ -0,0 +1 @@
Increase the length of the generated `nonce` parameter when performing OIDC logins to comply with the TI-Messenger spec.

changelog.d/18112.bugfix (new file)

@@ -0,0 +1 @@
Raise an error if someone is using an incorrect suffix in a config duration string.

changelog.d/18119.bugfix (new file)

@@ -0,0 +1 @@
Fix a bug where the [Delete Room Admin API](https://element-hq.github.io/synapse/latest/admin_api/rooms.html#version-2-new-version) would fail if the `block` parameter was set to `true` and a worker other than the main process was configured to handle background tasks.


@@ -51,7 +51,7 @@ services:
- traefik.http.routers.https-synapse.tls.certResolver=le-ssl
db:
image: docker.io/postgres:12-alpine
image: docker.io/postgres:15-alpine
# Change that password, of course!
environment:
- POSTGRES_USER=synapse


@@ -89,6 +89,11 @@ rc_invites:
per_second: 1000
burst_count: 1000
rc_presence:
per_user:
per_second: 9999
burst_count: 9999
federation_rr_transactions_per_room_per_second: 9999
allow_device_name_lookup_over_federation: true


@@ -1868,6 +1868,27 @@ rc_federation:
concurrent: 5
```
---
### `rc_presence`
This option sets ratelimiting for presence.
The `rc_presence.per_user` option sets rate limits on how often a specific
user's presence updates are evaluated. Ratelimited presence updates sent via sync are
ignored, and no error is returned to the client.
This option also sets the rate limit for the
[`PUT /_matrix/client/v3/presence/{userId}/status`](https://spec.matrix.org/latest/client-server-api/#put_matrixclientv3presenceuseridstatus)
endpoint.
`per_user` defaults to `per_second: 0.1`, `burst_count: 1`.
Example configuration:
```yaml
rc_presence:
per_user:
per_second: 0.05
burst_count: 0.5
```
---
### `federation_rr_transactions_per_room_per_second`
Sets outgoing federation transaction frequency for sending read-receipts,

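The `per_second`/`burst_count` pair in these ratelimit settings follows the usual token-bucket shape. A minimal sketch of that semantics (an illustrative model only, not Synapse's actual `Ratelimiter`):

```python
class TokenBucket:
    """Illustrative model of `per_second`/`burst_count` ratelimit settings.

    `burst_count` actions may happen at once; afterwards tokens refill at
    `per_second`. This is a sketch, not Synapse's `Ratelimiter` class.
    """

    def __init__(self, per_second: float, burst_count: float, now: float = 0.0) -> None:
        self.per_second = per_second
        self.burst_count = burst_count
        self.tokens = burst_count
        self.last = now

    def allow(self, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        elapsed = now - self.last
        self.tokens = min(self.burst_count, self.tokens + elapsed * self.per_second)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With the documented default of `per_second: 0.1`, `burst_count: 1`, one presence update is accepted immediately and the next only about ten seconds later.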
poetry.lock (generated)

@@ -472,20 +472,20 @@ smmap = ">=3.0.1,<6"
[[package]]
name = "gitpython"
version = "3.1.43"
version = "3.1.44"
description = "GitPython is a Python library used to interact with Git repositories"
optional = false
python-versions = ">=3.7"
files = [
{file = "GitPython-3.1.43-py3-none-any.whl", hash = "sha256:eec7ec56b92aad751f9912a73404bc02ba212a23adb2c7098ee668417051a1ff"},
{file = "GitPython-3.1.43.tar.gz", hash = "sha256:35f314a9f878467f5453cc1fee295c3e18e52f1b99f10f6cf5b1682e968a9e7c"},
{file = "GitPython-3.1.44-py3-none-any.whl", hash = "sha256:9e0e10cda9bed1ee64bc9a6de50e7e38a9c9943241cd7f585f6df3ed28011110"},
{file = "gitpython-3.1.44.tar.gz", hash = "sha256:c87e30b26253bf5418b01b0660f818967f3c503193838337fe5e573331249269"},
]
[package.dependencies]
gitdb = ">=4.0.1,<5"
[package.extras]
doc = ["sphinx (==4.3.2)", "sphinx-autodoc-typehints", "sphinx-rtd-theme", "sphinxcontrib-applehelp (>=1.0.2,<=1.0.4)", "sphinxcontrib-devhelp (==1.0.2)", "sphinxcontrib-htmlhelp (>=2.0.0,<=2.0.1)", "sphinxcontrib-qthelp (==1.0.3)", "sphinxcontrib-serializinghtml (==1.1.5)"]
doc = ["sphinx (>=7.1.2,<7.2)", "sphinx-autodoc-typehints", "sphinx_rtd_theme"]
test = ["coverage[toml]", "ddt (>=1.1.1,!=1.4.3)", "mock", "mypy", "pre-commit", "pytest (>=7.3.1)", "pytest-cov", "pytest-instafail", "pytest-mock", "pytest-sugar", "typing-extensions"]
[[package]]
@@ -1960,13 +1960,13 @@ six = ">=1.5"
[[package]]
name = "python-multipart"
version = "0.0.18"
version = "0.0.20"
description = "A streaming multipart parser for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "python_multipart-0.0.18-py3-none-any.whl", hash = "sha256:efe91480f485f6a361427a541db4796f9e1591afc0fb8e7a4ba06bfbc6708996"},
{file = "python_multipart-0.0.18.tar.gz", hash = "sha256:7a68db60c8bfb82e460637fa4750727b45af1d5e2ed215593f917f64694d34fe"},
{file = "python_multipart-0.0.20-py3-none-any.whl", hash = "sha256:8a62d3a8335e06589fe01f2a3e178cdcc632f3fbe0d492ad9ee0ec35aab1f104"},
{file = "python_multipart-0.0.20.tar.gz", hash = "sha256:8dd0cab45b8e23064ae09147625994d090fa46f5b0d1e13af944c331a7fa9d13"},
]
[[package]]
@@ -2706,13 +2706,13 @@ twisted = "*"
[[package]]
name = "types-bleach"
version = "6.1.0.20240331"
version = "6.2.0.20241123"
description = "Typing stubs for bleach"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-bleach-6.1.0.20240331.tar.gz", hash = "sha256:2ee858a84fb06fc2225ff56ba2f7f6c88b65638659efae0d7bfd6b24a1b5a524"},
{file = "types_bleach-6.1.0.20240331-py3-none-any.whl", hash = "sha256:399bc59bfd20a36a56595f13f805e56c8a08e5a5c07903e5cf6fafb5a5107dd4"},
{file = "types_bleach-6.2.0.20241123-py3-none-any.whl", hash = "sha256:c6e58b3646665ca7c6b29890375390f4569e84f0cf5c171e0fe1ddb71a7be86a"},
{file = "types_bleach-6.2.0.20241123.tar.gz", hash = "sha256:dac5fe9015173514da3ac810c1a935619a3ccbcc5d66c4cbf4707eac00539057"},
]
[package.dependencies]


@@ -275,6 +275,7 @@ class Ratelimiter:
update: bool = True,
n_actions: int = 1,
_time_now_s: Optional[float] = None,
pause: Optional[float] = 0.5,
) -> None:
"""Checks if an action can be performed. If not, raises a LimitExceededError
@@ -298,6 +299,8 @@ class Ratelimiter:
at all.
_time_now_s: The current time. Optional, defaults to the current time according
to self.clock. Only used by tests.
pause: Time in seconds to pause when an action is being limited. Defaults to 0.5
to stop clients from "tight-looping" on retrying their request.
Raises:
LimitExceededError: If an action could not be performed, along with the time in
@@ -316,9 +319,8 @@ class Ratelimiter:
)
if not allowed:
# We pause for a bit here to stop clients from "tight-looping" on
# retrying their request.
await self.clock.sleep(0.5)
if pause:
await self.clock.sleep(pause)
raise LimitExceededError(
limiter_name=self._limiter_name,

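The new `pause` argument above boils down to a sleep-before-raise pattern. A minimal standalone sketch, with a hypothetical `ratelimit` helper and exception class standing in for Synapse's:

```python
import asyncio
from typing import Optional


class LimitExceededError(Exception):
    """Stand-in for Synapse's LimitExceededError (hypothetical here)."""


async def ratelimit(allowed: bool, pause: Optional[float] = 0.5) -> None:
    """Sketch of the pause-before-raise pattern from the diff above.

    When an action is rejected, sleep briefly so that clients cannot
    "tight-loop" on retrying; `pause=0.0` (as the /sync code path passes)
    skips the sleep entirely.
    """
    if not allowed:
        if pause:
            await asyncio.sleep(pause)
        raise LimitExceededError("Too Many Requests")
```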

@@ -221,9 +221,13 @@ class Config:
The number of milliseconds in the duration.
Raises:
TypeError, if given something other than an integer or a string
TypeError: if given something other than an integer or a string, or the
duration is using an incorrect suffix.
ValueError: if given a string not of the form described above.
"""
# For integers, we prefer to use `type(value) is int` instead of
# `isinstance(value, int)` because we want to exclude subclasses of int, such as
# bool.
if type(value) is int: # noqa: E721
return value
elif isinstance(value, str):
@@ -246,9 +250,20 @@ class Config:
if suffix in sizes:
value = value[:-1]
size = sizes[suffix]
elif suffix.isdigit():
# No suffix is treated as milliseconds.
value = value
size = 1
else:
raise TypeError(
f"Bad duration suffix {value} (expected no suffix or one of these suffixes: {sizes.keys()})"
)
return int(value) * size
else:
raise TypeError(f"Bad duration {value!r}")
raise TypeError(
f"Bad duration type {value!r} (expected int or string duration)"
)
@staticmethod
def abspath(file_path: str) -> str:

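The duration-parsing behaviour this diff tightens can be sketched as a standalone function; the suffix table below is an assumption for illustration (Synapse accepts `s`/`m`/`h`/`d`/`w`/`y`):

```python
def parse_duration(value) -> int:
    """Parse a config duration into milliseconds, per the diff above.

    A bare int or digit-only string is taken as milliseconds; otherwise the
    string must end in a known suffix. The suffix table is an assumption
    for illustration.
    """
    second = 1000
    sizes = {
        "s": second,
        "m": 60 * second,
        "h": 60 * 60 * second,
        "d": 24 * 60 * 60 * second,
        "w": 7 * 24 * 60 * 60 * second,
        "y": 365 * 24 * 60 * 60 * second,
    }
    # `type(value) is int` (not isinstance) excludes bool, as in the diff.
    if type(value) is int:
        return value
    if isinstance(value, str) and value:
        suffix = value[-1]
        if suffix in sizes:
            return int(value[:-1]) * sizes[suffix]
        if suffix.isdigit():
            # No suffix is treated as milliseconds.
            return int(value)
        raise TypeError(
            f"Bad duration suffix {value!r} (expected one of {sorted(sizes)})"
        )
    raise TypeError(f"Bad duration type {value!r} (expected int or string duration)")
```

Before this change, an unknown suffix such as `5q` fell through and was read as 5 milliseconds; now it raises.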

@@ -228,3 +228,9 @@ class RatelimitConfig(Config):
config.get("remote_media_download_burst_count", "500M")
),
)
self.rc_presence_per_user = RatelimitSettings.parse(
config,
"rc_presence.per_user",
defaults={"per_second": 0.1, "burst_count": 1},
)


@@ -566,6 +566,7 @@ def _is_membership_change_allowed(
logger.debug(
"_is_membership_change_allowed: %s",
{
"caller_membership": caller.membership if caller else None,
"caller_in_room": caller_in_room,
"caller_invited": caller_invited,
"caller_knocked": caller_knocked,
@@ -677,7 +678,8 @@ def _is_membership_change_allowed(
and join_rule == JoinRules.KNOCK_RESTRICTED
)
):
if not caller_in_room and not caller_invited:
# You can only join the room if you are invited or are already in the room.
if not (caller_in_room or caller_invited):
raise AuthError(403, "You are not invited to this room.")
else:
# TODO (erikj): may_join list


@@ -42,7 +42,7 @@ import attr
from typing_extensions import Literal
from unpaddedbase64 import encode_base64
from synapse.api.constants import RelationTypes
from synapse.api.constants import EventTypes, RelationTypes
from synapse.api.room_versions import EventFormatVersions, RoomVersion, RoomVersions
from synapse.synapse_rust.events import EventInternalMetadata
from synapse.types import JsonDict, StrCollection
@@ -325,12 +325,17 @@ class EventBase(metaclass=abc.ABCMeta):
def __repr__(self) -> str:
rejection = f"REJECTED={self.rejected_reason}, " if self.rejected_reason else ""
conditional_membership_string = ""
if self.get("type") == EventTypes.Member:
conditional_membership_string = f"membership={self.membership}, "
return (
f"<{self.__class__.__name__} "
f"{rejection}"
f"event_id={self.event_id}, "
f"type={self.get('type')}, "
f"state_key={self.get('state_key')}, "
f"{conditional_membership_string}"
f"outlier={self.internal_metadata.is_outlier()}"
">"
)


@@ -66,50 +66,67 @@ class InviteAutoAccepter:
event: The incoming event.
"""
# Check if the event is an invite for a local user.
is_invite_for_local_user = (
event.type == EventTypes.Member
and event.is_state()
and event.membership == Membership.INVITE
and self._api.is_mine(event.state_key)
)
if (
event.type != EventTypes.Member
or event.is_state() is False
or event.membership != Membership.INVITE
or self._api.is_mine(event.state_key) is False
):
return
# Only accept invites for direct messages if the configuration mandates it.
is_direct_message = event.content.get("is_direct", False)
is_allowed_by_direct_message_rules = (
not self._config.accept_invites_only_for_direct_messages
or is_direct_message is True
)
if (
self._config.accept_invites_only_for_direct_messages
and is_direct_message is False
):
return
# Only accept invites from remote users if the configuration mandates it.
is_from_local_user = self._api.is_mine(event.sender)
is_allowed_by_local_user_rules = (
not self._config.accept_invites_only_from_local_users
or is_from_local_user is True
if (
self._config.accept_invites_only_from_local_users
and is_from_local_user is False
):
return
# Check the user is activated.
recipient = await self._api.get_userinfo_by_id(event.state_key)
# Ignore if the user doesn't exist.
if recipient is None:
return
# Never accept invites for deactivated users.
if recipient.is_deactivated:
return
# Never accept invites for suspended users.
if recipient.suspended:
return
# Never accept invites for locked users.
if recipient.locked:
return
# Make the user join the room. We run this as a background process to circumvent a race condition
# that occurs when responding to invites over federation (see https://github.com/matrix-org/synapse-auto-accept-invite/issues/12)
run_as_background_process(
"retry_make_join",
self._retry_make_join,
event.state_key,
event.state_key,
event.room_id,
"join",
bg_start_span=False,
)
if (
is_invite_for_local_user
and is_allowed_by_direct_message_rules
and is_allowed_by_local_user_rules
):
# Make the user join the room. We run this as a background process to circumvent a race condition
# that occurs when responding to invites over federation (see https://github.com/matrix-org/synapse-auto-accept-invite/issues/12)
run_as_background_process(
"retry_make_join",
self._retry_make_join,
event.state_key,
event.state_key,
event.room_id,
"join",
bg_start_span=False,
if is_direct_message:
# Mark this room as a direct message!
await self._mark_room_as_direct_message(
event.state_key, event.sender, event.room_id
)
if is_direct_message:
# Mark this room as a direct message!
await self._mark_room_as_direct_message(
event.state_key, event.sender, event.room_id
)
async def _mark_room_as_direct_message(
self, user_id: str, dm_user_id: str, room_id: str
) -> None:


@@ -24,7 +24,7 @@ from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
import attr
from signedjson.types import SigningKey
from synapse.api.constants import MAX_DEPTH
from synapse.api.constants import MAX_DEPTH, EventTypes
from synapse.api.room_versions import (
KNOWN_EVENT_FORMAT_VERSIONS,
EventFormatVersions,
@@ -109,6 +109,19 @@ class EventBuilder:
def is_state(self) -> bool:
return self._state_key is not None
def is_mine_id(self, user_id: str) -> bool:
"""Determines whether a user ID or room alias originates from this homeserver.
Returns:
`True` if the hostname part of the user ID or room alias matches this
homeserver.
`False` otherwise, or if the user ID or room alias is malformed.
"""
localpart_hostname = user_id.split(":", 1)
if len(localpart_hostname) < 2:
return False
return localpart_hostname[1] == self._hostname
async def build(
self,
prev_event_ids: List[str],
@@ -142,6 +155,46 @@ class EventBuilder:
self, state_ids
)
# Check for out-of-band membership that may have been exposed on `/sync` but
# the events have not been de-outliered yet so they won't be part of the
# room state yet.
#
# This helps in situations where a remote homeserver invites a local user to
# a room that we're already participating in; and we've persisted the invite
# as an out-of-band membership (outlier), but it hasn't been pushed to us as
# part of a `/send` transaction yet and de-outliered. This also helps for
# any of the other out-of-band membership transitions.
#
# As an optimization, we could check if the room state already includes a
# non-`leave` membership event, then we can assume the membership event has
# been de-outliered and we don't need to check for an out-of-band
# membership. But we don't have the necessary information from a
# `StateMap[str]` and we'll just have to take the hit of this extra lookup
# for any membership event for now.
if self.type == EventTypes.Member and self.is_mine_id(self.state_key):
(
_membership,
member_event_id,
) = await self._store.get_local_current_membership_for_user_in_room(
user_id=self.state_key,
room_id=self.room_id,
)
# There is no need to check if the membership is actually an
# out-of-band membership (`outlier`) as we would end up with the
# same result either way (adding the member event to the
# `auth_event_ids`).
if (
member_event_id is not None
# We only need to be careful about duplicating the event in the
# `auth_event_ids` list (duplicate `type`/`state_key` is part of the
# authorization rules)
and member_event_id not in auth_event_ids
):
auth_event_ids.append(member_event_id)
# Also make sure to point to the previous membership event that will
# allow this one to happen so the computed state works out.
prev_event_ids.append(member_event_id)
format_version = self.room_version.event_format
# The types of auth/prev events changes between event versions.
prev_events: Union[StrCollection, List[Tuple[str, Dict[str, str]]]]


@@ -2272,8 +2272,9 @@ class FederationEventHandler:
event_and_contexts, backfilled=backfilled
)
# After persistence we always need to notify replication there may
# be new data.
# After persistence, we never notify clients (wake up `/sync` streams) about
# backfilled events but it's important to let all the workers know about any
# new event (backfilled or not) because TODO
self._notifier.notify_replication()
if self._ephemeral_messages_enabled:


@@ -1002,7 +1002,21 @@ class OidcProvider:
"""
state = generate_token()
nonce = generate_token()
# Generate a nonce 32 characters long. When encoded with base64url later on,
# the nonce will be 43 characters when sent to the identity provider.
#
# While RFC7636 does not specify a minimum length for the `nonce`
# parameter, the TI-Messenger IDP_FD spec v1.7.3 does require it to be
# between 43 and 128 characters. This spec concerns using Matrix for
# communication in German healthcare.
#
# As increasing the length only strengthens security, we use this length
# to allow TI-Messenger deployments using Synapse to satisfy this
# external spec.
#
# See https://github.com/element-hq/synapse/pull/18109 for more context.
nonce = generate_token(length=32)
code_verifier = ""
if not client_redirect_url:

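The length arithmetic in the comment above is easy to check: 32 random bytes encode to 44 base64 characters, one of which is padding, so the unpadded base64url form is 43 characters. A quick generic check (not Synapse's `generate_token`):

```python
import base64
import os

# 32 random bytes -> ceil(32 / 3) * 4 = 44 base64 characters, one of which
# is padding; stripping the "=" leaves 43 characters, the minimum the
# TI-Messenger IDP_FD spec's 43-128 character range allows.
nonce = base64.urlsafe_b64encode(os.urandom(32)).rstrip(b"=").decode("ascii")
assert len(nonce) == 43
```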

@@ -24,7 +24,8 @@
import logging
from typing import TYPE_CHECKING, Tuple
from synapse.api.errors import AuthError, SynapseError
from synapse.api.errors import AuthError, Codes, LimitExceededError, SynapseError
from synapse.api.ratelimiting import Ratelimiter
from synapse.handlers.presence import format_user_presence_state
from synapse.http.server import HttpServer
from synapse.http.servlet import RestServlet, parse_json_object_from_request
@@ -48,6 +49,14 @@ class PresenceStatusRestServlet(RestServlet):
self.presence_handler = hs.get_presence_handler()
self.clock = hs.get_clock()
self.auth = hs.get_auth()
self.store = hs.get_datastores().main
# Ratelimiter for presence updates, keyed by requester.
self._presence_per_user_limiter = Ratelimiter(
store=self.store,
clock=self.clock,
cfg=hs.config.ratelimiting.rc_presence_per_user,
)
async def on_GET(
self, request: SynapseRequest, user_id: str
@@ -82,6 +91,17 @@ class PresenceStatusRestServlet(RestServlet):
if requester.user != user:
raise AuthError(403, "Can only set your own presence state")
# ignore the presence update if the ratelimit is exceeded
try:
await self._presence_per_user_limiter.ratelimit(requester)
except LimitExceededError as e:
logger.debug("User presence ratelimit exceeded; ignoring it.")
return 429, {
"errcode": Codes.LIMIT_EXCEEDED,
"error": "Too many requests",
"retry_after_ms": e.retry_after_ms,
}
state = {}
content = parse_json_object_from_request(request)


@@ -24,9 +24,10 @@ from collections import defaultdict
from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Optional, Tuple, Union
from synapse.api.constants import AccountDataTypes, EduTypes, Membership, PresenceState
from synapse.api.errors import Codes, StoreError, SynapseError
from synapse.api.errors import Codes, LimitExceededError, StoreError, SynapseError
from synapse.api.filtering import FilterCollection
from synapse.api.presence import UserPresenceState
from synapse.api.ratelimiting import Ratelimiter
from synapse.events.utils import (
SerializeEventConfig,
format_event_for_client_v2_without_room_id,
@@ -126,6 +127,13 @@ class SyncRestServlet(RestServlet):
cache_name="sync_valid_filter",
)
# Ratelimiter for presence updates, keyed by requester.
self._presence_per_user_limiter = Ratelimiter(
store=self.store,
clock=self.clock,
cfg=hs.config.ratelimiting.rc_presence_per_user,
)
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
# This will always be set by the time Twisted calls us.
assert request.args is not None
@@ -239,7 +247,14 @@ class SyncRestServlet(RestServlet):
# send any outstanding server notices to the user.
await self._server_notices_sender.on_user_syncing(user.to_string())
affect_presence = set_presence != PresenceState.OFFLINE
# ignore the presence update if the ratelimit is exceeded but do not pause the request
try:
await self._presence_per_user_limiter.ratelimit(requester, pause=0.0)
except LimitExceededError:
affect_presence = False
logger.debug("User set_presence ratelimit exceeded; ignoring it.")
else:
affect_presence = set_presence != PresenceState.OFFLINE
context = await self.presence_handler.user_syncing(
user.to_string(),


@@ -391,7 +391,7 @@ class HomeServer(metaclass=abc.ABCMeta):
def is_mine(self, domain_specific_string: DomainSpecificString) -> bool:
return domain_specific_string.domain == self.hostname
def is_mine_id(self, string: str) -> bool:
def is_mine_id(self, user_id: str) -> bool:
"""Determines whether a user ID or room alias originates from this homeserver.
Returns:
@@ -399,7 +399,7 @@ class HomeServer(metaclass=abc.ABCMeta):
homeserver.
`False` otherwise, or if the user ID or room alias is malformed.
"""
localpart_hostname = string.split(":", 1)
localpart_hostname = user_id.split(":", 1)
if len(localpart_hostname) < 2:
return False
return localpart_hostname[1] == self.hostname

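The renamed `is_mine_id` can be sketched as a standalone function to show the splitting logic (the hostname is passed explicitly here for illustration):

```python
def is_mine_id(user_id: str, hostname: str) -> bool:
    """Return True if the user ID or room alias belongs to `hostname`.

    Mirrors the logic above: everything after the first ":" is the server
    name, and malformed identifiers (no ":") are never ours.
    """
    localpart_hostname = user_id.split(":", 1)
    if len(localpart_hostname) < 2:
        return False
    return localpart_hostname[1] == hostname
```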

@@ -1181,6 +1181,50 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
return total_media_quarantined
async def block_room(self, room_id: str, user_id: str) -> None:
"""Marks the room as blocked.
Can be called multiple times (though we'll only track the last user to
block this room).
Can be called on a room unknown to this homeserver.
Args:
room_id: Room to block
user_id: Who blocked it
"""
await self.db_pool.simple_upsert(
table="blocked_rooms",
keyvalues={"room_id": room_id},
values={},
insertion_values={"user_id": user_id},
desc="block_room",
)
await self.db_pool.runInteraction(
"block_room_invalidation",
self._invalidate_cache_and_stream,
self.is_room_blocked,
(room_id,),
)
async def unblock_room(self, room_id: str) -> None:
"""Remove the room from blocking list.
Args:
room_id: Room to unblock
"""
await self.db_pool.simple_delete(
table="blocked_rooms",
keyvalues={"room_id": room_id},
desc="unblock_room",
)
await self.db_pool.runInteraction(
"block_room_invalidation",
self._invalidate_cache_and_stream,
self.is_room_blocked,
(room_id,),
)
async def get_rooms_for_retention_period_in_range(
self, min_ms: Optional[int], max_ms: Optional[int], include_null: bool = False
) -> Dict[str, RetentionPolicy]:
@@ -2500,50 +2544,6 @@ class RoomStore(RoomBackgroundUpdateStore, RoomWorkerStore):
)
return next_id
async def block_room(self, room_id: str, user_id: str) -> None:
"""Marks the room as blocked.
Can be called multiple times (though we'll only track the last user to
block this room).
Can be called on a room unknown to this homeserver.
Args:
room_id: Room to block
user_id: Who blocked it
"""
await self.db_pool.simple_upsert(
table="blocked_rooms",
keyvalues={"room_id": room_id},
values={},
insertion_values={"user_id": user_id},
desc="block_room",
)
await self.db_pool.runInteraction(
"block_room_invalidation",
self._invalidate_cache_and_stream,
self.is_room_blocked,
(room_id,),
)
async def unblock_room(self, room_id: str) -> None:
"""Remove the room from blocking list.
Args:
room_id: Room to unblock
"""
await self.db_pool.simple_delete(
table="blocked_rooms",
keyvalues={"room_id": room_id},
desc="unblock_room",
)
await self.db_pool.runInteraction(
"block_room_invalidation",
self._invalidate_cache_and_stream,
self.is_room_blocked,
(room_id,),
)
async def clear_partial_state_room(self, room_id: str) -> Optional[int]:
"""Clears the partial state flag for a room.


@@ -39,7 +39,7 @@ from synapse.module_api import ModuleApi
from synapse.rest import admin
from synapse.rest.client import login, room
from synapse.server import HomeServer
from synapse.types import StreamToken, create_requester
from synapse.types import StreamToken, UserID, UserInfo, create_requester
from synapse.util import Clock
from tests.handlers.test_sync import generate_sync_config
@@ -349,6 +349,169 @@ class AutoAcceptInvitesTestCase(FederatingHomeserverTestCase):
join_updates, _ = sync_join(self, invited_user_id)
self.assertEqual(len(join_updates), 0)
@override_config(
{
"auto_accept_invites": {
"enabled": True,
},
}
)
async def test_ignore_invite_for_missing_user(self) -> None:
"""Tests that receiving an invite for a missing user is ignored."""
inviting_user_id = self.register_user("inviter", "pass")
inviting_user_tok = self.login("inviter", "pass")
# A local user who receives an invite
invited_user_id = "@fake:" + self.hs.config.server.server_name
# Create a room and send an invite to the other user
room_id = self.helper.create_room_as(
inviting_user_id,
tok=inviting_user_tok,
)
self.helper.invite(
room_id,
inviting_user_id,
invited_user_id,
tok=inviting_user_tok,
)
join_updates, _ = sync_join(self, inviting_user_id)
# Assert that the last event in the room was not a member event for the target user.
self.assertEqual(
join_updates[0].timeline.events[-1].content["membership"], "invite"
)
@override_config(
{
"auto_accept_invites": {
"enabled": True,
},
}
)
async def test_ignore_invite_for_deactivated_user(self) -> None:
"""Tests that receiving an invite for a deactivated user is ignored."""
inviting_user_id = self.register_user("inviter", "pass", admin=True)
inviting_user_tok = self.login("inviter", "pass")
# A local user who receives an invite
invited_user_id = self.register_user("invitee", "pass")
# Create a room and send an invite to the other user
room_id = self.helper.create_room_as(
inviting_user_id,
tok=inviting_user_tok,
)
channel = self.make_request(
"PUT",
"/_synapse/admin/v2/users/%s" % invited_user_id,
{"deactivated": True},
access_token=inviting_user_tok,
)
assert channel.code == 200
self.helper.invite(
room_id,
inviting_user_id,
invited_user_id,
tok=inviting_user_tok,
)
join_updates, _ = sync_join(self, inviting_user_id)
# Assert that the last event in the room was not a member event for the target user.
self.assertEqual(
join_updates[0].timeline.events[-1].content["membership"], "invite"
)
@override_config(
{
"auto_accept_invites": {
"enabled": True,
},
}
)
async def test_ignore_invite_for_suspended_user(self) -> None:
"""Tests that receiving an invite for a suspended user is ignored."""
inviting_user_id = self.register_user("inviter", "pass", admin=True)
inviting_user_tok = self.login("inviter", "pass")
# A local user who receives an invite
invited_user_id = self.register_user("invitee", "pass")
# Create a room and send an invite to the other user
room_id = self.helper.create_room_as(
inviting_user_id,
tok=inviting_user_tok,
)
channel = self.make_request(
"PUT",
f"/_synapse/admin/v1/suspend/{invited_user_id}",
{"suspend": True},
access_token=inviting_user_tok,
)
assert channel.code == 200
self.helper.invite(
room_id,
inviting_user_id,
invited_user_id,
tok=inviting_user_tok,
)
join_updates, _ = sync_join(self, inviting_user_id)
# Assert that the last event in the room was not a member event for the target user.
self.assertEqual(
join_updates[0].timeline.events[-1].content["membership"], "invite"
)
@override_config(
{
"auto_accept_invites": {
"enabled": True,
},
}
)
async def test_ignore_invite_for_locked_user(self) -> None:
"""Tests that receiving an invite for a suspended user is ignored."""
inviting_user_id = self.register_user("inviter", "pass", admin=True)
inviting_user_tok = self.login("inviter", "pass")
# A local user who receives an invite
invited_user_id = self.register_user("invitee", "pass")
# Create a room and send an invite to the other user
room_id = self.helper.create_room_as(
inviting_user_id,
tok=inviting_user_tok,
)
channel = self.make_request(
"PUT",
f"/_synapse/admin/v2/users/{invited_user_id}",
{"locked": True},
access_token=inviting_user_tok,
)
assert channel.code == 200
self.helper.invite(
room_id,
inviting_user_id,
invited_user_id,
tok=inviting_user_tok,
)
join_updates, _ = sync_join(self, inviting_user_id)
# Assert that the last event in the room was not a member event for the target user.
self.assertEqual(
join_updates[0].timeline.events[-1].content["membership"], "invite"
)
_request_key = 0
@@ -647,6 +810,22 @@ def create_module(
module_api.is_mine.side_effect = lambda a: a.split(":")[1] == "test"
module_api.worker_name = worker_name
module_api.sleep.return_value = make_multiple_awaitable(None)
module_api.get_userinfo_by_id.return_value = UserInfo(
user_id=UserID.from_string("@user:test"),
is_admin=False,
is_guest=False,
consent_server_notice_sent=None,
consent_ts=None,
consent_version=None,
appservice_id=None,
creation_ts=0,
user_type=None,
is_deactivated=False,
locked=False,
is_shadow_banned=False,
approved=True,
suspended=False,
)
if config_override is None:
config_override = {}


@@ -0,0 +1,161 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#
import logging
from unittest.mock import AsyncMock, Mock
from twisted.test.proto_helpers import MemoryReactor
from synapse.handlers.device import DeviceListUpdater
from synapse.server import HomeServer
from synapse.types import JsonDict
from synapse.util import Clock
from synapse.util.retryutils import NotRetryingDestination
from tests import unittest
logger = logging.getLogger(__name__)
class DeviceListResyncTestCase(unittest.HomeserverTestCase):
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.store = self.hs.get_datastores().main
def test_retry_device_list_resync(self) -> None:
"""Tests that device lists are marked as stale if they couldn't be synced, and
that stale device lists are retried periodically.
"""
remote_user_id = "@john:test_remote"
remote_origin = "test_remote"
# Track the number of attempts to resync the user's device list.
self.resync_attempts = 0
# When this function is called, increment the number of resync attempts (only if
# we're querying devices for the right user ID), then raise a
# NotRetryingDestination error to fail the resync gracefully.
def query_user_devices(
destination: str, user_id: str, timeout: int = 30000
) -> JsonDict:
if user_id == remote_user_id:
self.resync_attempts += 1
raise NotRetryingDestination(0, 0, destination)
# Register the mock on the federation client.
federation_client = self.hs.get_federation_client()
federation_client.query_user_devices = Mock(side_effect=query_user_devices) # type: ignore[method-assign]
# Register a mock on the store so that the incoming update doesn't fail because
# we don't share a room with the user.
self.store.get_rooms_for_user = AsyncMock(return_value=["!someroom:test"])
# Manually inject a fake device list update. We need this update to include at
# least one prev_id so that the user's device list will need to be retried.
device_list_updater = self.hs.get_device_handler().device_list_updater
assert isinstance(device_list_updater, DeviceListUpdater)
self.get_success(
device_list_updater.incoming_device_list_update(
origin=remote_origin,
edu_content={
"deleted": False,
"device_display_name": "Mobile",
"device_id": "QBUAZIFURK",
"prev_id": [5],
"stream_id": 6,
"user_id": remote_user_id,
},
)
)
# Check that there was one resync attempt.
self.assertEqual(self.resync_attempts, 1)
# Check that the resync attempt failed and caused the user's device list to be
# marked as stale.
need_resync = self.get_success(
self.store.get_user_ids_requiring_device_list_resync()
)
self.assertIn(remote_user_id, need_resync)
# Check that waiting for 30 seconds caused Synapse to retry resyncing the device
# list.
self.reactor.advance(30)
self.assertEqual(self.resync_attempts, 2)
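The test above encodes a simple policy: a failed device-list resync marks the user's list stale, and stale users are retried later on a timer. A minimal sketch of that bookkeeping, with hypothetical names (not the real `DeviceListUpdater`):

```python
class ResyncTracker:
    """Toy model: failed resyncs mark users stale; stale users get retried."""

    def __init__(self) -> None:
        self.stale: set[str] = set()
        self.attempts = 0

    def try_resync(self, user_id: str, succeed: bool) -> None:
        self.attempts += 1
        if succeed:
            self.stale.discard(user_id)
        else:
            # Failure leaves the user marked stale so a periodic job
            # can retry the resync later instead of losing the update.
            self.stale.add(user_id)

    def retry_stale(self, succeed: bool) -> None:
        # What the periodic timer would do every interval.
        for user_id in list(self.stale):
            self.try_resync(user_id, succeed)
```

This mirrors the assertions in the test: one attempt on the incoming update, the user lands in the stale set, and advancing the clock produces a second attempt.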
def test_cross_signing_keys_retry(self) -> None:
"""Tests that resyncing a device list correctly processes cross-signing keys from
the remote server.
"""
remote_user_id = "@john:test_remote"
remote_master_key = "85T7JXPFBAySB/jwby4S3lBPTqY3+Zg53nYuGmu1ggY"
remote_self_signing_key = "QeIiFEjluPBtI7WQdG365QKZcFs9kqmHir6RBD0//nQ"
# Register mock device list retrieval on the federation client.
federation_client = self.hs.get_federation_client()
federation_client.query_user_devices = AsyncMock( # type: ignore[method-assign]
return_value={
"user_id": remote_user_id,
"stream_id": 1,
"devices": [],
"master_key": {
"user_id": remote_user_id,
"usage": ["master"],
"keys": {"ed25519:" + remote_master_key: remote_master_key},
},
"self_signing_key": {
"user_id": remote_user_id,
"usage": ["self_signing"],
"keys": {
"ed25519:" + remote_self_signing_key: remote_self_signing_key
},
},
}
)
# Resync the device list.
device_handler = self.hs.get_device_handler()
self.get_success(
device_handler.device_list_updater.multi_user_device_resync(
[remote_user_id]
),
)
# Retrieve the cross-signing keys for this user.
keys = self.get_success(
self.store.get_e2e_cross_signing_keys_bulk(user_ids=[remote_user_id]),
)
self.assertIn(remote_user_id, keys)
key = keys[remote_user_id]
assert key is not None
# Check that the master key is the one returned by the mock.
master_key = key["master"]
self.assertEqual(len(master_key["keys"]), 1)
self.assertTrue("ed25519:" + remote_master_key in master_key["keys"].keys())
self.assertTrue(remote_master_key in master_key["keys"].values())
# Check that the self-signing key is the one returned by the mock.
self_signing_key = key["self_signing"]
self.assertEqual(len(self_signing_key["keys"]), 1)
self.assertTrue(
"ed25519:" + remote_self_signing_key in self_signing_key["keys"].keys(),
)
self.assertTrue(remote_self_signing_key in self_signing_key["keys"].values())


@@ -0,0 +1,671 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2020 The Matrix.org Foundation C.I.C.
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#
import logging
import time
import urllib.parse
from http import HTTPStatus
from typing import Any, Callable, Optional, Set, Tuple, TypeVar, Union
from unittest.mock import Mock
import attr
from parameterized import parameterized
from twisted.test.proto_helpers import MemoryReactor
from synapse.api.constants import EventContentFields, EventTypes, Membership
from synapse.api.room_versions import RoomVersion, RoomVersions
from synapse.events import EventBase, make_event_from_dict
from synapse.events.utils import strip_event
from synapse.federation.federation_base import (
event_from_pdu_json,
)
from synapse.federation.transport.client import SendJoinResponse
from synapse.http.matrixfederationclient import (
ByteParser,
)
from synapse.http.types import QueryParams
from synapse.rest import admin
from synapse.rest.client import login, room, sync
from synapse.server import HomeServer
from synapse.types import JsonDict, MutableStateMap, StateMap
from synapse.types.handlers.sliding_sync import (
StateValues,
)
from synapse.util import Clock
from tests import unittest
from tests.utils import test_timeout
logger = logging.getLogger(__name__)
def required_state_json_to_state_map(required_state: Any) -> StateMap[EventBase]:
state_map: MutableStateMap[EventBase] = {}
# Scrutinize the JSON values to ensure they're in the expected format
if isinstance(required_state, list):
for state_event_dict in required_state:
# Yell because we're in a test and this is unexpected
assert isinstance(
state_event_dict, dict
), "`required_state` should be a list of event dicts"
event_type = state_event_dict["type"]
event_state_key = state_event_dict["state_key"]
# Yell because we're in a test and this is unexpected
assert isinstance(
event_type, str
), "Each event in `required_state` should have a string `type`"
assert isinstance(
event_state_key, str
), "Each event in `required_state` should have a string `state_key`"
state_map[(event_type, event_state_key)] = make_event_from_dict(
state_event_dict
)
else:
# Yell because we're in a test and this is unexpected
raise AssertionError("`required_state` should be a list of event dicts")
return state_map
@attr.s(slots=True, auto_attribs=True)
class RemoteRoomJoinResult:
remote_room_id: str
room_version: RoomVersion
remote_room_creator_user_id: str
local_user1_id: str
local_user1_tok: str
state_map: StateMap[EventBase]
class OutOfBandMembershipTests(unittest.FederatingHomeserverTestCase):
"""
Tests to make sure that interactions with out-of-band membership (outliers) works as
expected.
- invites received over federation, before we join the room
- *rejections* for said invites
See the "Out-of-band membership events" section in
`docs/development/room-dag-concepts.md` for more information.
"""
servlets = [
admin.register_servlets,
room.register_servlets,
login.register_servlets,
sync.register_servlets,
]
sync_endpoint = "/_matrix/client/unstable/org.matrix.simplified_msc3575/sync"
def default_config(self) -> JsonDict:
conf = super().default_config()
# Federation sending is disabled by default in the test environment
# so we need to enable it like this.
conf["federation_sender_instances"] = ["master"]
return conf
def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
self.federation_http_client = Mock(
# The problem with using `spec=MatrixFederationHttpClient` here is that it
# requires everything to be mocked which is a lot of work that I don't want
# to do when the code only uses a few methods (`get_json` and `put_json`).
)
return self.setup_test_homeserver(
federation_http_client=self.federation_http_client
)
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
super().prepare(reactor, clock, hs)
self.store = self.hs.get_datastores().main
self.storage_controllers = hs.get_storage_controllers()
def do_sync(
self, sync_body: JsonDict, *, since: Optional[str] = None, tok: str
) -> Tuple[JsonDict, str]:
"""Do a sliding sync request with given body.
Asserts the request was successful.
Attributes:
sync_body: The full request body to use
since: Optional since token
tok: Access token to use
Returns:
A tuple of the response body and the `pos` field.
"""
sync_path = self.sync_endpoint
if since:
sync_path += f"?pos={since}"
channel = self.make_request(
method="POST",
path=sync_path,
content=sync_body,
access_token=tok,
)
self.assertEqual(channel.code, 200, channel.json_body)
return channel.json_body, channel.json_body["pos"]
def _invite_local_user_to_remote_room_and_join(self) -> RemoteRoomJoinResult:
"""
Helper to reproduce this scenario:
1. The remote user invites our local user to a room on their remote server (which
creates an out-of-band invite membership for user1 on our local server).
2. The local user notices the invite from `/sync`.
3. The local user joins the room.
4. The local user can see that they are now joined to the room from `/sync`.
"""
# Create a local user
local_user1_id = self.register_user("user1", "pass")
local_user1_tok = self.login(local_user1_id, "pass")
# Create a remote room
room_creator_user_id = f"@remote-user:{self.OTHER_SERVER_NAME}"
remote_room_id = f"!remote-room:{self.OTHER_SERVER_NAME}"
room_version = RoomVersions.V10
room_create_event = make_event_from_dict(
self.add_hashes_and_signatures_from_other_server(
{
"room_id": remote_room_id,
"sender": room_creator_user_id,
"depth": 1,
"origin_server_ts": 1,
"type": EventTypes.Create,
"state_key": "",
"content": {
# The `ROOM_CREATOR` field could be removed if we used a room
# version > 10 (in favor of relying on `sender`)
EventContentFields.ROOM_CREATOR: room_creator_user_id,
EventContentFields.ROOM_VERSION: room_version.identifier,
},
"auth_events": [],
"prev_events": [],
}
),
room_version=room_version,
)
creator_membership_event = make_event_from_dict(
self.add_hashes_and_signatures_from_other_server(
{
"room_id": remote_room_id,
"sender": room_creator_user_id,
"depth": 2,
"origin_server_ts": 2,
"type": EventTypes.Member,
"state_key": room_creator_user_id,
"content": {"membership": Membership.JOIN},
"auth_events": [room_create_event.event_id],
"prev_events": [room_create_event.event_id],
}
),
room_version=room_version,
)
# From the remote homeserver, invite user1 on the local homeserver
user1_invite_membership_event = make_event_from_dict(
self.add_hashes_and_signatures_from_other_server(
{
"room_id": remote_room_id,
"sender": room_creator_user_id,
"depth": 3,
"origin_server_ts": 3,
"type": EventTypes.Member,
"state_key": local_user1_id,
"content": {"membership": Membership.INVITE},
"auth_events": [
room_create_event.event_id,
creator_membership_event.event_id,
],
"prev_events": [creator_membership_event.event_id],
}
),
room_version=room_version,
)
channel = self.make_signed_federation_request(
"PUT",
f"/_matrix/federation/v2/invite/{remote_room_id}/{user1_invite_membership_event.event_id}",
content={
"event": user1_invite_membership_event.get_dict(),
"invite_room_state": [
strip_event(room_create_event),
],
"room_version": room_version.identifier,
},
)
self.assertEqual(channel.code, HTTPStatus.OK, channel.json_body)
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [(EventTypes.Member, StateValues.WILDCARD)],
"timeline_limit": 0,
}
}
}
# Sync until the local user1 can see the invite
with test_timeout(
3,
"Unable to find user1's invite event in the room",
):
while True:
response_body, _ = self.do_sync(sync_body, tok=local_user1_tok)
if (
remote_room_id in response_body["rooms"].keys()
# If they have `invite_state` for the room, they are invited
and len(
response_body["rooms"][remote_room_id].get("invite_state", [])
)
> 0
):
break
# Prevent tight-looping to allow the `test_timeout` to work
time.sleep(0.1)
user1_join_membership_event_template = make_event_from_dict(
{
"room_id": remote_room_id,
"sender": local_user1_id,
"depth": 4,
"origin_server_ts": 4,
"type": EventTypes.Member,
"state_key": local_user1_id,
"content": {"membership": Membership.JOIN},
"auth_events": [
room_create_event.event_id,
user1_invite_membership_event.event_id,
],
"prev_events": [user1_invite_membership_event.event_id],
},
room_version=room_version,
)
T = TypeVar("T")
# Mock the remote homeserver responding to our HTTP requests
#
# We're going to mock the following endpoints so that user1 can join the remote room:
# - GET /_matrix/federation/v1/make_join/{room_id}/{user_id}
# - PUT /_matrix/federation/v2/send_join/{room_id}/{user_id}
#
async def get_json(
destination: str,
path: str,
args: Optional[QueryParams] = None,
retry_on_dns_fail: bool = True,
timeout: Optional[int] = None,
ignore_backoff: bool = False,
try_trailing_slash_on_400: bool = False,
parser: Optional[ByteParser[T]] = None,
) -> Union[JsonDict, T]:
if (
path
== f"/_matrix/federation/v1/make_join/{urllib.parse.quote_plus(remote_room_id)}/{urllib.parse.quote_plus(local_user1_id)}"
):
return {
"event": user1_join_membership_event_template.get_pdu_json(),
"room_version": room_version.identifier,
}
raise NotImplementedError(
"We have not mocked a response for `get_json(...)` for the following endpoint yet: "
+ f"{destination}{path}"
)
self.federation_http_client.get_json.side_effect = get_json
# PDU's that hs1 sent to hs2
collected_pdus_from_hs1_federation_send: Set[str] = set()
async def put_json(
destination: str,
path: str,
args: Optional[QueryParams] = None,
data: Optional[JsonDict] = None,
json_data_callback: Optional[Callable[[], JsonDict]] = None,
long_retries: bool = False,
timeout: Optional[int] = None,
ignore_backoff: bool = False,
backoff_on_404: bool = False,
try_trailing_slash_on_400: bool = False,
parser: Optional[ByteParser[T]] = None,
backoff_on_all_error_codes: bool = False,
) -> Union[JsonDict, T, SendJoinResponse]:
if (
path.startswith(
f"/_matrix/federation/v2/send_join/{urllib.parse.quote_plus(remote_room_id)}/"
)
and data is not None
and data.get("type") == EventTypes.Member
and data.get("state_key") == local_user1_id
# We're assuming this is a `ByteParser[SendJoinResponse]`
and parser is not None
):
# As the remote server, we need to sign the event before sending it back
user1_join_membership_event_signed = make_event_from_dict(
self.add_hashes_and_signatures_from_other_server(data),
room_version=room_version,
)
# Since they passed in a `parser`, we need to return the type that
# they're expecting instead of just a `JsonDict`
return SendJoinResponse(
auth_events=[
room_create_event,
user1_invite_membership_event,
],
state=[
room_create_event,
creator_membership_event,
user1_invite_membership_event,
],
event_dict=user1_join_membership_event_signed.get_pdu_json(),
event=user1_join_membership_event_signed,
members_omitted=False,
servers_in_room=[
self.OTHER_SERVER_NAME,
],
)
if path.startswith("/_matrix/federation/v1/send/") and data is not None:
for pdu in data.get("pdus", []):
event = event_from_pdu_json(pdu, room_version)
collected_pdus_from_hs1_federation_send.add(event.event_id)
# Just acknowledge everything hs1 is trying to send hs2
return {
event_from_pdu_json(pdu, room_version).event_id: {}
for pdu in data.get("pdus", [])
}
raise NotImplementedError(
"We have not mocked a response for `put_json(...)` for the following endpoint yet: "
+ f"{destination}{path} with the following body data: {data}"
)
self.federation_http_client.put_json.side_effect = put_json
# User1 joins the room
self.helper.join(remote_room_id, local_user1_id, tok=local_user1_tok)
# Reset the mocks now that user1 has joined the room
self.federation_http_client.get_json.side_effect = None
self.federation_http_client.put_json.side_effect = None
# Sync until the local user1 can see that they are now joined to the room
with test_timeout(
3,
"Unable to find user1's join event in the room",
):
while True:
response_body, _ = self.do_sync(sync_body, tok=local_user1_tok)
if remote_room_id in response_body["rooms"].keys():
required_state_map = required_state_json_to_state_map(
response_body["rooms"][remote_room_id]["required_state"]
)
if (
required_state_map.get((EventTypes.Member, local_user1_id))
is not None
):
break
# Prevent tight-looping to allow the `test_timeout` to work
time.sleep(0.1)
# Nothing needs to be sent from hs1 to hs2 since we already let the other
# homeserver know by doing the `/make_join` and `/send_join` dance.
self.assertIncludes(
collected_pdus_from_hs1_federation_send,
set(),
exact=True,
message="Didn't expect any events to be sent from hs1 over federation to hs2",
)
return RemoteRoomJoinResult(
remote_room_id=remote_room_id,
room_version=room_version,
remote_room_creator_user_id=room_creator_user_id,
local_user1_id=local_user1_id,
local_user1_tok=local_user1_tok,
state_map=self.get_success(
self.storage_controllers.state.get_current_state(remote_room_id)
),
)
def test_can_join_from_out_of_band_invite(self) -> None:
"""
Test to make sure that we can join a room that we were invited to over
federation; even if our server has never participated in the room before.
"""
self._invite_local_user_to_remote_room_and_join()
@parameterized.expand(
[("accept invite", Membership.JOIN), ("reject invite", Membership.LEAVE)]
)
def test_can_x_from_out_of_band_invite_after_we_are_already_participating_in_the_room(
self, _test_description: str, membership_action: str
) -> None:
"""
Test to make sure that we can either a) join the room (accept the invite) or
b) reject the invite after being invited over federation; even if we are
already participating in the room.
This is a regression test to make sure we stress the scenario where even though
we are already participating in the room, local users can still react to invites
regardless of whether the remote server has told us about the invite event (via
a federation `/send` transaction) and we have de-outliered the invite event.
Previously, we would mistakenly throw an error saying the user wasn't in the
room when they tried to join or reject the invite.
"""
remote_room_join_result = self._invite_local_user_to_remote_room_and_join()
remote_room_id = remote_room_join_result.remote_room_id
room_version = remote_room_join_result.room_version
# Create another local user
local_user2_id = self.register_user("user2", "pass")
local_user2_tok = self.login(local_user2_id, "pass")
T = TypeVar("T")
# PDU's that hs1 sent to hs2
collected_pdus_from_hs1_federation_send: Set[str] = set()
async def put_json(
destination: str,
path: str,
args: Optional[QueryParams] = None,
data: Optional[JsonDict] = None,
json_data_callback: Optional[Callable[[], JsonDict]] = None,
long_retries: bool = False,
timeout: Optional[int] = None,
ignore_backoff: bool = False,
backoff_on_404: bool = False,
try_trailing_slash_on_400: bool = False,
parser: Optional[ByteParser[T]] = None,
backoff_on_all_error_codes: bool = False,
) -> Union[JsonDict, T]:
if path.startswith("/_matrix/federation/v1/send/") and data is not None:
for pdu in data.get("pdus", []):
event = event_from_pdu_json(pdu, room_version)
collected_pdus_from_hs1_federation_send.add(event.event_id)
# Just acknowledge everything hs1 is trying to send hs2
return {
event_from_pdu_json(pdu, room_version).event_id: {}
for pdu in data.get("pdus", [])
}
raise NotImplementedError(
"We have not mocked a response for `put_json(...)` for the following endpoint yet: "
+ f"{destination}{path} with the following body data: {data}"
)
self.federation_http_client.put_json.side_effect = put_json
# From the remote homeserver, invite user2 on the local homeserver
user2_invite_membership_event = make_event_from_dict(
self.add_hashes_and_signatures_from_other_server(
{
"room_id": remote_room_id,
"sender": remote_room_join_result.remote_room_creator_user_id,
"depth": 5,
"origin_server_ts": 5,
"type": EventTypes.Member,
"state_key": local_user2_id,
"content": {"membership": Membership.INVITE},
"auth_events": [
remote_room_join_result.state_map[
(EventTypes.Create, "")
].event_id,
remote_room_join_result.state_map[
(
EventTypes.Member,
remote_room_join_result.remote_room_creator_user_id,
)
].event_id,
],
"prev_events": [
remote_room_join_result.state_map[
(EventTypes.Member, remote_room_join_result.local_user1_id)
].event_id
],
}
),
room_version=room_version,
)
channel = self.make_signed_federation_request(
"PUT",
f"/_matrix/federation/v2/invite/{remote_room_id}/{user2_invite_membership_event.event_id}",
content={
"event": user2_invite_membership_event.get_dict(),
"invite_room_state": [
strip_event(
remote_room_join_result.state_map[(EventTypes.Create, "")]
),
],
"room_version": room_version.identifier,
},
)
self.assertEqual(channel.code, HTTPStatus.OK, channel.json_body)
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [(EventTypes.Member, StateValues.WILDCARD)],
"timeline_limit": 0,
}
}
}
# Sync until the local user2 can see the invite
with test_timeout(
3,
"Unable to find user2's invite event in the room",
):
while True:
response_body, _ = self.do_sync(sync_body, tok=local_user2_tok)
if (
remote_room_id in response_body["rooms"].keys()
# If they have `invite_state` for the room, they are invited
and len(
response_body["rooms"][remote_room_id].get("invite_state", [])
)
> 0
):
break
# Prevent tight-looping to allow the `test_timeout` to work
time.sleep(0.1)
if membership_action == Membership.JOIN:
# User2 joins the room
join_event = self.helper.join(
remote_room_join_result.remote_room_id,
local_user2_id,
tok=local_user2_tok,
)
expected_pdu_event_id = join_event["event_id"]
elif membership_action == Membership.LEAVE:
# User2 rejects the invite
leave_event = self.helper.leave(
remote_room_join_result.remote_room_id,
local_user2_id,
tok=local_user2_tok,
)
expected_pdu_event_id = leave_event["event_id"]
else:
raise NotImplementedError(
"This test does not support this membership action yet"
)
# Sync until the local user2 can see their new membership in the room
with test_timeout(
3,
"Unable to find user2's new membership event in the room",
):
while True:
response_body, _ = self.do_sync(sync_body, tok=local_user2_tok)
if membership_action == Membership.JOIN:
if remote_room_id in response_body["rooms"].keys():
required_state_map = required_state_json_to_state_map(
response_body["rooms"][remote_room_id]["required_state"]
)
if (
required_state_map.get((EventTypes.Member, local_user2_id))
is not None
):
break
elif membership_action == Membership.LEAVE:
if remote_room_id not in response_body["rooms"].keys():
break
else:
raise NotImplementedError(
"This test does not support this membership action yet"
)
# Prevent tight-looping to allow the `test_timeout` to work
time.sleep(0.1)
# Make sure that we let hs2 know about the new membership event
self.assertIncludes(
collected_pdus_from_hs1_federation_send,
{expected_pdu_event_id},
exact=True,
message="Expected to find the event ID of the user2 membership to be sent from hs1 over federation to hs2",
)
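The "sync until the user can see X" loops above are all instances of the same poll-with-deadline pattern that `test_timeout` wraps. A standalone sketch of that pattern with a hypothetical helper (not the real `tests.utils.test_timeout`):

```python
import time

def poll_until(predicate, timeout: float, interval: float = 0.1):
    """Poll predicate() until it returns a truthy value or the deadline passes.

    Returns the truthy result, or raises TimeoutError if the deadline expires.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        # Prevent tight-looping, same as the time.sleep(0.1) in the tests.
        time.sleep(interval)
    raise TimeoutError("condition not met within timeout")
```

Using `time.monotonic()` for the deadline avoids surprises from wall-clock adjustments; the tests' own loops bound the wait the same way via their 3-second `test_timeout` context.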


@@ -20,14 +20,21 @@
#
import logging
from http import HTTPStatus
from typing import Optional, Union
from unittest.mock import Mock
from parameterized import parameterized
from twisted.test.proto_helpers import MemoryReactor
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import FederationError
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
from synapse.config.server import DEFAULT_ROOM_VERSION
from synapse.events import EventBase, make_event_from_dict
from synapse.federation.federation_base import event_from_pdu_json
from synapse.http.types import QueryParams
from synapse.logging.context import LoggingContext
from synapse.rest import admin
from synapse.rest.client import login, room
from synapse.server import HomeServer
@@ -85,6 +92,163 @@ class FederationServerTests(unittest.FederatingHomeserverTestCase):
self.assertEqual(500, channel.code, channel.result)
def _create_acl_event(content: JsonDict) -> EventBase:
return make_event_from_dict(
{
"room_id": "!a:b",
"event_id": "$a:b",
"type": "m.room.server_acls",
"sender": "@a:b",
"content": content,
}
)
class MessageAcceptTests(unittest.FederatingHomeserverTestCase):
"""
Tests to make sure that we don't accept flawed events from federation (incoming).
"""
servlets = [
admin.register_servlets,
login.register_servlets,
room.register_servlets,
]
def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
self.http_client = Mock()
return self.setup_test_homeserver(federation_http_client=self.http_client)
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
super().prepare(reactor, clock, hs)
self.store = self.hs.get_datastores().main
self.storage_controllers = hs.get_storage_controllers()
self.federation_event_handler = self.hs.get_federation_event_handler()
# Create a local room
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
self.room_id = self.helper.create_room_as(
user1_id, tok=user1_tok, is_public=True
)
state_map = self.get_success(
self.storage_controllers.state.get_current_state(self.room_id)
)
# Figure out what the forward extremities in the room are (the most recent
# events that aren't tied into the DAG)
forward_extremity_event_ids = self.get_success(
self.hs.get_datastores().main.get_latest_event_ids_in_room(self.room_id)
)
# Join a remote user to the room that will attempt to send bad events
self.remote_bad_user_id = f"@baduser:{self.OTHER_SERVER_NAME}"
self.remote_bad_user_join_event = make_event_from_dict(
self.add_hashes_and_signatures_from_other_server(
{
"room_id": self.room_id,
"sender": self.remote_bad_user_id,
"state_key": self.remote_bad_user_id,
"depth": 1000,
"origin_server_ts": 1,
"type": EventTypes.Member,
"content": {"membership": Membership.JOIN},
"auth_events": [
state_map[(EventTypes.Create, "")].event_id,
state_map[(EventTypes.JoinRules, "")].event_id,
],
"prev_events": list(forward_extremity_event_ids),
}
),
room_version=RoomVersions.V10,
)
# Send the join, it should return None (which is not an error)
self.assertEqual(
self.get_success(
self.federation_event_handler.on_receive_pdu(
self.OTHER_SERVER_NAME, self.remote_bad_user_join_event
)
),
None,
)
# Make sure we actually joined the room
self.assertEqual(
self.get_success(self.store.get_latest_event_ids_in_room(self.room_id)),
{self.remote_bad_user_join_event.event_id},
)
def test_cant_hide_direct_ancestors(self) -> None:
"""
If you send a message, you must be able to provide the direct
prev_events that said event references.
"""
async def post_json(
destination: str,
path: str,
data: Optional[JsonDict] = None,
long_retries: bool = False,
timeout: Optional[int] = None,
ignore_backoff: bool = False,
args: Optional[QueryParams] = None,
) -> Union[JsonDict, list]:
# If it asks us for new missing events, give them NOTHING
if path.startswith("/_matrix/federation/v1/get_missing_events/"):
return {"events": []}
return {}
self.http_client.post_json = post_json
# Figure out what the forward extremities in the room are (the most recent
# events that aren't tied into the DAG)
forward_extremity_event_ids = self.get_success(
self.hs.get_datastores().main.get_latest_event_ids_in_room(self.room_id)
)
# Now lie about an event's prev_events
lying_event = make_event_from_dict(
self.add_hashes_and_signatures_from_other_server(
{
"room_id": self.room_id,
"sender": self.remote_bad_user_id,
"depth": 1000,
"origin_server_ts": 1,
"type": "m.room.message",
"content": {"body": "hewwo?"},
"auth_events": [],
"prev_events": ["$missing_prev_event"]
+ list(forward_extremity_event_ids),
}
),
room_version=RoomVersions.V10,
)
with LoggingContext("test-context"):
failure = self.get_failure(
self.federation_event_handler.on_receive_pdu(
self.OTHER_SERVER_NAME, lying_event
),
FederationError,
)
# on_receive_pdu should throw an error
self.assertEqual(
failure.value.args[0],
(
"ERROR 403: Your server isn't divulging details about prev_events "
"referenced in this event."
),
)
# Make sure the invalid event isn't there
extrem = self.get_success(self.store.get_latest_event_ids_in_room(self.room_id))
self.assertEqual(extrem, {self.remote_bad_user_join_event.event_id})
class ServerACLsTestCase(unittest.TestCase):
def test_blocked_server(self) -> None:
e = _create_acl_event({"allow": ["*"], "deny": ["evil.com"]})
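The `m.room.server_acls` content above is a pair of glob lists: a server is blocked if its name matches any `deny` pattern, or matches no `allow` pattern. A minimal sketch of that check using stdlib `fnmatch` — this is an illustration of the ACL semantics, not Synapse's actual implementation, which also handles `allow_ip_literals`, port stripping, and malformed entries:

```python
from fnmatch import fnmatch

def server_matches_acl(server_name: str, acl: dict) -> bool:
    """Return True if `server_name` is permitted by the ACL event content.
    Sketch only: deny patterns are checked first, then allow patterns."""
    for pattern in acl.get("deny", []):
        if isinstance(pattern, str) and fnmatch(server_name, pattern):
            return False
    for pattern in acl.get("allow", []):
        if isinstance(pattern, str) and fnmatch(server_name, pattern):
            return True
    # No allow pattern matched: denied
    return False
```

With the content used in the test, `{"allow": ["*"], "deny": ["evil.com"]}`, `evil.com` is rejected by the deny list while every other server matches the `*` allow pattern.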
@@ -355,13 +519,76 @@ class SendJoinFederationTests(unittest.FederatingHomeserverTestCase):
# is probably sufficient to reassure that the bucket is updated.
def _create_acl_event(content: JsonDict) -> EventBase:
return make_event_from_dict(
{
"room_id": "!a:b",
"event_id": "$a:b",
"type": "m.room.server_acls",
"sender": "@a:b",
"content": content,
class StripUnsignedFromEventsTestCase(unittest.TestCase):
"""
Test to make sure that we handle the raw JSON events from federation carefully and
strip anything that shouldn't be there.
"""
def test_strip_unauthorized_unsigned_values(self) -> None:
event1 = {
"sender": "@baduser:test.serv",
"state_key": "@baduser:test.serv",
"event_id": "$event1:test.serv",
"depth": 1000,
"origin_server_ts": 1,
"type": "m.room.member",
"origin": "test.servx",
"content": {"membership": "join"},
"auth_events": [],
"unsigned": {"malicious garbage": "hackz", "more warez": "more hackz"},
}
)
filtered_event = event_from_pdu_json(event1, RoomVersions.V1)
# Make sure unauthorized fields are stripped from unsigned
self.assertNotIn("more warez", filtered_event.unsigned)
def test_strip_event_maintains_allowed_fields(self) -> None:
event2 = {
"sender": "@baduser:test.serv",
"state_key": "@baduser:test.serv",
"event_id": "$event2:test.serv",
"depth": 1000,
"origin_server_ts": 1,
"type": "m.room.member",
"origin": "test.servx",
"auth_events": [],
"content": {"membership": "join"},
"unsigned": {
"malicious garbage": "hackz",
"more warez": "more hackz",
"age": 14,
"invite_room_state": [],
},
}
filtered_event2 = event_from_pdu_json(event2, RoomVersions.V1)
self.assertIn("age", filtered_event2.unsigned)
self.assertEqual(14, filtered_event2.unsigned["age"])
self.assertNotIn("more warez", filtered_event2.unsigned)
# Invite_room_state is allowed in events of type m.room.member
self.assertIn("invite_room_state", filtered_event2.unsigned)
self.assertEqual([], filtered_event2.unsigned["invite_room_state"])
def test_strip_event_removes_fields_based_on_event_type(self) -> None:
event3 = {
"sender": "@baduser:test.serv",
"state_key": "@baduser:test.serv",
"event_id": "$event3:test.serv",
"depth": 1000,
"origin_server_ts": 1,
"type": "m.room.power_levels",
"origin": "test.servx",
"content": {},
"auth_events": [],
"unsigned": {
"malicious garbage": "hackz",
"more warez": "more hackz",
"age": 14,
"invite_room_state": [],
},
}
filtered_event3 = event_from_pdu_json(event3, RoomVersions.V1)
self.assertIn("age", filtered_event3.unsigned)
# Invite_room_state field is only permitted in event type m.room.member
self.assertNotIn("invite_room_state", filtered_event3.unsigned)
self.assertNotIn("more warez", filtered_event3.unsigned)
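The behaviour these three tests verify — drop unknown `unsigned` keys, keep `age`, and keep `invite_room_state` only on `m.room.member` events — can be sketched as a small filter. The allow-lists below are inferred from the assertions above rather than copied from Synapse, so treat the exact key sets as an assumption:

```python
# Keys always permitted in `unsigned` (inferred from the tests above)
ALLOWED_UNSIGNED_KEYS = {"age"}
# Keys permitted only for specific event types
TYPE_SCOPED_UNSIGNED_KEYS = {"m.room.member": {"invite_room_state"}}

def strip_unsigned(event: dict) -> dict:
    """Return a copy of `event` whose `unsigned` dict keeps only
    whitelisted keys. Sketch of the filtering the tests above exercise."""
    allowed = ALLOWED_UNSIGNED_KEYS | TYPE_SCOPED_UNSIGNED_KEYS.get(
        event.get("type"), set()
    )
    stripped = dict(event)
    stripped["unsigned"] = {
        k: v for k, v in event.get("unsigned", {}).items() if k in allowed
    }
    return stripped
```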

View File

@@ -375,7 +375,7 @@ class FederationEventHandlerTests(unittest.FederatingHomeserverTestCase):
In this test, we pretend we are processing a "pulled" event via
backfill. The pulled event successfully processes and the backward
extremeties are updated along with clearing out any failed pull attempts
extremities are updated along with clearing out any failed pull attempts
for those old extremities.
We check that we correctly cleared failed pull attempts of the

View File

@@ -23,14 +23,21 @@ from typing import Optional, cast
from unittest.mock import Mock, call
from parameterized import parameterized
from signedjson.key import generate_signing_key
from signedjson.key import (
encode_verify_key_base64,
generate_signing_key,
get_verify_key,
)
from twisted.test.proto_helpers import MemoryReactor
from synapse.api.constants import EventTypes, Membership, PresenceState
from synapse.api.presence import UserDevicePresenceState, UserPresenceState
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.events.builder import EventBuilder
from synapse.api.room_versions import (
RoomVersion,
)
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.events import EventBase, make_event_from_dict
from synapse.federation.sender import FederationSender
from synapse.handlers.presence import (
BUSY_ONLINE_TIMEOUT,
@@ -45,18 +52,24 @@ from synapse.handlers.presence import (
handle_update,
)
from synapse.rest import admin
from synapse.rest.client import room
from synapse.rest.client import login, room, sync
from synapse.server import HomeServer
from synapse.storage.database import LoggingDatabaseConnection
from synapse.storage.keys import FetchKeyResult
from synapse.types import JsonDict, UserID, get_domain_from_id
from synapse.util import Clock
from tests import unittest
from tests.replication._base import BaseMultiWorkerStreamTestCase
from tests.unittest import override_config
class PresenceUpdateTestCase(unittest.HomeserverTestCase):
servlets = [admin.register_servlets]
servlets = [
admin.register_servlets,
login.register_servlets,
sync.register_servlets,
]
def prepare(
self, reactor: MemoryReactor, clock: Clock, homeserver: HomeServer
@@ -425,6 +438,102 @@ class PresenceUpdateTestCase(unittest.HomeserverTestCase):
wheel_timer.insert.assert_not_called()
# `rc_presence` is set very high during unit tests to avoid ratelimiting
# subtly impacting unrelated tests. We set the ratelimiting back to a
# reasonable value for the tests specific to presence ratelimiting.
@override_config(
{"rc_presence": {"per_user": {"per_second": 0.1, "burst_count": 1}}}
)
def test_over_ratelimit_offline_to_online_to_unavailable(self) -> None:
"""
Send a presence update, check that it went through, immediately send another one and
check that it was ignored.
"""
self._test_ratelimit_offline_to_online_to_unavailable(ratelimited=True)
@override_config(
{"rc_presence": {"per_user": {"per_second": 0.1, "burst_count": 1}}}
)
def test_within_ratelimit_offline_to_online_to_unavailable(self) -> None:
"""
Send a presence update, check that it went through, advance time a sufficient
amount, then send another presence update and check that it also worked.
"""
self._test_ratelimit_offline_to_online_to_unavailable(ratelimited=False)
@override_config(
{"rc_presence": {"per_user": {"per_second": 0.1, "burst_count": 1}}}
)
def _test_ratelimit_offline_to_online_to_unavailable(
self, ratelimited: bool
) -> None:
"""Test rate limit for presence updates sent with sync requests.
Args:
ratelimited: Test rate limited case.
"""
wheel_timer = Mock()
user_id = "@user:pass"
now = 5000000
sync_url = "/sync?access_token=%s&set_presence=%s"
# Register the user who syncs presence
user_id = self.register_user("user", "pass")
access_token = self.login("user", "pass")
# Get the handler (which kicks off a bunch of timers).
presence_handler = self.hs.get_presence_handler()
# Ensure the user is initially offline.
prev_state = UserPresenceState.default(user_id)
new_state = prev_state.copy_and_replace(
state=PresenceState.OFFLINE, last_active_ts=now
)
state, persist_and_notify, federation_ping = handle_update(
prev_state,
new_state,
is_mine=True,
wheel_timer=wheel_timer,
now=now,
persist=False,
)
# Check that the user is offline.
state = self.get_success(
presence_handler.get_state(UserID.from_string(user_id))
)
self.assertEqual(state.state, PresenceState.OFFLINE)
# Send sync request with set_presence=online.
channel = self.make_request("GET", sync_url % (access_token, "online"))
self.assertEqual(200, channel.code)
# Assert the user is now online.
state = self.get_success(
presence_handler.get_state(UserID.from_string(user_id))
)
self.assertEqual(state.state, PresenceState.ONLINE)
if not ratelimited:
# Advance time a sufficient amount to avoid rate limiting.
self.reactor.advance(30)
# Send another sync request with set_presence=unavailable.
channel = self.make_request("GET", sync_url % (access_token, "unavailable"))
self.assertEqual(200, channel.code)
state = self.get_success(
presence_handler.get_state(UserID.from_string(user_id))
)
if ratelimited:
# Assert the user is still online and presence update was ignored.
self.assertEqual(state.state, PresenceState.ONLINE)
else:
# Assert the user is now unavailable.
self.assertEqual(state.state, PresenceState.UNAVAILABLE)
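The `rc_presence` setting these tests override (`per_second: 0.1`, `burst_count: 1`) describes token-bucket semantics: one update is allowed immediately, and a new token accrues every 10 seconds, which is why the non-ratelimited path advances the reactor by 30 seconds. A self-contained toy model of that behaviour — not Synapse's actual `Ratelimiter` class — looks like this:

```python
class TokenBucket:
    """Toy token-bucket limiter matching the rc_presence shape above:
    `rate` tokens accrue per second, capped at `burst`."""

    def __init__(self, rate: float, burst: float) -> None:
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last_ts = 0.0

    def allow(self, now: float) -> bool:
        # Refill tokens for the time elapsed since the last call
        self.tokens = min(
            self.burst, self.tokens + (now - self.last_ts) * self.rate
        )
        self.last_ts = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With `rate=0.1` and `burst=1`, a second update one second after the first is rejected, while one sent 30 seconds later succeeds — the same shape the ratelimited and non-ratelimited branches above assert.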
class PresenceTimeoutTestCase(unittest.TestCase):
"""Tests different timers and that the timer does not change `status_msg` of user."""
@@ -1825,6 +1934,7 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
# self.event_builder_for_2.hostname = "test2"
self.store = hs.get_datastores().main
self.storage_controllers = hs.get_storage_controllers()
self.state = hs.get_state_handler()
self._event_auth_handler = hs.get_event_auth_handler()
@@ -1940,29 +2050,35 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
hostname = get_domain_from_id(user_id)
room_version = self.get_success(self.store.get_room_version_id(room_id))
room_version = self.get_success(self.store.get_room_version(room_id))
builder = EventBuilder(
state=self.state,
event_auth_handler=self._event_auth_handler,
store=self.store,
clock=self.clock,
hostname=hostname,
signing_key=self.random_signing_key,
room_version=KNOWN_ROOM_VERSIONS[room_version],
room_id=room_id,
type=EventTypes.Member,
sender=user_id,
state_key=user_id,
content={"membership": Membership.JOIN},
state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id)
)
prev_event_ids = self.get_success(
self.store.get_latest_event_ids_in_room(room_id)
# Figure out what the forward extremities in the room are (the most recent
# events that aren't tied into the DAG)
forward_extremity_event_ids = self.get_success(
self.hs.get_datastores().main.get_latest_event_ids_in_room(room_id)
)
event = self.get_success(
builder.build(prev_event_ids=list(prev_event_ids), auth_event_ids=None)
event = self.create_fake_event_from_remote_server(
remote_server_name=hostname,
event_dict={
"room_id": room_id,
"sender": user_id,
"type": EventTypes.Member,
"state_key": user_id,
"depth": 1000,
"origin_server_ts": 1,
"content": {"membership": Membership.JOIN},
"auth_events": [
state_map[(EventTypes.Create, "")].event_id,
state_map[(EventTypes.JoinRules, "")].event_id,
],
"prev_events": list(forward_extremity_event_ids),
},
room_version=room_version,
)
self.get_success(self.federation_event_handler.on_receive_pdu(hostname, event))
@@ -1970,3 +2086,50 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
# Check that it was successfully persisted.
self.get_success(self.store.get_event(event.event_id))
self.get_success(self.store.get_event(event.event_id))
def create_fake_event_from_remote_server(
self, remote_server_name: str, event_dict: JsonDict, room_version: RoomVersion
) -> EventBase:
"""
This is similar to what `FederatingHomeserverTestCase` is doing but we don't
need all of the extra baggage and we want to be able to create an event from
many remote servers.
"""
# poke the other server's signing key into the key store, so that we don't
# make requests for it
other_server_signature_key = generate_signing_key("test")
verify_key = get_verify_key(other_server_signature_key)
verify_key_id = "%s:%s" % (verify_key.alg, verify_key.version)
self.get_success(
self.hs.get_datastores().main.store_server_keys_response(
remote_server_name,
from_server=remote_server_name,
ts_added_ms=self.clock.time_msec(),
verify_keys={
verify_key_id: FetchKeyResult(
verify_key=verify_key,
valid_until_ts=self.clock.time_msec() + 10000,
),
},
response_json={
"verify_keys": {
verify_key_id: {"key": encode_verify_key_base64(verify_key)}
}
},
)
)
add_hashes_and_signatures(
room_version=room_version,
event_dict=event_dict,
signature_name=remote_server_name,
signing_key=other_server_signature_key,
)
event = make_event_from_dict(
event_dict,
room_version=room_version,
)
return event
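`add_hashes_and_signatures` above first computes a content hash over the event's canonical JSON (key-sorted, compact encoding, with `unsigned`, `signatures`, and `hashes` removed) and stores it unpadded-base64 under `hashes.sha256`, before signing. A stdlib-only sketch of that hashing step, under the assumption that the canonicalisation and stripped keys match the Matrix specification:

```python
import hashlib
import json
from base64 import b64encode

def compute_content_hash(event_dict: dict) -> str:
    """Sketch of the sha256 content hash over an event's canonical JSON.
    Strips the keys the Matrix spec excludes from hashing, encodes the
    remainder as compact key-sorted JSON, and returns unpadded base64."""
    stripped = {
        k: v
        for k, v in event_dict.items()
        if k not in ("unsigned", "signatures", "hashes")
    }
    canonical = json.dumps(
        stripped, sort_keys=True, separators=(",", ":"), ensure_ascii=False
    ).encode("utf-8")
    digest = hashlib.sha256(canonical).digest()
    return b64encode(digest).decode("ascii").rstrip("=")
```

Because the encoding is key-sorted and the volatile keys are stripped, two dicts that differ only in key order or `unsigned` content hash identically, while any change to hashed content produces a different digest.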

View File

@@ -17,6 +17,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
from http import HTTPStatus
from typing import Collection, ContextManager, List, Optional
from unittest.mock import AsyncMock, Mock, patch
@@ -347,7 +348,15 @@ class SyncTestCase(tests.unittest.HomeserverTestCase):
# the prev_events used when creating the join event, such that the ban does not
# precede the join.
with self._patch_get_latest_events([last_room_creation_event_id]):
self.helper.join(room_id, eve, tok=eve_token)
self.helper.join(
room_id,
eve,
tok=eve_token,
# Previously, this join would succeed but now we expect it to fail at
# this point. The rest of the test is for the case when this used to
# succeed.
expect_code=HTTPStatus.FORBIDDEN,
)
# Eve makes a second, incremental sync.
eve_incremental_sync_after_join: SyncResult = self.get_success(

View File

@@ -22,14 +22,26 @@ import logging
from unittest.mock import AsyncMock, Mock
from netaddr import IPSet
from signedjson.key import (
encode_verify_key_base64,
generate_signing_key,
get_verify_key,
)
from twisted.test.proto_helpers import MemoryReactor
from synapse.api.constants import EventTypes, Membership
from synapse.events.builder import EventBuilderFactory
from synapse.api.room_versions import RoomVersion
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.events import EventBase, make_event_from_dict
from synapse.handlers.typing import TypingWriterHandler
from synapse.http.federation.matrix_federation_agent import MatrixFederationAgent
from synapse.rest.admin import register_servlets_for_client_rest_resource
from synapse.rest.client import login, room
from synapse.types import UserID, create_requester
from synapse.server import HomeServer
from synapse.storage.keys import FetchKeyResult
from synapse.types import JsonDict, UserID, create_requester
from synapse.util import Clock
from tests.replication._base import BaseMultiWorkerStreamTestCase
from tests.server import get_clock
@@ -63,6 +75,9 @@ class FederationSenderTestCase(BaseMultiWorkerStreamTestCase):
ip_blocklist=IPSet(),
)
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.storage_controllers = hs.get_storage_controllers()
def test_send_event_single_sender(self) -> None:
"""Test that using a single federation sender worker correctly sends a
new event.
@@ -243,35 +258,92 @@ class FederationSenderTestCase(BaseMultiWorkerStreamTestCase):
self.assertTrue(sent_on_1)
self.assertTrue(sent_on_2)
def create_fake_event_from_remote_server(
self, remote_server_name: str, event_dict: JsonDict, room_version: RoomVersion
) -> EventBase:
"""
This is similar to what `FederatingHomeserverTestCase` is doing but we don't
need all of the extra baggage and we want to be able to create an event from
many remote servers.
"""
# poke the other server's signing key into the key store, so that we don't
# make requests for it
other_server_signature_key = generate_signing_key("test")
verify_key = get_verify_key(other_server_signature_key)
verify_key_id = "%s:%s" % (verify_key.alg, verify_key.version)
self.get_success(
self.hs.get_datastores().main.store_server_keys_response(
remote_server_name,
from_server=remote_server_name,
ts_added_ms=self.clock.time_msec(),
verify_keys={
verify_key_id: FetchKeyResult(
verify_key=verify_key,
valid_until_ts=self.clock.time_msec() + 10000,
),
},
response_json={
"verify_keys": {
verify_key_id: {"key": encode_verify_key_base64(verify_key)}
}
},
)
)
add_hashes_and_signatures(
room_version=room_version,
event_dict=event_dict,
signature_name=remote_server_name,
signing_key=other_server_signature_key,
)
event = make_event_from_dict(
event_dict,
room_version=room_version,
)
return event
def create_room_with_remote_server(
self, user: str, token: str, remote_server: str = "other_server"
) -> str:
room = self.helper.create_room_as(user, tok=token)
room_id = self.helper.create_room_as(user, tok=token)
store = self.hs.get_datastores().main
federation = self.hs.get_federation_event_handler()
prev_event_ids = self.get_success(store.get_latest_event_ids_in_room(room))
room_version = self.get_success(store.get_room_version(room))
room_version = self.get_success(store.get_room_version(room_id))
factory = EventBuilderFactory(self.hs)
factory.hostname = remote_server
state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id)
)
# Figure out what the forward extremities in the room are (the most recent
# events that aren't tied into the DAG)
prev_event_ids = self.get_success(store.get_latest_event_ids_in_room(room_id))
user_id = UserID("user", remote_server).to_string()
event_dict = {
"type": EventTypes.Member,
"state_key": user_id,
"content": {"membership": Membership.JOIN},
"sender": user_id,
"room_id": room,
}
builder = factory.for_room_version(room_version, event_dict)
join_event = self.get_success(
builder.build(prev_event_ids=list(prev_event_ids), auth_event_ids=None)
join_event = self.create_fake_event_from_remote_server(
remote_server_name=remote_server,
event_dict={
"room_id": room_id,
"sender": user_id,
"type": EventTypes.Member,
"state_key": user_id,
"depth": 1000,
"origin_server_ts": 1,
"content": {"membership": Membership.JOIN},
"auth_events": [
state_map[(EventTypes.Create, "")].event_id,
state_map[(EventTypes.JoinRules, "")].event_id,
],
"prev_events": list(prev_event_ids),
},
room_version=room_version,
)
self.get_success(federation.on_send_membership_event(remote_server, join_event))
self.replicate()
return room
return room_id

View File

@@ -29,6 +29,7 @@ from synapse.types import UserID
from synapse.util import Clock
from tests import unittest
from tests.unittest import override_config
class PresenceTestCase(unittest.HomeserverTestCase):
@@ -95,3 +96,54 @@ class PresenceTestCase(unittest.HomeserverTestCase):
self.assertEqual(channel.code, HTTPStatus.OK)
self.assertEqual(self.presence_handler.set_state.call_count, 0)
@override_config(
{"rc_presence": {"per_user": {"per_second": 0.1, "burst_count": 1}}}
)
def test_put_presence_over_ratelimit(self) -> None:
"""
Multiple PUTs to the status endpoint without sufficient delay will be rate limited.
"""
self.hs.config.server.presence_enabled = True
body = {"presence": "here", "status_msg": "beep boop"}
channel = self.make_request(
"PUT", "/presence/%s/status" % (self.user_id,), body
)
self.assertEqual(channel.code, HTTPStatus.OK)
body = {"presence": "here", "status_msg": "beep boop"}
channel = self.make_request(
"PUT", "/presence/%s/status" % (self.user_id,), body
)
self.assertEqual(channel.code, HTTPStatus.TOO_MANY_REQUESTS)
self.assertEqual(self.presence_handler.set_state.call_count, 1)
@override_config(
{"rc_presence": {"per_user": {"per_second": 0.1, "burst_count": 1}}}
)
def test_put_presence_within_ratelimit(self) -> None:
"""
Multiple PUTs to the status endpoint with sufficient delay should all call set_state.
"""
self.hs.config.server.presence_enabled = True
body = {"presence": "here", "status_msg": "beep boop"}
channel = self.make_request(
"PUT", "/presence/%s/status" % (self.user_id,), body
)
self.assertEqual(channel.code, HTTPStatus.OK)
# Advance time a sufficient amount to avoid rate limiting.
self.reactor.advance(30)
body = {"presence": "here", "status_msg": "beep boop"}
channel = self.make_request(
"PUT", "/presence/%s/status" % (self.user_id,), body
)
self.assertEqual(channel.code, HTTPStatus.OK)
self.assertEqual(self.presence_handler.set_state.call_count, 2)

View File

@@ -742,7 +742,7 @@ class RoomsCreateTestCase(RoomBase):
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
self.assertTrue("room_id" in channel.json_body)
assert channel.resource_usage is not None
self.assertEqual(33, channel.resource_usage.db_txn_count)
self.assertEqual(34, channel.resource_usage.db_txn_count)
def test_post_room_initial_state(self) -> None:
# POST with initial_state config key, expect new room id
@@ -755,7 +755,7 @@ class RoomsCreateTestCase(RoomBase):
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
self.assertTrue("room_id" in channel.json_body)
assert channel.resource_usage is not None
self.assertEqual(35, channel.resource_usage.db_txn_count)
self.assertEqual(36, channel.resource_usage.db_txn_count)
def test_post_room_visibility_key(self) -> None:
# POST with visibility config key, expect new room id

View File

@@ -1,378 +0,0 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2020 The Matrix.org Foundation C.I.C.
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#
from typing import Collection, List, Optional, Union
from unittest.mock import AsyncMock, Mock
from twisted.test.proto_helpers import MemoryReactor
from synapse.api.errors import FederationError
from synapse.api.room_versions import RoomVersion, RoomVersions
from synapse.events import EventBase, make_event_from_dict
from synapse.events.snapshot import EventContext
from synapse.federation.federation_base import event_from_pdu_json
from synapse.handlers.device import DeviceListUpdater
from synapse.http.types import QueryParams
from synapse.logging.context import LoggingContext
from synapse.server import HomeServer
from synapse.types import JsonDict, UserID, create_requester
from synapse.util import Clock
from synapse.util.retryutils import NotRetryingDestination
from tests import unittest
class MessageAcceptTests(unittest.HomeserverTestCase):
def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
self.http_client = Mock()
return self.setup_test_homeserver(federation_http_client=self.http_client)
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
user_id = UserID("us", "test")
our_user = create_requester(user_id)
room_creator = self.hs.get_room_creation_handler()
self.room_id = self.get_success(
room_creator.create_room(
our_user, room_creator._presets_dict["public_chat"], ratelimit=False
)
)[0]
self.store = self.hs.get_datastores().main
# Figure out what the most recent event is
most_recent = next(
iter(
self.get_success(
self.hs.get_datastores().main.get_latest_event_ids_in_room(
self.room_id
)
)
)
)
join_event = make_event_from_dict(
{
"room_id": self.room_id,
"sender": "@baduser:test.serv",
"state_key": "@baduser:test.serv",
"event_id": "$join:test.serv",
"depth": 1000,
"origin_server_ts": 1,
"type": "m.room.member",
"origin": "test.servx",
"content": {"membership": "join"},
"auth_events": [],
"prev_state": [(most_recent, {})],
"prev_events": [(most_recent, {})],
}
)
self.handler = self.hs.get_federation_handler()
federation_event_handler = self.hs.get_federation_event_handler()
async def _check_event_auth(
origin: Optional[str], event: EventBase, context: EventContext
) -> None:
pass
federation_event_handler._check_event_auth = _check_event_auth # type: ignore[method-assign]
self.client = self.hs.get_federation_client()
async def _check_sigs_and_hash_for_pulled_events_and_fetch(
dest: str, pdus: Collection[EventBase], room_version: RoomVersion
) -> List[EventBase]:
return list(pdus)
self.client._check_sigs_and_hash_for_pulled_events_and_fetch = ( # type: ignore[method-assign]
_check_sigs_and_hash_for_pulled_events_and_fetch # type: ignore[assignment]
)
# Send the join, it should return None (which is not an error)
self.assertEqual(
self.get_success(
federation_event_handler.on_receive_pdu("test.serv", join_event)
),
None,
)
# Make sure we actually joined the room
self.assertEqual(
self.get_success(self.store.get_latest_event_ids_in_room(self.room_id)),
{"$join:test.serv"},
)
def test_cant_hide_direct_ancestors(self) -> None:
"""
If you send a message, you must be able to provide the direct
prev_events that said event references.
"""
async def post_json(
destination: str,
path: str,
data: Optional[JsonDict] = None,
long_retries: bool = False,
timeout: Optional[int] = None,
ignore_backoff: bool = False,
args: Optional[QueryParams] = None,
) -> Union[JsonDict, list]:
# If it asks us for new missing events, give them NOTHING
if path.startswith("/_matrix/federation/v1/get_missing_events/"):
return {"events": []}
return {}
self.http_client.post_json = post_json
# Figure out what the most recent event is
most_recent = next(
iter(
self.get_success(self.store.get_latest_event_ids_in_room(self.room_id))
)
)
# Now lie about an event
lying_event = make_event_from_dict(
{
"room_id": self.room_id,
"sender": "@baduser:test.serv",
"event_id": "one:test.serv",
"depth": 1000,
"origin_server_ts": 1,
"type": "m.room.message",
"origin": "test.serv",
"content": {"body": "hewwo?"},
"auth_events": [],
"prev_events": [("two:test.serv", {}), (most_recent, {})],
}
)
federation_event_handler = self.hs.get_federation_event_handler()
with LoggingContext("test-context"):
failure = self.get_failure(
federation_event_handler.on_receive_pdu("test.serv", lying_event),
FederationError,
)
# on_receive_pdu should throw an error
self.assertEqual(
failure.value.args[0],
(
"ERROR 403: Your server isn't divulging details about prev_events "
"referenced in this event."
),
)
# Make sure the invalid event isn't there
extrem = self.get_success(self.store.get_latest_event_ids_in_room(self.room_id))
self.assertEqual(extrem, {"$join:test.serv"})
def test_retry_device_list_resync(self) -> None:
"""Tests that device lists are marked as stale if they couldn't be synced, and
that stale device lists are retried periodically.
"""
remote_user_id = "@john:test_remote"
remote_origin = "test_remote"
# Track the number of attempts to resync the user's device list.
self.resync_attempts = 0
# When this function is called, increment the number of resync attempts (only if
# we're querying devices for the right user ID), then raise a
# NotRetryingDestination error to fail the resync gracefully.
def query_user_devices(
destination: str, user_id: str, timeout: int = 30000
) -> JsonDict:
if user_id == remote_user_id:
self.resync_attempts += 1
raise NotRetryingDestination(0, 0, destination)
# Register the mock on the federation client.
federation_client = self.hs.get_federation_client()
federation_client.query_user_devices = Mock(side_effect=query_user_devices) # type: ignore[method-assign]
# Register a mock on the store so that the incoming update doesn't fail because
# we don't share a room with the user.
store = self.hs.get_datastores().main
store.get_rooms_for_user = AsyncMock(return_value=["!someroom:test"])
# Manually inject a fake device list update. We need this update to include at
# least one prev_id so that the user's device list will need to be retried.
device_list_updater = self.hs.get_device_handler().device_list_updater
assert isinstance(device_list_updater, DeviceListUpdater)
self.get_success(
device_list_updater.incoming_device_list_update(
origin=remote_origin,
edu_content={
"deleted": False,
"device_display_name": "Mobile",
"device_id": "QBUAZIFURK",
"prev_id": [5],
"stream_id": 6,
"user_id": remote_user_id,
},
)
)
# Check that there was one resync attempt.
self.assertEqual(self.resync_attempts, 1)
# Check that the resync attempt failed and caused the user's device list to be
# marked as stale.
need_resync = self.get_success(
store.get_user_ids_requiring_device_list_resync()
)
self.assertIn(remote_user_id, need_resync)
# Check that waiting for 30 seconds caused Synapse to retry resyncing the device
# list.
self.reactor.advance(30)
self.assertEqual(self.resync_attempts, 2)
def test_cross_signing_keys_retry(self) -> None:
"""Tests that resyncing a device list correctly processes cross-signing keys from
the remote server.
"""
remote_user_id = "@john:test_remote"
remote_master_key = "85T7JXPFBAySB/jwby4S3lBPTqY3+Zg53nYuGmu1ggY"
remote_self_signing_key = "QeIiFEjluPBtI7WQdG365QKZcFs9kqmHir6RBD0//nQ"
# Register mock device list retrieval on the federation client.
federation_client = self.hs.get_federation_client()
federation_client.query_user_devices = AsyncMock( # type: ignore[method-assign]
return_value={
"user_id": remote_user_id,
"stream_id": 1,
"devices": [],
"master_key": {
"user_id": remote_user_id,
"usage": ["master"],
"keys": {"ed25519:" + remote_master_key: remote_master_key},
},
"self_signing_key": {
"user_id": remote_user_id,
"usage": ["self_signing"],
"keys": {
"ed25519:" + remote_self_signing_key: remote_self_signing_key
},
},
}
)
# Resync the device list.
device_handler = self.hs.get_device_handler()
self.get_success(
device_handler.device_list_updater.multi_user_device_resync(
[remote_user_id]
),
)
# Retrieve the cross-signing keys for this user.
keys = self.get_success(
self.store.get_e2e_cross_signing_keys_bulk(user_ids=[remote_user_id]),
)
self.assertIn(remote_user_id, keys)
key = keys[remote_user_id]
assert key is not None
# Check that the master key is the one returned by the mock.
master_key = key["master"]
self.assertEqual(len(master_key["keys"]), 1)
self.assertTrue("ed25519:" + remote_master_key in master_key["keys"].keys())
self.assertTrue(remote_master_key in master_key["keys"].values())
# Check that the self-signing key is the one returned by the mock.
self_signing_key = key["self_signing"]
self.assertEqual(len(self_signing_key["keys"]), 1)
self.assertTrue(
"ed25519:" + remote_self_signing_key in self_signing_key["keys"].keys(),
)
self.assertTrue(remote_self_signing_key in self_signing_key["keys"].values())


class StripUnsignedFromEventsTestCase(unittest.TestCase):
def test_strip_unauthorized_unsigned_values(self) -> None:
event1 = {
"sender": "@baduser:test.serv",
"state_key": "@baduser:test.serv",
"event_id": "$event1:test.serv",
"depth": 1000,
"origin_server_ts": 1,
"type": "m.room.member",
"origin": "test.servx",
"content": {"membership": "join"},
"auth_events": [],
"unsigned": {"malicious garbage": "hackz", "more warez": "more hackz"},
}
filtered_event = event_from_pdu_json(event1, RoomVersions.V1)
# Make sure unauthorized fields are stripped from unsigned
self.assertNotIn("more warez", filtered_event.unsigned)

def test_strip_event_maintains_allowed_fields(self) -> None:
event2 = {
"sender": "@baduser:test.serv",
"state_key": "@baduser:test.serv",
"event_id": "$event2:test.serv",
"depth": 1000,
"origin_server_ts": 1,
"type": "m.room.member",
"origin": "test.servx",
"auth_events": [],
"content": {"membership": "join"},
"unsigned": {
"malicious garbage": "hackz",
"more warez": "more hackz",
"age": 14,
"invite_room_state": [],
},
}
filtered_event2 = event_from_pdu_json(event2, RoomVersions.V1)
self.assertIn("age", filtered_event2.unsigned)
self.assertEqual(14, filtered_event2.unsigned["age"])
self.assertNotIn("more warez", filtered_event2.unsigned)
        # invite_room_state is allowed in events of type m.room.member
self.assertIn("invite_room_state", filtered_event2.unsigned)
self.assertEqual([], filtered_event2.unsigned["invite_room_state"])

def test_strip_event_removes_fields_based_on_event_type(self) -> None:
event3 = {
"sender": "@baduser:test.serv",
"state_key": "@baduser:test.serv",
"event_id": "$event3:test.serv",
"depth": 1000,
"origin_server_ts": 1,
"type": "m.room.power_levels",
"origin": "test.servx",
"content": {},
"auth_events": [],
"unsigned": {
"malicious garbage": "hackz",
"more warez": "more hackz",
"age": 14,
"invite_room_state": [],
},
}
filtered_event3 = event_from_pdu_json(event3, RoomVersions.V1)
self.assertIn("age", filtered_event3.unsigned)
        # invite_room_state is only permitted in events of type m.room.member
self.assertNotIn("invite_room_state", filtered_event3.unsigned)
self.assertNotIn("more warez", filtered_event3.unsigned)


@@ -200,6 +200,7 @@ def default_config(
"per_user": {"per_second": 10000, "burst_count": 10000},
},
"rc_3pid_validation": {"per_second": 10000, "burst_count": 10000},
"rc_presence": {"per_user": {"per_second": 10000, "burst_count": 10000}},
"saml2_enabled": False,
"public_baseurl": None,
"default_identity_server": None,
@@ -399,11 +400,24 @@ class TestTimeout(Exception):
class test_timeout:
"""
FIXME: This implementation is not robust against other code tight-looping and
preventing the signals propagating and timing out the test. You may need to add
`time.sleep(0.1)` to your code in order to allow this timeout to work correctly.
```py
with test_timeout(3):
while True:
my_checking_func()
time.sleep(0.1)
```
"""

    def __init__(self, seconds: int, error_message: Optional[str] = None) -> None:
        self.error_message = f"Test timed out after {seconds}s"
        if error_message is not None:
            self.error_message += f": {error_message}"
        self.seconds = seconds

def handle_timeout(self, signum: int, frame: Optional[FrameType]) -> None:
raise TestTimeout(self.error_message)