Compare commits

...

192 Commits

Author SHA1 Message Date
Erik Johnston
b37ad05a62 Newsfile 2024-11-27 11:35:27 +00:00
Erik Johnston
d739d0f71d Fix release process to not create duplicate releases 2024-11-27 11:34:47 +00:00
V02460
a58f09acc7 Bump pyo3 to v0.23.2 (#17966)
Keep up-to-date with pyo3 releases. This bump enables Python 3.13
support and resolves deprecations.

Links for quick reference:
https://github.com/PyO3/pyo3/releases
https://github.com/davidhewitt/pythonize/releases
https://github.com/vorner/pyo3-log
2024-11-27 10:46:00 +00:00
Quentin Gliech
cee9da0da5 MSC4108: Add a Content-Type header on the PUT response (#17253)
This is a workaround for some proxy setups, where the ETag header gets
stripped from the response headers unless there is a Content-Type header
set.

In particular, we saw this bug when putting Cloudflare in front of
Synapse.
I'm pretty sure this is a Cloudflare bug, as this behaviour isn't
documented anywhere, and doesn't make sense whatsoever.

---------

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2024-11-26 19:43:26 +01:00
Quentin Gliech
a9c4d1c8ac Merge branch 'master' into develop 2024-11-26 16:08:27 +01:00
Quentin Gliech
8c653e1dd6 1.120.0 2024-11-26 14:11:12 +01:00
dependabot[bot]
cd7d90bd28 Bump tomli from 2.0.2 to 2.1.0 (#17959) 2024-11-26 09:30:16 +00:00
Richard van der Hoff
02aa7adf4c Fix delete_old_otks job on worker deployments (#17960)
In a worker-mode deployment, the `E2eKeysHandler` is not necessarily
loaded, which means the handler for the `delete_old_otks` task will not
be registered. Make sure we load the handler.

Introduced in https://github.com/element-hq/synapse/pull/17934
2024-11-26 08:45:18 +01:00
Erik Johnston
3943d2fde7 Fix up logic for delaying sending read receipts over federation. (#17933)
For context of why we delay read receipts, see
https://github.com/matrix-org/synapse/issues/4730.

Element Web often sends read receipts in quick succession: if it reloads
the timeline, it'll send one for the last message in the old timeline and
another for the last message in the new timeline. This caused remote users
to see a read receipt for older messages come through quickly, while the
second read receipt for the most recent message took a while to arrive.

There are two things going on in this PR:
1. There was a mismatch between seconds and milliseconds, and so we
ended up delaying for far longer than intended.
2. Changing the logic to reuse the `DestinationWakeupQueue` (used for
presence).

The changes in logic are:
- Treat the first receipt and subsequent receipts in a room in the same
way.
- Whitelist certain classes of receipts to never be delayed, i.e.
receipts in small rooms, receipts for events that were sent within the
last 60s, and receipts sent to the event sender's server.
- The maximum delay a receipt can have before being sent to a server is
30s, and we'll send out receipts to remotes at least at 50Hz (by
default).

The upshot is that this should make receipts feel more snappy over
federation.

This new logic should send roughly 10%–20% of transactions
immediately on matrix.org.
2024-11-25 18:12:33 +00:00
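
To make the whitelist concrete, here is a hypothetical sketch of the decision described in the commit above. The 60s window and 30s maximum delay come from the commit message; the small-room cutoff, function name, and parameters are illustrative assumptions, not Synapse's actual code.

```py
import time

RECENT_EVENT_AGE_MS = 60_000  # "events that were sent within the last 60s"
SMALL_ROOM_SIZE = 10          # assumption: the actual cutoff isn't given above

def should_send_receipt_immediately(
    room_size: int,
    event_ts_ms: int,
    destination: str,
    event_sender_server: str,
) -> bool:
    """Return True if this receipt is whitelisted to skip the delay queue."""
    now_ms = time.time() * 1000
    if room_size <= SMALL_ROOM_SIZE:                 # receipts in small rooms
        return True
    if now_ms - event_ts_ms <= RECENT_EVENT_AGE_MS:  # recently-sent events
        return True
    if destination == event_sender_server:           # the event sender's server
        return True
    # Otherwise queue it, with at most a 30s delay per the commit above.
    return False
```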
dependabot[bot]
93cc955051 Bump tornado from 6.4.1 to 6.4.2 (#17955) 2024-11-25 14:23:32 +00:00
Shay
4587decd67 Return suspended status when querying user account (#17952) 2024-11-22 12:37:19 +00:00
Matthew Hodgson
4c67d20af7 link to element-docker-demo from contrib/docker* (#17953) 2024-11-22 12:35:03 +00:00
Valentin Iovene
80e39fd834 Add Forgejo oidc provider config example (#17872) 2024-11-20 16:06:08 -06:00
Olivier 'reivilibre
573bdbc824 Merge branch 'release-v1.120' into develop 2024-11-20 17:26:16 +00:00
Erik Johnston
79c02cada0 Fix incorrect comment in new schema delta (#17936)
Added in #17912; it was a bad copy-and-paste.
2024-11-20 17:12:17 +00:00
dependabot[bot]
81b080f7a2 Bump serde_json from 1.0.132 to 1.0.133 (#17939) 2024-11-20 16:52:19 +00:00
V02460
84ec15c47e Raise setuptools_rust version cap to 1.10.2 (#17944) 2024-11-20 16:49:21 +00:00
Olivier 'reivilibre
0202e5f210 Tweak changelog 2024-11-20 16:45:54 +00:00
Will Hunt
f73edbe4d2 Add encrypted appservice extensions to Complement test image. (#17945) 2024-11-20 16:35:43 +00:00
Olivier 'reivilibre
ec4d136965 1.120.0rc1 2024-11-20 15:13:32 +00:00
Olivier 'reivilibre
ddd1d79d03 Fix nix flake 2024-11-20 15:01:56 +00:00
Travis Ralston
d0a474d312 Enable authenticated media by default (#17889)
Co-authored-by: Olivier 'reivilibre <oliverw@matrix.org>
2024-11-20 14:48:22 +00:00
Renaud Allard
8291aa8fd7 Support both import names of PyPI package python-multipart. (#17932) 2024-11-20 11:48:04 +00:00
Erik Johnston
1092a35a2a Speed up slow initial sliding syncs on large servers (#17946)
This was due to a missing index, which meant that deleting previous
connections associated with the device and `conn_id` took a long time.
2024-11-19 15:03:32 +00:00
Richard van der Hoff
c5e89f5fae Create one-off scheduled task to delete old OTKs (#17934)
To work around the fact that,
pre-https://github.com/element-hq/synapse/pull/17903, our database may
have old one-time-keys that the clients have long thrown away the
private keys for, we want to delete OTKs that look like they came from
libolm.

To spread the load a bit, without holding up other background database
updates, we use a scheduled task to do the work.
2024-11-19 11:20:48 +00:00
dependabot[bot]
e918f683d4 Bump serde from 1.0.214 to 1.0.215 (#17938) 2024-11-18 15:48:26 +00:00
dependabot[bot]
4efd1056ca Bump packaging from 24.1 to 24.2 (#17940) 2024-11-18 15:48:05 +00:00
dependabot[bot]
0f32408c80 Bump phonenumbers from 8.13.49 to 8.13.50 (#17942) 2024-11-18 15:47:54 +00:00
dependabot[bot]
9d837daa8a Bump immutabledict from 4.2.0 to 4.2.1 (#17941) 2024-11-18 15:24:44 +00:00
Richard van der Hoff
d72843056b Add some documentation about backing up Synapse (#17931)
Fixes: https://github.com/element-hq/element-meta/issues/2155
Fixes: https://github.com/element-hq/synapse/issues/2046
2024-11-18 14:05:49 +00:00
Devon Hudson
e80dad5fa9 Move server event filtering logic to rust (#17928)
2024-11-14 16:18:24 +00:00
Erik Johnston
97284689ea Merge branch 'master' into develop 2024-11-13 21:51:44 +00:00
Poruri Sai Rahul
c812a79422 Removal: Remove support for experimental msc3886 (#17638) 2024-11-13 14:10:20 +00:00
Erik Johnston
850ff14613 1.119.0 2024-11-13 13:58:18 +00:00
Erik Johnston
e0fdb862cb Bump macos version used to build wheels (#17924)
macOS 12 is end-of-life and GitHub is deprecating support for it
(including doing brownouts). Let's bump to macOS 13.
2024-11-13 11:30:04 +00:00
Erik Johnston
73dc05c993 Unpin the upload release GHA action (#17923)
We were pinned to an old version that had deprecation warnings.

In new versions of the action, leaving off properties (i.e. `draft` and
`prerelease`) tells the action not to modify those properties of the
release.
2024-11-12 16:52:00 +00:00
Benjamin Bouvier
bfb197c596 Fix typo in error message when a media ID isn't known (#17865) 2024-11-12 16:41:14 +00:00
Erik Johnston
f387f47a6a Merge branch 'release-v1.119' into develop 2024-11-11 15:47:27 +00:00
Erik Johnston
a4c503674f 1.119.0rc2 2024-11-11 14:33:37 +00:00
Erik Johnston
2637b26cfe Fix building and attaching release artifacts (#17921)
Broke in #17905 due to upgrading the `upload-artifact` action, as we
didn't rename debs. I think we also need to change how we download the
artifacts and attach them to a release, as they'll download to a
different place.

Docs:
- https://github.com/actions/upload-artifact/tree/v4/
- https://github.com/actions/download-artifact/tree/v4/
2024-11-11 14:32:45 +00:00
dependabot[bot]
db59067e78 Bump bleach from 6.1.0 to 6.2.0 (#17918) 2024-11-11 14:15:17 +00:00
dependabot[bot]
7feb07c3e9 Bump pygithub from 2.4.0 to 2.5.0 (#17917) 2024-11-11 13:52:14 +00:00
dependabot[bot]
54e0086abd Bump ruff from 0.7.2 to 0.7.3 (#17919) 2024-11-11 13:51:47 +00:00
dependabot[bot]
9916932e98 Bump anyhow from 1.0.92 to 1.0.93 (#17920) 2024-11-11 13:51:36 +00:00
Erik Johnston
f4943b875b Update changelog 2024-11-11 11:37:09 +00:00
Erik Johnston
92fcca8ed7 Update changelog 2024-11-11 10:46:34 +00:00
Erik Johnston
c486ec8bc2 Add index to current_state_delta_stream (#17912)
As we're now using it in the sync APIs to get state changes within a
room.
2024-11-11 10:45:46 +00:00
reivilibre
20fc9fcc33 Clarify the semantics of the enable_authenticated_media configuration option. (#17913)
Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2024-11-11 10:44:47 +00:00
Devon Hudson
2f41f6d947 Update changelog for release 2024-11-08 10:23:07 -07:00
Devon Hudson
f377cee7ec Merge branch 'develop' into release-v1.119 2024-11-08 10:06:46 -07:00
Erik Johnston
cacd4fd7bd Fix MSC4222 returning full state (#17915)
There was a bug that meant we would return the full state of the room on
incremental syncs when using lazy loaded members and there were no
entries in the timeline.

This was due to trying to use `state_filter or state_filter.all()` as
shorthand for handling the `None` case; however, `state_filter` implements
`__bool__`, so if the state filter was empty it would be replaced with the
full filter.

c.f. MSC4222 and #17888
2024-11-08 16:41:24 +00:00
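
A minimal illustration of the Python pitfall described in the commit above; this `StateFilter` is a stand-in for the idea, not Synapse's real class.

```py
class StateFilter:
    """Stand-in filter: empty means 'no state', all() means 'everything'."""

    def __init__(self, types: dict) -> None:
        self.types = types

    def __bool__(self) -> bool:
        return bool(self.types)  # an *empty* filter is falsy

    @staticmethod
    def all() -> "StateFilter":
        return StateFilter({"*": {"*"}})

empty = StateFilter({})

# Buggy shorthand: an empty-but-not-None filter is falsy, so it is silently
# replaced by the full filter -- returning the full room state.
buggy = empty or StateFilter.all()
assert buggy.types  # oops: we got the "everything" filter

# Correct: only substitute the default when the caller passed None.
fixed = empty if empty is not None else StateFilter.all()
assert not fixed.types  # the empty filter is preserved
```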
Erik Johnston
c7a1d0aa1a Fix Twisted tests with latest release (#17911)
c.f. #17906 and #17907
2024-11-07 16:22:09 +00:00
Andrew Morgan
c92639df21 Switch portdb CI to python 3.13, pg 17 (#17909) 2024-11-07 16:09:45 +00:00
Erik Johnston
d0fc1e904a Fix cancellation tests with new Twisted. (#17906)
The latest Twisted release changed how they implemented `__await__` on
deferreds, which broke the machinery we used to test cancellation.

This PR changes things a bit to instead patch the `__await__` method,
which is a stable API. This mostly doesn't change the core logic, except
for fixing two bugs:
- We previously did not intercept all await points.
- After cancellation we now need to not only unblock currently blocked
await points, but also make sure we don't block any future await points.

c.f. https://github.com/twisted/twisted/pull/12226

---------

Co-authored-by: Devon Hudson <devon.dmytro@gmail.com>
2024-11-07 15:26:14 +00:00
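
A rough, self-contained sketch of the technique the commit above describes: wrapping an object's `__await__` so a test can observe (and potentially block) every await point. This uses a plain Python awaitable rather than Twisted's `Deferred`, and is only an illustration of the approach, not the PR's actual machinery.

```py
class Awaitable:
    """A trivial awaitable that suspends once before producing a value."""

    def __init__(self, value):
        self.value = value

    def __await__(self):
        yield self          # one suspension ("await point")
        return self.value

_original_await = Awaitable.__await__

def _instrumented_await(self):
    """Delegate to the real __await__, noting each suspension point."""
    gen = _original_await(self)
    sent = None
    while True:
        try:
            yielded = gen.send(sent)
        except StopIteration as e:
            return e.value
        print("await point reached")  # a test could pause or cancel here
        sent = yield yielded

# Patch the class attribute: `await` looks __await__ up on the type.
Awaitable.__await__ = _instrumented_await

async def main() -> int:
    return await Awaitable(42)

# Drive the coroutine by hand (no event loop needed for this sketch).
coro = main()
try:
    while True:
        coro.send(None)
except StopIteration as e:
    assert e.value == 42
```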
Erik Johnston
77eafd47df Fix other unit tests with latest twisted (#17907)
There's also https://github.com/element-hq/synapse/pull/17906
2024-11-07 10:11:13 +00:00
Richard van der Hoff
2a321bac35 Issue one time keys in upload order (#17903)
Currently, one-time-keys are issued in a somewhat random order. (In
practice, they are issued according to the lexicographical order of
their key IDs.) That can lead to a situation where a client gives up
hope of a given OTK ever being used, whilst it is still on the server.

Related: https://github.com/element-hq/element-meta/issues/2356
2024-11-06 22:21:06 +00:00
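
A tiny illustration of the ordering change described above: issuing the oldest-uploaded key first instead of the lexicographically-first key ID. The key IDs and timestamps here are made up for the example.

```py
# Keys as (key_id, upload_ts) pairs; key IDs sort lexicographically,
# which need not match the order the client uploaded them in.
uploaded = [("AAAAHw", 1000), ("AAAAHg", 2000), ("AAAAHx", 1500)]

# Old behaviour: issue by key ID, so a recently-uploaded key can jump the
# queue and an old key can linger on the server indefinitely.
by_key_id = sorted(uploaded, key=lambda k: k[0])

# New behaviour: issue in upload order, so the oldest key is claimed first
# and clients can safely discard private keys after a bounded time.
by_upload = sorted(uploaded, key=lambda k: k[1])

assert by_key_id[0][0] == "AAAAHg"  # lexicographically first
assert by_upload[0][0] == "AAAAHw"  # uploaded first
```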
Devon Hudson
eda735e4bb Remove support for python 3.8 (#17908)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2024-11-06 19:36:01 +00:00
Eric Eastwood
e1f5da65e1 Update version constraint to allow the latest poetry-core 1.9.1 (#17902)

Context:

> I am working on updating poetry-core in Fedora and synapse is one of
affected packages. Please run a CI to see if it works properly. Thank
you.

Mergeable version of https://github.com/element-hq/synapse/pull/17848
2024-11-06 10:51:19 -06:00
Devon Hudson
a4438c9bc1 Cleanup changelog 2024-11-06 09:15:59 -07:00
Devon Hudson
9266ba72b5 1.119.0rc1 2024-11-06 09:03:06 -07:00
Devon Hudson
61aadb158f Use unique name for each os.arch variant when uploading Wheels (#17905)
2024-11-06 15:21:45 +00:00
Sandro
75698a3e53 Improve nix flake to use nixpkgs-unstable in lieu of master (#17852) 2024-11-06 14:03:46 +00:00
dependabot[bot]
46bd7e136d Bump actions/download-artifact from 3 to 4.1.7 in /.github/workflows (#17657)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 3 to 4.1.7.

Release notes: https://github.com/actions/download-artifact/releases

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Devon Hudson <devonhudson@librem.one>
2024-11-06 00:24:40 +00:00
Eric Eastwood
eac170b21b Use more correct changelog entries for refactoring Generator usage (#17890)
 - https://github.com/element-hq/synapse/pull/17813
 - https://github.com/element-hq/synapse/pull/17814
 - https://github.com/element-hq/synapse/pull/17815
 - https://github.com/element-hq/synapse/pull/17816
 - https://github.com/element-hq/synapse/pull/17817

2024-11-05 22:54:18 +00:00
Alexander Udovichenko
211c31dbd7 Fix WheelTimer implementation that could expire timeouts early (#17850)
When entries were inserted at the end of the timer queue, an unnecessary
entry with a duplicated key was inserted. This could cause some timeouts
to expire early and consume memory.
2024-11-05 12:08:17 -06:00
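
A minimal sketch of the bucketed-wheel idea and the guard the fix above adds: when an insert lands at the tail of the queue, reuse the existing tail bucket if its key matches rather than appending a duplicate bucket. Names and details are illustrative, not Synapse's actual implementation.

```py
from typing import Generic, List, Tuple, TypeVar

T = TypeVar("T")

class WheelTimer(Generic[T]):
    """Minimal bucketed timer wheel: fetch() pops whole expired buckets."""

    def __init__(self, bucket_size_ms: int = 5000) -> None:
        self.bucket_size = bucket_size_ms
        self.entries: List[Tuple[int, List[T]]] = []  # (bucket key, objects)

    def insert(self, now_ms: int, obj: T, then_ms: int) -> None:
        # +1 so a bucket only becomes fetchable once its window has passed.
        key = max(then_ms, now_ms) // self.bucket_size + 1
        if self.entries and self.entries[-1][0] >= key:
            # The fix: reuse the tail bucket instead of appending a second
            # entry with a duplicated key, which wasted memory and let some
            # timeouts expire early.
            self.entries[-1][1].append(obj)
        else:
            self.entries.append((key, [obj]))

    def fetch(self, now_ms: int) -> List[T]:
        now_key = now_ms // self.bucket_size
        ready: List[T] = []
        while self.entries and self.entries[0][0] <= now_key:
            ready.extend(self.entries.pop(0)[1])
        return ready

t: WheelTimer[str] = WheelTimer()
t.insert(now_ms=0, obj="a", then_ms=4000)
t.insert(now_ms=0, obj="b", then_ms=4500)  # lands in the same tail bucket
assert t.fetch(now_ms=1000) == []          # nothing has expired yet
assert t.fetch(now_ms=6000) == ["a", "b"]
```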
Erik Johnston
361bdafb87 Add experimental support for MSC4222 (#17888)
Basically, if the client sets a special query param on `/sync` v2, then
instead of responding with `state` at the *start* of the timeline, we
respond with `state_after` at the *end* of the timeline.

We do this by using the `current_state_delta_stream` table, which is
actually reliable, rather than messing around with "state at" points on
the timeline.

c.f. MSC4222
2024-11-05 14:45:57 +00:00
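
Illustrative only: the shape of the change for a single room in the `/sync` v2 response, using the `state`/`state_after` names from the commit above. The exact query-parameter and unstable-prefix names come from MSC4222 and aren't spelled out here.

```py
# Without MSC4222: "state" holds room state at the *start* of the timeline,
# so the client must apply timeline state events on top of it.
room_v2 = {
    "timeline": {"events": ["...timeline events..."]},
    "state": {"events": ["...state at start of timeline..."]},
}

# With MSC4222 enabled: "state_after" holds room state at the *end* of the
# timeline, computed from current_state_delta_stream, so it stays reliable
# even when the timeline is limited.
room_msc4222 = {
    "timeline": {"events": ["...timeline events..."]},
    "state_after": {"events": ["...state at end of timeline..."]},
}
```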
Andrew Morgan
1c2b18a704 Bump Synapse Dockerfile default to Python 3.12 (#17887) 2024-11-05 13:15:10 +00:00
Eric Eastwood
2c9ed5e510 Remove usage of internal header encoding API (#17894)
```py
from twisted.web.http_headers import Headers
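# NB: the two attributes below are underscore-prefixed (private) Twisted
# APIs; this change removes Synapse's usage of them.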

Headers()._canonicalNameCaps
Headers()._encodeName
```

Introduced in https://github.com/matrix-org/synapse/pull/15913 <-
https://github.com/matrix-org/synapse/pull/15773
2024-11-04 12:20:07 -06:00
dependabot[bot]
9c0a3963bc Bump phonenumbers from 8.13.48 to 8.13.49 (#17899) 2024-11-04 17:21:05 +00:00
Eric Eastwood
0932c77539 Sliding Sync: Lazy-loading room members on incremental sync (remember memberships) (#17809)
Lazy-loading room members on incremental sync and remembering which
memberships we've sent down the connection before (up to 100).

Fix https://github.com/element-hq/synapse/issues/17804
2024-11-04 10:17:58 -06:00
dependabot[bot]
5580a820ae Bump ruff from 0.7.1 to 0.7.2 (#17897) 2024-11-04 16:14:46 +00:00
dependabot[bot]
541a009564 Bump anyhow from 1.0.91 to 1.0.92 (#17901) 2024-11-04 16:14:10 +00:00
dependabot[bot]
b5493899c5 Bump serde from 1.0.213 to 1.0.214 (#17900) 2024-11-04 16:14:01 +00:00
dependabot[bot]
da7d71e2a2 Bump mypy-zope from 1.0.7 to 1.0.8 (#17898) 2024-11-04 16:13:16 +00:00
Travis Ralston
c705beebf7 Support & use stable endpoints for MSC4151 (#17374)
https://github.com/matrix-org/matrix-spec-proposals/pull/4151 has
finished FCP.

See https://github.com/element-hq/synapse/issues/17373 for unstable
endpoint removal

---------

Co-authored-by: Andrew Morgan <andrew@amorgan.xyz>
2024-10-31 09:55:30 +00:00
Jason Little
47fe6df013 Remove Generator in _prune_old_outbound_device_pokes (#17814)
Context: https://github.com/matrix-org/synapse/issues/15439
(https://github.com/element-hq/synapse/issues/15439)

Also see discussion in https://github.com/element-hq/synapse/pull/17813
2024-10-30 21:21:22 -05:00
Jason Little
034d472688 Remove Generator in _purge_unreferenced_state_groups twice (#17815)
Context: https://github.com/matrix-org/synapse/issues/15439
(https://github.com/element-hq/synapse/issues/15439)

Also see discussion in https://github.com/element-hq/synapse/pull/17813
2024-10-30 20:16:49 -05:00
Jason Little
0c429fae1d Remove Generator in update_cached_last_access_time (#17816)
Context: https://github.com/matrix-org/synapse/issues/15439
(https://github.com/element-hq/synapse/issues/15439)

Also see discussion in https://github.com/element-hq/synapse/pull/17813
2024-10-30 20:16:24 -05:00
Jason Little
2e5fe3f187 Remove Generator in store_search_entries_txn (#17817)
Context: https://github.com/matrix-org/synapse/issues/15439
(https://github.com/element-hq/synapse/issues/15439)

Also see discussion in https://github.com/element-hq/synapse/pull/17813
2024-10-30 20:15:57 -05:00
Jason Little
af59a99933 Remove Generator from 4 places in PersistEventStore (#17818)
Context: https://github.com/matrix-org/synapse/issues/15439
(https://github.com/element-hq/synapse/issues/15439)

Also see discussion in https://github.com/element-hq/synapse/pull/17813
2024-10-30 20:14:36 -05:00
Jason Little
7987d5e638 Remove Generator in _quarantine_media_txn() (#17813) 2024-10-30 19:34:11 -05:00
Lama
3ae80b0de4 Check if user is in room before being able to tag it (#17839)
Fix #17819
2024-10-30 11:55:23 -05:00
dependabot[bot]
5c781b578d Bump ruff from 0.6.9 to 0.7.1 (#17868) 2024-10-30 11:57:36 +00:00
dependabot[bot]
418fbba8de Bump phonenumbers from 8.13.47 to 8.13.48 (#17880) 2024-10-30 11:56:20 +00:00
dependabot[bot]
6d65c3944b Bump python-multipart from 0.0.12 to 0.0.16 (#17879) 2024-10-30 11:56:12 +00:00
dependabot[bot]
330f170c0e Bump bytes from 1.7.2 to 1.8.0 (#17877) 2024-10-30 11:55:17 +00:00
dependabot[bot]
bf03361c86 Bump anyhow from 1.0.90 to 1.0.91 (#17876) 2024-10-30 11:54:59 +00:00
dependabot[bot]
3e750ab0d8 Bump serde from 1.0.210 to 1.0.213 (#17875) 2024-10-30 11:54:48 +00:00
dependabot[bot]
9cd3545bca Bump regex from 1.11.0 to 1.11.1 (#17874) 2024-10-30 11:54:38 +00:00
Erik Johnston
83513b75f7 Speed up sliding sync by computing extensions in parallel (#17884)
The main change here is to add a helper function
`gather_optional_coroutines`, which works similarly to
`yieldable_gather_results` but takes a set of coroutines rather than a
function.
2024-10-30 10:51:04 +00:00
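
A minimal asyncio-based sketch of the helper's idea, assuming the behaviour described above (run the non-None coroutines concurrently, preserving positions). Synapse's real helper is Twisted-based; this is not its actual code.

```py
import asyncio
from typing import Any, Coroutine, Optional, Tuple

async def gather_optional_coroutines(
    *coros: Optional[Coroutine[Any, Any, Any]],
) -> Tuple[Optional[Any], ...]:
    """Run the non-None coroutines concurrently, keeping their positions.

    Returns a tuple the same length as the input, with None in the
    positions where no coroutine was supplied.
    """
    # Remember which slots actually hold work to do.
    pending = [(i, c) for i, c in enumerate(coros) if c is not None]
    results = [None] * len(coros)
    if pending:
        done = await asyncio.gather(*(c for _, c in pending))
        for (i, _), value in zip(pending, done):
            results[i] = value
    return tuple(results)

async def main() -> None:
    async def work(x: int) -> int:
        return x * 2

    print(await gather_optional_coroutines(work(1), None, work(3)))
    # (2, None, 6)

asyncio.run(main())
```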
Shay
58deef5eba Add admin handler to list of handlers used for background tasks (#17847)
Fixes #17823

While we're at it, make a change so that the redactions are sent as the
admin if the user is not a member of the server (otherwise these fail
with a "User must be our own" message).
2024-10-29 13:50:13 -05:00
Erik Johnston
d427403c67 Fix check for outdated Rust library (#17861)
This failed when installing with poetry, so let's properly detect
what's going on.
2024-10-29 17:06:15 +00:00
Till Faelligen
e9f9625d6b Merge branch 'master' into develop 2024-10-29 17:47:05 +01:00
Till Faelligen
4be3bd41fd Move announcements up 2024-10-29 17:05:22 +01:00
Till Faelligen
b3b1db4057 1.118.0 2024-10-29 15:30:10 +01:00
Erik Johnston
6c51f8649d Include the destination in the error of 'Destination mismatch' (#17830)
To help debug problems such as
https://github.com/element-hq/synapse/issues/17822
2024-10-29 10:09:25 +00:00
dependabot[bot]
69e9b75373 Bump types-setuptools from 75.1.0.20241014 to 75.2.0.20241019 (#17856)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 10:44:12 +01:00
dependabot[bot]
5d0514f29b Bump serde_json from 1.0.128 to 1.0.132 (#17857)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 10:43:40 +01:00
dependabot[bot]
4e5410fdae Bump types-psycopg2 from 2.9.21.20240819 to 2.9.21.20241019 (#17855)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 10:42:38 +01:00
dependabot[bot]
12d65a6778 Bump cryptography from 43.0.1 to 43.0.3 (#17853)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 10:40:58 +01:00
dependabot[bot]
1006c12eb2 Bump anyhow from 1.0.89 to 1.0.90 (#17858)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 10:35:37 +01:00
Andrew Morgan
57efc8c03e Add media tests for a CMYK JPEG image (#17786) 2024-10-23 18:26:01 +01:00
Andrew Morgan
46c885f5b5 fix spelling in changelog 2024-10-22 12:00:40 +01:00
Andrew Morgan
4b94a056bd 1.118.0rc1 2024-10-22 11:56:08 +01:00
Eric Eastwood
a5e16a4ab5 Sliding Sync: Reset forgotten status when membership changes (like rejoining a room) (#17835)
Reset `sliding_sync_membership_snapshots` -> `forgotten` status when
membership changes (like rejoining a room).

Fix https://github.com/element-hq/synapse/issues/17781

### What was the problem before?

Previously, if someone used `/forget` on one of their rooms, it would
update `sliding_sync_membership_snapshots` as expected but when someone
rejoined the room (or had any membership change), the upsert didn't
overwrite and reset the `forgotten` status so it remained `forgotten`
and invisible down the Sliding Sync endpoint.
2024-10-22 11:06:46 +01:00
Quentin Gliech
80ad02e10e Ensure Python 3.13 and PostgreSQL 17 compatibility (#17752)
This adds Python 3.13.0 to the trial test matrix.

Also updates `cffi` and `zope.interface` in the locked dependencies to
make sure we have versions compatible with Python 3.13. For some
reason, they are not being picked up by dependabot.
2024-10-22 09:23:36 +00:00
dependabot[bot]
9512b84a72 Bump mypy from 1.10.1 to 1.11.2 (#17842)
Bumps [mypy](https://github.com/python/mypy) from 1.10.1 to 1.11.2.

Changelog: https://github.com/python/mypy/blob/master/CHANGELOG.md

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-17 15:05:00 +00:00
dependabot[bot]
22aa925523 Bump types-requests from 2.32.0.20240914 to 2.32.0.20241016 (#17841)
Bumps [types-requests](https://github.com/python/typeshed) from
2.32.0.20240914 to 2.32.0.20241016.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-17 14:52:18 +00:00
dependabot[bot]
0ab99369a1 Bump sentry-sdk from 2.16.0 to 2.17.0 (#17844)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from
2.16.0 to 2.17.0.

Release notes: https://github.com/getsentry/sentry-python/releases

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-17 14:34:00 +00:00
dependabot[bot]
6ececb8f2a Bump psycopg2 from 2.9.9 to 2.9.10 (#17843)
Bumps [psycopg2](https://github.com/psycopg/psycopg2) from 2.9.9 to
2.9.10.

Changelog: https://github.com/psycopg/psycopg2/blob/master/NEWS

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-17 14:29:05 +00:00
Erik Johnston
2ce7a1edf7 Merge branch 'master' into develop 2024-10-15 15:01:48 +01:00
Erik Johnston
ec885ffd33 1.117.0 2024-10-15 10:46:33 +01:00
Tulir Asokan
11bc9a1b3a Implement MSC4210: Remove legacy mentions (#17783) 2024-10-14 14:24:28 +01:00
Andrew Morgan
c5b379de66 Enable the .org.matrix.msc4028.encrypted_event push rule by default (#17826)
Clients will still only see this rule if the corresponding experimental
feature, `msc4028_push_encrypted_events`, is also enabled.

This aligns the implementation with MSC4028, specifically [this
section](https://github.com/matrix-org/matrix-spec-proposals/blob/giomfo/push_encrypted_events/proposals/4028-push-all-encrypted-events-except-for-muted-rooms.md#unstable-prefix).
See https://github.com/element-hq/synapse/issues/16846 for context.
2024-10-14 13:49:43 +01:00
Eric Eastwood
adda2a4613 Sliding Sync: Slight optimization when fetching state for the room (get_events_as_list(...)) (#17718)
Spawned from @kegsay [pointing
out](https://matrix.to/#/!cnVVNLKqgUzNTOFQkz:matrix.org/$ExOO7J8uPUQSyH-9Uxc_QCa8jlXX9uK4VRtkSC0EI3o?via=element.io&via=matrix.org&via=jki.re)
that the Sliding Sync endpoint doesn't handle a large room with a lot of
state well on initial sync (requesting all state via `required_state: [
["*","*"] ]`) (it just takes forever).

After investigating further, the slow part is just
`get_events_as_list(...)` fetching all of the current state ID's out for
the room (which can be 100k+ events for rooms with a lot of membership).
This is just a slow thing in Synapse in general and the same thing
happens in Sync v2 or the `/state` endpoint.


---

The only idea I had to improve things was to use `batch_iter` to only
try fetching a fixed amount at a time instead of working with large
maps, lists, and sets. This doesn't seem to have much effect though.

There is already a `batch_iter(event_ids, 200)` in
`_fetch_event_rows(...)` for when we actually have to touch the database
and that's inside a queue to deduplicate work.

I did notice one slight optimization to use `get_events_as_list(...)`
directly instead of `get_events(...)`. `get_events(...)` just turns the
result from `get_events_as_list(...)` into a dict and since we're just
iterating over the events, we don't need the dict/map.
2024-10-14 13:47:35 +01:00
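For reference, the `batch_iter` helper mentioned above follows the standard chunking idiom; a from-memory sketch, not a verbatim copy of Synapse's version:

```python
from itertools import islice
from typing import Iterable, Iterator, Tuple, TypeVar

T = TypeVar("T")

def batch_iter(iterable: Iterable[T], size: int) -> Iterator[Tuple[T, ...]]:
    """Yield fixed-size chunks, so a 100k-event lookup becomes many small ones."""
    it = iter(iterable)
    # iter(callable, sentinel) keeps calling until the empty tuple appears.
    return iter(lambda: tuple(islice(it, size)), ())

list(batch_iter(range(5), 2))  # -> [(0, 1), (2, 3), (4,)]
```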
Andrew Morgan
5d47138b46 Fix typo in target_cache_memory_usage docs (#17825) 2024-10-14 13:34:55 +01:00
Erik Johnston
d025b5ab50 Correctly handle changes to required state config in sliding sync (#17785)
Fixes https://github.com/element-hq/synapse/issues/17698

This handles `required_state` changes by checking if new state has been
added to the config, and if so fetching and returning that from the
current state.

This also takes care to ensure that, given a state entry S that is
added, removed and then re-added, we do *not* send S down a second time
if there have been no changes to S in the current state. This is fine
for the Rust SDK (as it just remembers all state), but we might decide
not to do this behaviour in the MSC. If we decide to always send down S
then it's easy enough to rip out all the code.

---------

Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-10-14 13:31:22 +01:00
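The core of the described check reduces to a set difference over (event type, state key) pairs; a minimal sketch with hypothetical names (the real logic also has to cope with wildcards):

```python
from typing import Set, Tuple

StateKey = Tuple[str, str]  # (event type, state key)

def newly_requested_state(
    prev_required: Set[StateKey], new_required: Set[StateKey]
) -> Set[StateKey]:
    # Entries the new required_state config asks for that the old config
    # did not must be fetched from current state and sent down once.
    return new_required - prev_required

newly_requested_state(
    {("m.room.name", "")},
    {("m.room.name", ""), ("m.room.topic", "")},
)  # -> {("m.room.topic", "")}
```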
dependabot[bot]
ae6179b382 Bump mypy-zope from 1.0.5 to 1.0.7 (#17827) 2024-10-14 13:26:40 +01:00
dependabot[bot]
5dd6157972 Bump types-setuptools from 75.1.0.20240917 to 75.1.0.20241014 (#17828) 2024-10-14 13:26:23 +01:00
dependabot[bot]
1266138b66 Bump sentry-sdk from 2.15.0 to 2.16.0 (#17829) 2024-10-14 13:26:12 +01:00
Erik Johnston
24975eca4d Build debian packages for new Ubuntu versions (#17824)
c.f. https://wiki.ubuntu.com/Releases for the currently supported Ubuntu
releases.

Note: this removes support for 23.04 and 23.10, which are EOL.

Fixes #17811
2024-10-14 11:34:33 +01:00
Andrew Morgan
451a9dc7b9 Clarify when 3PID invite module callbacks are called (#17627)
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-10-14 11:31:49 +01:00
Erik Johnston
f6a3e5e1c2 Fix release script to check GH token (#17803)
The current logic didn't work.
2024-10-10 08:59:01 +00:00
Nathan
05576f0b4b Added display_name_claim in jwt_config which sets the user's display name upon registration (#17708) 2024-10-09 12:21:08 +00:00
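A rough sketch of the behaviour this adds; the claim name and token payload are invented for illustration:

```python
# Hypothetical decoded JWT payload; "name" stands in for whatever claim
# display_name_claim in jwt_config is pointed at.
claims = {"sub": "alice", "name": "Alice Example"}

display_name_claim = "name"
display_name = claims.get(display_name_claim)  # "Alice Example"
# On registration the new user gets this display name; if the claim is
# absent, presumably the usual default behaviour applies.
```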
Martin Weinelt
60aebdb27e Fix saving of non-RGB thumbnails as PNG (#17736) 2024-10-08 18:32:25 +01:00
Erik Johnston
b1b4b2944d Merge branch 'release-v1.117' into develop 2024-10-08 16:35:35 +01:00
Andrew Ferrazzutti
bdcc9fa388 Fix incorrectly documented config path argument (#17802) 2024-10-08 15:05:36 +01:00
Erik Johnston
6a0c21fabd Fixup changelog 2024-10-08 15:04:20 +01:00
dependabot[bot]
f40641c29b Bump sigstore/cosign-installer from 3.6.0 to 3.7.0 (#17798)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 14:56:15 +01:00
dependabot[bot]
1bb528ee44 Bump phonenumbers from 8.13.46 to 8.13.47 (#17797)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 14:41:27 +01:00
dependabot[bot]
165f4ca776 Bump sentry-sdk from 2.14.0 to 2.15.0 (#17795)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 14:41:03 +01:00
dependabot[bot]
475e192cbe Bump tomli from 2.0.1 to 2.0.2 (#17796)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 14:40:12 +01:00
dependabot[bot]
43040a4051 Bump ruff from 0.6.8 to 0.6.9 (#17794)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 14:39:19 +01:00
Erik Johnston
b3e2d10f39 1.117.0rc1 2024-10-08 14:37:36 +01:00
Shay
a5986ac229 Improvements to admin redact api (#17792)
- Better validation of user input
- Fix a task completing prematurely
- When checking membership in rooms, also check rooms the user has been
banned from
2024-10-08 14:23:21 +01:00
Andrew Ferrazzutti
006251a5d0 Add missing license header (#17799)
Co-authored-by: Erik Johnston <erik@matrix.org>
2024-10-08 12:01:44 +01:00
Erik Johnston
422f3ecec1 Sliding sync: omit bump stamp when it is unchanged (#17788)
This saves some DB lookups in rooms
2024-10-08 11:17:23 +01:00
Erik Johnston
4e90221d87 Sliding sync minor performance speed up using new table (#17787)
Use the new tables to work out which rooms have changed.

---------

Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-10-08 11:06:31 +01:00
Erik Johnston
e2610de208 Speed up sliding sync when there are many active subscriptions (#17789)
Two changes: a) use a batch lookup function instead of a loop, b) check
existing data to see if we already have what we need and only fetch what
we don't.
2024-10-08 10:35:15 +01:00
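Change (a) is the usual N+1 fix; a toy sketch of the shape of it (the store functions are stand-ins, not Synapse's API):

```python
import asyncio

async def get_room_info(room_id: str) -> str:  # stand-in for one DB query
    await asyncio.sleep(0.01)
    return f"info for {room_id}"

async def get_room_infos(room_ids: "list[str]") -> "list[str]":  # one batched query
    await asyncio.sleep(0.01)
    return [f"info for {r}" for r in room_ids]

async def main() -> None:
    rooms = [f"!r{i}:example.org" for i in range(100)]
    # Loop version: 100 sequential awaits. Batched version: one await.
    infos = await get_room_infos(rooms)
    print(len(infos))

asyncio.run(main())
```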
Andrew Morgan
e8c8924b81 Clarify test_forget_when_not_left docstring (#17628) 2024-10-07 16:34:32 +01:00
V02460
e8e0f0fad7 Add config option redis.password_path (#17717)
Adds the option to load the Redis password from a file, instead of
giving it in the config directly. The code is similar to how it’s done
for `registration_shared_secret_path`. I changed the example in the
documentation to represent the best practice regarding the handling of
secrets.

Reading secrets from files has the security advantage of separating the
secrets from the config. It also simplifies secrets management in
Kubernetes.
2024-10-07 09:46:51 +01:00
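A minimal sketch of the `*_path` secret pattern described above (the path is an example, not from the PR):

```python
from pathlib import Path

def read_secret(path: str) -> str:
    # Strip the trailing newline that editors and secret mounts tend to add.
    return Path(path).read_text(encoding="utf-8").strip()

# e.g. with `password_path: /run/secrets/synapse_redis_password` under the
# `redis:` section of homeserver.yaml (illustrative path):
# redis_password = read_secret("/run/secrets/synapse_redis_password")
```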
Henrique
beb7a951f4 docs: add note about PYTHONMALLOC for accurate jemalloc memory tracking (#17709)
Added a note in the documentation suggesting that users may set
`PYTHONMALLOC=malloc` when using `jemalloc`. This allows jemalloc to
track memory usage more accurately by bypassing Python's internal
small-object allocator (`pymalloc`), helping to ensure that
`cache_autotuning` functions as expected.

This doc change aims to provide more clarity for users configuring
jemalloc with Synapse.


Based on:
4ac783549c/synapse/metrics/jemalloc.py (L198-L201)
2024-10-07 08:37:39 +00:00
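Note the variable has to be set before the interpreter starts (unit file, container env, wrapper script). A hedged sketch of a startup sanity check in the spirit of the referenced code:

```python
import os

# pymalloc serves many small allocations from its own arenas, bypassing
# malloc(), so jemalloc's accounting (and hence cache_autotuning)
# under-reports real usage unless PYTHONMALLOC=malloc routes everything
# through the system allocator.
if os.environ.get("PYTHONMALLOC") != "malloc":
    print(
        "Warning: pymalloc is active; set PYTHONMALLOC=malloc for accurate "
        "jemalloc memory tracking."
    )
```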
dependabot[bot]
d34f827ed8 Bump python-multipart from 0.0.10 to 0.0.12 (#17772) 2024-10-07 09:14:30 +01:00
Andrew Ferrazzutti
9920417723 Don't say MSC4140 is supported when it's disabled (#17780) 2024-10-04 13:42:34 +01:00
Andrew Morgan
316d635906 Fix NAME attribute of ReplicationRemovePusherRestServlet (#17779) 2024-10-04 09:53:35 +01:00
Dirk Klimpel
8bbe66a9b9 explain load balancing for federation_sender_instances (#17776)
Adds information on how load is distributed across
`federation_sender_instances`.

Thanks to @devonh for the information.

Source:
c2e5e9e67c/synapse/config/_base.py (L946-L989)

---------

Co-authored-by: Devon Hudson <devon.dmytro@gmail.com>
2024-10-03 22:01:33 +00:00
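For intuition, sharding of this kind typically hashes the destination to pick a stable instance; a sketch of the general idea, not Synapse's exact code:

```python
import hashlib

def pick_instance(destination: str, instances: "list[str]") -> str:
    # A stable hash means a given remote server is always handled by the
    # same federation sender, preserving per-destination ordering.
    digest = hashlib.sha256(destination.encode("utf-8")).digest()
    return instances[int.from_bytes(digest[:8], "big") % len(instances)]

pick_instance("matrix.org", ["fed_sender1", "fed_sender2"])
```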
Andrew Morgan
d4e3ad04cd Merge branch 'master' into develop 2024-10-01 12:18:22 +01:00
Andrew Morgan
55c0391cc8 1.116.0 2024-10-01 11:14:13 +01:00
Erik Johnston
81e0f57800 Fix perf when streams don't change often (#17767)
There is a bug with the `StreamChangeCache` where it would incorrectly
return that all entities had changed if asked for entities changed
*since* the earliest stream position.

Note that for streams we use the inequalities: `$min_stream_id <
stream_id <= $max_stream_id`, i.e. when we ask the stream change cache
for all things that have changed since `$stream_id` we don't care for
events that happened *at* `$stream_id`.

Specifically: `_earliest_known_stream_pos` is the position at which we
know that we'll have entries for all changes since that point, so we
can use the cache for any stream ID that equals
`_earliest_known_stream_pos`.

`_earliest_known_stream_pos` is set in three places:
- On startup we set it either to:
  - the current maximum stream ID, with no prefilled values; or
  - the minimum of the latest N values we pulled from the DB.
- When we evict items from the bottom, we set it to the stream ID of the
evicted items.

This was changed in https://github.com/matrix-org/synapse/pull/14435,
but I think we were overly conservative there.

---------

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2024-09-30 13:52:33 +01:00
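Reduced to its essentials, the fix is about which comparison gates cache use; a simplified sketch (the real class also handles prefill and eviction):

```python
from typing import Dict, Optional

class StreamChangeCacheSketch:
    def __init__(self, earliest_known_stream_pos: int) -> None:
        self._earliest_known_stream_pos = earliest_known_stream_pos
        self._entity_to_last_change: Dict[str, int] = {}

    def has_entity_changed(self, entity: str, stream_pos: int) -> Optional[bool]:
        # Streams use min < id <= max, so a query made *at* the earliest
        # known position is still fully answerable from the cache. The bug
        # was effectively treating that case as "before the cache" and
        # reporting everything as possibly changed.
        if stream_pos < self._earliest_known_stream_pos:
            return None  # unknown: the caller must hit the database
        return self._entity_to_last_change.get(entity, 0) > stream_pos
```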
Erik Johnston
ae4862c38f Optimise notifier mk2 (#17766)
Based on #17765.

Basically the idea is to reduce the overhead of calling
`ObservableDeferred` in a loop. The two gains are: a) just using a list
of deferreds rather than the machinery of `ObservableDeferred`, and b)
only calling `PreserveLoggingContext` once.

`PreserveLoggingContext` in particular is expensive to call a lot, as
each time it needs to call `get_thread_resource_usage` twice so that it
can update the CPU metrics of the log context.
2024-09-30 13:32:31 +01:00
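Schematically, the change amounts to the following (`PreserveLoggingContext` is Synapse's real context manager; the rest is a simplified illustration):

```python
from twisted.internet.defer import Deferred

from synapse.logging.context import PreserveLoggingContext

def wake_user_streams(deferreds: "list[Deferred[None]]") -> None:
    # Enter the logging-context guard once for the whole batch instead of
    # once per stream: each entry/exit calls get_thread_resource_usage()
    # twice to update per-context CPU metrics, which dominates at scale.
    with PreserveLoggingContext():
        for d in deferreds:
            d.callback(None)
```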
dependabot[bot]
602956ef64 Bump ruff from 0.6.7 to 0.6.8 (#17774) 2024-09-30 13:08:56 +01:00
dependabot[bot]
444b565c76 Bump phonenumbers from 8.13.45 to 8.13.46 (#17773) 2024-09-30 13:07:57 +01:00
dependabot[bot]
8068f31146 Bump regex from 1.10.6 to 1.11.0 (#17770) 2024-09-30 13:06:43 +01:00
Erik Johnston
5210565c12 Reduce overhead of sliding sync E2EE loops (#17771)
Mainly toning down logging and only calling
`get_membership_from_event_ids` if something has changed.
2024-09-30 13:00:14 +01:00
Erik Johnston
de955293cf Add fast path for sliding sync streams that only ask for extensions (#17768)
Principally useful for EX e2ee sliding sync connections.
2024-09-30 12:59:50 +01:00
Erik Johnston
93889eb2e7 Optimise notifier (#17765)
The notifier is quite inefficient when it has to wake up many user
streams all at once.

In a silly benchmark, this takes the time to notify 1M user streams
from ~30s to ~5s.
2024-09-30 12:58:13 +01:00
Erik Johnston
ece66ba61c Minor perf speed up for large accounts on SSS (#17751)
Instead of passing *all* rooms to `record_sent_rooms`, we only need to
pass rooms that were previously not in the LIVE state.

This came from a py-spy profile where we were spending ~10% CPU calling
these functions. Note that `record_sent_rooms` is a no-op for rooms
that are already in the `LIVE` state, so we only need to call it for
`PREVIOUSLY` or `INITIAL` rooms.
2024-09-30 12:58:02 +01:00
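That is, roughly (enum and names illustrative):

```python
from enum import Enum, auto

class RoomSyncState(Enum):
    LIVE = auto()
    PREVIOUSLY = auto()
    INITIAL = auto()

def rooms_needing_record(room_states: "dict[str, RoomSyncState]") -> "list[str]":
    # record_sent_rooms is a no-op for LIVE rooms, so filter them out up
    # front rather than paying the per-room call for the whole account.
    return [
        room_id
        for room_id, state in room_states.items()
        if state is not RoomSyncState.LIVE
    ]
```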
Quentin Gliech
ef9ef99f59 Merge branch 'release-v1.116' into develop 2024-09-26 16:19:32 +02:00
Quentin Gliech
cfbddc258f 1.116.0rc2 2024-09-26 15:29:13 +02:00
Andrew Ferrazzutti
302534c348 Support MSC3757: Restricting who can overwrite a state event (#17513)
Link to the
MSC: https://github.com/matrix-org/matrix-spec-proposals/pull/3757

---------

Co-authored-by: Quentin Gliech <quenting@element.io>
2024-09-26 15:25:05 +02:00
Erik Johnston
f144b4c7e9 Remove spurious TODO in debian install step (#17749)
This was a note added in the PR to move to AGPL, which we failed to
remove before landing.

(The context for this was that we needed to decide if we were going to
change which debian repository we published to, but decided not to in
the end.)
2024-09-26 13:18:28 +01:00
Quentin Gliech
13dea6949b Changelog fixes 2024-09-25 12:07:51 +02:00
Quentin Gliech
386cabda83 1.116.0rc1 2024-09-25 11:34:36 +02:00
dependabot[bot]
f53a3a56e2 Bump treq from 23.11.0 to 24.9.1 (#17744)
Bumps [treq](https://github.com/twisted/treq) from 23.11.0 to 24.9.1.
Notable changes in 24.9.x:
- treq now ships type annotations. (#366)
- The new `treq.cookies` module provides helper functions for working
with `http.cookiejar.Cookie` and `CookieJar` objects. (#384)
- Python 3.13 is now supported. (#391)
- `treq.content.text_content()` no longer generates deprecation
warnings due to use of the `cgi` module. (#355)
- Mixing the *json* argument with *files* or *data* now raises
`TypeError`. (#297)
- Passing non-string (`str` or `bytes`) values as part of a dict to the
*headers* argument now results in a `TypeError`, as does passing any
collection other than a `dict` or `Headers` instance. (#302)
- Support for Python 3.7 and PyPy 3.8, which have reached end of
support, has been dropped. (#378)
- 24.9.1: treq has vendored its dependency on the `multipart` library
to avoid import conflicts with `python-multipart`; it should now be
installable alongside that library. (#399)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Quentin Gliech <quenting@element.io>
2024-09-25 11:19:03 +02:00
V02460
2fc43e4219 Remove the deprecated cgi module (#17741)
Removes all uses of the `cgi` module from Synapse. It was deprecated in
Python version 3.11 and removed in version 3.13 ([“dead
battery”](https://docs.python.org/3.13/whatsnew/3.13.html#pep-594-remove-dead-batteries-from-the-standard-library)).

---------

Co-authored-by: Quentin Gliech <quenting@element.io>
2024-09-25 11:15:34 +02:00
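For context, the commonly recommended stdlib replacement for the removed `cgi.parse_header()` uses `email.message.Message`; this is the general migration path, not necessarily the exact code in the PR:

```python
from email.message import Message

def parse_header(line: str) -> "tuple[str, dict[str, str]]":
    # Drop-in style replacement for cgi.parse_header().
    msg = Message()
    msg["Content-Type"] = line
    params = msg.get_params(failobj=[])
    # The first entry is the bare content type; the rest are parameters.
    return msg.get_content_type(), dict(params[1:])

parse_header('multipart/form-data; boundary="abc123"')
# -> ('multipart/form-data', {'boundary': 'abc123'})
```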
dependabot[bot]
b0d2aca164 Bump phonenumbers from 8.13.44 to 8.13.45 (#17762) 2024-09-25 06:38:37 +00:00
dependabot[bot]
f68e8d0021 Bump ruff from 0.6.5 to 0.6.7 (#17760) 2024-09-24 22:48:43 +00:00
dependabot[bot]
89e7609f5c Bump msgpack from 1.0.8 to 1.1.0 (#17759) 2024-09-24 22:34:37 +00:00
dependabot[bot]
b89a66f831 Bump idna from 3.8 to 3.10 (#17758) 2024-09-25 00:20:24 +02:00
dependabot[bot]
b066b3aa04 Bump types-setuptools from 74.1.0.20240907 to 75.1.0.20240917 (#17757)
Bumps [types-setuptools](https://github.com/python/typeshed) from
74.1.0.20240907 to 75.1.0.20240917.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-24 17:30:24 +00:00
dependabot[bot]
e4b0cd87cc Bump pydantic from 2.8.2 to 2.9.2 (#17756)
Bumps [pydantic](https://github.com/pydantic/pydantic) from 2.8.2 to
2.9.2.
Notable changes:
- v2.9.2 fixes: do not error when trying to evaluate annotations of
private attributes (#10358); fix serialization schema generation when
using `PlainValidator` (#10427); fix `Union` serialization warnings
(pydantic-core#1449); fix variance issue in the `_IncEx` type alias,
only allow `True` (#10414); fix `ZoneInfo` validation with various
invalid types (#10408).
- v2.9.1 fixes: fix `Predicate` issue in v2.9.0 (#10321); fix
`annotated-types` bound to `>=0.6.0` (#10327); turn the `tzdata`
install requirement into an optional `timezone` dependency (#10331);
fix `IncExc` type alias definition (#10339); use the correct types
namespace when building namedtuple core schemas (#10337); fix
evaluation of stringified annotations during namespace inspection
(#10347); fix tagged union serialization with alias generators
(pydantic-core#1442).
- v2.9.0 highlights: support for `ZoneInfo`, `Config.val_json_bytes`, a
DSN for Snowflake, `complex` numbers, and `annotated_types.Not`;
`pydantic-core` bumped to v2.23.2.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-24 17:28:08 +00:00
dependabot[bot]
985b3ab58d Bump types-pyyaml from 6.0.12.20240808 to 6.0.12.20240917 (#17755)
Bumps [types-pyyaml](https://github.com/python/typeshed) from
6.0.12.20240808 to 6.0.12.20240917.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-24 17:21:38 +00:00
dependabot[bot]
afc3af7763 Bump prometheus-client from 0.20.0 to 0.21.0 (#17746)
Bumps [prometheus-client](https://github.com/prometheus/client_python)
from 0.20.0 to 0.21.0.
Notable changes in 0.21.0:
- [CHANGE] Reject invalid (not GET or OPTIONS) HTTP methods. (#1019)
- [ENHANCEMENT] Allow writing metrics when holding a lock for the
metric in the same thread. (#1014)
- [BUGFIX] Check for and error on None label values. (#1012)
- [BUGFIX] Fix timestamp comparison. (#1038)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-24 18:51:24 +02:00
dependabot[bot]
af2da0e47a Bump pyasn1-modules from 0.4.0 to 0.4.1 (#17747)
Bumps [pyasn1-modules](https://github.com/pyasn1/pyasn1-modules) from
0.4.0 to 0.4.1.
It's a minor release: adds support for Python 3.13.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-24 18:51:07 +02:00
dependabot[bot]
ac8c9ac50d Bump python-multipart from 0.0.9 to 0.0.10 (#17745)
Bumps [python-multipart](https://github.com/Kludex/python-multipart)
from 0.0.9 to 0.0.10.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/Kludex/python-multipart/releases">python-multipart's
releases</a>.</em></p>
<blockquote>
<h2>Version 0.0.10</h2>
<h2>What's Changed</h2>
<ul>
<li>Support <code>on_header_begin</code> by <a
href="https://github.com/Kludex"><code>@​Kludex</code></a> in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/103">Kludex/python-multipart#103</a></li>
<li>Improve type hints on <code>FormParser</code> by <a
href="https://github.com/Kludex"><code>@​Kludex</code></a> in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/104">Kludex/python-multipart#104</a></li>
<li>Fix <code>OnFileCallback</code> type by <a
href="https://github.com/Kludex"><code>@​Kludex</code></a> in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/106">Kludex/python-multipart#106</a></li>
<li>Improve type hints by <a
href="https://github.com/Kludex"><code>@​Kludex</code></a> in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/110">Kludex/python-multipart#110</a></li>
<li>Improve type hints on <code>File</code> by <a
href="https://github.com/Kludex"><code>@​Kludex</code></a> in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/111">Kludex/python-multipart#111</a></li>
<li>Add type hint to helper functions by <a
href="https://github.com/Kludex"><code>@​Kludex</code></a> in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/112">Kludex/python-multipart#112</a></li>
<li>Minor fix for Field.<strong>repr</strong> by <a
href="https://github.com/eltbus"><code>@​eltbus</code></a> in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/114">Kludex/python-multipart#114</a></li>
<li>Fix use of chunk_size parameter by <a
href="https://github.com/jhnstrk"><code>@​jhnstrk</code></a> in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/136">Kludex/python-multipart#136</a></li>
<li>Allow digits and valid token chars in headers by <a
href="https://github.com/jhnstrk"><code>@​jhnstrk</code></a> in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/134">Kludex/python-multipart#134</a></li>
<li>Fix headers being carried between parts. fixes <a
href="https://redirect.github.com/Kludex/python-multipart/issues/63">#63</a>
by <a href="https://github.com/jhnstrk"><code>@​jhnstrk</code></a> in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/135">Kludex/python-multipart#135</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/onuralpszr"><code>@​onuralpszr</code></a> made
their first contribution in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/108">Kludex/python-multipart#108</a></li>
<li><a
href="https://github.com/janusheide"><code>@​janusheide</code></a> made
their first contribution in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/119">Kludex/python-multipart#119</a></li>
<li><a
href="https://github.com/yecril23pl"><code>@​yecril23pl</code></a> made
their first contribution in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/121">Kludex/python-multipart#121</a></li>
<li><a href="https://github.com/manunio"><code>@​manunio</code></a> made
their first contribution in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/117">Kludex/python-multipart#117</a></li>
<li><a href="https://github.com/jhnstrk"><code>@​jhnstrk</code></a> made
their first contribution in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/136">Kludex/python-multipart#136</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/Kludex/python-multipart/compare/0.0.9...0.0.10">https://github.com/Kludex/python-multipart/compare/0.0.9...0.0.10</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/Kludex/python-multipart/blob/master/CHANGELOG.md">python-multipart's
changelog</a>.</em></p>
<blockquote>
<h2>0.0.10 (2024-09-21)</h2>
<ul>
<li>Support <code>on_header_begin</code> <a
href="https://redirect.github.com/Kludex/python-multipart/pull/103">#103</a>.</li>
<li>Improve type hints on <code>FormParser</code> <a
href="https://redirect.github.com/Kludex/python-multipart/pull/104">#104</a>.</li>
<li>Fix <code>OnFileCallback</code> type <a
href="https://redirect.github.com/Kludex/python-multipart/pull/106">#106</a>.</li>
<li>Improve type hints <a
href="https://redirect.github.com/Kludex/python-multipart/pull/110">#110</a>.</li>
<li>Improve type hints on <code>File</code> <a
href="https://redirect.github.com/Kludex/python-multipart/pull/111">#111</a>.</li>
<li>Add type hint to helper functions <a
href="https://redirect.github.com/Kludex/python-multipart/pull/112">#112</a>.</li>
<li>Minor fix for Field.<strong>repr</strong> <a
href="https://redirect.github.com/Kludex/python-multipart/pull/114">#114</a>.</li>
<li>Fix use of chunk_size parameter <a
href="https://redirect.github.com/Kludex/python-multipart/pull/136">#136</a>.</li>
<li>Allow digits and valid token chars in headers <a
href="https://redirect.github.com/Kludex/python-multipart/pull/134">#134</a>.</li>
<li>Fix headers being carried between parts <a
href="https://redirect.github.com/Kludex/python-multipart/pull/135">#135</a>.</li>
</ul>
</blockquote>
</details>
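The "Fix headers being carried between parts" entry above describes a classic parser-state bug: header state accumulated for one part leaks into the next. A minimal Python illustration of the bug class (this is not python-multipart's actual parser; all names are made up):

```python
class PartState:
    """Per-part state of a hypothetical multipart parser."""

    def __init__(self) -> None:
        self.headers: dict[str, str] = {}

    def on_part_begin(self) -> None:
        # The fix boils down to making sure this reset really happens for
        # every part; otherwise part N inherits part N-1's headers.
        self.headers = {}

    def on_header(self, name: str, value: str) -> None:
        self.headers[name] = value
```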
<details>
<summary>Commits</summary>
<ul>
<li><a
href="851a0263fc"><code>851a026</code></a>
Add entry to changelog (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/157">#157</a>)</li>
<li><a
href="265d6a4d1c"><code>265d6a4</code></a>
Upgrade documentation packages (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/156">#156</a>)</li>
<li><a
href="21825fced4"><code>21825fc</code></a>
Version 0.0.10 (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/155">#155</a>)</li>
<li><a
href="0defda6213"><code>0defda6</code></a>
Update pipelines (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/154">#154</a>)</li>
<li><a
href="c664cef3bb"><code>c664cef</code></a>
Use uv (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/153">#153</a>)</li>
<li><a
href="8b85d35fd7"><code>8b85d35</code></a>
Fix headers being carried between parts. fixes <a
href="https://redirect.github.com/Kludex/python-multipart/issues/63">#63</a>
(<a
href="https://redirect.github.com/Kludex/python-multipart/issues/135">#135</a>)</li>
<li><a
href="3ea51c714e"><code>3ea51c7</code></a>
Allow digits and valid token chars in headers (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/134">#134</a>)</li>
<li><a
href="3a722ed61a"><code>3a722ed</code></a>
Fix use of chunk_size parameter (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/136">#136</a>)</li>
<li><a
href="b5a5c19902"><code>b5a5c19</code></a>
Bump the python-packages group with 7 updates (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/138">#138</a>)</li>
<li><a
href="eb7b1fc392"><code>eb7b1fc</code></a>
Bump the github-actions group with 1 update (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/139">#139</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/Kludex/python-multipart/compare/0.0.9...0.0.10">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=python-multipart&package-manager=pip&previous-version=0.0.9&new-version=0.0.10)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-24 18:50:57 +02:00
dependabot[bot]
443a9eb335 Bump bytes from 1.7.1 to 1.7.2 (#17743)
Bumps [bytes](https://github.com/tokio-rs/bytes) from 1.7.1 to 1.7.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tokio-rs/bytes/releases">bytes's
releases</a>.</em></p>
<blockquote>
<h2>Bytes 1.7.2</h2>
<h1>1.7.2 (September 17, 2024)</h1>
<h3>Fixed</h3>
<ul>
<li>Fix default impl of <code>Buf::{get_int, get_int_le}</code> (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/732">#732</a>)</li>
</ul>
<h3>Documented</h3>
<ul>
<li>Fix double spaces in comments and doc comments (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/731">#731</a>)</li>
</ul>
<h3>Internal changes</h3>
<ul>
<li>Ensure BytesMut::advance reduces capacity (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/728">#728</a>)</li>
</ul>
</blockquote>
</details>
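The `Buf::{get_int, get_int_le}` fix above is about sign extension: decoding a signed integer from fewer than eight bytes must propagate the sign bit. A Python illustration of the same pitfall (the real fix is in Rust's `bytes` crate):

```python
def get_int_wrong(buf: bytes) -> int:
    # Without sign extension, 0xFF decodes as 255 ...
    return int.from_bytes(buf, "big", signed=False)

def get_int_right(buf: bytes) -> int:
    # ... whereas treating the top bit as a sign bit gives -1.
    return int.from_bytes(buf, "big", signed=True)

assert get_int_wrong(b"\xff") == 255
assert get_int_right(b"\xff") == -1
```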
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md">bytes's
changelog</a>.</em></p>
<blockquote>
<h1>1.7.2 (September 17, 2024)</h1>
<h3>Fixed</h3>
<ul>
<li>Fix default impl of <code>Buf::{get_int, get_int_le}</code> (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/732">#732</a>)</li>
</ul>
<h3>Documented</h3>
<ul>
<li>Fix double spaces in comments and doc comments (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/731">#731</a>)</li>
</ul>
<h3>Internal changes</h3>
<ul>
<li>Ensure BytesMut::advance reduces capacity (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/728">#728</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d7c1d658d9"><code>d7c1d65</code></a>
chore: prepare bytes v1.7.2 (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/736">#736</a>)</li>
<li><a
href="ac46ebdd46"><code>ac46ebd</code></a>
ci: update nightly to nightly-2024-09-15 (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/734">#734</a>)</li>
<li><a
href="79fb85323c"><code>79fb853</code></a>
fix: apply sign extension when decoding int (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/732">#732</a>)</li>
<li><a
href="291df5acc9"><code>291df5a</code></a>
Fix double spaces in comments and doc comments (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/731">#731</a>)</li>
<li><a
href="ed7d5ff39e"><code>ed7d5ff</code></a>
test: ensure BytesMut::advance reduces capacity (<a
href="https://redirect.github.com/tokio-rs/bytes/issues/728">#728</a>)</li>
<li>See full diff in <a
href="https://github.com/tokio-rs/bytes/compare/v1.7.1...v1.7.2">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=bytes&package-manager=cargo&previous-version=1.7.1&new-version=1.7.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-24 18:33:57 +02:00
Erik Johnston
aad26cb93f Never return negative bump stamp (#17748)
Fixes #17737
2024-09-24 10:07:23 +00:00
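A minimal sketch of the shape of such a fix; `normalise_bump_stamp` is a hypothetical helper, not Synapse's actual code:

```python
from typing import Optional

def normalise_bump_stamp(candidate: Optional[int], fallback: int) -> int:
    """Return a non-negative bump stamp, falling back when the candidate
    is missing or negative (e.g. derived from a historic event)."""
    if candidate is None or candidate < 0:
        return fallback
    return candidate

assert normalise_bump_stamp(-42, fallback=17) == 17
assert normalise_bump_stamp(99, fallback=17) == 99
```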
Andrew Ferrazzutti
5173741c71 Support MSC4140: Delayed events (Futures) (#17326) 2024-09-23 13:33:48 +01:00
Erik Johnston
75e2c17d2a Speed up sorting of sliding sync rooms in initial request (#17734)
We do this by using the event stream cache.

---------

Co-authored-by: Devon Hudson <devon.dmytro@gmail.com>
2024-09-20 08:12:56 +01:00
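A hypothetical sketch of the idea: keep an in-memory cache of the last stream position at which each room changed, so sorting by recency needs no per-room database query. The names below are illustrative and deliberately simpler than Synapse's real stream change cache:

```python
from typing import Dict, List

class RecencyCache:
    """Maps room_id -> last stream position at which the room changed."""

    def __init__(self) -> None:
        self._last_change: Dict[str, int] = {}

    def note_change(self, room_id: str, stream_pos: int) -> None:
        self._last_change[room_id] = stream_pos

    def last_change(self, room_id: str) -> int:
        # Unknown rooms sort last.
        return self._last_change.get(room_id, 0)

def sort_rooms_by_recency(room_ids: List[str], cache: RecencyCache) -> List[str]:
    # Most recently active first, with no database round-trips.
    return sorted(room_ids, key=cache.last_change, reverse=True)
```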
Erik Johnston
a851f6b237 Sliding sync: Add connection tracking to the account_data extension (#17695)
This is the same logic as for receipts: we just need to track which room
account data we have and haven't sent down to each client connection, and
use that when we pull data out.

I think this just needs a couple of extra tests written.

---------

Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-09-19 19:51:51 +01:00
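A minimal sketch of that connection tracking, with made-up names:

```python
from typing import Dict, Set, Tuple

# (room_id, account_data_type) pairs, e.g. ("!room:server", "m.tag")
Item = Tuple[str, str]

class PerConnectionTracker:
    def __init__(self) -> None:
        self._sent: Dict[str, Set[Item]] = {}  # connection_id -> items sent

    def unsent(self, conn_id: str, available: Set[Item]) -> Set[Item]:
        """Return only the items this connection hasn't seen yet."""
        return available - self._sent.setdefault(conn_id, set())

    def mark_sent(self, conn_id: str, items: Set[Item]) -> None:
        self._sent.setdefault(conn_id, set()).update(items)
```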
Eric Eastwood
c2e5e9e67c Sliding Sync: Avoid fetching left rooms and add back newly_left rooms (#17725)
Performance optimization: we can avoid fetching rooms that the user has
left themselves (which could be a significant number), and only add back
rooms that the user has `newly_left` (left in the token range of an
incremental sync). It's a lot faster to fetch fewer rooms than to fetch
them all and throw most of them away. Since a user only leaves a room (or
is state reset out) once in a blue moon, this avoids a lot of work.

Based on @erikjohnston's branch, erikj/ss_perf


---------

Co-authored-by: Erik Johnston <erik@matrix.org>
2024-09-19 10:07:18 -05:00
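A rough sketch of the strategy under a simplified data model (one membership row per room, recording the stream position of its last change); all names are illustrative:

```python
from typing import Dict, List, Tuple

def rooms_for_sync(
    memberships: Dict[str, Tuple[str, int]],  # room_id -> (membership, stream_pos)
    from_token: int,
    to_token: int,
) -> List[str]:
    # Fetch only rooms we haven't left ...
    rooms = [r for r, (m, _) in memberships.items() if m != "leave"]
    # ... then add back the rare rooms left within this sync's token range.
    newly_left = [
        r
        for r, (m, pos) in memberships.items()
        if m == "leave" and from_token < pos <= to_token
    ]
    return rooms + newly_left
```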
Erik Johnston
07a51d2a56 Fix sliding sync for rooms with unknown room version (#17733)
Follow on from #17727
2024-09-19 14:01:11 +01:00
Eric Eastwood
83fc225030 Sliding Sync: Add cache to get_tags_for_room(...) (#17730)
Add cache to `get_tags_for_room(...)`

This helps Sliding Sync because `get_tags_for_room(...)` is going to be
used in https://github.com/element-hq/synapse/pull/17695

Essentially, we're just trying to match `get_account_data_for_room(...)`
which already has a tree cache.
2024-09-19 12:43:26 +01:00
Eric Eastwood
a9c0e27eb7 Sliding Sync: No need to sort if the range is large enough to cover all of the rooms (#17731)
No need to sort if the range is large enough to cover all of the rooms
in the list. Previously, we would only apply this optimization if the
range was exactly the size of the list.

Follow-up to https://github.com/element-hq/synapse/pull/17672
2024-09-19 09:33:34 +01:00
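Illustratively (simplified: the real list is ordered by recency, and ranges are inclusive as in sliding sync):

```python
from typing import List, Tuple

def rooms_in_range(rooms: List[str], requested: Tuple[int, int]) -> List[str]:
    start, end = requested
    if start == 0 and end >= len(rooms) - 1:
        # The range covers every room, exactly or with room to spare,
        # so the expensive sort is unnecessary.
        return rooms
    ordered = sort_by_recency(rooms)  # stand-in for the real sort
    return ordered[start : end + 1]

def sort_by_recency(rooms: List[str]) -> List[str]:
    return rooms  # placeholder for illustration
```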
Eric Eastwood
faf5b40520 Sliding Sync: Fix _bulk_get_max_event_pos(...) being inefficient (#17728)
Fix `_bulk_get_max_event_pos(...)` being inefficient. It kept re-adding
all of the `batch_results` to `results` every time we checked a single
room in the batch.

I think we still ended up with the right answer before, because we
accumulate `recheck_rooms` and actually recheck them, overwriting the bad
data previously written to `results`.

Introduced in
https://github.com/element-hq/synapse/pull/17606/files#diff-cbd54e4b5a2a1646299d659a2d5884d6cb14e608efd2e1658e72b465bb66e31bR1481
2024-09-19 09:32:16 +01:00
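The general shape of the bug, stripped of Synapse specifics:

```python
def process_batch_buggy(results: dict, batch_results: dict, batch: list) -> None:
    for room_id in batch:
        # Re-adds the whole batch on every iteration: O(batch**2) updates.
        results.update(batch_results)
        check_room(room_id)

def process_batch_fixed(results: dict, batch_results: dict, batch: list) -> None:
    results.update(batch_results)  # once per batch
    for room_id in batch:
        check_room(room_id)

def check_room(room_id: str) -> None:
    pass  # stand-in for the real per-room check
```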
Eric Eastwood
af998e6c66 Sliding sync: Ignore invites from ignored users (#17729)
`m.ignored_user_list` in account data
2024-09-18 18:09:23 -05:00
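A minimal sketch of the filter. The `ignored_users` shape matches the `m.ignored_user_list` event content in the Matrix spec; the helper itself is illustrative:

```python
from typing import Dict, List, Tuple

def filter_invites(
    invites: List[Tuple[str, str]],  # (room_id, inviter user ID)
    ignored_user_list_content: Dict,
) -> List[Tuple[str, str]]:
    ignored = set(ignored_user_list_content.get("ignored_users", {}))
    return [(room, sender) for room, sender in invites if sender not in ignored]

invites = [("!a:x", "@spammer:x"), ("!b:x", "@friend:x")]
content = {"ignored_users": {"@spammer:x": {}}}
assert filter_invites(invites, content) == [("!b:x", "@friend:x")]
```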
Eric Eastwood
61b7c31772 Sliding Sync: Shortcut for checking if certain background updates have completed (#17724)
Shortcut for checking if certain background updates have completed

Pulling this change out from one of @erikjohnston's branches
(https://github.com/element-hq/synapse/compare/develop...erikj/ss_perf)

---------

Co-authored-by: Erik Johnston <erikj@element.io>
2024-09-18 13:12:14 -05:00
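A sketch of the shortcut with hypothetical storage methods: background updates only ever go from pending to done, so a positive answer can be cached indefinitely:

```python
class BackgroundUpdateChecker:
    def __init__(self, store) -> None:
        self._store = store  # `get_pending_updates` below is a stand-in
        self._all_done = False  # sticky once True

    async def have_finished(self, update_names: list) -> bool:
        if self._all_done:
            return True  # fast path: no database hit
        pending = await self._store.get_pending_updates(update_names)
        if not pending:
            self._all_done = True
        return self._all_done
```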
Kegan Dougal
3c8a116e1a Sliding Sync: bugfix: ensure we can sync with SSS even with missing rooms (#17727)
Fixes https://github.com/element-hq/element-x-ios/issues/3300

Some rooms are missing from `sliding_sync_joined_rooms`. When this
happens, the first call will succeed, but any subsequent calls for this
room ID will cause the cache to return `None` for the room ID, rather
than not having the key at all. This then causes the `<=` check to
throw.

Root cause: https://github.com/element-hq/synapse/issues/17726

2024-09-18 16:25:50 +00:00
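A defensive sketch of the fix's shape (illustrative): treat a cached `None` like a missing entry rather than letting it reach the ordering check, since `None <= 5` raises `TypeError` in Python 3:

```python
from typing import Dict, Optional

def room_has_advanced(
    cache: Dict[str, Optional[int]], room_id: str, since_pos: int
) -> bool:
    cached = cache.get(room_id)
    if cached is None:
        # Missing or unpopulated entry: assume the room advanced, which at
        # worst costs extra work instead of raising on `since_pos <= None`.
        return True
    return since_pos <= cached
```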
Shay
51dd4df0a3 Add an Admin API endpoint to redact all a user's events (#17506) 2024-09-18 10:08:01 +00:00
Eric Eastwood
8881ad6d4b Sliding Sync: Short-circuit have_finished_sliding_sync_background_jobs (#17723)
We only need to check it if the returned bump stamp is `None`, which is rare.

Pulling this change out from one of @erikjohnston's branches
(https://github.com/element-hq/synapse/compare/develop...erikj/ss_perf)
2024-09-17 17:36:59 -05:00
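Sketched with hypothetical helpers around the real `have_finished_sliding_sync_background_jobs` check:

```python
async def get_bump_stamp(store, room_id: str):
    # Common case: the new sliding sync tables already have an answer.
    bump_stamp = await store.get_bump_stamp_from_new_tables(room_id)
    if bump_stamp is not None:
        return bump_stamp
    # Rare case: only now do we need to know whether the background jobs
    # that populate those tables have finished.
    if await store.have_finished_sliding_sync_background_jobs():
        return None  # tables are complete; there genuinely is no stamp
    return await store.get_bump_stamp_the_slow_way(room_id)
```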
Olivier 'reivilibre'
d40bc279ed Merge branch 'master' into develop 2024-09-17 15:47:32 +01:00
Olivier 'reivilibre'
d10872ee75 1.115.0 2024-09-17 14:32:29 +01:00
216 changed files with 12001 additions and 1971 deletions

View File

@@ -36,11 +36,11 @@ IS_PR = os.environ["GITHUB_REF"].startswith("refs/pull/")
# First calculate the various trial jobs.
#
# For PRs, we only run each type of test with the oldest Python version supported (which
- # is Python 3.8 right now)
+ # is Python 3.9 right now)
trial_sqlite_tests = [
{
- "python-version": "3.8",
+ "python-version": "3.9",
"database": "sqlite",
"extras": "all",
}
@@ -53,12 +53,12 @@ if not IS_PR:
"database": "sqlite",
"extras": "all",
}
- for version in ("3.9", "3.10", "3.11", "3.12")
+ for version in ("3.10", "3.11", "3.12", "3.13")
)
trial_postgres_tests = [
{
- "python-version": "3.8",
+ "python-version": "3.9",
"database": "postgres",
"postgres-version": "11",
"extras": "all",
@@ -68,16 +68,16 @@ trial_postgres_tests = [
if not IS_PR:
trial_postgres_tests.append(
{
- "python-version": "3.12",
+ "python-version": "3.13",
"database": "postgres",
- "postgres-version": "16",
+ "postgres-version": "17",
"extras": "all",
}
)
trial_no_extra_tests = [
{
- "python-version": "3.8",
+ "python-version": "3.9",
"database": "sqlite",
"extras": "",
}
@@ -99,24 +99,24 @@ set_output("trial_test_matrix", test_matrix)
# First calculate the various sytest jobs.
#
- # For each type of test we only run on focal on PRs
+ # For each type of test we only run on bullseye on PRs
sytest_tests = [
{
- "sytest-tag": "focal",
+ "sytest-tag": "bullseye",
},
{
- "sytest-tag": "focal",
+ "sytest-tag": "bullseye",
"postgres": "postgres",
},
{
- "sytest-tag": "focal",
+ "sytest-tag": "bullseye",
"postgres": "multi-postgres",
"workers": "workers",
},
{
- "sytest-tag": "focal",
+ "sytest-tag": "bullseye",
"postgres": "multi-postgres",
"workers": "workers",
"reactor": "asyncio",
@@ -127,11 +127,11 @@ if not IS_PR:
sytest_tests.extend(
[
{
- "sytest-tag": "focal",
+ "sytest-tag": "bullseye",
"reactor": "asyncio",
},
{
- "sytest-tag": "focal",
+ "sytest-tag": "bullseye",
"postgres": "postgres",
"reactor": "asyncio",
},

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env bash
- # this script is run by GitHub Actions in a plain `focal` container; it
+ # this script is run by GitHub Actions in a plain `jammy` container; it
# - installs the minimal system requirements, and poetry;
# - patches the project definition file to refer to old versions only;
# - creates a venv with these old versions using poetry; and finally

View File

@@ -30,7 +30,7 @@ jobs:
run: docker buildx inspect
- name: Install Cosign
- uses: sigstore/cosign-installer@v3.6.0
+ uses: sigstore/cosign-installer@v3.7.0
- name: Checkout repository
uses: actions/checkout@v4

View File

@@ -132,9 +132,9 @@ jobs:
fail-fast: false
matrix:
include:
- - sytest-tag: focal
+ - sytest-tag: bullseye
- - sytest-tag: focal
+ - sytest-tag: bullseye
postgres: postgres
workers: workers
redis: redis

View File

@@ -91,10 +91,19 @@ jobs:
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
+ - name: Artifact name
+ id: artifact-name
+ # We can't have colons in the upload name of the artifact, so we convert
+ # e.g. `debian:sid` to `sid`.
+ env:
+ DISTRO: ${{ matrix.distro }}
+ run: |
+ echo "ARTIFACT_NAME=${DISTRO#*:}" >> "$GITHUB_OUTPUT"
- name: Upload debs as artifacts
- uses: actions/upload-artifact@v3 # Don't upgrade to v4; broken: https://github.com/actions/upload-artifact#breaking-changes
+ uses: actions/upload-artifact@v4
with:
- name: debs
+ name: debs-${{ steps.artifact-name.outputs.ARTIFACT_NAME }}
path: debs/*
build-wheels:
@@ -102,7 +111,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
- os: [ubuntu-20.04, macos-12]
+ os: [ubuntu-22.04, macos-13]
arch: [x86_64, aarch64]
# is_pr is a flag used to exclude certain jobs from the matrix on PRs.
# It is not read by the rest of the workflow.
@@ -112,9 +121,9 @@ jobs:
exclude:
# Don't build macos wheels on PR CI.
- is_pr: true
- os: "macos-12"
+ os: "macos-13"
# Don't build aarch64 wheels on mac.
- - os: "macos-12"
+ - os: "macos-13"
arch: aarch64
# Don't build aarch64 wheels on PR CI.
- is_pr: true
@@ -144,7 +153,7 @@ jobs:
- name: Only build a single wheel on PR
if: startsWith(github.ref, 'refs/pull/')
- run: echo "CIBW_BUILD="cp38-manylinux_${{ matrix.arch }}"" >> $GITHUB_ENV
+ run: echo "CIBW_BUILD="cp39-manylinux_${{ matrix.arch }}"" >> $GITHUB_ENV
- name: Build wheels
run: python -m cibuildwheel --output-dir wheelhouse
@@ -156,9 +165,9 @@ jobs:
CARGO_NET_GIT_FETCH_WITH_CLI: true
CIBW_ENVIRONMENT_PASS_LINUX: CARGO_NET_GIT_FETCH_WITH_CLI
- - uses: actions/upload-artifact@v3 # Don't upgrade to v4; broken: https://github.com/actions/upload-artifact#breaking-changes
+ - uses: actions/upload-artifact@v4
with:
- name: Wheel
+ name: Wheel-${{ matrix.os }}-${{ matrix.arch }}
path: ./wheelhouse/*.whl
build-sdist:
@@ -177,7 +186,7 @@ jobs:
- name: Build sdist
run: python -m build --sdist
- - uses: actions/upload-artifact@v3 # Don't upgrade to v4; broken: https://github.com/actions/upload-artifact#breaking-changes
+ - uses: actions/upload-artifact@v4
with:
name: Sdist
path: dist/*.tar.gz
@@ -194,19 +203,20 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Download all workflow run artifacts
- uses: actions/download-artifact@v3 # Don't upgrade to v4, it should match upload-artifact
+ uses: actions/download-artifact@v4
- name: Build a tarball for the debs
- run: tar -cvJf debs.tar.xz debs
+ # We need to merge all the debs uploads into one folder, then compress
+ # that.
+ run: |
+ mkdir debs
+ mv debs*/* debs/
+ tar -cvJf debs.tar.xz debs
- name: Attach to release
- uses: softprops/action-gh-release@a929a66f232c1b11af63782948aa2210f981808a # PR#109
+ uses: softprops/action-gh-release@v2.0.5
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
files: |
Sdist/*
- Wheel/*
+ Wheel*/*
debs.tar.xz
# if it's not already published, keep the release as a draft.
draft: true
# mark it as a prerelease if the tag contains 'rc'.
prerelease: ${{ contains(github.ref, 'rc') }}

View File

@@ -397,7 +397,7 @@ jobs:
needs:
- linting-done
- changes
- runs-on: ubuntu-20.04
+ runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
@@ -409,12 +409,12 @@ jobs:
# their build dependencies
- run: |
sudo apt-get -qq update
- sudo apt-get -qq install build-essential libffi-dev python-dev \
+ sudo apt-get -qq install build-essential libffi-dev python3-dev \
libxml2-dev libxslt-dev xmlsec1 zlib1g-dev libjpeg-dev libwebp-dev
- uses: actions/setup-python@v5
with:
- python-version: '3.8'
+ python-version: '3.9'
- name: Prepare old deps
if: steps.cache-poetry-old-deps.outputs.cache-hit != 'true'
@@ -458,7 +458,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
- python-version: ["pypy-3.8"]
+ python-version: ["pypy-3.9"]
extras: ["all"]
steps:
@@ -580,11 +580,11 @@ jobs:
strategy:
matrix:
include:
- - python-version: "3.8"
+ - python-version: "3.9"
postgres-version: "11"
- - python-version: "3.11"
- postgres-version: "15"
+ - python-version: "3.13"
+ postgres-version: "17"
services:
postgres:

View File

@@ -99,11 +99,11 @@ jobs:
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
container:
- # We're using ubuntu:focal because it uses Python 3.8 which is our minimum supported Python version.
+ # We're using debian:bullseye because it uses Python 3.9 which is our minimum supported Python version.
# This job is a canary to warn us about unreleased twisted changes that would cause problems for us if
# they were to be released immediately. For simplicity's sake (and to save CI runners) we use the oldest
# version, assuming that any incompatibilities on newer versions would also be present on the oldest.
- image: matrixdotorg/sytest-synapse:focal
+ image: matrixdotorg/sytest-synapse:bullseye
volumes:
- ${{ github.workspace }}:/src

View File

@@ -1,3 +1,317 @@
# Synapse 1.120.0 (2024-11-26)
### Bugfixes
- Fix a bug introduced in Synapse v1.120rc1 which would cause the newly-introduced `delete_old_otks` job to fail in worker-mode deployments. ([\#17960](https://github.com/element-hq/synapse/issues/17960))
# Synapse 1.120.0rc1 (2024-11-20)
This release enables the enforcement of authenticated media by default, with exemptions for media that is already present in the
homeserver's media store.
Most homeservers operating in the public federation will not be impacted by this change, given that
the large homeserver `matrix.org` enabled this in September 2024 and therefore most clients and servers
will already have updated as a result.
Some server administrators may still wish to disable this enforcement for the time being, in the interest of compatibility with older clients
and older federated homeservers.
See the [upgrade notes](https://element-hq.github.io/synapse/v1.120/upgrade.html#authenticated-media-is-now-enforced-by-default) for more information.
### Features
- Enforce authenticated media by default. Administrators can revert this by configuring `enable_authenticated_media` to `false`. In a future release of Synapse, this option will be removed and become always-on. ([\#17889](https://github.com/element-hq/synapse/issues/17889))
- Add a one-off task to delete old One-Time Keys, to guard against us having old OTKs in the database that the client has long forgotten about. ([\#17934](https://github.com/element-hq/synapse/issues/17934))
### Improved Documentation
- Clarify the semantics of the `enable_authenticated_media` configuration option. ([\#17913](https://github.com/element-hq/synapse/issues/17913))
- Add documentation about backing up Synapse. ([\#17931](https://github.com/element-hq/synapse/issues/17931))
### Deprecations and Removals
- Remove support for [MSC3886: Simple client rendezvous capability](https://github.com/matrix-org/matrix-spec-proposals/pull/3886), which has been superseded by [MSC4108](https://github.com/matrix-org/matrix-spec-proposals/pull/4108) and therefore closed. ([\#17638](https://github.com/element-hq/synapse/issues/17638))
### Internal Changes
- Addressed some typos in docs and returned error message for unknown MXC ID. ([\#17865](https://github.com/element-hq/synapse/issues/17865))
- Unpin the upload release GHA action. ([\#17923](https://github.com/element-hq/synapse/issues/17923))
- Bump macOS version used to build wheels during release, as current version used is end-of-life. ([\#17924](https://github.com/element-hq/synapse/issues/17924))
- Move server event filtering logic to Rust. ([\#17928](https://github.com/element-hq/synapse/issues/17928))
- Support new package name of PyPI package `python-multipart` 0.0.13 so that distro packagers do not need to work around name conflict with PyPI package `multipart`. ([\#17932](https://github.com/element-hq/synapse/issues/17932))
- Speed up slow initial sliding syncs on large servers. ([\#17946](https://github.com/element-hq/synapse/issues/17946))
### Updates to locked dependencies
* Bump anyhow from 1.0.92 to 1.0.93. ([\#17920](https://github.com/element-hq/synapse/issues/17920))
* Bump bleach from 6.1.0 to 6.2.0. ([\#17918](https://github.com/element-hq/synapse/issues/17918))
* Bump immutabledict from 4.2.0 to 4.2.1. ([\#17941](https://github.com/element-hq/synapse/issues/17941))
* Bump packaging from 24.1 to 24.2. ([\#17940](https://github.com/element-hq/synapse/issues/17940))
* Bump phonenumbers from 8.13.49 to 8.13.50. ([\#17942](https://github.com/element-hq/synapse/issues/17942))
* Bump pygithub from 2.4.0 to 2.5.0. ([\#17917](https://github.com/element-hq/synapse/issues/17917))
* Bump ruff from 0.7.2 to 0.7.3. ([\#17919](https://github.com/element-hq/synapse/issues/17919))
* Bump serde from 1.0.214 to 1.0.215. ([\#17938](https://github.com/element-hq/synapse/issues/17938))
# Synapse 1.119.0 (2024-11-13)
No significant changes since 1.119.0rc2.
### Python 3.8 support dropped
Python 3.8 is [end-of-life](https://devguide.python.org/versions/) and is no longer supported by Synapse. The minimum supported Python version is now 3.9.
If you are running Synapse with Python 3.8, please upgrade to Python 3.9 (or greater) before upgrading Synapse.
# Synapse 1.119.0rc2 (2024-11-11)
Note that due to packaging issues there was no v1.119.0rc1.
### Features
- Support [MSC4151](https://github.com/matrix-org/matrix-spec-proposals/pull/4151)'s stable report room API. ([\#17374](https://github.com/element-hq/synapse/issues/17374))
- Add experimental support for [MSC4222](https://github.com/matrix-org/matrix-spec-proposals/pull/4222) (Adding `state_after` to sync v2). ([\#17888](https://github.com/element-hq/synapse/issues/17888))
### Bugfixes
- Fix bug with sliding sync where `$LAZY`-loading room members would not return `required_state` membership in incremental syncs. ([\#17809](https://github.com/element-hq/synapse/issues/17809))
- Check if user has membership in a room before tagging it. Contributed by Lama Alosaimi. ([\#17839](https://github.com/element-hq/synapse/issues/17839))
- Fix a bug in the admin redact endpoint where the background task would not run if a worker was specified in
the config option `run_background_tasks_on`. ([\#17847](https://github.com/element-hq/synapse/issues/17847))
- Fix bug where some presence and typing timeouts can expire early. ([\#17850](https://github.com/element-hq/synapse/issues/17850))
- Fix detection when the built Rust library was outdated when using source installations. ([\#17861](https://github.com/element-hq/synapse/issues/17861))
- Fix a long-standing bug in Synapse which could cause one-time keys to be issued in the incorrect order, causing message decryption failures. ([\#17903](https://github.com/element-hq/synapse/pull/17903))
- Fix experimental support for [MSC4222](https://github.com/matrix-org/matrix-spec-proposals/pull/4222) (Adding `state_after` to sync v2) where we would return the full state on incremental syncs when using lazy loaded members and there were no new events in the timeline. ([\#17915](https://github.com/element-hq/synapse/pull/17915))
### Internal Changes
- Remove support for python 3.8. ([\#17908](https://github.com/element-hq/synapse/issues/17908))
- Add a test for downloading and thumbnailing a CMYK JPEG. ([\#17786](https://github.com/element-hq/synapse/issues/17786))
- Refactor database calls to remove `Generator` usage. ([\#17813](https://github.com/element-hq/synapse/issues/17813), [\#17814](https://github.com/element-hq/synapse/issues/17814), [\#17815](https://github.com/element-hq/synapse/issues/17815), [\#17816](https://github.com/element-hq/synapse/issues/17816), [\#17817](https://github.com/element-hq/synapse/issues/17817), [\#17818](https://github.com/element-hq/synapse/issues/17818), [\#17890](https://github.com/element-hq/synapse/issues/17890))
- Include the destination in the error of 'Destination mismatch' on federation requests. ([\#17830](https://github.com/element-hq/synapse/issues/17830))
- The nix flake inside the repository no longer tracks nixpkgs/master to not catch the latest bugs from a PR merged 5 minutes ago. ([\#17852](https://github.com/element-hq/synapse/issues/17852))
- Minor speed-up of sliding sync by computing extensions results in parallel. ([\#17884](https://github.com/element-hq/synapse/issues/17884))
- Bump the default Python version in the Synapse Dockerfile from 3.11 -> 3.12. ([\#17887](https://github.com/element-hq/synapse/issues/17887))
- Remove usage of internal header encoding API. ([\#17894](https://github.com/element-hq/synapse/issues/17894))
- Use unique name for each os.arch variant when uploading Wheel artifacts. ([\#17905](https://github.com/element-hq/synapse/issues/17905))
- Fix tests to run with latest Twisted. ([\#17906](https://github.com/element-hq/synapse/pull/17906), [\#17907](https://github.com/element-hq/synapse/pull/17907), [\#17911](https://github.com/element-hq/synapse/pull/17911))
- Update version constraint to allow the latest poetry-core 1.9.1. ([\#17902](https://github.com/element-hq/synapse/pull/17902))
- Update the portdb CI to use Python 3.13 and Postgres 17 as latest dependencies. ([\#17909](https://github.com/element-hq/synapse/pull/17909))
- Add an index to `current_state_delta_stream` table. ([\#17912](https://github.com/element-hq/synapse/issues/17912))
- Fix building and attaching release artifacts during the release process. ([\#17921](https://github.com/element-hq/synapse/issues/17921))
### Updates to locked dependencies
* Bump actions/download-artifact & actions/upload-artifact from 3 to 4 in /.github/workflows. ([\#17657](https://github.com/element-hq/synapse/issues/17657))
* Bump anyhow from 1.0.89 to 1.0.92. ([\#17858](https://github.com/element-hq/synapse/issues/17858), [\#17876](https://github.com/element-hq/synapse/issues/17876), [\#17901](https://github.com/element-hq/synapse/issues/17901))
* Bump bytes from 1.7.2 to 1.8.0. ([\#17877](https://github.com/element-hq/synapse/issues/17877))
* Bump cryptography from 43.0.1 to 43.0.3. ([\#17853](https://github.com/element-hq/synapse/issues/17853))
* Bump mypy-zope from 1.0.7 to 1.0.8. ([\#17898](https://github.com/element-hq/synapse/issues/17898))
* Bump phonenumbers from 8.13.47 to 8.13.49. ([\#17880](https://github.com/element-hq/synapse/issues/17880), [\#17899](https://github.com/element-hq/synapse/issues/17899))
* Bump python-multipart from 0.0.12 to 0.0.16. ([\#17879](https://github.com/element-hq/synapse/issues/17879))
* Bump regex from 1.11.0 to 1.11.1. ([\#17874](https://github.com/element-hq/synapse/issues/17874))
* Bump ruff from 0.6.9 to 0.7.2. ([\#17868](https://github.com/element-hq/synapse/issues/17868), [\#17897](https://github.com/element-hq/synapse/issues/17897))
* Bump serde from 1.0.210 to 1.0.214. ([\#17875](https://github.com/element-hq/synapse/issues/17875), [\#17900](https://github.com/element-hq/synapse/issues/17900))
* Bump serde_json from 1.0.128 to 1.0.132. ([\#17857](https://github.com/element-hq/synapse/issues/17857))
* Bump types-psycopg2 from 2.9.21.20240819 to 2.9.21.20241019. ([\#17855](https://github.com/element-hq/synapse/issues/17855))
* Bump types-setuptools from 75.1.0.20241014 to 75.2.0.20241019. ([\#17856](https://github.com/element-hq/synapse/issues/17856))
# Synapse 1.118.0 (2024-10-29)
No significant changes since 1.118.0rc1.
### Python 3.8 support will be dropped in the next release
Python 3.8 is now [end-of-life](https://devguide.python.org/versions/). As per our [Deprecation Policy for Platform Dependencies](https://element-hq.github.io/synapse/latest/deprecation_policy.html#policy), Synapse will be dropping support for Python 3.8 in the next release; Synapse 1.119.0.
Synapse 1.118.x will be the final release to support Python 3.8. If you are running Synapse with Python 3.8, please upgrade before the 1.119.0 release, due in less than one month.
### Python 3.13 and PostgreSQL 17 support
On the other end of the spectrum, Synapse 1.118.0 is the first release to support [Python 3.13](https://www.python.org/downloads/release/python-3130/)! [PostgreSQL 17](https://www.postgresql.org/about/news/postgresql-17-released-2936/) is also supported as of this release.
# Synapse 1.118.0rc1 (2024-10-22)
### Features
- Added the `display_name_claim` option to the JWT configuration. This option allows specifying the claim key that contains the user's display name in the JWT payload. ([\#17708](https://github.com/element-hq/synapse/issues/17708))
- Implement [MSC4210](https://github.com/matrix-org/matrix-spec-proposals/pull/4210): Remove legacy mentions. Contributed by @tulir @ Beeper. ([\#17783](https://github.com/element-hq/synapse/issues/17783))
### Bugfixes
- Fix saving of PNG thumbnails, when the original image is in the CMYK color space. ([\#17736](https://github.com/element-hq/synapse/issues/17736))
- Fix bug with sliding sync where the server would not return state that was added to the `required_state` config. ([\#17785](https://github.com/element-hq/synapse/issues/17785), [\#17805](https://github.com/element-hq/synapse/issues/17805))
- Fix a bug in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync that would cause rooms to stay forgotten and hidden even after rejoining. ([\#17835](https://github.com/element-hq/synapse/issues/17835))
### Improved Documentation
- Clarify when the `user_may_invite` and `user_may_send_3pid_invite` module callbacks are called. ([\#17627](https://github.com/element-hq/synapse/issues/17627))
- Correct documentation to refer to the `--config-path` argument instead of `--config-file`. ([\#17802](https://github.com/element-hq/synapse/issues/17802))
- Fix typo in `target_cache_memory_usage` docs. ([\#17825](https://github.com/element-hq/synapse/issues/17825))
### Internal Changes
- Slight optimization when fetching state/events for Sliding Sync. ([\#17718](https://github.com/element-hq/synapse/issues/17718))
- Add Python 3.13 and Postgres 17 to the test matrix. ([\#17752](https://github.com/element-hq/synapse/issues/17752))
- Test github token before running release script steps. ([\#17803](https://github.com/element-hq/synapse/issues/17803))
- Build debian packages for new Ubuntu versions, and stop building for no longer supported versions. ([\#17824](https://github.com/element-hq/synapse/issues/17824))
- Enable the `.org.matrix.msc4028.encrypted_event` push rule by default in accordance with [MSC4028](https://github.com/matrix-org/matrix-spec-proposals/pull/4028). Note that the corresponding experimental feature must still be switched on for this push rule to have any effect. ([\#17826](https://github.com/element-hq/synapse/issues/17826))
- Fix some typing issues uncovered by upgrading mypy to 1.11.x. ([\#17842](https://github.com/element-hq/synapse/issues/17842))
### Updates to locked dependencies
* Bump mypy from 1.10.1 to 1.11.2. ([\#17842](https://github.com/element-hq/synapse/issues/17842))
* Bump mypy-zope from 1.0.5 to 1.0.7. ([\#17827](https://github.com/element-hq/synapse/issues/17827))
* Bump phonenumbers from 8.13.46 to 8.13.47. ([\#17797](https://github.com/element-hq/synapse/issues/17797))
* Bump psycopg2 from 2.9.9 to 2.9.10. ([\#17843](https://github.com/element-hq/synapse/issues/17843))
* Bump ruff from 0.6.8 to 0.6.9. ([\#17794](https://github.com/element-hq/synapse/issues/17794))
* Bump sentry-sdk from 2.14.0 to 2.15.0. ([\#17795](https://github.com/element-hq/synapse/issues/17795))
* Bump sentry-sdk from 2.15.0 to 2.16.0. ([\#17829](https://github.com/element-hq/synapse/issues/17829))
* Bump sentry-sdk from 2.16.0 to 2.17.0. ([\#17844](https://github.com/element-hq/synapse/issues/17844))
* Bump sigstore/cosign-installer from 3.6.0 to 3.7.0. ([\#17798](https://github.com/element-hq/synapse/issues/17798))
* Bump tomli from 2.0.1 to 2.0.2. ([\#17796](https://github.com/element-hq/synapse/issues/17796))
* Bump types-requests from 2.32.0.20240914 to 2.32.0.20241016. ([\#17841](https://github.com/element-hq/synapse/issues/17841))
* Bump types-setuptools from 75.1.0.20240917 to 75.1.0.20241014. ([\#17828](https://github.com/element-hq/synapse/issues/17828))
# Synapse 1.117.0 (2024-10-15)
No significant changes since 1.117.0rc1.
# Synapse 1.117.0rc1 (2024-10-08)
### Features
- Add config option `redis.password_path`. ([\#17717](https://github.com/element-hq/synapse/issues/17717))
### Bugfixes
- Fix a rare bug introduced in v1.29.0 where invalidating a user's access token from a worker could raise an error. ([\#17779](https://github.com/element-hq/synapse/issues/17779))
- In the response to `GET /_matrix/client/versions`, set the `unstable_features` flag for [MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140) to `false` when server configuration disables support for delayed events. ([\#17780](https://github.com/element-hq/synapse/issues/17780))
- Improve input validation and room membership checks in admin redaction API. ([\#17792](https://github.com/element-hq/synapse/issues/17792))
### Improved Documentation
- Clarify the docstring of `test_forget_when_not_left`. ([\#17628](https://github.com/element-hq/synapse/issues/17628))
- Add documentation note about PYTHONMALLOC for accurate jemalloc memory tracking. Contributed by @hensg. ([\#17709](https://github.com/element-hq/synapse/issues/17709))
- Remove spurious "TODO UPDATE ALL THIS" note in the Debian installation docs. ([\#17749](https://github.com/element-hq/synapse/issues/17749))
- Explain how load balancing works for `federation_sender_instances`. ([\#17776](https://github.com/element-hq/synapse/issues/17776))
### Internal Changes
- Minor performance increase for large accounts using sliding sync. ([\#17751](https://github.com/element-hq/synapse/issues/17751))
- Increase performance of the notifier when there are many syncing users. ([\#17765](https://github.com/element-hq/synapse/issues/17765), [\#17766](https://github.com/element-hq/synapse/issues/17766))
- Fix performance of streams that don't change often. ([\#17767](https://github.com/element-hq/synapse/issues/17767))
- Improve performance of sliding sync connections that do not ask for any rooms. ([\#17768](https://github.com/element-hq/synapse/issues/17768))
- Reduce overhead of sliding sync E2EE loops. ([\#17771](https://github.com/element-hq/synapse/issues/17771))
- Sliding sync minor performance speed up using new table. ([\#17787](https://github.com/element-hq/synapse/issues/17787))
- Sliding sync minor performance improvement by omitting unchanged data from incremental responses. ([\#17788](https://github.com/element-hq/synapse/issues/17788))
- Speed up sliding sync when there are many active subscriptions. ([\#17789](https://github.com/element-hq/synapse/issues/17789))
- Add missing license headers on new source files. ([\#17799](https://github.com/element-hq/synapse/issues/17799))
### Updates to locked dependencies
* Bump phonenumbers from 8.13.45 to 8.13.46. ([\#17773](https://github.com/element-hq/synapse/issues/17773))
* Bump python-multipart from 0.0.10 to 0.0.12. ([\#17772](https://github.com/element-hq/synapse/issues/17772))
* Bump regex from 1.10.6 to 1.11.0. ([\#17770](https://github.com/element-hq/synapse/issues/17770))
* Bump ruff from 0.6.7 to 0.6.8. ([\#17774](https://github.com/element-hq/synapse/issues/17774))
# Synapse 1.116.0 (2024-10-01)
No significant changes since 1.116.0rc2.
# Synapse 1.116.0rc2 (2024-09-26)
### Features
- Add implementation of restricting who can overwrite a state event as proposed by [MSC3757](https://github.com/matrix-org/matrix-spec-proposals/pull/3757). ([\#17513](https://github.com/element-hq/synapse/issues/17513))
# Synapse 1.116.0rc1 (2024-09-25)
### Features
- Add initial implementation of delayed events as proposed by [MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140). ([\#17326](https://github.com/element-hq/synapse/issues/17326))
- Add an asynchronous Admin API endpoint [to redact all a user's events](https://element-hq.github.io/synapse/v1.116/admin_api/user_admin_api.html#redact-all-the-events-of-a-user),
and [an endpoint to check on the status of that redaction task](https://element-hq.github.io/synapse/v1.116/admin_api/user_admin_api.html#check-the-status-of-a-redaction-process). ([\#17506](https://github.com/element-hq/synapse/issues/17506))
- Add support for the `tags` and `not_tags` filters for [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync. ([\#17662](https://github.com/element-hq/synapse/issues/17662))
- Guests can use the new media endpoints to download media, as described by [MSC4189](https://github.com/matrix-org/matrix-spec-proposals/pull/4189). ([\#17675](https://github.com/element-hq/synapse/issues/17675))
- Add config option `turn_shared_secret_path`. ([\#17690](https://github.com/element-hq/synapse/issues/17690))
- Return room tags in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync account data extension. ([\#17707](https://github.com/element-hq/synapse/issues/17707))
### Bugfixes
- Make sure we get up-to-date state information when using the new [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync tables to derive room membership. ([\#17692](https://github.com/element-hq/synapse/issues/17692))
- Fix bug where room account data would not correctly be sent down [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync for old rooms. ([\#17695](https://github.com/element-hq/synapse/issues/17695))
- Fix a bug in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync which could prevent /sync from working for certain user accounts. ([\#17727](https://github.com/element-hq/synapse/issues/17727), [\#17733](https://github.com/element-hq/synapse/issues/17733))
- Ignore invites from ignored users in Sliding Sync. ([\#17729](https://github.com/element-hq/synapse/issues/17729))
- Fix bug in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync where the server would incorrectly return a negative bump stamp, which caused Element X apps to stop syncing. ([\#17748](https://github.com/element-hq/synapse/issues/17748))
### Internal Changes
- Import pydantic objects from the `_pydantic_compat` module.
This allows `check_pydantic_models.py` to mock those pydantic objects
only in the synapse module, and not interfere with pydantic objects in
external dependencies. ([\#17667](https://github.com/element-hq/synapse/issues/17667))
- Use [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync tables as a bulk shortcut for getting the max `event_stream_ordering` of rooms. ([\#17693](https://github.com/element-hq/synapse/issues/17693))
- Speed up [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) sliding sync requests a bit where there are many room changes. ([\#17696](https://github.com/element-hq/synapse/issues/17696))
- Refactor [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) sliding sync filter unit tests so the sliding sync API has better test coverage. ([\#17703](https://github.com/element-hq/synapse/issues/17703))
- Fetch `bump_stamp`s more efficiently in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync. ([\#17723](https://github.com/element-hq/synapse/issues/17723))
- Shortcut for checking if certain background updates have completed (utilized in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync). ([\#17724](https://github.com/element-hq/synapse/issues/17724))
- More efficiently fetch rooms for [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync. ([\#17725](https://github.com/element-hq/synapse/issues/17725))
- Fix `_bulk_get_max_event_pos` being inefficient. ([\#17728](https://github.com/element-hq/synapse/issues/17728))
- Add cache to `get_tags_for_room(...)`. ([\#17730](https://github.com/element-hq/synapse/issues/17730))
- Small performance improvement in speeding up [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync. ([\#17731](https://github.com/element-hq/synapse/issues/17731))
- Minor speed up of initial [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) sliding sync requests. ([\#17734](https://github.com/element-hq/synapse/issues/17734))
- Remove usage of the deprecated `cgi` module, deprecated in Python 3.11 and removed in Python 3.13. ([\#17741](https://github.com/element-hq/synapse/issues/17741))
- Fix typing of a variable that is not `Unknown` anymore after updating `treq`. ([\#17744](https://github.com/element-hq/synapse/issues/17744))
### Updates to locked dependencies
* Bump anyhow from 1.0.86 to 1.0.89. ([\#17685](https://github.com/element-hq/synapse/issues/17685), [\#17716](https://github.com/element-hq/synapse/issues/17716))
* Bump bytes from 1.7.1 to 1.7.2. ([\#17743](https://github.com/element-hq/synapse/issues/17743))
* Bump cryptography from 43.0.0 to 43.0.1. ([\#17689](https://github.com/element-hq/synapse/issues/17689))
* Bump idna from 3.8 to 3.10. ([\#17758](https://github.com/element-hq/synapse/issues/17758))
* Bump msgpack from 1.0.8 to 1.1.0. ([\#17759](https://github.com/element-hq/synapse/issues/17759))
* Bump phonenumbers from 8.13.44 to 8.13.45. ([\#17762](https://github.com/element-hq/synapse/issues/17762))
* Bump prometheus-client from 0.20.0 to 0.21.0. ([\#17746](https://github.com/element-hq/synapse/issues/17746))
* Bump pyasn1 from 0.6.0 to 0.6.1. ([\#17714](https://github.com/element-hq/synapse/issues/17714))
* Bump pyasn1-modules from 0.4.0 to 0.4.1. ([\#17747](https://github.com/element-hq/synapse/issues/17747))
* Bump pydantic from 2.8.2 to 2.9.2. ([\#17756](https://github.com/element-hq/synapse/issues/17756))
* Bump python-multipart from 0.0.9 to 0.0.10. ([\#17745](https://github.com/element-hq/synapse/issues/17745))
* Bump ruff from 0.6.4 to 0.6.7. ([\#17715](https://github.com/element-hq/synapse/issues/17715), [\#17760](https://github.com/element-hq/synapse/issues/17760))
* Bump sentry-sdk from 2.13.0 to 2.14.0. ([\#17712](https://github.com/element-hq/synapse/issues/17712))
* Bump serde from 1.0.209 to 1.0.210. ([\#17686](https://github.com/element-hq/synapse/issues/17686))
* Bump serde_json from 1.0.127 to 1.0.128. ([\#17687](https://github.com/element-hq/synapse/issues/17687))
* Bump treq from 23.11.0 to 24.9.1. ([\#17744](https://github.com/element-hq/synapse/issues/17744))
* Bump types-pyyaml from 6.0.12.20240808 to 6.0.12.20240917. ([\#17755](https://github.com/element-hq/synapse/issues/17755))
* Bump types-requests from 2.32.0.20240712 to 2.32.0.20240914. ([\#17713](https://github.com/element-hq/synapse/issues/17713))
* Bump types-setuptools from 74.1.0.20240907 to 75.1.0.20240917. ([\#17757](https://github.com/element-hq/synapse/issues/17757))
# Synapse 1.115.0 (2024-09-17)
No significant changes since 1.115.0rc2.
# Synapse 1.115.0rc2 (2024-09-12)
### Internal Changes

Cargo.lock (generated file; 198 lines changed)
View File

@@ -13,9 +13,9 @@ dependencies = [
[[package]]
name = "anyhow"
- version = "1.0.89"
+ version = "1.0.93"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "86fdf8605db99b54d3cd748a44c6d04df638eb5dafb219b135d0149bd0db01f6"
+ checksum = "4c95c10ba0b00a02636238b814946408b1322d5ac4760326e6fb8ec956d85775"
[[package]]
name = "arc-swap"
@@ -35,12 +35,6 @@ version = "0.21.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9d297deb1925b89f2ccc13d7635fa0714f12c87adce1c75356b39ca9b7178567"
- [[package]]
- name = "bitflags"
- version = "2.5.0"
- source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "cf4b9d6a944f767f8e5e0db018570623c85f3d925ac718db4e06d0187adb21c1"
[[package]]
name = "blake2"
version = "0.10.6"
@@ -67,9 +61,9 @@ checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c"
[[package]]
name = "bytes"
- version = "1.7.1"
+ version = "1.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "8318a53db07bb3f8dca91a600466bdb3f2eaadeedfdbcf02e1accbad9271ba50"
+ checksum = "9ac0150caa2ae65ca5bd83f25c7de183dea78d4d366469f148435e2acfbad0da"
[[package]]
name = "cfg-if"
@@ -162,9 +156,9 @@ dependencies = [
[[package]]
name = "heck"
- version = "0.4.1"
+ version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "95505c38b4572b2d910cecb0281560f54b440a19336cbbcb27bf6ce6adc6f5a8"
+ checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea"
[[package]]
name = "hex"
@@ -222,16 +216,6 @@ version = "0.2.154"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ae743338b92ff9146ce83992f766a31066a91a8c84a45e0e9f21e7cf6de6d346"
- [[package]]
- name = "lock_api"
- version = "0.4.12"
- source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "07af8b9cdd281b7915f413fa73f29ebd5d55d0d3f0155584dade1ff18cea1b17"
- dependencies = [
- "autocfg",
- "scopeguard",
- ]
[[package]]
name = "log"
version = "0.4.22"
@@ -265,29 +249,6 @@ version = "1.19.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3fdb12b2476b595f9358c5161aa467c2438859caa136dec86c26fdd2efe17b92"
- [[package]]
- name = "parking_lot"
- version = "0.12.2"
- source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "7e4af0ca4f6caed20e900d564c242b8e5d4903fdacf31d3daf527b66fe6f42fb"
- dependencies = [
- "lock_api",
- "parking_lot_core",
- ]
- [[package]]
- name = "parking_lot_core"
- version = "0.9.10"
- source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "1e401f977ab385c9e4e3ab30627d6f26d00e2c73eef317493c4ec6d468726cf8"
- dependencies = [
- "cfg-if",
- "libc",
- "redox_syscall",
- "smallvec",
- "windows-targets",
- ]
[[package]]
name = "portable-atomic"
version = "1.6.0"
@@ -302,25 +263,25 @@ checksum = "5b40af805b3121feab8a3c29f04d8ad262fa8e0561883e7653e024ae4479e6de"
[[package]]
name = "proc-macro2"
- version = "1.0.82"
+ version = "1.0.89"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "8ad3d49ab951a01fbaafe34f2ec74122942fe18a3f9814c3268f1bb72042131b"
+ checksum = "f139b0662de085916d1fb67d2b4169d1addddda1919e696f3252b740b629986e"
dependencies = [
"unicode-ident",
]
[[package]]
name = "pyo3"
- version = "0.21.2"
+ version = "0.23.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "a5e00b96a521718e08e03b1a622f01c8a8deb50719335de3f60b3b3950f069d8"
+ checksum = "f54b3d09cbdd1f8c20650b28e7b09e338881482f4aa908a5f61a00c98fba2690"
dependencies = [
"anyhow",
"cfg-if",
"indoc",
"libc",
- "memoffset",
- "parking_lot",
+ "once_cell",
"portable-atomic",
"pyo3-build-config",
"pyo3-ffi",
@@ -330,9 +291,9 @@ dependencies = [
[[package]]
name = "pyo3-build-config"
- version = "0.21.2"
+ version = "0.23.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "7883df5835fafdad87c0d888b266c8ec0f4c9ca48a5bed6bbb592e8dedee1b50"
+ checksum = "3015cf985888fe66cfb63ce0e321c603706cd541b7aec7ddd35c281390af45d8"
dependencies = [
"once_cell",
"target-lexicon",
@@ -340,9 +301,9 @@ dependencies = [
[[package]]
name = "pyo3-ffi"
- version = "0.21.2"
+ version = "0.23.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "01be5843dc60b916ab4dad1dca6d20b9b4e6ddc8e15f50c47fe6d85f1fb97403"
+ checksum = "6fca7cd8fd809b5ac4eefb89c1f98f7a7651d3739dfb341ca6980090f554c270"
dependencies = [
"libc",
"pyo3-build-config",
@@ -350,9 +311,9 @@ dependencies = [
[[package]]
name = "pyo3-log"
- version = "0.10.0"
+ version = "0.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "2af49834b8d2ecd555177e63b273b708dea75150abc6f5341d0a6e1a9623976c"
+ checksum = "3eb421dc86d38d08e04b927b02424db480be71b777fa3a56f32e2f2a3a1a3b08"
dependencies = [
"arc-swap",
"log",
@@ -361,9 +322,9 @@ dependencies = [
[[package]]
name = "pyo3-macros"
- version = "0.21.2"
+ version = "0.23.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "77b34069fc0682e11b31dbd10321cbf94808394c56fd996796ce45217dfac53c"
+ checksum = "34e657fa5379a79151b6ff5328d9216a84f55dc93b17b08e7c3609a969b73aa0"
dependencies = [
"proc-macro2",
"pyo3-macros-backend",
@@ -373,9 +334,9 @@ dependencies = [
[[package]]
name = "pyo3-macros-backend"
- version = "0.21.2"
+ version = "0.23.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "08260721f32db5e1a5beae69a55553f56b99bd0e1c3e6e0a5e8851a9d0f5a85c"
+ checksum = "295548d5ffd95fd1981d2d3cf4458831b21d60af046b729b6fd143b0ba7aee2f"
dependencies = [
"heck",
"proc-macro2",
@@ -386,9 +347,9 @@ dependencies = [
[[package]]
name = "pythonize"
- version = "0.21.1"
+ version = "0.23.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "9d0664248812c38cc55a4ed07f88e4df516ce82604b93b1ffdc041aa77a6cb3c"
+ checksum = "91a6ee7a084f913f98d70cdc3ebec07e852b735ae3059a1500db2661265da9ff"
dependencies = [
"pyo3",
"serde",
@@ -433,20 +394,11 @@ dependencies = [
"getrandom",
]
- [[package]]
- name = "redox_syscall"
- version = "0.5.1"
- source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "469052894dcb553421e483e4209ee581a45100d31b4018de03e5a7ad86374a7e"
- dependencies = [
- "bitflags",
- ]
[[package]]
name = "regex"
- version = "1.10.6"
+ version = "1.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "4219d74c6b67a3654a9fbebc4b419e22126d13d2f3c4a07ee0cb61ff79a79619"
+ checksum = "b544ef1b4eac5dc2db33ea63606ae9ffcfac26c1416a2806ae0bf5f56b201191"
dependencies = [
"aho-corasick",
"memchr",
@@ -456,9 +408,9 @@ dependencies = [
[[package]]
name = "regex-automata"
- version = "0.4.6"
+ version = "0.4.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "86b83b8b9847f9bf95ef68afb0b8e6cdb80f498442f5179a29fad448fcc1eaea"
+ checksum = "368758f23274712b504848e9d5a6f010445cc8b87a7cdb4d7cbee666c1288da3"
dependencies = [
"aho-corasick",
"memchr",
@@ -467,9 +419,9 @@ dependencies = [
[[package]]
name = "regex-syntax"
- version = "0.8.3"
+ version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "adad44e29e4c806119491a7f06f03de4d1af22c3a680dd47f1e6e179439d1f56"
+ checksum = "2b15c43186be67a4fd63bee50d0303afffcef381492ebe2c5d87f324e1b8815c"
[[package]]
name = "ryu"
@@ -477,26 +429,20 @@ version = "1.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f3cb5ba0dc43242ce17de99c180e96db90b235b8a9fdc9543c96d2209116bd9f"
- [[package]]
- name = "scopeguard"
- version = "1.2.0"
- source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
[[package]]
name = "serde"
- version = "1.0.210"
+ version = "1.0.215"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "c8e3592472072e6e22e0a54d5904d9febf8508f65fb8552499a1abc7d1078c3a"
+ checksum = "6513c1ad0b11a9376da888e3e0baa0077f1aed55c17f50e7b2397136129fb88f"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
- version = "1.0.210"
+ version = "1.0.215"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "243902eda00fad750862fc144cea25caca5e20d615af0a81bee94ca738f1df1f"
+ checksum = "ad1e866f866923f252f05c889987993144fb74e722403468a4ebd70c3cd756c0"
dependencies = [
"proc-macro2",
"quote",
@@ -505,9 +451,9 @@ dependencies = [
[[package]]
name = "serde_json"
- version = "1.0.128"
+ version = "1.0.133"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "6ff5456707a1de34e7e37f2a6fd3d3f808c318259cbd01ab6377795054b483d8"
+ checksum = "c7fceb2473b9166b2294ef05efcb65a3db80803f0b03ef86a5fc88a2b85ee377"
dependencies = [
"itoa",
"memchr",
@@ -537,12 +483,6 @@ dependencies = [
"digest",
]
- [[package]]
- name = "smallvec"
- version = "1.13.2"
- source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "3c5e1a9a646d36c3599cd173a41282daf47c44583ad367b8e6837255952e5c67"
[[package]]
name = "subtle"
version = "2.5.0"
@@ -551,9 +491,9 @@ checksum = "81cdd64d312baedb58e21336b31bc043b77e01cc99033ce76ef539f78e965ebc"
[[package]]
name = "syn"
- version = "2.0.61"
+ version = "2.0.85"
source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "c993ed8ccba56ae856363b1845da7266a7cb78e1d146c8a32d54b45a8b831fc9"
+ checksum = "5023162dfcd14ef8f32034d8bcd4cc5ddc61ef7a247c024a33e24e1f24d21b56"
dependencies = [
"proc-macro2",
"quote",
@@ -694,67 +634,3 @@ dependencies = [
"js-sys",
"wasm-bindgen",
]
- [[package]]
- name = "windows-targets"
- version = "0.52.5"
- source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "6f0713a46559409d202e70e28227288446bf7841d3211583a4b53e3f6d96e7eb"
- dependencies = [
- "windows_aarch64_gnullvm",
- "windows_aarch64_msvc",
- "windows_i686_gnu",
- "windows_i686_gnullvm",
- "windows_i686_msvc",
- "windows_x86_64_gnu",
- "windows_x86_64_gnullvm",
- "windows_x86_64_msvc",
- ]
- [[package]]
- name = "windows_aarch64_gnullvm"
- version = "0.52.5"
- source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "7088eed71e8b8dda258ecc8bac5fb1153c5cffaf2578fc8ff5d61e23578d3263"
- [[package]]
- name = "windows_aarch64_msvc"
- version = "0.52.5"
- source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "9985fd1504e250c615ca5f281c3f7a6da76213ebd5ccc9561496568a2752afb6"
- [[package]]
- name = "windows_i686_gnu"
- version = "0.52.5"
- source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "88ba073cf16d5372720ec942a8ccbf61626074c6d4dd2e745299726ce8b89670"
- [[package]]
- name = "windows_i686_gnullvm"
- version = "0.52.5"
- source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "87f4261229030a858f36b459e748ae97545d6f1ec60e5e0d6a3d32e0dc232ee9"
- [[package]]
- name = "windows_i686_msvc"
- version = "0.52.5"
- source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "db3c2bf3d13d5b658be73463284eaf12830ac9a26a90c717b7f771dfe97487bf"
- [[package]]
- name = "windows_x86_64_gnu"
- version = "0.52.5"
- source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "4e4246f76bdeff09eb48875a0fd3e2af6aada79d409d33011886d3e1581517d9"
- [[package]]
- name = "windows_x86_64_gnullvm"
- version = "0.52.5"
- source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "852298e482cd67c356ddd9570386e2862b5673c85bd5f88df9ab6802b334c596"
- [[package]]
- name = "windows_x86_64_msvc"
- version = "0.52.5"
- source = "registry+https://github.com/rust-lang/crates.io-index"
- checksum = "bec47e5bfd1bff0eeaf6d8b485cc1074891a197ab4225d504cb7a1ab88b02bf0"

changelog.d/17253.misc Normal file
View File

@@ -0,0 +1 @@
[MSC4108](https://github.com/matrix-org/matrix-spec-proposals/pull/4108): Add a `Content-Type` header on the `PUT` response to work around a faulty behavior in some caching reverse proxies.

View File

@@ -1 +0,0 @@
Add support for the `tags` and `not_tags` filters for simplified sliding sync.

View File

@@ -1,5 +0,0 @@
Import pydantic objects from the `_pydantic_compat` module.
This allows `check_pydantic_models.py` to mock those pydantic objects
only in the synapse module, and not interfere with pydantic objects in
external dependencies.

View File

@@ -1 +0,0 @@
Guests can use the new media endpoints to download media, as described by [MSC4189](https://github.com/matrix-org/matrix-spec-proposals/pull/4189).

View File

@@ -1 +0,0 @@
Add config option `turn_shared_secret_path`.

View File

@@ -1 +0,0 @@
Make sure we get up-to-date state information when using the new Sliding Sync tables to derive room membership.

View File

@@ -1 +0,0 @@
Use Sliding Sync tables as a bulk shortcut for getting the max `event_stream_ordering` of rooms.

View File

@@ -1 +0,0 @@
Speed up sliding sync requests a bit where there are many room changes.

View File

@@ -1 +0,0 @@
Refactor sliding sync filter unit tests so the sliding sync API has better test coverage.

View File

@@ -1 +0,0 @@
Return room tags in Sliding Sync account data extension.

changelog.d/17872.doc Normal file
View File

@@ -0,0 +1 @@
Add OIDC example configuration for Forgejo (fork of Gitea).

changelog.d/17933.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix long-standing bug where read receipts could be excessively delayed when sent over federation.

changelog.d/17936.misc Normal file
View File

@@ -0,0 +1 @@
Fix incorrect comment in new schema delta.

changelog.d/17944.misc Normal file
View File

@@ -0,0 +1 @@
Raise setuptools_rust version cap to 1.10.2.

changelog.d/17945.misc Normal file
View File

@@ -0,0 +1 @@
Enable encrypted appservice related experimental features in the complement docker image.

changelog.d/17952.misc Normal file
View File

@@ -0,0 +1 @@
Return whether the user is suspended when querying the user account in the Admin API.

changelog.d/17953.doc Normal file
View File

@@ -0,0 +1 @@
Link to element-docker-demo from contrib/docker*.

changelog.d/17966.misc Normal file
View File

@@ -0,0 +1 @@
Bump pyo3 and dependencies to v0.23.2.

changelog.d/17970.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix release process to not create duplicate releases.

View File

@@ -30,3 +30,6 @@ docker-compose up -d
### More information
For more information on required environment variables and mounts, see the main docker documentation at [/docker/README.md](../../docker/README.md)
**For a more comprehensive Docker Compose example showcasing a full Matrix 2.0 stack, please see
https://github.com/element-hq/element-docker-demo**

View File

@@ -8,6 +8,9 @@ All examples and snippets assume that your Synapse service is called `synapse` i
An example Docker Compose file can be found [here](docker-compose.yaml).
**For a more comprehensive Docker Compose example, showcasing a full Matrix 2.0 stack (originally based on this
docker-compose.yaml), please see https://github.com/element-hq/element-docker-demo**
## Worker Service Examples in Docker Compose
In order to start the Synapse container as a worker, you must specify an `entrypoint` that loads both the `homeserver.yaml` and the configuration for the worker (`synapse-generic-worker-1.yaml` in the example below). You must also include the worker type in the environment variable `SYNAPSE_WORKER`, or alternatively pass `-m synapse.app.generic_worker` as part of the `entrypoint` (after `"/start.py", "run"`).
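For illustration, such a worker service might look like the following sketch (the image tag, mounted paths and service names are placeholders rather than text taken from the example file):

```yaml
synapse-generic-worker-1:
  image: matrixdotorg/synapse:latest
  # Load the main config plus the worker's own config.
  entrypoint:
    - "/start.py"
    - "run"
    - "--config-path=/data/homeserver.yaml"
    - "--config-path=/data/workers/synapse-generic-worker-1.yaml"
  environment:
    SYNAPSE_WORKER: synapse.app.generic_worker
  volumes:
    - ./synapse:/data
  depends_on:
    - synapse
```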

View File

@@ -20,8 +20,8 @@
#
import argparse
import cgi
import datetime
import html
import json
import urllib.request
from typing import List
@@ -85,7 +85,7 @@ def make_graph(pdus: List[dict], filename_prefix: str) -> None:
"name": name,
"type": pdu.get("pdu_type"),
"state_key": pdu.get("state_key"),
"content": cgi.escape(json.dumps(pdu.get("content")), quote=True),
"content": html.escape(json.dumps(pdu.get("content")), quote=True),
"time": t,
"depth": pdu.get("depth"),
}
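The change above swaps `cgi.escape`, which was removed from the standard library in Python 3.8, for `html.escape`, which escapes a superset of the same characters. A quick sanity check:

```python
import html

# html.escape always escapes &, < and >; with quote=True (the default)
# it also escapes " and ' - a superset of what cgi.escape handled.
print(html.escape('{"key": "<value> & more"}', quote=True))
# {&quot;key&quot;: &quot;&lt;value&gt; &amp; more&quot;}
```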

debian/changelog vendored
View File

@@ -1,3 +1,81 @@
matrix-synapse-py3 (1.120.0) stable; urgency=medium
* New synapse release 1.120.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 26 Nov 2024 13:10:23 +0000
matrix-synapse-py3 (1.120.0~rc1) stable; urgency=medium
* New Synapse release 1.120.0rc1.
-- Synapse Packaging team <packages@matrix.org> Wed, 20 Nov 2024 15:02:21 +0000
matrix-synapse-py3 (1.119.0) stable; urgency=medium
* New Synapse release 1.119.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 13 Nov 2024 13:57:51 +0000
matrix-synapse-py3 (1.119.0~rc2) stable; urgency=medium
* New Synapse release 1.119.0rc2.
-- Synapse Packaging team <packages@matrix.org> Mon, 11 Nov 2024 14:33:02 +0000
matrix-synapse-py3 (1.119.0~rc1) stable; urgency=medium
* New Synapse release 1.119.0rc1.
-- Synapse Packaging team <packages@matrix.org> Wed, 06 Nov 2024 08:59:43 -0700
matrix-synapse-py3 (1.118.0) stable; urgency=medium
* New Synapse release 1.118.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 29 Oct 2024 15:29:53 +0100
matrix-synapse-py3 (1.118.0~rc1) stable; urgency=medium
* New Synapse release 1.118.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 22 Oct 2024 11:48:14 +0100
matrix-synapse-py3 (1.117.0) stable; urgency=medium
* New Synapse release 1.117.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 15 Oct 2024 10:46:30 +0100
matrix-synapse-py3 (1.117.0~rc1) stable; urgency=medium
* New Synapse release 1.117.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 08 Oct 2024 14:37:11 +0100
matrix-synapse-py3 (1.116.0) stable; urgency=medium
* New Synapse release 1.116.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 01 Oct 2024 11:14:07 +0100
matrix-synapse-py3 (1.116.0~rc2) stable; urgency=medium
* New synapse release 1.116.0rc2.
-- Synapse Packaging team <packages@matrix.org> Thu, 26 Sep 2024 13:28:43 +0000
matrix-synapse-py3 (1.116.0~rc1) stable; urgency=medium
* New synapse release 1.116.0rc1.
-- Synapse Packaging team <packages@matrix.org> Wed, 25 Sep 2024 09:34:07 +0000
matrix-synapse-py3 (1.115.0) stable; urgency=medium
* New Synapse release 1.115.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 17 Sep 2024 14:32:10 +0100
matrix-synapse-py3 (1.115.0~rc2) stable; urgency=medium
* New Synapse release 1.115.0rc2.

View File

@@ -20,7 +20,7 @@
# `poetry export | pip install -r /dev/stdin`, but beware: we have experienced bugs
# in `poetry export` in the past.
ARG PYTHON_VERSION=3.11
ARG PYTHON_VERSION=3.12
###
### Stage 0: generate requirements.txt

View File

@@ -104,6 +104,16 @@ experimental_features:
msc3967_enabled: true
# Expose a room summary for public rooms
msc3266_enabled: true
# Send to-device messages to application services
msc2409_to_device_messages_enabled: true
# Allow application services to masquerade devices
msc3202_device_masquerading: true
# Sending device list changes, one-time key counts and fallback key usage to application services
msc3202_transaction_extensions: true
# Proxy OTK claim requests to exclusive ASes
msc3983_appservice_otk_claims: true
# Proxy key queries to exclusive ASes
msc3984_appservice_key_query: true
server_notices:
system_mxid_localpart: _server
@@ -111,6 +121,9 @@ server_notices:
system_mxid_avatar_url: ""
room_name: "Server Alert"
# Enable delayed events (msc4140)
max_event_delay_duration: 24h
# Disable sync cache so that initial `/sync` requests are up-to-date.
caches:

View File

@@ -54,6 +54,7 @@
- [Using `synctl` with Workers](synctl_workers.md)
- [Systemd](systemd-with-workers/README.md)
- [Administration](usage/administration/README.md)
- [Backups](usage/administration/backups.md)
- [Admin API](usage/administration/admin_api/README.md)
- [Account Validity](admin_api/account_validity.md)
- [Background Updates](usage/administration/admin_api/background_updates.md)

View File

@@ -5,6 +5,7 @@ basis. The currently supported features are:
- [MSC3881](https://github.com/matrix-org/matrix-spec-proposals/pull/3881): enable remotely toggling push notifications
for another client
- [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575): enable experimental sliding sync support
- [MSC4222](https://github.com/matrix-org/matrix-spec-proposals/pull/4222): adding `state_after` to sync v2
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api/).

View File

@@ -55,7 +55,8 @@ It returns a JSON body like the following:
}
],
"user_type": null,
"locked": false
"locked": false,
"suspended": false
}
```
@@ -1361,3 +1362,86 @@ Returns a `404` HTTP status code if no user was found, with a response body like
```
_Added in Synapse 1.72.0._
## Redact all the events of a user
This endpoint allows an admin to redact the events of a given user. There are no restrictions on redactions for a
local user. By default, we puppet the user who sent the message to redact it themselves. Redactions for non-local users are issued using the admin user, and will fail in rooms where the admin user does not have the power level required to issue redactions.
The API is:
```
POST /_synapse/admin/v1/user/$user_id/redact
{
"rooms": ["!roomid1", "!roomid2"]
}
```
If an empty list is provided as the value of `rooms`, all events in all the rooms the user is a member of will be redacted;
otherwise, all the events in the rooms provided in the request will be redacted.
The API starts the redaction process running in the background, and returns immediately with a JSON body containing
a redact id which can be used to query the status of the redaction process:
```json
{
"redact_id": "<opaque id>"
}
```
**Parameters**
The following parameters should be set in the URL:
- `user_id` - The fully qualified MXID of the user: for example, `@user:server.com`.
The following JSON body parameter must be provided:
- `rooms` - A list of rooms to redact the user's events in. If an empty list is provided, all events in all rooms
the user is a member of will be redacted
_Added in Synapse 1.116.0._
The following JSON body parameters are optional:
- `reason` - The reason the redaction is being requested, e.g. "spam" or "abuse". This will be included in each redaction event, and be visible to users.
- `limit` - A limit on the number of the user's events to search for redactable events in each room (events are redacted newest to oldest). Defaults to 1000 if not provided.
## Check the status of a redaction process
It is possible to query the status of the background task for redacting a user's events.
The status can be queried up to 24 hours after completion of the task,
or until Synapse is restarted (whichever happens first).
The API is:
```
GET /_synapse/admin/v1/user/redact_status/$redact_id
```
A response body like the following is returned:
```
{
"status": "active",
"failed_redactions": [],
}
```
**Parameters**
The following parameters should be set in the URL:
* `redact_id` - string - The ID for this redaction process, provided when the redaction was requested.
**Response**
The following fields are returned in the JSON response body:
- `status` - string - one of scheduled/active/completed/failed, indicating the status of the redaction job
- `failed_redactions` - dictionary - the keys of the dict are the event ids the process was unable to redact, if any, and the values are
the corresponding errors that caused those redactions to fail
_Added in Synapse 1.116.0._
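As a worked example (the server name, access token, room IDs and user ID below are placeholders), kicking off a redaction and then polling the status of the background task might look like:

```shell
# Redact a user's events in two specific rooms.
curl -X POST \
    --header "Authorization: Bearer <admin_access_token>" \
    --data '{"rooms": ["!roomid1:example.com", "!roomid2:example.com"], "reason": "spam"}' \
    "https://example.com/_synapse/admin/v1/user/@baduser:example.com/redact"
# => {"redact_id": "<opaque id>"}

# Poll the status of the redaction process using the returned id.
curl --header "Authorization: Bearer <admin_access_token>" \
    "https://example.com/_synapse/admin/v1/user/redact_status/<opaque id>"
# => {"status": "completed", "failed_redactions": []}
```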

View File

@@ -322,7 +322,7 @@ The following command will let you run the integration test with the most common
configuration:
```sh
$ docker run --rm -it -v /path/where/you/have/cloned/the/repository\:/src:ro -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:focal
$ docker run --rm -it -v /path/where/you/have/cloned/the/repository\:/src:ro -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:bullseye
```
(Note that the paths must be full paths! You could also write `$(realpath relative/path)` if needed.)

View File

@@ -76,8 +76,9 @@ _Changed in Synapse v1.62.0: `synapse.module_api.NOT_SPAM` and `synapse.module_a
async def user_may_invite(inviter: str, invitee: str, room_id: str) -> Union["synapse.module_api.NOT_SPAM", "synapse.module_api.errors.Codes", bool]
```
Called when processing an invitation. Both inviter and invitee are
represented by their Matrix user ID (e.g. `@alice:example.com`).
Called when processing an invitation, both when one is created locally or when
receiving an invite over federation. Both inviter and invitee are represented by
their Matrix user ID (e.g. `@alice:example.com`).
The callback must return one of:
@@ -112,7 +113,9 @@ async def user_may_send_3pid_invite(
```
Called when processing an invitation using a third-party identifier (also called a 3PID,
e.g. an email address or a phone number).
e.g. an email address or a phone number). It is only called when a 3PID invite is created
locally - not when one is received in a room over federation. If the 3PID is already associated
with a Matrix ID, the spam check will go through the `user_may_invite` callback instead.
The inviter is represented by their Matrix user ID (e.g. `@alice:example.com`), and the
invitee is represented by its medium (e.g. "email") and its address
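To make the callback contract concrete, here is a minimal sketch of a module registering `user_may_invite` (the module class, its config key, and the blocklist logic are invented for illustration):

```python
from synapse.module_api import NOT_SPAM, ModuleApi, errors


class InviteBlocker:
    """Illustrative spam-checker module: reject invites from a configured blocklist."""

    def __init__(self, config: dict, api: ModuleApi):
        # "blocked_inviters" is a hypothetical config key for this sketch.
        self._blocked_inviters = set(config.get("blocked_inviters", []))
        api.register_spam_checker_callbacks(user_may_invite=self.user_may_invite)

    async def user_may_invite(self, inviter: str, invitee: str, room_id: str):
        if inviter in self._blocked_inviters:
            # Returning a Codes value rejects the invite and tells the client why.
            return errors.Codes.FORBIDDEN
        # NOT_SPAM allows the invite to proceed.
        return NOT_SPAM
```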

View File

@@ -336,6 +336,36 @@ but it has a `response_types_supported` which excludes "code" (which we rely on,
is even mentioned in their [documentation](https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow#login)),
so we have to disable discovery and configure the URIs manually.
### Forgejo
Forgejo is a fork of Gitea that can act as an OAuth2 provider.
The implementation of OAuth2 is improved compared to Gitea, as it provides a correctly defined `subject_claim` and `scopes`.
Synapse config:
```yaml
oidc_providers:
- idp_id: forgejo
idp_name: Forgejo
discover: false
issuer: "https://your-forgejo.com/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
client_auth_method: client_secret_post
scopes: ["openid", "profile", "email", "groups"]
authorization_endpoint: "https://your-forgejo.com/login/oauth/authorize"
token_endpoint: "https://your-forgejo.com/login/oauth/access_token"
userinfo_endpoint: "https://your-forgejo.com/api/v1/user"
user_mapping_provider:
config:
subject_claim: "sub"
picture_claim: "picture"
localpart_template: "{{ user.preferred_username }}"
display_name_template: "{{ user.name }}"
email_template: "{{ user.email }}"
```
### GitHub
[GitHub][github-idp] is a bit special as it is not an OpenID Connect compliant provider, but

View File

@@ -100,6 +100,10 @@ database:
keepalives_count: 3
```
## Backups
Don't forget to [back up](./usage/administration/backups.md#database) your database!
## Tuning Postgres
The default settings should be fine for most deployments. For larger

View File

@@ -52,8 +52,6 @@ architecture via <https://packages.matrix.org/debian/>.
To install the latest release:
TODO UPDATE ALL THIS
```sh
sudo apt install -y lsb-release wget apt-transport-https
sudo wget -O /usr/share/keyrings/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
@@ -210,7 +208,7 @@ When following this route please make sure that the [Platform-specific prerequis
System requirements:
- POSIX-compliant system (tested on Linux & OS X)
- Python 3.8 or later, up to Python 3.11.
- Python 3.9 or later, up to Python 3.13.
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
If building on an uncommon architecture for which pre-built wheels are
@@ -316,7 +314,7 @@ sudo dnf group install "Development Tools"
*Note: The term "RHEL" below refers to both Red Hat Enterprise Linux and Rocky Linux. The distributions are 1:1 binary compatible.*
It's recommended to use the latest Python versions.
It's recommended to use the latest Python versions.
RHEL 8 in particular ships with Python 3.6 by default, which is EOL and therefore no longer supported by Synapse. RHEL 9 ships with Python 3.9, which is still supported by the Python core team as of this writing. However, newer Python versions provide significant performance improvements and are available in the official distributions' repositories, so it's recommended to use them.
@@ -346,7 +344,7 @@ dnf install python3.12 python3.12-devel
```
Finally, install common prerequisites
```bash
dnf install libicu libicu-devel libpq5 libpq5-devel lz4 pkgconf
dnf install libicu libicu-devel libpq5 libpq5-devel lz4 pkgconf
dnf group install "Development Tools"
```
###### Using venv module instead of virtualenv command
@@ -355,7 +353,7 @@ It's recommended to use Python venv module directly rather than the virtualenv c
* On RHEL 9, virtualenv is only available on [EPEL](https://docs.fedoraproject.org/en-US/epel/).
* On RHEL 8, virtualenv is based on Python 3.6. It does not support creating 3.11/3.12 virtual environments.
Here's an example of creating Python 3.12 virtual environment and installing Synapse from PyPI.
Here's an example of creating Python 3.12 virtual environment and installing Synapse from PyPI.
```bash
mkdir -p ~/synapse
@@ -658,6 +656,10 @@ This also requires the optional `lxml` python dependency to be installed. This
in turn requires the `libxml2` library to be available - on Debian/Ubuntu this
means `apt-get install libxml2-dev`, or equivalent for your OS.
### Backups
Don't forget to take [backups](../usage/administration/backups.md) of your new server!
### Troubleshooting Installation
`pip` seems to leak *lots* of memory during installation. For instance, a Linux

View File

@@ -117,6 +117,51 @@ each upgrade are complete before moving on to the next upgrade, to avoid
stacking them up. You can monitor the currently running background updates with
[the Admin API](usage/administration/admin_api/background_updates.html#status).
# Upgrading to v1.120.0
## Removal of experimental MSC3886 feature
[MSC3886](https://github.com/matrix-org/matrix-spec-proposals/pull/3886)
has been closed (and will not enter the Matrix spec). As such, we are
removing the experimental support for it in this release.
The `experimental_features.msc3886_endpoint` configuration option has
been removed.
## Authenticated media is now enforced by default
The [`enable_authenticated_media`] configuration option now defaults to true.
This means that clients and remote (federated) homeservers now need to use
the authenticated media endpoints in order to download media from your
homeserver.
As an exception, existing media that was stored on the server prior to
this option changing to `true` will still be accessible over the
unauthenticated endpoints.
The matrix.org homeserver has already been running with this option enabled
since September 2024, so most common clients and homeservers should already
be compatible.
With that said, administrators who wish to disable this feature for broader
compatibility can still do so by manually configuring
`enable_authenticated_media: False`.
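For example, in `homeserver.yaml`:

```yaml
enable_authenticated_media: false
```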
[`enable_authenticated_media`]: usage/configuration/config_documentation.md#enable_authenticated_media
# Upgrading to v1.119.0
## Minimum supported Python version
The minimum supported Python version has been increased from v3.8 to v3.9.
You will need Python 3.9+ to run Synapse v1.119.0 (due out Nov 7th, 2024).
If you use current versions of the Matrix.org-distributed Docker images, no action is required.
Please note that support for Ubuntu `focal` was dropped as well since it uses Python 3.8.
# Upgrading to v1.111.0
## New worker endpoints for authenticated client and federation media

View File

@@ -255,6 +255,8 @@ line to `/etc/default/matrix-synapse`:
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
*Note*: You may need to set `PYTHONMALLOC=malloc` to ensure that `jemalloc` can accurately calculate memory usage. By default, Python uses its internal small-object allocator, which may interfere with jemalloc's ability to track memory consumption correctly. This could prevent the [cache_autotuning](../configuration/config_documentation.md#caches-and-associated-values) feature from functioning as expected, as the Python allocator may not reach the memory threshold set by `max_cache_memory_usage`, thus not triggering the cache eviction process.
This made a significant difference on Python 2.7 - it's unclear how
much of an improvement it provides on Python 3.x.

View File

@@ -0,0 +1,125 @@
# How to back up a Synapse homeserver
It is critical to maintain good backups of your server, to guard against
hardware failure as well as potential corruption due to bugs or administrator
error.
This page documents the things you will need to consider backing up as part of
a Synapse installation.
## Configuration files
Keep a copy of your configuration file (`homeserver.yaml`), as well as any
auxiliary config files it refers to such as the
[`log_config`](../configuration/config_documentation.md#log_config) file and the
[`app_service_config_files`](../configuration/config_documentation.md#app_service_config_files).
Often, all such config files will be kept in a single directory such as
`/etc/synapse`, which will make this easier.
## Server signing key
Your server has a [signing
key](../configuration/config_documentation.md#signing_key_path) which it uses
to sign events and outgoing federation requests. It is easiest to back it up
with your configuration files, but an alternative is to have Synapse create a
new signing key if you have to restore.
If you do decide to replace the signing key, you should add the old *public*
key to
[`old_signing_keys`](../configuration/config_documentation.md#old_signing_keys).
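For illustration (the key ID, key material and timestamp here are placeholders), such an entry takes the form:

```yaml
old_signing_keys:
  "ed25519:old_key_id": { key: "<base64-encoded old public key>", expired_ts: 1699999999999 }
```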
## Database
Synapse's support for SQLite is only suitable for testing purposes, so for the
purposes of this document, we'll assume you are using
[PostgreSQL](../../postgres.md).
A full discussion of backup strategies for PostgreSQL is out of scope for this
document; see the [PostgreSQL
documentation](https://www.postgresql.org/docs/current/backup.html) for
detailed information.
### Synapse-specific details
* Be very careful not to restore into a database that already has tables
present. At best, this will error; at worst, it will lead to subtle database
inconsistencies.
* The `e2e_one_time_keys_json` table should **not** be backed up, or if it is
backed up, should be
[`TRUNCATE`d](https://www.postgresql.org/docs/current/sql-truncate.html)
after restoring the database before Synapse is started.
[Background: restoring the database to an older backup can cause
used one-time-keys to be re-issued, causing subsequent [message decryption
errors](https://github.com/element-hq/element-meta/issues/2155). Clearing
all one-time-keys from the database ensures that this cannot happen, and
will prompt clients to generate and upload new one-time-keys.]
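For example, to clear the table after a restore (a sketch; adjust the database name to your deployment):

```shell
sudo -u postgres psql synapse -c "TRUNCATE e2e_one_time_keys_json;"
```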
### Quick and easy database backup and restore
Typically, the easiest solution is to use `pg_dump` to take a copy of the whole
database. We recommend `pg_dump`'s custom dump format, as it produces
significantly smaller backup files.
```shell
sudo -u postgres pg_dump -Fc --exclude-table-data e2e_one_time_keys_json synapse > synapse.dump
```
There is no need to stop Postgres or Synapse while `pg_dump` is running: it
will take a consistent snapshot of the database.
To restore, you will need to recreate the database as described in [Using
Postgres](../../postgres.md#set-up-database),
then load the dump into it with `pg_restore`:
```shell
sudo -u postgres createdb --encoding=UTF8 --locale=C --template=template0 --owner=synapse_user synapse
sudo -u postgres pg_restore -d synapse < synapse.dump
```
(If you forgot to exclude `e2e_one_time_keys_json` during `pg_dump`, remember
to connect to the new database and `TRUNCATE e2e_one_time_keys_json;` before
starting Synapse.)
To reiterate: do **not** restore a dump over an existing database.
Again, if you plan to run your homeserver at any sort of production level, we
recommend studying the PostgreSQL documentation on backup options.
## Media store
Synapse keeps a copy of media uploaded by users, including avatars and message
attachments, in its [Media
store](../configuration/config_documentation.md#media-store).
It is a directory on the local disk, containing the following directories:
* `local_content`: this is content uploaded by your local users. As a general
rule, you should back this up: it may represent the only copy of those
media files anywhere in the federation, and if they are lost, users will
see errors when viewing user or room avatars, and messages with attachments.
* `local_thumbnails`: "thumbnails" of images uploaded by your users. If
[`dynamic_thumbnails`](../configuration/config_documentation.md#dynamic_thumbnails)
is enabled, these will be regenerated if they are removed from the disk, and
there is therefore no need to back them up.
If `dynamic_thumbnails` is *not* enabled (the default): although these can
theoretically be regenerated from `local_content`, there is no tooling to do
so. We recommend that these are backed up too.
* `remote_content`: this is a cache of content that was uploaded by a user on
another server, and has since been requested by a user on your own server.
Typically there is no need to back up this directory: if a file in this directory
is removed, Synapse will attempt to fetch it again from the remote
server.
* `remote_thumbnails`: thumbnails of images uploaded by users on other
servers. As with `remote_content`, there is normally no need to back this
up.
* `url_cache`, `url_cache_thumbnails`: temporary caches of files downloaded
by the [URL previews](../../setup/installation.md#url-previews) feature.
These do not need to be backed up.
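Putting that together, a minimal backup of just the directories worth keeping might look like this sketch (the media store path and destination are placeholders; check your [`media_store_path`](../configuration/config_documentation.md#media_store_path) setting):

```shell
rsync -a /var/lib/synapse/media_store/local_content \
         /var/lib/synapse/media_store/local_thumbnails \
         backup-host:/backups/synapse/media_store/
```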

View File

@@ -761,6 +761,19 @@ email:
password_reset: "[%(server_name)s] Password reset"
email_validation: "[%(server_name)s] Validate your email"
```
---
### `max_event_delay_duration`
The maximum allowed duration by which sent events can be delayed, as per
[MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140).
Must be a positive value if set.
Defaults to no duration (`null`), which disallows sending delayed events.
Example configuration:
```yaml
max_event_delay_duration: 24h
```
## Homeserver blocking
Useful options for Synapse admins.
@@ -1421,7 +1434,7 @@ number of entries that can be stored.
Please see the [Config Conventions](#config-conventions) for information on how to specify memory size and cache expiry
durations.
* `max_cache_memory_usage` sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
They will continue to be evicted until the memory usage drops below the `target_memory_usage`, set in
They will continue to be evicted until the memory usage drops below the `target_cache_memory_usage`, set in
the setting below, or until the `min_cache_ttl` is hit. There is no default value for this option.
* `target_cache_memory_usage` sets a rough target for the desired memory usage of the caches. There is no default value
for this option.
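As a sketch of how these sub-options fit together (the values are illustrative only, and this assumes they sit under the `cache_autotuning` sub-option of `caches` described in this section):

```yaml
caches:
  cache_autotuning:
    max_cache_memory_usage: 1024M
    target_cache_memory_usage: 758M
    min_cache_ttl: 5m
```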
@@ -1874,12 +1887,33 @@ Config options related to Synapse's media store.
When set to true, all subsequent media uploads will be marked as authenticated, and will not be available over legacy
unauthenticated media endpoints (`/_matrix/media/(r0|v3|v1)/download` and `/_matrix/media/(r0|v3|v1)/thumbnail`) - requests for authenticated media over these endpoints will result in a 404. All media, including authenticated media, will be available over the authenticated media endpoints `_matrix/client/v1/media/download` and `_matrix/client/v1/media/thumbnail`. Media uploaded prior to setting this option to true will still be available over the legacy endpoints. Note if the setting is switched to false
after enabling, media marked as authenticated will be available over legacy endpoints. Defaults to false, but
this will change to true in a future Synapse release.
after enabling, media marked as authenticated will be available over legacy endpoints. Defaults to true (previously false). In a future release of Synapse, this option will be removed and become always-on.
In all cases, authenticated requests to download media will succeed, but for unauthenticated requests, this
case-by-case breakdown describes whether media downloads are permitted:
* `enable_authenticated_media = False`:
* unauthenticated client or homeserver requesting local media: allowed
* unauthenticated client or homeserver requesting remote media: allowed as long as the media is in the cache,
or as long as the remote homeserver does not require authentication to retrieve the media
* `enable_authenticated_media = True`:
* unauthenticated client or homeserver requesting local media:
allowed if the media was stored on the server whilst `enable_authenticated_media` was `False` (or in a previous Synapse version where this option did not exist);
otherwise denied.
* unauthenticated client or homeserver requesting remote media: the same as for local media;
allowed if the media was stored on the server whilst `enable_authenticated_media` was `False` (or in a previous Synapse version where this option did not exist);
otherwise denied.
It is especially notable that media downloaded before this option existed (in older Synapse versions), or whilst this option was set to `False`,
will perpetually be available over the legacy, unauthenticated endpoint, even after this option is set to `True`.
This is for backwards compatibility with older clients and homeservers that do not yet support requesting authenticated media;
those older clients or homeservers will not be cut off from media they can already see.
_Changed in Synapse 1.120:_ This option now defaults to `True` when not set, whereas before this version it defaulted to `False`.
Example configuration:
```yaml
enable_authenticated_media: true
enable_authenticated_media: false
```
---
### `enable_media_repo`
@@ -3095,6 +3129,15 @@ it was last used.
It is possible to build an entry from an old `signing.key` file using the
`export_signing_key` script which is provided with synapse.
If you have lost the private key file, you can ask another server you trust to
tell you the public keys it has seen from your server. To fetch the keys from
`matrix.org`, try something like:
```
curl https://matrix-federation.matrix.org/_matrix/key/v2/query/myserver.example.com |
jq '.server_keys | map(.verify_keys) | add'
```
Example configuration:
```yaml
old_signing_keys:
@@ -3709,6 +3752,8 @@ Additional sub-options for this setting include:
Required if `enabled` is set to true.
* `subject_claim`: Name of the claim containing a unique identifier for the user.
Optional, defaults to `sub`.
* `display_name_claim`: Name of the claim containing the display name for the user. Optional.
If provided, the display name will be set to the value of this claim upon first login.
* `issuer`: The issuer to validate the "iss" claim against. Optional. If provided the
"iss" claim will be required and validated for all JSON web tokens.
* `audiences`: A list of audiences to validate the "aud" claim against. Optional.
@@ -3723,6 +3768,7 @@ jwt_config:
secret: "provided-by-your-issuer"
algorithm: "provided-by-your-issuer"
subject_claim: "name_of_claim"
display_name_claim: "name_of_claim"
issuer: "provided-by-your-issuer"
audiences:
- "provided-by-your-issuer"
@@ -4357,6 +4403,12 @@ a `federation_sender_instances` map. Doing so will remove handling of this funct
the main process. Multiple workers can be added to this map, in which case the work is
balanced across them.
The way that the load balancing works is that any outbound federation request will be assigned
to a federation sender worker based on the hash of the destination server name. This
means that all requests being sent to the same destination will be processed by the same
worker instance. Multiple `federation_sender_instances` are useful if the homeserver
federates with a large number of remote servers.
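For instance (the worker names are placeholders), the shared map might look like:

```yaml
federation_sender_instances:
  - federation_sender1
  - federation_sender2
```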
This configuration setting must be shared between all workers handling federation
sending, and if changed, all federation sender workers must be stopped at the same time
and then started, to ensure that all instances are running with the same config (otherwise
@@ -4505,6 +4557,9 @@ This setting has the following sub-options:
* `path`: The full path to a local Unix socket file. **If this is used, `host` and
`port` are ignored.** Defaults to `/tmp/redis.sock`
* `password`: Optional password if configured on the Redis instance.
* `password_path`: Alternative to `password`, reading the password from an
external file. The file should be a plain text file, containing only the
password. Synapse reads the password from the given file once at startup.
* `dbid`: Optional redis dbid if needs to connect to specific redis logical db.
* `use_tls`: Whether to use tls connection. Defaults to false.
* `certificate_file`: Optional path to the certificate file
@@ -4518,13 +4573,16 @@ This setting has the following sub-options:
_Changed in Synapse 1.85.0: Added path option to use a local Unix socket_
_Changed in Synapse 1.116.0: Added password\_path_
Example configuration:
```yaml
redis:
enabled: true
host: localhost
port: 6379
password: <secret_password>
password_path: <path_to_the_password_file>
# OR password: <secret_password>
dbid: <dbid>
#use_tls: True
#certificate_file: <path_to_the_certificate_file>
@@ -4702,7 +4760,7 @@ This setting has the following sub-options:
* `only_for_direct_messages`: Whether invites should be automatically accepted for all room types, or only
for direct messages. Defaults to false.
* `only_from_local_users`: Whether to only automatically accept invites from users on this homeserver. Defaults to false.
* `worker_to_run_on`: Which worker to run this module on. This must match
* `worker_to_run_on`: Which worker to run this module on. This must match
the "worker_name". If not set or `null`, invites will be accepted on the
main process.

View File

@@ -177,11 +177,11 @@ The following applies to Synapse installations that have been installed from sou
You can start the main Synapse process with Poetry by running the following command:
```console
poetry run synapse_homeserver --config-file [your homeserver.yaml]
poetry run synapse_homeserver --config-path [your homeserver.yaml]
```
For worker setups, you can run the following command
```console
poetry run synapse_worker --config-file [your homeserver.yaml] --config-file [your worker.yaml]
poetry run synapse_worker --config-path [your homeserver.yaml] --config-path [your worker.yaml]
```
## Available worker applications
@@ -290,6 +290,7 @@ information.
Additionally, the following REST endpoints can be handled for GET requests:
^/_matrix/client/(api/v1|r0|v3|unstable)/pushrules/
^/_matrix/client/unstable/org.matrix.msc4140/delayed_events
Pagination requests can also be handled, but all requests for a given
room must be routed to the same instance. Additionally, care must be taken to

flake.lock generated
View File

@@ -56,24 +56,6 @@
"type": "github"
}
},
"flake-utils_2": {
"inputs": {
"systems": "systems_2"
},
"locked": {
"lastModified": 1681202837,
"narHash": "sha256-H+Rh19JDwRtpVPAWp64F+rlEtxUWBAQW28eAi3SRSzg=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "cfacdce06f30d2b68473a46042957675eebb3401",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"gitignore": {
"inputs": {
"nixpkgs": [
@@ -186,27 +168,27 @@
},
"nixpkgs_2": {
"locked": {
"lastModified": 1690535733,
"narHash": "sha256-WgjUPscQOw3cB8yySDGlyzo6cZNihnRzUwE9kadv/5I=",
"lastModified": 1729265718,
"narHash": "sha256-4HQI+6LsO3kpWTYuVGIzhJs1cetFcwT7quWCk/6rqeo=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "8cacc05fbfffeaab910e8c2c9e2a7c6b32ce881a",
"rev": "ccc0c2126893dd20963580b6478d1a10a4512185",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "master",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_3": {
"locked": {
"lastModified": 1681358109,
"narHash": "sha256-eKyxW4OohHQx9Urxi7TQlFBTDWII+F+x2hklDOQPB50=",
"lastModified": 1728538411,
"narHash": "sha256-f0SBJz1eZ2yOuKUr5CA9BHULGXVSn6miBuUWdTyhUhU=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "96ba1c52e54e74c3197f4d43026b3f3d92e83ff9",
"rev": "b69de56fac8c2b6f8fd27f2eca01dcda8e0a4221",
"type": "github"
},
"original": {
@@ -249,20 +231,19 @@
"devenv": "devenv",
"nixpkgs": "nixpkgs_2",
"rust-overlay": "rust-overlay",
"systems": "systems_3"
"systems": "systems_2"
}
},
"rust-overlay": {
"inputs": {
"flake-utils": "flake-utils_2",
"nixpkgs": "nixpkgs_3"
},
"locked": {
"lastModified": 1693966243,
"narHash": "sha256-a2CA1aMIPE67JWSVIGoGtD3EGlFdK9+OlJQs0FOWCKY=",
"lastModified": 1731897198,
"narHash": "sha256-Ou7vLETSKwmE/HRQz4cImXXJBr/k9gp4J4z/PF8LzTE=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "a8b4bb4cbb744baaabc3e69099f352f99164e2c1",
"rev": "0be641045af6d8666c11c2c40e45ffc9667839b5",
"type": "github"
},
"original": {
@@ -300,21 +281,6 @@
"repo": "default",
"type": "github"
}
},
"systems_3": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",

View File

@@ -3,13 +3,13 @@
# (https://github.com/matrix-org/complement) Matrix homeserver test suites are also
# installed automatically.
#
# You must have already installed Nix (https://nixos.org) on your system to use this.
# Nix can be installed on Linux or MacOS; NixOS is not required. Windows is not
# directly supported, but Nix can be installed inside of WSL2 or even Docker
# You must have already installed Nix (https://nixos.org/download/) on your system to use this.
# Nix can be installed on any Linux distribution or MacOS; NixOS is not required.
# Windows is not directly supported, but Nix can be installed inside of WSL2 or even Docker
# containers. Please refer to https://nixos.org/download for details.
#
# You must also enable support for flakes in Nix. See the following for how to
# do so permanently: https://nixos.wiki/wiki/Flakes#Enable_flakes
# do so permanently: https://wiki.nixos.org/wiki/Flakes#Other_Distros,_without_Home-Manager
#
# Be warned: you'll need over 3.75 GB of free space to download all the dependencies.
#
@@ -20,7 +20,7 @@
# locally from "services", such as PostgreSQL and Redis.
#
# You should now be dropped into a new shell with all programs and dependencies
# availabile to you!
# available to you!
#
# You can start up pre-configured local Synapse, PostgreSQL and Redis instances by
# running: `devenv up`. To stop them, use Ctrl-C.
@@ -39,9 +39,9 @@
{
inputs = {
# Use the master/unstable branch of nixpkgs. Used to fetch the latest
# Use the rolling/unstable branch of nixpkgs. Used to fetch the latest
# available versions of packages.
nixpkgs.url = "github:NixOS/nixpkgs/master";
nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
# Output a development shell for x86_64/aarch64 Linux/Darwin (MacOS).
systems.url = "github:nix-systems/default";
# A development environment manager built on Nix. See https://devenv.sh.
@@ -50,7 +50,7 @@
rust-overlay.url = "github:oxalica/rust-overlay";
};
outputs = { self, nixpkgs, devenv, systems, rust-overlay, ... } @ inputs:
outputs = { nixpkgs, devenv, systems, rust-overlay, ... } @ inputs:
let
forEachSystem = nixpkgs.lib.genAttrs (import systems);
in {
@@ -82,7 +82,7 @@
#
# NOTE: We currently need to set the Rust version unnecessarily high
# in order to work around https://github.com/matrix-org/synapse/issues/15939
(rust-bin.stable."1.71.1".default.override {
(rust-bin.stable."1.82.0".default.override {
# Additionally install the "rust-src" extension to allow diving into the
# Rust source code in an IDE (rust-analyzer will also make use of it).
extensions = [ "rust-src" ];
@@ -126,7 +126,7 @@
# Automatically activate the poetry virtualenv upon entering the shell.
languages.python.poetry.activate.enable = true;
# Install all extra Python dependencies; this is needed to run the unit
# tests and utilitise all Synapse features.
# tests and utilise all Synapse features.
languages.python.poetry.install.arguments = ["--extras all"];
# Install the 'matrix-synapse' package from the local checkout.
languages.python.poetry.install.installRootPackage = true;
@@ -163,8 +163,8 @@
# Create a postgres user called 'synapse_user' which has ownership
# over the 'synapse' database.
services.postgres.initialScript = ''
CREATE USER synapse_user;
ALTER DATABASE synapse OWNER TO synapse_user;
CREATE USER synapse_user;
ALTER DATABASE synapse OWNER TO synapse_user;
'';
# Redis is needed in order to run Synapse in worker mode.
@@ -205,7 +205,7 @@
# corresponding Nix packages on https://search.nixos.org/packages.
#
# This was done until `./install-deps.pl --dryrun` produced no output.
env.PERL5LIB = "${with pkgs.perl536Packages; makePerlPath [
env.PERL5LIB = "${with pkgs.perl538Packages; makePerlPath [
DBI
ClassMethodModifiers
CryptEd25519

View File

@@ -26,7 +26,7 @@ strict_equality = True
# Run mypy type checking with the minimum supported Python version to catch new usage
# that isn't backwards-compatible (types, overloads, etc).
python_version = 3.8
python_version = 3.9
files =
docker/,

poetry.lock generated

File diff suppressed because it is too large

View File

@@ -36,7 +36,7 @@
[tool.ruff]
line-length = 88
target-version = "py38"
target-version = "py39"
[tool.ruff.lint]
# See https://beta.ruff.rs/docs/rules/#error-e
@@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.115.0rc2"
version = "1.120.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"
@@ -155,7 +155,7 @@ synapse_review_recent_signups = "synapse._scripts.review_recent_signups:main"
update_synapse_database = "synapse._scripts.update_synapse_database:main"
[tool.poetry.dependencies]
python = "^3.8.0"
python = "^3.9.0"
# Mandatory Dependencies
# ----------------------
@@ -178,7 +178,7 @@ Twisted = {extras = ["tls"], version = ">=18.9.0"}
treq = ">=15.1"
# Twisted has required pyopenssl 16.0 since about Twisted 16.6.
pyOpenSSL = ">=16.0.0"
PyYAML = ">=3.13"
PyYAML = ">=5.3"
pyasn1 = ">=0.1.9"
pyasn1-modules = ">=0.0.7"
bcrypt = ">=3.1.7"
@@ -241,7 +241,7 @@ authlib = { version = ">=0.15.1", optional = true }
# `contrib/systemd/log_config.yaml`.
# Note: systemd-python 231 appears to have been yanked from pypi
systemd-python = { version = ">=231", optional = true }
lxml = { version = ">=4.2.0", optional = true }
lxml = { version = ">=4.5.2", optional = true }
sentry-sdk = { version = ">=0.7.2", optional = true }
opentracing = { version = ">=2.2.0", optional = true }
jaeger-client = { version = ">=4.0.0", optional = true }
@@ -320,7 +320,7 @@ all = [
# failing on new releases. Keeping lower bounds loose here means that dependabot
# can bump versions without having to update the content-hash in the lockfile.
# This helps prevent merge conflicts when running a batch of dependabot updates.
ruff = "0.6.5"
ruff = "0.7.3"
# Type checking only works with the pydantic.v1 compat module from pydantic v2
pydantic = "^2"
@@ -370,7 +370,7 @@ tomli = ">=1.2.3"
# runtime errors caused by build system changes.
# We are happy to raise these upper bounds upon request,
# provided we check that it's safe to do so (i.e. that CI passes).
requires = ["poetry-core>=1.1.0,<=1.9.0", "setuptools_rust>=1.3,<=1.8.1"]
requires = ["poetry-core>=1.1.0,<=1.9.1", "setuptools_rust>=1.3,<=1.10.2"]
build-backend = "poetry.core.masonry.api"
@@ -378,13 +378,13 @@ build-backend = "poetry.core.masonry.api"
# Skip unsupported platforms (by us or by Rust).
# See https://cibuildwheel.readthedocs.io/en/stable/options/#build-skip for the list of build targets.
# We skip:
# - CPython 3.6 and 3.7: EOLed
# - PyPy 3.7: we only support Python 3.8+
# - CPython 3.6, 3.7 and 3.8: EOLed
# - PyPy 3.7 and 3.8: we only support Python 3.9+
# - musllinux i686: excluded to reduce number of wheels we build.
# c.f. https://github.com/matrix-org/synapse/pull/12595#discussion_r963107677
# - PyPy on Aarch64 and musllinux on aarch64: too slow to build.
# c.f. https://github.com/matrix-org/synapse/pull/14259
skip = "cp36* cp37* pp37* *-musllinux_i686 pp*aarch64 *-musllinux_aarch64"
skip = "cp36* cp37* cp38* pp37* pp38* *-musllinux_i686 pp*aarch64 *-musllinux_aarch64"
# We need a rust compiler
before-all = "curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain stable -y --profile minimal"

View File

@@ -30,14 +30,14 @@ http = "1.1.0"
lazy_static = "1.4.0"
log = "0.4.17"
mime = "0.3.17"
pyo3 = { version = "0.21.0", features = [
pyo3 = { version = "0.23.2", features = [
"macros",
"anyhow",
"abi3",
"abi3-py38",
] }
pyo3-log = "0.10.0"
pythonize = "0.21.0"
pyo3-log = "0.12.0"
pythonize = "0.23.0"
regex = "1.6.0"
sha2 = "0.10.8"
serde = { version = "1.0.144", features = ["derive"] }

View File

@@ -60,6 +60,7 @@ fn bench_match_exact(b: &mut Bencher) {
true,
vec![],
false,
false,
)
.unwrap();
@@ -105,6 +106,7 @@ fn bench_match_word(b: &mut Bencher) {
true,
vec![],
false,
false,
)
.unwrap();
@@ -150,6 +152,7 @@ fn bench_match_word_miss(b: &mut Bencher) {
true,
vec![],
false,
false,
)
.unwrap();
@@ -195,6 +198,7 @@ fn bench_eval_message(b: &mut Bencher) {
true,
vec![],
false,
false,
)
.unwrap();
@@ -205,6 +209,7 @@ fn bench_eval_message(b: &mut Bencher) {
false,
false,
false,
false,
);
b.iter(|| eval.run(&rules, Some("bob"), Some("person")));

View File

@@ -32,14 +32,14 @@ use crate::push::utils::{glob_to_regex, GlobMatchType};
/// Called when registering modules with python.
pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
let child_module = PyModule::new_bound(py, "acl")?;
let child_module = PyModule::new(py, "acl")?;
child_module.add_class::<ServerAclEvaluator>()?;
m.add_submodule(&child_module)?;
// We need to manually add the module to sys.modules to make `from
// synapse.synapse_rust import acl` work.
py.import_bound("sys")?
py.import("sys")?
.getattr("modules")?
.set_item("synapse.synapse_rust.acl", child_module)?;

rust/src/events/filter.rs Normal file
View File

@@ -0,0 +1,107 @@
/*
* This file is licensed under the Affero General Public License (AGPL) version 3.
*
* Copyright (C) 2024 New Vector, Ltd
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* See the GNU Affero General Public License for more details:
* <https://www.gnu.org/licenses/agpl-3.0.html>.
*/
use std::collections::HashMap;
use pyo3::{exceptions::PyValueError, pyfunction, PyResult};
use crate::{
identifier::UserID,
matrix_const::{
HISTORY_VISIBILITY_INVITED, HISTORY_VISIBILITY_JOINED, MEMBERSHIP_INVITE, MEMBERSHIP_JOIN,
},
};
#[pyfunction(name = "event_visible_to_server")]
pub fn event_visible_to_server_py(
sender: String,
target_server_name: String,
history_visibility: String,
erased_senders: HashMap<String, bool>,
partial_state_invisible: bool,
memberships: Vec<(String, String)>, // (state_key, membership)
) -> PyResult<bool> {
event_visible_to_server(
sender,
target_server_name,
history_visibility,
erased_senders,
partial_state_invisible,
memberships,
)
.map_err(|e| PyValueError::new_err(format!("{e}")))
}
/// Return whether the target server is allowed to see the event.
///
/// For a fully stated room, the target server is allowed to see an event E if:
/// - the state at E has world readable or shared history vis, OR
/// - the state at E says that the target server is in the room.
///
/// For a partially stated room, the target server is allowed to see E if:
/// - E was created by this homeserver, AND:
/// - the partial state at E has world readable or shared history vis, OR
/// - the partial state at E says that the target server is in the room.
pub fn event_visible_to_server(
sender: String,
target_server_name: String,
history_visibility: String,
erased_senders: HashMap<String, bool>,
partial_state_invisible: bool,
memberships: Vec<(String, String)>, // (state_key, membership)
) -> anyhow::Result<bool> {
if let Some(&erased) = erased_senders.get(&sender) {
if erased {
return Ok(false);
}
}
if partial_state_invisible {
return Ok(false);
}
if history_visibility != HISTORY_VISIBILITY_INVITED
&& history_visibility != HISTORY_VISIBILITY_JOINED
{
return Ok(true);
}
let mut visible = false;
for (state_key, membership) in memberships {
let state_key = UserID::try_from(state_key.as_ref())
.map_err(|e| anyhow::anyhow!(format!("invalid user_id ({state_key}): {e}")))?;
if state_key.server_name() != target_server_name {
return Err(anyhow::anyhow!(
"state_key.server_name ({}) does not match target_server_name ({target_server_name})",
state_key.server_name()
));
}
match membership.as_str() {
MEMBERSHIP_INVITE => {
if history_visibility == HISTORY_VISIBILITY_INVITED {
visible = true;
break;
}
}
MEMBERSHIP_JOIN => {
visible = true;
break;
}
_ => continue,
}
}
Ok(visible)
}
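To make the visibility rules above concrete, here is a minimal illustrative test (not part of the diff) exercising the shared-history fast path:

```rust
#[cfg(test)]
mod tests {
    use super::event_visible_to_server;
    use std::collections::HashMap;

    #[test]
    fn shared_history_is_visible_to_any_server() {
        // With "shared" history visibility, no erased sender and a fully
        // stated room, the function returns Ok(true) before membership
        // state is even consulted.
        let visible = event_visible_to_server(
            "@alice:example.com".to_string(),
            "other.example.org".to_string(),
            "shared".to_string(),
            HashMap::new(), // no erased senders
            false,          // room is fully stated
            vec![],         // memberships are irrelevant on this path
        )
        .unwrap();
        assert!(visible);
    }
}
```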

View File

@@ -41,9 +41,11 @@ use pyo3::{
pybacked::PyBackedStr,
pyclass, pymethods,
types::{PyAnyMethods, PyDict, PyDictMethods, PyString},
Bound, IntoPy, PyAny, PyObject, PyResult, Python,
Bound, IntoPyObject, PyAny, PyObject, PyResult, Python,
};
use crate::UnwrapInfallible;
/// Definitions of the various fields of the internal metadata.
#[derive(Clone)]
enum EventInternalMetadataData {
@@ -60,31 +62,59 @@ enum EventInternalMetadataData {
impl EventInternalMetadataData {
/// Convert the field to its name and python object.
fn to_python_pair<'a>(&self, py: Python<'a>) -> (&'a Bound<'a, PyString>, PyObject) {
fn to_python_pair<'a>(&self, py: Python<'a>) -> (&'a Bound<'a, PyString>, Bound<'a, PyAny>) {
match self {
EventInternalMetadataData::OutOfBandMembership(o) => {
(pyo3::intern!(py, "out_of_band_membership"), o.into_py(py))
}
EventInternalMetadataData::SendOnBehalfOf(o) => {
(pyo3::intern!(py, "send_on_behalf_of"), o.into_py(py))
}
EventInternalMetadataData::RecheckRedaction(o) => {
(pyo3::intern!(py, "recheck_redaction"), o.into_py(py))
}
EventInternalMetadataData::SoftFailed(o) => {
(pyo3::intern!(py, "soft_failed"), o.into_py(py))
}
EventInternalMetadataData::ProactivelySend(o) => {
(pyo3::intern!(py, "proactively_send"), o.into_py(py))
}
EventInternalMetadataData::Redacted(o) => {
(pyo3::intern!(py, "redacted"), o.into_py(py))
}
EventInternalMetadataData::TxnId(o) => (pyo3::intern!(py, "txn_id"), o.into_py(py)),
EventInternalMetadataData::TokenId(o) => (pyo3::intern!(py, "token_id"), o.into_py(py)),
EventInternalMetadataData::DeviceId(o) => {
(pyo3::intern!(py, "device_id"), o.into_py(py))
}
EventInternalMetadataData::OutOfBandMembership(o) => (
pyo3::intern!(py, "out_of_band_membership"),
o.into_pyobject(py)
.unwrap_infallible()
.to_owned()
.into_any(),
),
EventInternalMetadataData::SendOnBehalfOf(o) => (
pyo3::intern!(py, "send_on_behalf_of"),
o.into_pyobject(py).unwrap_infallible().into_any(),
),
EventInternalMetadataData::RecheckRedaction(o) => (
pyo3::intern!(py, "recheck_redaction"),
o.into_pyobject(py)
.unwrap_infallible()
.to_owned()
.into_any(),
),
EventInternalMetadataData::SoftFailed(o) => (
pyo3::intern!(py, "soft_failed"),
o.into_pyobject(py)
.unwrap_infallible()
.to_owned()
.into_any(),
),
EventInternalMetadataData::ProactivelySend(o) => (
pyo3::intern!(py, "proactively_send"),
o.into_pyobject(py)
.unwrap_infallible()
.to_owned()
.into_any(),
),
EventInternalMetadataData::Redacted(o) => (
pyo3::intern!(py, "redacted"),
o.into_pyobject(py)
.unwrap_infallible()
.to_owned()
.into_any(),
),
EventInternalMetadataData::TxnId(o) => (
pyo3::intern!(py, "txn_id"),
o.into_pyobject(py).unwrap_infallible().into_any(),
),
EventInternalMetadataData::TokenId(o) => (
pyo3::intern!(py, "token_id"),
o.into_pyobject(py).unwrap_infallible().into_any(),
),
EventInternalMetadataData::DeviceId(o) => (
pyo3::intern!(py, "device_id"),
o.into_pyobject(py).unwrap_infallible().into_any(),
),
}
}
@@ -247,7 +277,7 @@ impl EventInternalMetadata {
///
/// Note that `outlier` and `stream_ordering` are stored in separate columns so are not returned here.
fn get_dict(&self, py: Python<'_>) -> PyResult<PyObject> {
let dict = PyDict::new_bound(py);
let dict = PyDict::new(py);
for entry in &self.data {
let (key, value) = entry.to_python_pair(py);

View File

@@ -22,21 +22,23 @@
use pyo3::{
types::{PyAnyMethods, PyModule, PyModuleMethods},
Bound, PyResult, Python,
wrap_pyfunction, Bound, PyResult, Python,
};
pub mod filter;
mod internal_metadata;
/// Called when registering modules with python.
pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
let child_module = PyModule::new_bound(py, "events")?;
let child_module = PyModule::new(py, "events")?;
child_module.add_class::<internal_metadata::EventInternalMetadata>()?;
child_module.add_function(wrap_pyfunction!(filter::event_visible_to_server_py, m)?)?;
m.add_submodule(&child_module)?;
// We need to manually add the module to sys.modules to make `from
// synapse.synapse_rust import events` work.
py.import_bound("sys")?
py.import("sys")?
.getattr("modules")?
.set_item("synapse.synapse_rust.events", child_module)?;

View File

@@ -70,7 +70,7 @@ pub fn http_request_from_twisted(request: &Bound<'_, PyAny>) -> PyResult<Request
let headers_iter = request
.getattr("requestHeaders")?
.call_method0("getAllRawHeaders")?
.iter()?;
.try_iter()?;
for header in headers_iter {
let header = header?;


rust/src/identifier.rs Normal file
View File

@@ -0,0 +1,86 @@
/*
* This file is licensed under the Affero General Public License (AGPL) version 3.
*
* Copyright (C) 2024 New Vector, Ltd
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* See the GNU Affero General Public License for more details:
* <https://www.gnu.org/licenses/agpl-3.0.html>.
*/
//! # Matrix Identifiers
//!
//! This module contains definitions and utilities for working with matrix identifiers.
use std::{fmt, ops::Deref};
/// Errors that can occur when parsing a matrix identifier.
#[derive(Clone, Debug, PartialEq)]
pub enum IdentifierError {
IncorrectSigil,
MissingColon,
}
impl fmt::Display for IdentifierError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{:?}", self)
}
}
/// A Matrix user_id.
#[derive(Clone, Debug, PartialEq)]
pub struct UserID(String);
impl UserID {
/// Returns the `localpart` of the user_id.
pub fn localpart(&self) -> &str {
&self[1..self.colon_pos()]
}
/// Returns the `server_name` / `domain` of the user_id.
pub fn server_name(&self) -> &str {
&self[self.colon_pos() + 1..]
}
/// Returns the position of the ':' inside of the user_id.
/// Used when splitting the user_id into its respective parts.
fn colon_pos(&self) -> usize {
self.find(':').unwrap()
}
}
impl TryFrom<&str> for UserID {
type Error = IdentifierError;
/// Will try creating a `UserID` from the provided `&str`.
/// Can fail if the user_id is incorrectly formatted.
fn try_from(s: &str) -> Result<Self, Self::Error> {
if !s.starts_with('@') {
return Err(IdentifierError::IncorrectSigil);
}
if s.find(':').is_none() {
return Err(IdentifierError::MissingColon);
}
Ok(UserID(s.to_string()))
}
}
impl Deref for UserID {
type Target = str;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl fmt::Display for UserID {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", self.0)
}
}
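For reference, a hedged Python mirror of the validation and splitting rules that `UserID::try_from`, `localpart` and `server_name` implement above; the Rust type remains the actual implementation:

def parse_user_id(s: str) -> "tuple[str, str]":
    # Same rules as UserID::try_from: '@' sigil, then at least one ':'.
    if not s.startswith("@"):
        raise ValueError("IncorrectSigil")
    colon = s.find(":")
    if colon == -1:
        raise ValueError("MissingColon")
    # localpart() / server_name() split on the first colon.
    return s[1:colon], s[colon + 1:]

assert parse_user_id("@alice:example.org") == ("alice", "example.org")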

View File

@@ -1,3 +1,5 @@
use std::convert::Infallible;
use lazy_static::lazy_static;
use pyo3::prelude::*;
use pyo3_log::ResetHandle;
@@ -6,6 +8,8 @@ pub mod acl;
pub mod errors;
pub mod events;
pub mod http;
pub mod identifier;
pub mod matrix_const;
pub mod push;
pub mod rendezvous;
@@ -50,3 +54,16 @@ fn synapse_rust(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
Ok(())
}
pub trait UnwrapInfallible<T> {
fn unwrap_infallible(self) -> T;
}
impl<T> UnwrapInfallible<T> for Result<T, Infallible> {
fn unwrap_infallible(self) -> T {
match self {
Ok(val) => val,
Err(never) => match never {},
}
}
}

rust/src/matrix_const.rs Normal file
View File

@@ -0,0 +1,28 @@
/*
* This file is licensed under the Affero General Public License (AGPL) version 3.
*
* Copyright (C) 2024 New Vector, Ltd
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* See the GNU Affero General Public License for more details:
* <https://www.gnu.org/licenses/agpl-3.0.html>.
*/
//! # Matrix Constants
//!
//! This module contains definitions for constant values described by the matrix specification.
pub const HISTORY_VISIBILITY_WORLD_READABLE: &str = "world_readable";
pub const HISTORY_VISIBILITY_SHARED: &str = "shared";
pub const HISTORY_VISIBILITY_INVITED: &str = "invited";
pub const HISTORY_VISIBILITY_JOINED: &str = "joined";
pub const MEMBERSHIP_BAN: &str = "ban";
pub const MEMBERSHIP_LEAVE: &str = "leave";
pub const MEMBERSHIP_KNOCK: &str = "knock";
pub const MEMBERSHIP_INVITE: &str = "invite";
pub const MEMBERSHIP_JOIN: &str = "join";

View File

@@ -81,7 +81,7 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
))]),
actions: Cow::Borrowed(&[Action::Notify]),
default: true,
default_enabled: false,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.suppress_notices"),

View File

@@ -105,6 +105,9 @@ pub struct PushRuleEvaluator {
/// If MSC3931 (room version feature flags) is enabled. Usually controlled by the same
/// flag as MSC1767 (extensible events core).
msc3931_enabled: bool,
/// If MSC4210 (remove legacy mentions) is enabled.
msc4210_enabled: bool,
}
#[pymethods]
@@ -122,6 +125,7 @@ impl PushRuleEvaluator {
related_event_match_enabled,
room_version_feature_flags,
msc3931_enabled,
msc4210_enabled,
))]
pub fn py_new(
flattened_keys: BTreeMap<String, JsonValue>,
@@ -133,6 +137,7 @@ impl PushRuleEvaluator {
related_event_match_enabled: bool,
room_version_feature_flags: Vec<String>,
msc3931_enabled: bool,
msc4210_enabled: bool,
) -> Result<Self, Error> {
let body = match flattened_keys.get("content.body") {
Some(JsonValue::Value(SimpleJsonValue::Str(s))) => s.clone().into_owned(),
@@ -150,6 +155,7 @@ impl PushRuleEvaluator {
related_event_match_enabled,
room_version_feature_flags,
msc3931_enabled,
msc4210_enabled,
})
}
@@ -161,6 +167,7 @@ impl PushRuleEvaluator {
///
/// Returns the set of actions, if any, that match (filtering out any
/// `dont_notify` and `coalesce` actions).
#[pyo3(signature = (push_rules, user_id=None, display_name=None))]
pub fn run(
&self,
push_rules: &FilteredPushRules,
@@ -176,7 +183,8 @@ impl PushRuleEvaluator {
// For backwards-compatibility the legacy mention rules are disabled
// if the event contains the 'm.mentions' property.
if self.has_mentions
// Additionally, MSC4210 always disables the legacy rules.
if (self.has_mentions || self.msc4210_enabled)
&& (rule_id == "global/override/.m.rule.contains_display_name"
|| rule_id == "global/content/.m.rule.contains_user_name"
|| rule_id == "global/override/.m.rule.roomnotif")
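A compact Python sketch of the suppression predicate this hunk implements: the three legacy mention rules are skipped when the event carries `m.mentions`, and unconditionally once MSC4210 is enabled.

LEGACY_MENTION_RULES = {
    "global/override/.m.rule.contains_display_name",
    "global/content/.m.rule.contains_user_name",
    "global/override/.m.rule.roomnotif",
}

def skip_legacy_mention_rule(rule_id: str, has_mentions: bool, msc4210_enabled: bool) -> bool:
    # Mirrors the condition in PushRuleEvaluator::run above.
    return (has_mentions or msc4210_enabled) and rule_id in LEGACY_MENTION_RULES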
@@ -229,6 +237,7 @@ impl PushRuleEvaluator {
}
/// Check if the given condition matches.
#[pyo3(signature = (condition, user_id=None, display_name=None))]
fn matches(
&self,
condition: Condition,
@@ -526,6 +535,7 @@ fn push_rule_evaluator() {
true,
vec![],
true,
false,
)
.unwrap();
@@ -555,6 +565,7 @@ fn test_requires_room_version_supports_condition() {
false,
flags,
true,
false,
)
.unwrap();
@@ -582,7 +593,7 @@ fn test_requires_room_version_supports_condition() {
};
let rules = PushRules::new(vec![custom_rule]);
result = evaluator.run(
&FilteredPushRules::py_new(rules, BTreeMap::new(), true, false, true, false),
&FilteredPushRules::py_new(rules, BTreeMap::new(), true, false, true, false, false),
None,
None,
);

View File

@@ -65,8 +65,8 @@ use anyhow::{Context, Error};
use log::warn;
use pyo3::exceptions::PyTypeError;
use pyo3::prelude::*;
use pyo3::types::{PyBool, PyList, PyLong, PyString};
use pythonize::{depythonize_bound, pythonize};
use pyo3::types::{PyBool, PyInt, PyList, PyString};
use pythonize::{depythonize, pythonize, PythonizeError};
use serde::de::Error as _;
use serde::{Deserialize, Serialize};
use serde_json::Value;
@@ -79,7 +79,7 @@ pub mod utils;
/// Called when registering modules with python.
pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
let child_module = PyModule::new_bound(py, "push")?;
let child_module = PyModule::new(py, "push")?;
child_module.add_class::<PushRule>()?;
child_module.add_class::<PushRules>()?;
child_module.add_class::<FilteredPushRules>()?;
@@ -90,7 +90,7 @@ pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()>
// We need to manually add the module to sys.modules to make `from
// synapse.synapse_rust import push` work.
py.import_bound("sys")?
py.import("sys")?
.getattr("modules")?
.set_item("synapse.synapse_rust.push", child_module)?;
@@ -182,12 +182,16 @@ pub enum Action {
Unknown(Value),
}
impl IntoPy<PyObject> for Action {
fn into_py(self, py: Python<'_>) -> PyObject {
impl<'py> IntoPyObject<'py> for Action {
type Target = PyAny;
type Output = Bound<'py, Self::Target>;
type Error = PythonizeError;
fn into_pyobject(self, py: Python<'py>) -> Result<Self::Output, Self::Error> {
// When we pass the `Action` struct to Python we want it to be converted
// to a dict. We use `pythonize`, which converts the struct using the
// `serde` serialization.
pythonize(py, &self).expect("valid action")
pythonize(py, &self)
}
}
@@ -270,13 +274,13 @@ pub enum SimpleJsonValue {
}
impl<'source> FromPyObject<'source> for SimpleJsonValue {
fn extract(ob: &'source PyAny) -> PyResult<Self> {
fn extract_bound(ob: &Bound<'source, PyAny>) -> PyResult<Self> {
if let Ok(s) = ob.downcast::<PyString>() {
Ok(SimpleJsonValue::Str(Cow::Owned(s.to_string())))
// A bool *is* an int, ensure we try bool first.
} else if let Ok(b) = ob.downcast::<PyBool>() {
Ok(SimpleJsonValue::Bool(b.extract()?))
} else if let Ok(i) = ob.downcast::<PyLong>() {
} else if let Ok(i) = ob.downcast::<PyInt>() {
Ok(SimpleJsonValue::Int(i.extract()?))
} else if ob.is_none() {
Ok(SimpleJsonValue::Null)
@@ -298,15 +302,19 @@ pub enum JsonValue {
}
impl<'source> FromPyObject<'source> for JsonValue {
fn extract(ob: &'source PyAny) -> PyResult<Self> {
fn extract_bound(ob: &Bound<'source, PyAny>) -> PyResult<Self> {
if let Ok(l) = ob.downcast::<PyList>() {
match l.iter().map(SimpleJsonValue::extract).collect() {
match l
.iter()
.map(|it| SimpleJsonValue::extract_bound(&it))
.collect()
{
Ok(a) => Ok(JsonValue::Array(a)),
Err(e) => Err(PyTypeError::new_err(format!(
"Can't convert to JsonValue::Array: {e}"
))),
}
} else if let Ok(v) = SimpleJsonValue::extract(ob) {
} else if let Ok(v) = SimpleJsonValue::extract_bound(ob) {
Ok(JsonValue::Value(v))
} else {
Err(PyTypeError::new_err(format!(
@@ -363,15 +371,19 @@ pub enum KnownCondition {
},
}
impl IntoPy<PyObject> for Condition {
fn into_py(self, py: Python<'_>) -> PyObject {
pythonize(py, &self).expect("valid condition")
impl<'source> IntoPyObject<'source> for Condition {
type Target = PyAny;
type Output = Bound<'source, Self::Target>;
type Error = PythonizeError;
fn into_pyobject(self, py: Python<'source>) -> Result<Self::Output, Self::Error> {
pythonize(py, &self)
}
}
impl<'source> FromPyObject<'source> for Condition {
fn extract_bound(ob: &Bound<'source, PyAny>) -> PyResult<Self> {
Ok(depythonize_bound(ob.clone())?)
Ok(depythonize(ob)?)
}
}
@@ -534,6 +546,7 @@ pub struct FilteredPushRules {
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc4028_push_encrypted_events: bool,
msc4210_enabled: bool,
}
#[pymethods]
@@ -546,6 +559,7 @@ impl FilteredPushRules {
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc4028_push_encrypted_events: bool,
msc4210_enabled: bool,
) -> Self {
Self {
push_rules,
@@ -554,6 +568,7 @@ impl FilteredPushRules {
msc3381_polls_enabled,
msc3664_enabled,
msc4028_push_encrypted_events,
msc4210_enabled,
}
}
@@ -596,6 +611,14 @@ impl FilteredPushRules {
return false;
}
if self.msc4210_enabled
&& (rule.rule_id == "global/override/.m.rule.contains_display_name"
|| rule.rule_id == "global/content/.m.rule.contains_user_name"
|| rule.rule_id == "global/override/.m.rule.roomnotif")
{
return false;
}
true
})
.map(|r| {

View File

@@ -23,7 +23,6 @@ use anyhow::bail;
use anyhow::Context;
use anyhow::Error;
use lazy_static::lazy_static;
use regex;
use regex::Regex;
use regex::RegexBuilder;

View File

@@ -29,7 +29,7 @@ use pyo3::{
exceptions::PyValueError,
pyclass, pymethods,
types::{PyAnyMethods, PyModule, PyModuleMethods},
Bound, Py, PyAny, PyObject, PyResult, Python, ToPyObject,
Bound, IntoPyObject, Py, PyAny, PyObject, PyResult, Python,
};
use ulid::Ulid;
@@ -37,6 +37,7 @@ use self::session::Session;
use crate::{
errors::{NotFoundError, SynapseError},
http::{http_request_from_twisted, http_response_to_twisted, HeaderMapPyExt},
UnwrapInfallible,
};
mod session;
@@ -125,7 +126,11 @@ impl RendezvousHandler {
let base = Uri::try_from(format!("{base}_synapse/client/rendezvous"))
.map_err(|_| PyValueError::new_err("Invalid base URI"))?;
let clock = homeserver.call_method0("get_clock")?.to_object(py);
let clock = homeserver
.call_method0("get_clock")?
.into_pyobject(py)
.unwrap_infallible()
.unbind();
// Construct a Python object so that we can get a reference to the
// evict method and schedule it to run.
@@ -288,6 +293,13 @@ impl RendezvousHandler {
let mut response = Response::new(Bytes::new());
*response.status_mut() = StatusCode::ACCEPTED;
prepare_headers(response.headers_mut(), session);
// Even though this isn't mandated by the MSC, we set a Content-Type on the response. It
// doesn't do any harm as the body is empty, but it works around a bug in some reverse
// proxy/cache setups which strip the ETag header if there is no Content-Type set.
// Specifically, we noticed this behaviour when placing Synapse behind Cloudflare.
response.headers_mut().typed_insert(ContentType::text());
http_response_to_twisted(twisted_request, response)?;
Ok(())
@@ -311,7 +323,7 @@ impl RendezvousHandler {
}
pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
let child_module = PyModule::new_bound(py, "rendezvous")?;
let child_module = PyModule::new(py, "rendezvous")?;
child_module.add_class::<RendezvousHandler>()?;
@@ -319,7 +331,7 @@ pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()>
// We need to manually add the module to sys.modules to make `from
// synapse.synapse_rust import rendezvous` work.
py.import_bound("sys")?
py.import("sys")?
.getattr("modules")?
.set_item("synapse.synapse_rust.rendezvous", child_module)?;

View File

@@ -28,12 +28,11 @@ from typing import Collection, Optional, Sequence, Set
# example)
DISTS = (
"debian:bullseye", # (EOL ~2024-07) (our EOL forced by Python 3.9 is 2025-10-05)
"debian:bookworm", # (EOL not specified yet) (our EOL forced by Python 3.11 is 2027-10-24)
"debian:sid", # (EOL not specified yet) (our EOL forced by Python 3.11 is 2027-10-24)
"ubuntu:focal", # 20.04 LTS (EOL 2025-04) (our EOL forced by Python 3.8 is 2024-10-14)
"debian:bookworm", # (EOL 2026-06) (our EOL forced by Python 3.11 is 2027-10-24)
"debian:sid", # (rolling distro, no EOL)
"ubuntu:jammy", # 22.04 LTS (EOL 2027-04) (our EOL forced by Python 3.10 is 2026-10-04)
"ubuntu:lunar", # 23.04 (EOL 2024-01) (our EOL forced by Python 3.11 is 2027-10-24)
"ubuntu:mantic", # 23.10 (EOL 2024-07) (our EOL forced by Python 3.11 is 2027-10-24)
"ubuntu:noble", # 24.04 LTS (EOL 2029-06)
"ubuntu:oracular", # 24.10 (EOL 2025-07)
"debian:trixie", # (EOL not specified yet)
)

View File

@@ -220,9 +220,11 @@ test_packages=(
./tests/msc3874
./tests/msc3890
./tests/msc3391
./tests/msc3757
./tests/msc3930
./tests/msc3902
./tests/msc3967
./tests/msc4140
)
# Enable dirty runs, so tests will reuse the same container where possible.

View File

@@ -360,7 +360,7 @@ def is_cacheable(
# For a type alias, check if the underlying real type is cacheable.
return is_cacheable(mypy.types.get_proper_type(rt), signature, verbose)
elif isinstance(rt, UninhabitedType) and rt.is_noreturn:
elif isinstance(rt, UninhabitedType):
# There is no return value; just consider it cacheable. This is only used
# in tests.
return True, None

View File

@@ -40,7 +40,7 @@ import commonmark
import git
from click.exceptions import ClickException
from git import GitCommandError, Repo
from github import Github
from github import BadCredentialsException, Github
from packaging import version
@@ -323,10 +323,8 @@ def tag(gh_token: Optional[str]) -> None:
def _tag(gh_token: Optional[str]) -> None:
"""Tags the release and generates a draft GitHub release"""
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
# Test that the GH Token is valid before continuing.
check_valid_gh_token(gh_token)
# Make sure we're in a git repo.
repo = get_repo_and_check_clean_checkout()
@@ -469,10 +467,8 @@ def upload(gh_token: Optional[str]) -> None:
def _upload(gh_token: Optional[str]) -> None:
"""Upload release to pypi."""
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
# Test that the GH Token is valid before continuing.
check_valid_gh_token(gh_token)
current_version = get_package_version()
tag_name = f"v{current_version}"
@@ -569,10 +565,8 @@ def wait_for_actions(gh_token: Optional[str]) -> None:
def _wait_for_actions(gh_token: Optional[str]) -> None:
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
# Test that the GH Token is valid before continuing.
check_valid_gh_token(gh_token)
# Find out the version and tag name.
current_version = get_package_version()
@@ -806,6 +800,22 @@ def get_repo_and_check_clean_checkout(
return repo
def check_valid_gh_token(gh_token: Optional[str]) -> None:
"""Check that a github token is valid, if supplied"""
if not gh_token:
# No github token supplied, so nothing to do.
return
try:
gh = Github(gh_token)
# We need to lookup name to trigger a request.
_name = gh.get_user().name
except BadCredentialsException as e:
raise click.ClickException(f"Github credentials are bad: {e}")
def find_ref(repo: git.Repo, ref_name: str) -> Optional[git.HEAD]:
"""Find the branch/ref, looking first locally then in the remote."""
if ref_name in repo.references:

View File

@@ -39,8 +39,8 @@ ImageFile.LOAD_TRUNCATED_IMAGES = True
# Note that we use an (unneeded) variable here so that pyupgrade doesn't nuke the
# if-statement completely.
py_version = sys.version_info
if py_version < (3, 8):
print("Synapse requires Python 3.8 or above.")
if py_version < (3, 9):
print("Synapse requires Python 3.9 or above.")
sys.exit(1)
# Allow using the asyncio reactor via env var.

View File

@@ -88,6 +88,7 @@ from synapse.storage.databases.main.relations import RelationsWorkerStore
from synapse.storage.databases.main.room import RoomBackgroundUpdateStore
from synapse.storage.databases.main.roommember import RoomMemberBackgroundUpdateStore
from synapse.storage.databases.main.search import SearchBackgroundUpdateStore
from synapse.storage.databases.main.sliding_sync import SlidingSyncStore
from synapse.storage.databases.main.state import MainStateBackgroundUpdateStore
from synapse.storage.databases.main.stats import StatsStore
from synapse.storage.databases.main.user_directory import (
@@ -255,6 +256,7 @@ class Store(
ReceiptsBackgroundUpdateStore,
RelationsWorkerStore,
EventFederationWorkerStore,
SlidingSyncStore,
):
def execute(self, f: Callable[..., R], *args: Any, **kwargs: Any) -> Awaitable[R]:
return self.db_pool.runInteraction(f.__name__, f, *args, **kwargs)

View File

@@ -338,7 +338,7 @@ class MSC3861DelegatedAuth(BaseAuth):
logger.exception("Failed to introspect token")
raise SynapseError(503, "Unable to introspect the access token")
logger.info(f"Introspection result: {introspection_result!r}")
logger.debug("Introspection result: %r", introspection_result)
# TODO: introspection verification should be more extensive, especially:
# - verify the audience

View File

@@ -107,6 +107,8 @@ class RoomVersion:
# support the flag. Unknown flags are ignored by the evaluator, making conditions
# fail if used.
msc3931_push_features: Tuple[str, ...] # values from PushRuleRoomFlag
# MSC3757: Restricting who can overwrite a state event
msc3757_enabled: bool
class RoomVersions:
@@ -128,6 +130,7 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V2 = RoomVersion(
"2",
@@ -147,6 +150,7 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V3 = RoomVersion(
"3",
@@ -166,6 +170,7 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V4 = RoomVersion(
"4",
@@ -185,6 +190,7 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V5 = RoomVersion(
"5",
@@ -204,6 +210,7 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V6 = RoomVersion(
"6",
@@ -223,6 +230,7 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V7 = RoomVersion(
"7",
@@ -242,6 +250,7 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V8 = RoomVersion(
"8",
@@ -261,6 +270,7 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V9 = RoomVersion(
"9",
@@ -280,6 +290,7 @@ class RoomVersions:
knock_restricted_join_rule=False,
enforce_int_power_levels=False,
msc3931_push_features=(),
msc3757_enabled=False,
)
V10 = RoomVersion(
"10",
@@ -299,6 +310,7 @@ class RoomVersions:
knock_restricted_join_rule=True,
enforce_int_power_levels=True,
msc3931_push_features=(),
msc3757_enabled=False,
)
MSC1767v10 = RoomVersion(
# MSC1767 (Extensible Events) based on room version "10"
@@ -319,6 +331,28 @@ class RoomVersions:
knock_restricted_join_rule=True,
enforce_int_power_levels=True,
msc3931_push_features=(PushRuleRoomFlag.EXTENSIBLE_EVENTS,),
msc3757_enabled=False,
)
MSC3757v10 = RoomVersion(
# MSC3757 (Restricting who can overwrite a state event) based on room version "10"
"org.matrix.msc3757.10",
RoomDisposition.UNSTABLE,
EventFormatVersions.ROOM_V4_PLUS,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
implicit_room_creator=False,
updated_redaction_rules=False,
restricted_join_rule=True,
restricted_join_rule_fix=True,
knock_join_rule=True,
msc3389_relation_redactions=False,
knock_restricted_join_rule=True,
enforce_int_power_levels=True,
msc3931_push_features=(),
msc3757_enabled=True,
)
V11 = RoomVersion(
"11",
@@ -338,6 +372,28 @@ class RoomVersions:
knock_restricted_join_rule=True,
enforce_int_power_levels=True,
msc3931_push_features=(),
msc3757_enabled=False,
)
MSC3757v11 = RoomVersion(
# MSC3757 (Restricting who can overwrite a state event) based on room version "11"
"org.matrix.msc3757.11",
RoomDisposition.UNSTABLE,
EventFormatVersions.ROOM_V4_PLUS,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
implicit_room_creator=True, # Used by MSC3820
updated_redaction_rules=True, # Used by MSC3820
restricted_join_rule=True,
restricted_join_rule_fix=True,
knock_join_rule=True,
msc3389_relation_redactions=False,
knock_restricted_join_rule=True,
enforce_int_power_levels=True,
msc3931_push_features=(),
msc3757_enabled=True,
)
@@ -355,6 +411,8 @@ KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
RoomVersions.V9,
RoomVersions.V10,
RoomVersions.V11,
RoomVersions.MSC3757v10,
RoomVersions.MSC3757v11,
)
}

View File

@@ -3,7 +3,7 @@
#
# Copyright 2020 The Matrix.org Foundation C.I.C.
# Copyright 2016 OpenMarket Ltd
# Copyright (C) 2023 New Vector, Ltd
# Copyright (C) 2023-2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
@@ -65,6 +65,7 @@ from synapse.storage.databases.main.appservice import (
)
from synapse.storage.databases.main.censor_events import CensorEventsStore
from synapse.storage.databases.main.client_ips import ClientIpWorkerStore
from synapse.storage.databases.main.delayed_events import DelayedEventsStore
from synapse.storage.databases.main.deviceinbox import DeviceInboxWorkerStore
from synapse.storage.databases.main.devices import DeviceWorkerStore
from synapse.storage.databases.main.directory import DirectoryWorkerStore
@@ -161,6 +162,7 @@ class GenericWorkerStore(
TaskSchedulerWorkerStore,
ExperimentalFeaturesStore,
SlidingSyncStore,
DelayedEventsStore,
):
# Properties that multiple storage classes define. Tell mypy what the
# expected type is.

View File

@@ -365,11 +365,6 @@ class ExperimentalConfig(Config):
# MSC3874: Filtering /messages with rel_types / not_rel_types.
self.msc3874_enabled: bool = experimental.get("msc3874_enabled", False)
# MSC3886: Simple client rendezvous capability
self.msc3886_endpoint: Optional[str] = experimental.get(
"msc3886_endpoint", None
)
# MSC3890: Remotely silence local notifications
# Note: This option requires "experimental_features.msc3391_enabled" to be
# set to "true", in order to communicate account data deletions to clients.
@@ -447,3 +442,9 @@ class ExperimentalConfig(Config):
# MSC4151: Report room API (Client-Server API)
self.msc4151_enabled: bool = experimental.get("msc4151_enabled", False)
# MSC4210: Remove legacy mentions
self.msc4210_enabled: bool = experimental.get("msc4210_enabled", False)
# MSC4222: Adding `state_after` to sync v2
self.msc4222_enabled: bool = experimental.get("msc4222_enabled", False)

View File

@@ -38,6 +38,7 @@ class JWTConfig(Config):
self.jwt_algorithm = jwt_config["algorithm"]
self.jwt_subject_claim = jwt_config.get("subject_claim", "sub")
self.jwt_display_name_claim = jwt_config.get("display_name_claim")
# The issuer and audiences are optional. If provided, it is asserted
# that the claims exist on the JWT.
@@ -49,5 +50,6 @@ class JWTConfig(Config):
self.jwt_secret = None
self.jwt_algorithm = None
self.jwt_subject_claim = None
self.jwt_display_name_claim = None
self.jwt_issuer = None
self.jwt_audiences = None

View File

@@ -21,10 +21,15 @@
from typing import Any
from synapse.config._base import Config
from synapse.config._base import Config, ConfigError, read_file
from synapse.types import JsonDict
from synapse.util.check_dependencies import check_requirements
CONFLICTING_PASSWORD_OPTS_ERROR = """\
You have configured both `redis.password` and `redis.password_path`.
These are mutually incompatible.
"""
class RedisConfig(Config):
section = "redis"
@@ -43,6 +48,17 @@ class RedisConfig(Config):
self.redis_path = redis_config.get("path", None)
self.redis_dbid = redis_config.get("dbid", None)
self.redis_password = redis_config.get("password")
redis_password_path = redis_config.get("password_path")
if redis_password_path:
if self.redis_password:
raise ConfigError(CONFLICTING_PASSWORD_OPTS_ERROR)
self.redis_password = read_file(
redis_password_path,
(
"redis",
"password_path",
),
).strip()
self.redis_use_tls = redis_config.get("use_tls", False)
self.redis_certificate = redis_config.get("certificate_file", None)
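Taken together, the new `password_path` option behaves roughly as in this illustrative sketch; the real loader uses `read_file` with config-path error reporting, which is simplified to a plain `open` here:

from typing import Optional

def resolve_redis_password(redis_config: dict) -> Optional[str]:
    # `password_path` may not be combined with `password`; the file's
    # contents are stripped of surrounding whitespace (trailing newline).
    password = redis_config.get("password")
    path = redis_config.get("password_path")
    if path:
        if password:
            raise ValueError("configured both `redis.password` and `redis.password_path`")
        with open(path) as f:
            return f.read().strip()
    return password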

View File

@@ -272,9 +272,7 @@ class ContentRepositoryConfig(Config):
remote_media_lifetime
)
self.enable_authenticated_media = config.get(
"enable_authenticated_media", False
)
self.enable_authenticated_media = config.get("enable_authenticated_media", True)
def generate_config_section(self, data_dir_path: str, **kwargs: Any) -> str:
assert data_dir_path is not None

View File

@@ -2,7 +2,7 @@
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2014-2021 The Matrix.org Foundation C.I.C.
# Copyright (C) 2023 New Vector, Ltd
# Copyright (C) 2023-2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
@@ -215,9 +215,6 @@ class HttpListenerConfig:
additional_resources: Dict[str, dict] = attr.Factory(dict)
tag: Optional[str] = None
request_id_header: Optional[str] = None
# If true, the listener will return CORS response headers compatible with MSC3886:
# https://github.com/matrix-org/matrix-spec-proposals/pull/3886
experimental_cors_msc3886: bool = False
@attr.s(slots=True, frozen=True, auto_attribs=True)
@@ -780,6 +777,17 @@ class ServerConfig(Config):
else:
self.delete_stale_devices_after = None
# The maximum allowed delay duration for delayed events (MSC4140).
max_event_delay_duration = config.get("max_event_delay_duration")
if max_event_delay_duration is not None:
self.max_event_delay_ms: Optional[int] = self.parse_duration(
max_event_delay_duration
)
if self.max_event_delay_ms <= 0:
raise ConfigError("max_event_delay_duration must be a positive value")
else:
self.max_event_delay_ms = None
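A hedged sketch of how the new option is consumed; `parse_duration_ms` below is a minimal stand-in for Synapse's duration parser (bare numbers are taken as milliseconds, single-letter suffixes scale):

from typing import Optional

def parse_duration_ms(raw) -> int:
    # Minimal stand-in: "30s" -> 30_000, "24h" -> 86_400_000, 1500 -> 1500.
    units = {"s": 1_000, "m": 60_000, "h": 3_600_000, "d": 86_400_000}
    s = str(raw)
    if s[-1] in units:
        return int(s[:-1]) * units[s[-1]]
    return int(s)

def max_event_delay_ms(config: dict) -> Optional[int]:
    raw = config.get("max_event_delay_duration")
    if raw is None:
        return None  # feature disabled
    ms = parse_duration_ms(raw)
    if ms <= 0:
        raise ValueError("max_event_delay_duration must be a positive value")
    return ms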
def has_tls_listener(self) -> bool:
return any(listener.is_tls() for listener in self.listeners)
@@ -993,7 +1001,6 @@ def parse_listener_def(num: int, listener: Any) -> ListenerConfig:
additional_resources=listener.get("additional_resources", {}),
tag=listener.get("tag"),
request_id_header=listener.get("request_id_header"),
experimental_cors_msc3886=listener.get("experimental_cors_msc3886", False),
)
if socket_path:

View File

@@ -388,6 +388,7 @@ LENIENT_EVENT_BYTE_LIMITS_ROOM_VERSIONS = {
RoomVersions.V9,
RoomVersions.V10,
RoomVersions.MSC1767v10,
RoomVersions.MSC3757v10,
}
@@ -790,9 +791,10 @@ def get_send_level(
def _can_send_event(event: "EventBase", auth_events: StateMap["EventBase"]) -> bool:
state_key = event.get_state_key()
power_levels_event = get_power_level_event(auth_events)
send_level = get_send_level(event.type, event.get("state_key"), power_levels_event)
send_level = get_send_level(event.type, state_key, power_levels_event)
user_level = get_user_power_level(event.user_id, auth_events)
if user_level < send_level:
@@ -803,11 +805,34 @@ def _can_send_event(event: "EventBase", auth_events: StateMap["EventBase"]) -> b
errcode=Codes.INSUFFICIENT_POWER,
)
# Check state_key
if hasattr(event, "state_key"):
if event.state_key.startswith("@"):
if event.state_key != event.user_id:
raise AuthError(403, "You are not allowed to set others state")
if (
state_key is not None
and state_key.startswith("@")
and state_key != event.user_id
):
if event.room_version.msc3757_enabled:
try:
colon_idx = state_key.index(":", 1)
suffix_idx = state_key.find("_", colon_idx + 1)
state_key_user_id = (
state_key[:suffix_idx] if suffix_idx != -1 else state_key
)
if not UserID.is_valid(state_key_user_id):
raise ValueError
except ValueError:
raise SynapseError(
400,
"State key neither equals a valid user ID, nor starts with one plus an underscore",
errcode=Codes.BAD_JSON,
)
if (
# sender is owner of the state key
state_key_user_id == event.user_id
# sender has higher PL than the owner of the state key
or user_level > get_user_power_level(state_key_user_id, auth_events)
):
return True
raise AuthError(403, "You are not allowed to set others state")
return True
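The state-key parsing above reduces to the following standalone sketch: under MSC3757 a state key must be a valid user ID, optionally followed by an underscore-delimited suffix after the domain.

def state_key_owner(state_key: str) -> str:
    # Mirrors the parsing above: find the colon after the localpart, then
    # everything before the first '_' after it (if any) is the owner user ID.
    colon_idx = state_key.index(":", 1)  # raises ValueError if missing
    suffix_idx = state_key.find("_", colon_idx + 1)
    return state_key[:suffix_idx] if suffix_idx != -1 else state_key

assert state_key_owner("@alice:example.org_light1") == "@alice:example.org"
assert state_key_owner("@alice:example.org") == "@alice:example.org"

The sender may then overwrite the state event if they own the state key, or if they outrank the owner in power level.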

View File

@@ -140,7 +140,6 @@ from typing import (
Iterable,
List,
Optional,
Set,
Tuple,
)
@@ -170,7 +169,13 @@ from synapse.metrics.background_process_metrics import (
run_as_background_process,
wrap_as_background_process,
)
from synapse.types import JsonDict, ReadReceipt, RoomStreamToken, StrCollection
from synapse.types import (
JsonDict,
ReadReceipt,
RoomStreamToken,
StrCollection,
get_domain_from_id,
)
from synapse.util import Clock
from synapse.util.metrics import Measure
from synapse.util.retryutils import filter_destinations_by_retry_limiter
@@ -297,12 +302,10 @@ class _DestinationWakeupQueue:
# being woken up.
_MAX_TIME_IN_QUEUE = 30.0
# The maximum duration in seconds between waking up consecutive destination
# queues.
_MAX_DELAY = 0.1
sender: "FederationSender" = attr.ib()
clock: Clock = attr.ib()
max_delay_s: int = attr.ib()
queue: "OrderedDict[str, Literal[None]]" = attr.ib(factory=OrderedDict)
processing: bool = attr.ib(default=False)
@@ -332,7 +335,7 @@ class _DestinationWakeupQueue:
# We also add an upper bound to the delay, to gracefully handle the
# case where the queue only has a few entries in it.
current_sleep_seconds = min(
self._MAX_DELAY, self._MAX_TIME_IN_QUEUE / len(self.queue)
self.max_delay_s, self._MAX_TIME_IN_QUEUE / len(self.queue)
)
while self.queue:
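As a worked example of the new bound, assuming the default `federation_rr_transactions_per_room_per_second` of 50:

max_delay_s = 1.0 / 50  # 0.02s between destination wake-ups
for queue_len in (3, 100, 3_000):
    # 30.0 is _MAX_TIME_IN_QUEUE; the per-room rate limit now provides the
    # upper bound that the fixed _MAX_DELAY of 0.1s used to.
    print(queue_len, min(max_delay_s, 30.0 / queue_len))
# -> 3 0.02, 100 0.02, 3000 0.01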
@@ -416,19 +419,14 @@ class FederationSender(AbstractFederationSender):
self._is_processing = False
self._last_poked_id = -1
# map from room_id to a set of PerDestinationQueues which we believe are
# awaiting a call to flush_read_receipts_for_room. The presence of an entry
# here for a given room means that we are rate-limiting RR flushes to that room,
# and that there is a pending call to _flush_rrs_for_room in the system.
self._queues_awaiting_rr_flush_by_room: Dict[str, Set[PerDestinationQueue]] = {}
self._rr_txn_interval_per_room_ms = (
1000.0
/ hs.config.ratelimiting.federation_rr_transactions_per_room_per_second
)
self._external_cache = hs.get_external_cache()
self._destination_wakeup_queue = _DestinationWakeupQueue(self, self.clock)
rr_txn_interval_per_room_s = (
1.0 / hs.config.ratelimiting.federation_rr_transactions_per_room_per_second
)
self._destination_wakeup_queue = _DestinationWakeupQueue(
self, self.clock, max_delay_s=rr_txn_interval_per_room_s
)
# Regularly wake up destinations that have outstanding PDUs to be caught up
self.clock.looping_call_now(
@@ -745,37 +743,48 @@ class FederationSender(AbstractFederationSender):
# Some background on the rate-limiting going on here.
#
# It turns out that if we attempt to send out RRs as soon as we get them from
# a client, then we end up trying to do several hundred Hz of federation
# transactions. (The number of transactions scales as O(N^2) on the size of a
# room, since in a large room we have both more RRs coming in, and more servers
# to send them to.)
# It turns out that if we attempt to send out RRs as soon as we get them
# from a client, then we end up trying to do several hundred Hz of
# federation transactions. (The number of transactions scales as O(N^2)
# on the size of a room, since in a large room we have both more RRs
# coming in, and more servers to send them to.)
#
# This leads to a lot of CPU load, and we end up getting behind. The solution
# currently adopted is as follows:
# This leads to a lot of CPU load, and we end up getting behind. The
# solution currently adopted is to differentiate between receipts and
# destinations we should immediately send to, and those we can trickle
# the receipts to.
#
# The first receipt in a given room is sent out immediately, at time T0. Any
# further receipts are, in theory, batched up for N seconds, where N is calculated
# based on the number of servers in the room to achieve a transaction frequency
# of around 50Hz. So, for example, if there were 100 servers in the room, then
# N would be 100 / 50Hz = 2 seconds.
# The current logic is to send receipts out immediately if:
# - the room is "small", i.e. there's only N servers to send receipts
# to, and so sending out the receipts immediately doesn't cause too
# much load; or
# - the receipt is for an event that happened recently, as users
# notice if receipts are delayed when they know other users are
# currently reading the room; or
# - the receipt is being sent to the server that sent the event, so
# that users see receipts for their own messages quickly.
#
# Then, after T+N, we flush out any receipts that have accumulated, and restart
# the timer to flush out more receipts at T+2N, etc. If no receipts accumulate,
# we stop the cycle and go back to the start.
# For destinations that we should delay sending the receipt to, we queue
# the receipts up to be sent in the next transaction, but don't trigger
# a new transaction to be sent. We then add the destination to the
# `DestinationWakeupQueue`, which will slowly iterate over each
# destination and trigger a new transaction to be sent.
#
# However, in practice, it is often possible to flush out receipts earlier: in
# particular, if we are sending a transaction to a given server anyway (for
# example, because we have a PDU or a RR in another room to send), then we may
# as well send out all of the pending RRs for that server. So it may be that
# by the time we get to T+N, we don't actually have any RRs left to send out.
# Nevertheless we continue to buffer up RRs for the room in question until we
# reach the point that no RRs arrive between timer ticks.
# However, in practice, it is often possible to send out delayed
# receipts earlier: in particular, if we are sending a transaction to a
# given server anyway (for example, because we have a PDU or a RR in
# another room to send), then we may as well send out all of the pending
# RRs for that server. So it may be that by the time we get to waking up
# the destination, we don't actually have any RRs left to send out.
#
# For even more background, see https://github.com/matrix-org/synapse/issues/4730.
# For even more background, see
# https://github.com/matrix-org/synapse/issues/4730.
room_id = receipt.room_id
# Local read receipts always have 1 event ID.
event_id = receipt.event_ids[0]
# Work out which remote servers should be poked and poke them.
domains_set = await self._storage_controllers.state.get_current_hosts_in_room_or_partial_state_approximation(
room_id
@@ -797,49 +806,51 @@ class FederationSender(AbstractFederationSender):
if not domains:
return
queues_pending_flush = self._queues_awaiting_rr_flush_by_room.get(room_id)
# We now split which domains we want to wake up immediately vs which we
# want to delay waking up.
immediate_domains: StrCollection
delay_domains: StrCollection
# if there is no flush yet scheduled, we will send out these receipts with
# immediate flushes, and schedule the next flush for this room.
if queues_pending_flush is not None:
logger.debug("Queuing receipt for: %r", domains)
if len(domains) < 10:
# For "small" rooms send to all domains immediately
immediate_domains = domains
delay_domains = ()
else:
logger.debug("Sending receipt to: %r", domains)
self._schedule_rr_flush_for_room(room_id, len(domains))
metadata = await self.store.get_metadata_for_event(
receipt.room_id, event_id
)
assert metadata is not None
for domain in domains:
sender_domain = get_domain_from_id(metadata.sender)
if self.clock.time_msec() - metadata.received_ts < 60_000:
# We always send receipts for recent messages immediately
immediate_domains = domains
delay_domains = ()
else:
# Otherwise, we delay waking up all destinations except for the
# sender's domain.
immediate_domains = []
delay_domains = []
for domain in domains:
if domain == sender_domain:
immediate_domains.append(domain)
else:
delay_domains.append(domain)
for domain in immediate_domains:
# Add to destination queue and wake the destination up
queue = self._get_per_destination_queue(domain)
queue.queue_read_receipt(receipt)
queue.attempt_new_transaction()
for domain in delay_domains:
# Add to destination queue...
queue = self._get_per_destination_queue(domain)
queue.queue_read_receipt(receipt)
# if there is already a RR flush pending for this room, then make sure this
# destination is registered for the flush
if queues_pending_flush is not None:
queues_pending_flush.add(queue)
else:
queue.flush_read_receipts_for_room(room_id)
def _schedule_rr_flush_for_room(self, room_id: str, n_domains: int) -> None:
# that is going to cause approximately len(domains) transactions, so now back
# off for that multiplied by RR_TXN_INTERVAL_PER_ROOM
backoff_ms = self._rr_txn_interval_per_room_ms * n_domains
logger.debug("Scheduling RR flush in %s in %d ms", room_id, backoff_ms)
self.clock.call_later(backoff_ms, self._flush_rrs_for_room, room_id)
self._queues_awaiting_rr_flush_by_room[room_id] = set()
def _flush_rrs_for_room(self, room_id: str) -> None:
queues = self._queues_awaiting_rr_flush_by_room.pop(room_id)
logger.debug("Flushing RRs in %s to %s", room_id, queues)
if not queues:
# no more RRs arrived for this room; we are done.
return
# schedule the next flush
self._schedule_rr_flush_for_room(room_id, len(queues))
for queue in queues:
queue.flush_read_receipts_for_room(room_id)
# ... and schedule the destination to be woken up.
self._destination_wakeup_queue.add_to_queue(domain)
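Condensed, the routing decision implemented above is as follows; the thresholds come from the code (fewer than ten destination servers, or an event under sixty seconds old, sends immediately):

from typing import Iterable, List, Tuple

def split_receipt_domains(
    domains: Iterable[str], sender_domain: str, event_age_ms: int
) -> Tuple[List[str], List[str]]:
    domains = list(domains)
    # Small rooms and fresh events: wake every destination immediately.
    if len(domains) < 10 or event_age_ms < 60_000:
        return domains, []
    # Otherwise only the event sender's server is woken immediately; the
    # rest trickle out via the DestinationWakeupQueue.
    immediate = [d for d in domains if d == sender_domain]
    delayed = [d for d in domains if d != sender_domain]
    return immediate, delayed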
async def send_presence_to_destinations(
self, states: Iterable[UserPresenceState], destinations: Iterable[str]

View File

@@ -156,7 +156,6 @@ class PerDestinationQueue:
# Each receipt can only have a single receipt per
# (room ID, receipt type, user ID, thread ID) tuple.
self._pending_receipt_edus: List[Dict[str, Dict[str, Dict[str, dict]]]] = []
self._rrs_pending_flush = False
# stream_id of last successfully sent to-device message.
# NB: may be a long or an int.
@@ -258,15 +257,7 @@ class PerDestinationQueue:
}
)
def flush_read_receipts_for_room(self, room_id: str) -> None:
# If there are any pending receipts for this room then force-flush them
# in a new transaction.
for edu in self._pending_receipt_edus:
if room_id in edu:
self._rrs_pending_flush = True
self.attempt_new_transaction()
# No use in checking remaining EDUs if the room was found.
break
self.mark_new_data()
def send_keyed_edu(self, edu: Edu, key: Hashable) -> None:
self._pending_edus_keyed[(edu.edu_type, key)] = edu
@@ -603,12 +594,9 @@ class PerDestinationQueue:
self._destination, last_successful_stream_ordering
)
def _get_receipt_edus(self, force_flush: bool, limit: int) -> Iterable[Edu]:
def _get_receipt_edus(self, limit: int) -> Iterable[Edu]:
if not self._pending_receipt_edus:
return
if not force_flush and not self._rrs_pending_flush:
# not yet time for this lot
return
# Send at most limit EDUs for receipts.
for content in self._pending_receipt_edus[:limit]:
@@ -747,7 +735,7 @@ class _TransactionQueueManager:
)
# Add read receipt EDUs.
pending_edus.extend(self.queue._get_receipt_edus(force_flush=False, limit=5))
pending_edus.extend(self.queue._get_receipt_edus(limit=5))
edu_limit = MAX_EDUS_PER_TRANSACTION - len(pending_edus)
# Next, prioritize to-device messages so that existing encryption channels
@@ -795,13 +783,6 @@ class _TransactionQueueManager:
if not self._pdus and not pending_edus:
return [], []
# if we've decided to send a transaction anyway, and we have room, we
# may as well send any pending RRs
if edu_limit:
pending_edus.extend(
self.queue._get_receipt_edus(force_flush=True, limit=edu_limit)
)
if self._pdus:
self._last_stream_ordering = self._pdus[
-1

View File

@@ -113,7 +113,7 @@ class Authenticator:
):
raise AuthenticationError(
HTTPStatus.UNAUTHORIZED,
"Destination mismatch in auth header",
f"Destination mismatch in auth header, received: {destination!r}",
Codes.UNAUTHORIZED,
)
if (

View File

@@ -33,7 +33,7 @@ from synapse.replication.http.account_data import (
ReplicationRemoveUserAccountDataRestServlet,
)
from synapse.streams import EventSource
from synapse.types import JsonDict, StrCollection, StreamKeyType, UserID
from synapse.types import JsonDict, JsonMapping, StrCollection, StreamKeyType, UserID
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -253,7 +253,7 @@ class AccountDataHandler:
return response["max_stream_id"]
async def add_tag_to_room(
self, user_id: str, room_id: str, tag: str, content: JsonDict
self, user_id: str, room_id: str, tag: str, content: JsonMapping
) -> int:
"""Add a tag to a room for a user.

View File

@@ -21,13 +21,34 @@
import abc
import logging
from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Optional, Sequence, Set
from typing import (
TYPE_CHECKING,
Any,
Dict,
List,
Mapping,
Optional,
Sequence,
Set,
Tuple,
)
import attr
from synapse.api.constants import Direction, Membership
from synapse.api.constants import Direction, EventTypes, Membership
from synapse.api.errors import SynapseError
from synapse.events import EventBase
from synapse.types import JsonMapping, RoomStreamToken, StateMap, UserID, UserInfo
from synapse.types import (
JsonMapping,
Requester,
RoomStreamToken,
ScheduledTask,
StateMap,
TaskStatus,
UserID,
UserInfo,
create_requester,
)
from synapse.visibility import filter_events_for_client
if TYPE_CHECKING:
@@ -35,6 +56,8 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
REDACT_ALL_EVENTS_ACTION_NAME = "redact_all_events"
class AdminHandler:
def __init__(self, hs: "HomeServer"):
@@ -43,6 +66,22 @@ class AdminHandler:
self._storage_controllers = hs.get_storage_controllers()
self._state_storage_controller = self._storage_controllers.state
self._msc3866_enabled = hs.config.experimental.msc3866.enabled
self.event_creation_handler = hs.get_event_creation_handler()
self._task_scheduler = hs.get_task_scheduler()
self._task_scheduler.register_action(
self._redact_all_events, REDACT_ALL_EVENTS_ACTION_NAME
)
self.hs = hs
async def get_redact_task(self, redact_id: str) -> Optional[ScheduledTask]:
"""Get the current status of an active redaction process
Args:
redact_id: redact_id returned by start_redact_events.
"""
return await self._task_scheduler.get_task(redact_id)
async def get_whois(self, user: UserID) -> JsonMapping:
connections = []
@@ -85,6 +124,7 @@ class AdminHandler:
"consent_ts": user_info.consent_ts,
"user_type": user_info.user_type,
"is_guest": user_info.is_guest,
"suspended": user_info.suspended,
}
if self._msc3866_enabled:
@@ -313,6 +353,155 @@ class AdminHandler:
return writer.finished()
async def start_redact_events(
self,
user_id: str,
rooms: list,
requester: JsonMapping,
reason: Optional[str],
limit: Optional[int],
) -> str:
"""
Start a task redacting the events of the given user in the given rooms
Args:
user_id: the user ID of the user whose events should be redacted
rooms: the rooms in which to redact the user's events
requester: the user requesting the redaction
reason: reason for requesting the redaction, e.g. spam
limit: limit on the number of events in each room to redact
Returns:
a unique ID which can be used to query the status of the task
"""
active_tasks = await self._task_scheduler.get_tasks(
actions=[REDACT_ALL_EVENTS_ACTION_NAME],
resource_id=user_id,
statuses=[TaskStatus.ACTIVE],
)
if len(active_tasks) > 0:
raise SynapseError(
400, "Redact already in progress for user %s" % (user_id,)
)
if not limit:
limit = 1000
redact_id = await self._task_scheduler.schedule_task(
REDACT_ALL_EVENTS_ACTION_NAME,
resource_id=user_id,
params={
"rooms": rooms,
"requester": requester,
"user_id": user_id,
"reason": reason,
"limit": limit,
},
)
logger.info(
"starting redact events with redact_id %s",
redact_id,
)
return redact_id
async def _redact_all_events(
self, task: ScheduledTask
) -> Tuple[TaskStatus, Optional[Mapping[str, Any]], Optional[str]]:
"""
Task to redact all of a user's events in the given rooms, tracking which
events, if any, failed to be redacted
"""
assert task.params is not None
rooms = task.params.get("rooms")
assert rooms is not None
r = task.params.get("requester")
assert r is not None
admin = Requester.deserialize(self._store, r)
user_id = task.params.get("user_id")
assert user_id is not None
# puppet the user if they're ours, otherwise use admin to redact
requester = create_requester(
user_id if self.hs.is_mine_id(user_id) else admin.user.to_string(),
authenticated_entity=admin.user.to_string(),
)
reason = task.params.get("reason")
limit = task.params.get("limit")
assert limit is not None
result: Mapping[str, Any] = (
task.result if task.result else {"failed_redactions": {}}
)
for room in rooms:
room_version = await self._store.get_room_version(room)
event_ids = await self._store.get_events_sent_by_user_in_room(
user_id,
room,
limit,
["m.room.member", "m.room.message"],
)
if not event_ids:
# nothing to redact in this room
continue
events = await self._store.get_events_as_list(event_ids)
for event in events:
# we care about join events but not other membership events
if event.type == "m.room.member":
content = event.content
if content and content.get("membership") != Membership.JOIN:
continue
relations = await self._store.get_relations_for_event(
room, event.event_id, event, event_type=EventTypes.Redaction
)
# if we've already successfully redacted this event then skip processing it
if relations[0]:
continue
event_dict = {
"type": EventTypes.Redaction,
"content": {"reason": reason} if reason else {},
"room_id": room,
"sender": user_id,
}
if room_version.updated_redaction_rules:
event_dict["content"]["redacts"] = event.event_id
else:
event_dict["redacts"] = event.event_id
try:
# set the prev event to the offending message to allow for redactions
# to be processed in the case where the user has been kicked/banned before
# redactions are requested
(
redaction,
_,
) = await self.event_creation_handler.create_and_send_nonmember_event(
requester,
event_dict,
prev_event_ids=[event.event_id],
ratelimit=False,
)
except Exception as ex:
logger.info(
f"Redaction of event {event.event_id} failed due to: {ex}"
)
result["failed_redactions"][event.event_id] = str(ex)
await self._task_scheduler.update_task(task.id, result=result)
return TaskStatus.COMPLETE, result, None
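Illustratively, an admin-API caller would drive the task like this; `serialize` on the requester is assumed to produce the `JsonMapping` that `Requester.deserialize` reads back:

# Hypothetical usage sketch, not taken from the diff:
redact_id = await admin_handler.start_redact_events(
    user_id="@spammer:example.org",
    rooms=["!abc123:example.org"],
    requester=admin_requester.serialize(),  # assumed counterpart of deserialize
    reason="spam",
    limit=None,  # falls back to 1000 events per room
)
task = await admin_handler.get_redact_task(redact_id)
# Once complete, task.result carries {"failed_redactions": {event_id: error}}.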
class ExfiltrationWriter(metaclass=abc.ABCMeta):
"""Interface used to specify how to write exported data."""

View File

@@ -0,0 +1,498 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import logging
from typing import TYPE_CHECKING, List, Optional, Set, Tuple
from twisted.internet.interfaces import IDelayedCall
from synapse.api.constants import EventTypes
from synapse.api.errors import ShadowBanError
from synapse.config.workers import MAIN_PROCESS_INSTANCE_NAME
from synapse.logging.opentracing import set_tag
from synapse.metrics import event_processing_positions
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.delayed_events import (
ReplicationAddedDelayedEventRestServlet,
)
from synapse.storage.databases.main.delayed_events import (
DelayedEventDetails,
DelayID,
EventType,
StateKey,
Timestamp,
UserLocalpart,
)
from synapse.storage.databases.main.state_deltas import StateDelta
from synapse.types import (
JsonDict,
Requester,
RoomID,
UserID,
create_requester,
)
from synapse.util.events import generate_fake_event_id
from synapse.util.metrics import Measure
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class DelayedEventsHandler:
def __init__(self, hs: "HomeServer"):
self._store = hs.get_datastores().main
self._storage_controllers = hs.get_storage_controllers()
self._config = hs.config
self._clock = hs.get_clock()
self._request_ratelimiter = hs.get_request_ratelimiter()
self._event_creation_handler = hs.get_event_creation_handler()
self._room_member_handler = hs.get_room_member_handler()
self._next_delayed_event_call: Optional[IDelayedCall] = None
# The current position in the current_state_delta stream
self._event_pos: Optional[int] = None
# Guard to ensure we only process event deltas one at a time
self._event_processing = False
if hs.config.worker.worker_app is None:
self._repl_client = None
async def _schedule_db_events() -> None:
# We kick this off to pick up outstanding work from before the last restart.
# Block until we're up to date.
await self._unsafe_process_new_event()
hs.get_notifier().add_replication_callback(self.notify_new_event)
# Kick off again (without blocking) to catch any missed notifications
# that may have fired before the callback was added.
self._clock.call_later(0, self.notify_new_event)
# Delayed events that are already marked as processed on startup might not have been
# sent properly on the last run of the server, so unmark them to send them again.
# Caveat: this will double-send delayed events that successfully persisted, but failed
# to be removed from the DB table of delayed events.
# TODO: To avoid double-sending, scan the timeline to find which of these events were
# already sent. To do so, we must store delay_ids in sent events to retrieve them later.
await self._store.unprocess_delayed_events()
events, next_send_ts = await self._store.process_timeout_delayed_events(
self._get_current_ts()
)
if next_send_ts:
self._schedule_next_at(next_send_ts)
# The events can be sent in the background once they have been marked as processed
run_as_background_process(
"_send_events",
self._send_events,
events,
)
self._initialized_from_db = run_as_background_process(
"_schedule_db_events", _schedule_db_events
)
else:
self._repl_client = ReplicationAddedDelayedEventRestServlet.make_client(hs)
@property
def _is_master(self) -> bool:
return self._repl_client is None
def notify_new_event(self) -> None:
"""
Called when there may be more state event deltas to process,
which should cancel pending delayed events for the same state.
"""
if self._event_processing:
return
self._event_processing = True
async def process() -> None:
try:
await self._unsafe_process_new_event()
finally:
self._event_processing = False
run_as_background_process("delayed_events.notify_new_event", process)
async def _unsafe_process_new_event(self) -> None:
# If self._event_pos is None, it means we haven't fetched it from the DB yet
if self._event_pos is None:
self._event_pos = await self._store.get_delayed_events_stream_pos()
room_max_stream_ordering = self._store.get_room_max_stream_ordering()
if self._event_pos > room_max_stream_ordering:
# apparently, we've processed more events than exist in the database!
# this can happen if events are removed with history purge or similar.
logger.warning(
"Event stream ordering appears to have gone backwards (%i -> %i): "
"rewinding delayed events processor",
self._event_pos,
room_max_stream_ordering,
)
self._event_pos = room_max_stream_ordering
# Loop round handling deltas until we're up to date
while True:
with Measure(self._clock, "delayed_events_delta"):
room_max_stream_ordering = self._store.get_room_max_stream_ordering()
if self._event_pos == room_max_stream_ordering:
return
logger.debug(
"Processing delayed events %s->%s",
self._event_pos,
room_max_stream_ordering,
)
(
max_pos,
deltas,
) = await self._storage_controllers.state.get_current_state_deltas(
self._event_pos, room_max_stream_ordering
)
logger.debug(
"Handling %d state deltas for delayed events processing",
len(deltas),
)
await self._handle_state_deltas(deltas)
self._event_pos = max_pos
# Expose current event processing position to prometheus
event_processing_positions.labels("delayed_events").set(max_pos)
await self._store.update_delayed_events_stream_pos(max_pos)
async def _handle_state_deltas(self, deltas: List[StateDelta]) -> None:
"""
Process current state deltas to cancel pending delayed events
that target the same state.
"""
for delta in deltas:
logger.debug(
"Handling: %r %r, %s", delta.event_type, delta.state_key, delta.event_id
)
next_send_ts = await self._store.cancel_delayed_state_events(
room_id=delta.room_id,
event_type=delta.event_type,
state_key=delta.state_key,
)
if self._next_send_ts_changed(next_send_ts):
self._schedule_next_at_or_none(next_send_ts)
async def add(
self,
requester: Requester,
*,
room_id: str,
event_type: str,
state_key: Optional[str],
origin_server_ts: Optional[int],
content: JsonDict,
delay: int,
) -> str:
"""
Creates a new delayed event and schedules its delivery.
Args:
requester: The requester of the delayed event, who will be its owner.
room_id: The ID of the room where the event should be sent to.
event_type: The type of event to be sent.
state_key: The state key of the event to be sent, or None if it is not a state event.
origin_server_ts: The custom timestamp to send the event with.
If None, the timestamp will be the actual time when the event is sent.
content: The content of the event to be sent.
delay: How long (in milliseconds) to wait before automatically sending the event.
Returns: The ID of the added delayed event.
Raises:
SynapseError: if the delayed event fails validation checks.
"""
await self._request_ratelimiter.ratelimit(requester)
self._event_creation_handler.validator.validate_builder(
self._event_creation_handler.event_builder_factory.for_room_version(
await self._store.get_room_version(room_id),
{
"type": event_type,
"content": content,
"room_id": room_id,
"sender": str(requester.user),
**({"state_key": state_key} if state_key is not None else {}),
},
)
)
creation_ts = self._get_current_ts()
delay_id, next_send_ts = await self._store.add_delayed_event(
user_localpart=requester.user.localpart,
device_id=requester.device_id,
creation_ts=creation_ts,
room_id=room_id,
event_type=event_type,
state_key=state_key,
origin_server_ts=origin_server_ts,
content=content,
delay=delay,
)
if self._repl_client is not None:
# NOTE: If this throws, the delayed event will remain in the DB and
# will be picked up once the main worker gets another delayed event.
await self._repl_client(
instance_name=MAIN_PROCESS_INSTANCE_NAME,
next_send_ts=next_send_ts,
)
elif self._next_send_ts_changed(next_send_ts):
self._schedule_next_at(next_send_ts)
return delay_id
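For example, scheduling a state event to be sent in fifteen minutes (MSC4140) would look roughly like this, with argument names taken from `add` above; `handler` and `requester` are assumed to come from the homeserver:

# Illustrative call only:
delay_id = await handler.add(
    requester,
    room_id="!room:example.org",
    event_type="m.room.topic",
    state_key="",
    origin_server_ts=None,  # stamp with the actual send time
    content={"topic": "Back in 15"},
    delay=15 * 60 * 1000,  # milliseconds
)
# The delay_id can later be passed to cancel(), restart() or send().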
def on_added(self, next_send_ts: int) -> None:
next_send_ts = Timestamp(next_send_ts)
if self._next_send_ts_changed(next_send_ts):
self._schedule_next_at(next_send_ts)
async def cancel(self, requester: Requester, delay_id: str) -> None:
"""
Cancels the scheduled delivery of the matching delayed event.
Args:
requester: The owner of the delayed event to act on.
delay_id: The ID of the delayed event to act on.
Raises:
NotFoundError: if no matching delayed event could be found.
"""
assert self._is_master
await self._request_ratelimiter.ratelimit(requester)
await self._initialized_from_db
next_send_ts = await self._store.cancel_delayed_event(
delay_id=delay_id,
user_localpart=requester.user.localpart,
)
if self._next_send_ts_changed(next_send_ts):
self._schedule_next_at_or_none(next_send_ts)
async def restart(self, requester: Requester, delay_id: str) -> None:
"""
Restarts the scheduled delivery of the matching delayed event.
Args:
requester: The owner of the delayed event to act on.
delay_id: The ID of the delayed event to act on.
Raises:
NotFoundError: if no matching delayed event could be found.
"""
assert self._is_master
await self._request_ratelimiter.ratelimit(requester)
await self._initialized_from_db
next_send_ts = await self._store.restart_delayed_event(
delay_id=delay_id,
user_localpart=requester.user.localpart,
current_ts=self._get_current_ts(),
)
if self._next_send_ts_changed(next_send_ts):
self._schedule_next_at(next_send_ts)
async def send(self, requester: Requester, delay_id: str) -> None:
"""
Immediately sends the matching delayed event, instead of waiting for its scheduled delivery.
Args:
requester: The owner of the delayed event to act on.
delay_id: The ID of the delayed event to act on.
Raises:
NotFoundError: if no matching delayed event could be found.
"""
assert self._is_master
await self._request_ratelimiter.ratelimit(requester)
await self._initialized_from_db
event, next_send_ts = await self._store.process_target_delayed_event(
delay_id=delay_id,
user_localpart=requester.user.localpart,
)
if self._next_send_ts_changed(next_send_ts):
self._schedule_next_at_or_none(next_send_ts)
await self._send_event(
DelayedEventDetails(
delay_id=DelayID(delay_id),
user_localpart=UserLocalpart(requester.user.localpart),
room_id=event.room_id,
type=event.type,
state_key=event.state_key,
origin_server_ts=event.origin_server_ts,
content=event.content,
device_id=event.device_id,
)
)
async def _send_on_timeout(self) -> None:
self._next_delayed_event_call = None
events, next_send_ts = await self._store.process_timeout_delayed_events(
self._get_current_ts()
)
if next_send_ts:
self._schedule_next_at(next_send_ts)
await self._send_events(events)
async def _send_events(self, events: List[DelayedEventDetails]) -> None:
sent_state: Set[Tuple[RoomID, EventType, StateKey]] = set()
for event in events:
if event.state_key is not None:
state_info = (event.room_id, event.type, event.state_key)
if state_info in sent_state:
continue
else:
state_info = None
try:
# TODO: send in background if message event or non-conflicting state event
await self._send_event(event)
if state_info is not None:
sent_state.add(state_info)
except Exception:
logger.exception("Failed to send delayed event")
for room_id, event_type, state_key in sent_state:
await self._store.delete_processed_delayed_state_events(
room_id=str(room_id),
event_type=event_type,
state_key=state_key,
)
def _schedule_next_at_or_none(self, next_send_ts: Optional[Timestamp]) -> None:
if next_send_ts is not None:
self._schedule_next_at(next_send_ts)
elif self._next_delayed_event_call is not None:
self._next_delayed_event_call.cancel()
self._next_delayed_event_call = None
def _schedule_next_at(self, next_send_ts: Timestamp) -> None:
delay = next_send_ts - self._get_current_ts()
delay_sec = delay / 1000 if delay > 0 else 0
if self._next_delayed_event_call is None:
self._next_delayed_event_call = self._clock.call_later(
delay_sec,
run_as_background_process,
"_send_on_timeout",
self._send_on_timeout,
)
else:
self._next_delayed_event_call.reset(delay_sec)
async def get_all_for_user(self, requester: Requester) -> List[JsonDict]:
"""Return all pending delayed events requested by the given user."""
await self._request_ratelimiter.ratelimit(requester)
return await self._store.get_all_delayed_events_for_user(
requester.user.localpart
)
async def _send_event(
self,
event: DelayedEventDetails,
txn_id: Optional[str] = None,
) -> None:
user_id = UserID(event.user_localpart, self._config.server.server_name)
user_id_str = user_id.to_string()
# Create a new requester from what data is currently available
requester = create_requester(
user_id,
is_guest=await self._store.is_guest(user_id_str),
device_id=event.device_id,
)
try:
if event.state_key is not None and event.type == EventTypes.Member:
membership = event.content.get("membership")
assert membership is not None
event_id, _ = await self._room_member_handler.update_membership(
requester,
target=UserID.from_string(event.state_key),
room_id=event.room_id.to_string(),
action=membership,
content=event.content,
origin_server_ts=event.origin_server_ts,
)
else:
event_dict: JsonDict = {
"type": event.type,
"content": event.content,
"room_id": event.room_id.to_string(),
"sender": user_id_str,
}
if event.origin_server_ts is not None:
event_dict["origin_server_ts"] = event.origin_server_ts
if event.state_key is not None:
event_dict["state_key"] = event.state_key
(
sent_event,
_,
) = await self._event_creation_handler.create_and_send_nonmember_event(
requester,
event_dict,
txn_id=txn_id,
)
event_id = sent_event.event_id
except ShadowBanError:
event_id = generate_fake_event_id()
finally:
# TODO: If this is a temporary error, retry. Otherwise, consider notifying clients of the failure
try:
await self._store.delete_processed_delayed_event(
event.delay_id, event.user_localpart
)
except Exception:
logger.exception("Failed to delete processed delayed event")
set_tag("event_id", event_id)
def _get_current_ts(self) -> Timestamp:
return Timestamp(self._clock.time_msec())
def _next_send_ts_changed(self, next_send_ts: Optional[Timestamp]) -> bool:
# The DB alone knows if the next send time changed after adding/modifying
# a delayed event, but if we were to ever miss updating our delayed call's
# firing time, we may miss other updates. So, keep track of changes to the
# next send time here instead of in the DB.
cached_next_send_ts = (
int(self._next_delayed_event_call.getTime() * 1000)
if self._next_delayed_event_call is not None
else None
)
return next_send_ts != cached_next_send_ts
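The scheduler above keeps a single pending call that is created or retargeted to the earliest next send time, converting millisecond timestamps to seconds for the clock. A minimal standalone sketch of that pattern, using threading.Timer in place of Synapse's Clock and call_later (an assumption made purely for illustration):

import threading
import time
from typing import Callable, Optional

class SingleTimerScheduler:
    """Keeps at most one pending timer, retargeted to the earliest deadline."""

    def __init__(self) -> None:
        self._timer: Optional[threading.Timer] = None

    def schedule_at(self, next_send_ts_ms: int, callback: Callable[[], None]) -> None:
        delay_ms = next_send_ts_ms - int(time.time() * 1000)
        # Clamp to zero so overdue work fires immediately; convert ms -> s.
        delay_sec = delay_ms / 1000 if delay_ms > 0 else 0
        if self._timer is not None:
            # Analogous to resetting the existing delayed call.
            self._timer.cancel()
        self._timer = threading.Timer(delay_sec, callback)
        self._timer.start()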

View File

@@ -48,6 +48,7 @@ from synapse.metrics.background_process_metrics import (
wrap_as_background_process,
)
from synapse.storage.databases.main.client_ips import DeviceLastConnectionInfo
from synapse.storage.databases.main.roommember import EventIdMembership
from synapse.storage.databases.main.state_deltas import StateDelta
from synapse.types import (
DeviceListUpdates,
@@ -222,7 +223,6 @@ class DeviceWorkerHandler:
return changed
@trace
@measure_func("device.get_user_ids_changed")
@cancellable
async def get_user_ids_changed(
self, user_id: str, from_token: StreamToken
@@ -290,9 +290,11 @@ class DeviceWorkerHandler:
memberships_to_fetch.add(delta.prev_event_id)
# Fetch all the memberships for the membership events
event_id_to_memberships: Mapping[str, Optional[EventIdMembership]] = {}
if memberships_to_fetch:
event_id_to_memberships = await self.store.get_membership_from_event_ids(
memberships_to_fetch
)
joined_invited_knocked = (
Membership.JOIN,
@@ -349,7 +351,6 @@ class DeviceWorkerHandler:
return device_list_updates
@measure_func("_generate_sync_entry_for_device_list")
async def generate_sync_entry_for_device_list(
self,
user_id: str,

View File

@@ -39,6 +39,8 @@ from synapse.replication.http.devices import ReplicationUploadKeysForUserRestSer
from synapse.types import (
JsonDict,
JsonMapping,
ScheduledTask,
TaskStatus,
UserID,
get_domain_from_id,
get_verify_key_from_cross_signing_key,
@@ -70,6 +72,7 @@ class E2eKeysHandler:
self.is_mine = hs.is_mine
self.clock = hs.get_clock()
self._worker_lock_handler = hs.get_worker_locks_handler()
self._task_scheduler = hs.get_task_scheduler()
federation_registry = hs.get_federation_registry()
@@ -116,6 +119,10 @@ class E2eKeysHandler:
hs.config.experimental.msc3984_appservice_key_query
)
self._task_scheduler.register_action(
self._delete_old_one_time_keys_task, "delete_old_otks"
)
@trace
@cancellable
async def query_devices(
@@ -615,7 +622,7 @@ class E2eKeysHandler:
3. Attempt to fetch fallback keys from the database.
Args:
local_query: An iterable of tuples of (user ID, device ID, algorithm, number of keys).
always_include_fallback_keys: True to always include fallback keys.
Returns:
@@ -1574,6 +1581,45 @@ class E2eKeysHandler:
return True
return False
async def _delete_old_one_time_keys_task(
self, task: ScheduledTask
) -> Tuple[TaskStatus, Optional[JsonMapping], Optional[str]]:
"""Scheduler task to delete old one time keys.
Until Synapse 1.119, Synapse used to issue one-time-keys in a random order, leading to the possibility
that it could still have old OTKs that the client has dropped. This task is scheduled exactly once
by a database schema delta file, and it clears out old one-time-keys that look like they came from libolm.
"""
last_user = task.result.get("from_user", "") if task.result else ""
while True:
# We process users in batches of 100
users, rowcount = await self.store.delete_old_otks_for_next_user_batch(
last_user, 100
)
if len(users) == 0:
# We're done!
return TaskStatus.COMPLETE, None, None
logger.debug(
"Deleted %i old one-time-keys for users '%s'..'%s'",
rowcount,
users[0],
users[-1],
)
last_user = users[-1]
# Store our progress
await self._task_scheduler.update_task(
task.id, result={"from_user": last_user}
)
# Sleep a little before doing the next user.
#
# matrix.org has about 15M users in the e2e_one_time_keys_json table
# (comprising 20M devices). We want this to take about a week, so we need
# to do about one batch of 100 users every 4 seconds.
await self.clock.sleep(4)
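The pacing comment above checks out arithmetically; a quick back-of-the-envelope calculation, using the figures cited in the comment:

users = 15_000_000              # approximate users with one-time keys
batch_size = 100
sleep_per_batch_sec = 4

batches = users / batch_size                  # 150,000 batches
total_sec = batches * sleep_per_batch_sec     # 600,000 seconds
print(total_sec / 86_400)                     # ~6.9 days, i.e. about a week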
def _check_cross_signing_key(
key: JsonDict, user_id: str, key_type: str, signing_key: Optional[VerifyKey] = None

View File

@@ -18,7 +18,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Optional, Tuple
from authlib.jose import JsonWebToken, JWTClaims
from authlib.jose.errors import BadSignatureError, InvalidClaimError, JoseError
@@ -36,11 +36,12 @@ class JwtHandler:
self.jwt_secret = hs.config.jwt.jwt_secret
self.jwt_subject_claim = hs.config.jwt.jwt_subject_claim
self.jwt_display_name_claim = hs.config.jwt.jwt_display_name_claim
self.jwt_algorithm = hs.config.jwt.jwt_algorithm
self.jwt_issuer = hs.config.jwt.jwt_issuer
self.jwt_audiences = hs.config.jwt.jwt_audiences
def validate_login(self, login_submission: JsonDict) -> Tuple[str, Optional[str]]:
"""
Authenticates the user for the /login API
@@ -49,7 +50,8 @@ class JwtHandler:
(including 'type' and other relevant fields)
Returns:
A tuple of (user_id, display_name) of the user that is logging in.
If the JWT does not contain a display name, the second element of the tuple will be None.
Raises:
LoginError if there was an authentication problem.
@@ -109,4 +111,10 @@ class JwtHandler:
if user is None:
raise LoginError(403, "Invalid JWT", errcode=Codes.FORBIDDEN)
default_display_name = None
if self.jwt_display_name_claim:
display_name_claim = claims.get(self.jwt_display_name_claim)
if display_name_claim is not None:
default_display_name = display_name_claim
return UserID(user, self.hs.hostname).to_string(), default_display_name
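For illustration, here is a claim set this handler could map to a (user_id, display_name) tuple. The claim names assume `jwt_subject_claim: sub` and `jwt_display_name_claim: name` in the homeserver config, and the hostname is an example:

claims = {"sub": "alice", "name": "Alice", "iss": "https://issuer.example.com"}

user = claims.get("sub")           # "alice"
display_name = claims.get("name")  # "Alice"; None if the claim is absent

# validate_login would then return something like:
print((f"@{user}:example.com", display_name))  # ("@alice:example.com", "Alice")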

View File

@@ -196,7 +196,9 @@ class MessageHandler:
AuthError (403) if the user doesn't have permission to view
members of this room.
"""
if state_filter is None:
state_filter = StateFilter.all()
user_id = requester.user.to_string()
if at_token:

View File

@@ -1190,6 +1190,26 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
origin_server_ts=origin_server_ts,
)
async def check_for_any_membership_in_room(
self, *, user_id: str, room_id: str
) -> None:
"""
Check if the user has any membership in the room, and raise an error if not.
Args:
user_id: The user to check.
room_id: The room to check.
Raises:
AuthError if the user doesn't have any membership in the room.
"""
result = await self.store.get_local_current_membership_for_user_in_room(
user_id=user_id, room_id=room_id
)
if result is None or result == (None, None):
raise AuthError(403, f"User {user_id} has no membership in room {room_id}")
async def _should_perform_remote_join(
self,
user_id: str,

View File

@@ -12,9 +12,10 @@
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import itertools
import logging
from itertools import chain
from typing import TYPE_CHECKING, AbstractSet, Dict, List, Mapping, Optional, Set, Tuple
from prometheus_client import Histogram
from typing_extensions import assert_never
@@ -49,6 +50,7 @@ from synapse.types import (
Requester,
SlidingSyncStreamToken,
StateMap,
StrCollection,
StreamKeyType,
StreamToken,
)
@@ -78,6 +80,15 @@ sync_processing_time = Histogram(
["initial"],
)
# Limit the number of state_keys we should remember sending down the connection for each
# (room_id, user_id). We don't want to store and pull out too much data in the database.
#
# 100 is an arbitrary but small-ish number. The idea is that we probably won't send down
# too many redundant member state events (that the client already knows about) for a
# given ongoing conversation if we keep 100 around. Most rooms don't have 100 members
# anyway and it takes a while to cycle through 100 members.
MAX_NUMBER_PREVIOUS_STATE_KEYS_TO_REMEMBER = 100
class SlidingSyncHandler:
def __init__(self, hs: "HomeServer"):
@@ -293,7 +304,6 @@ class SlidingSyncHandler:
# to record rooms as having updates even if there might not actually
# be anything new for the user (e.g. due to event filters, events
# having happened after the user left, etc).
if from_token:
# The set of rooms that the client (may) care about, but aren't
# in any list range (or subscribed to).
@@ -305,15 +315,24 @@ class SlidingSyncHandler:
# TODO: Replace this with something faster. When we land the
# sliding sync tables that record the most recent event
# positions we can use that.
unsent_room_ids: StrCollection
if await self.store.have_finished_sliding_sync_background_jobs():
unsent_room_ids = await (
self.store.get_rooms_that_have_updates_since_sliding_sync_table(
room_ids=missing_rooms,
from_key=from_token.stream_token.room_key,
)
)
else:
missing_event_map_by_room = (
await self.store.get_room_events_stream_for_rooms(
room_ids=missing_rooms,
from_key=to_token.room_key,
to_key=from_token.stream_token.room_key,
limit=1,
)
)
unsent_room_ids = list(missing_event_map_by_room)
new_connection_state.rooms.record_unsent_rooms(
unsent_room_ids, from_token.stream_token.room_key
@@ -443,13 +462,11 @@ class SlidingSyncHandler:
to_token=to_token,
)
events = await self.store.get_events_as_list(list(state_ids.values()))
state_map = {}
for event in events:
state_map[(event.type, event.state_key)] = event
return state_map
@@ -495,6 +512,26 @@ class SlidingSyncHandler:
room_sync_config.timeline_limit,
)
# Handle state resets. For example, if we see
# `room_membership_for_user_at_to_token.event_id=None and
# room_membership_for_user_at_to_token.membership is not None`, we should
# indicate to the client that a state reset happened. Perhaps we should indicate
# this by setting `initial: True` and empty `required_state: []`.
state_reset_out_of_room = False
if (
room_membership_for_user_at_to_token.event_id is None
and room_membership_for_user_at_to_token.membership is not None
):
# We only expect the `event_id` to be `None` if you've been state reset out
# of the room (meaning you're no longer in the room). We could put this as
# part of the if-statement above but we want to handle every case where
# `event_id` is `None`.
assert room_membership_for_user_at_to_token.membership is Membership.LEAVE
state_reset_out_of_room = True
prev_room_sync_config = previous_connection_state.room_configs.get(room_id)
# Determine whether we should limit the timeline to the token range.
#
# We should return historical messages (before token range) in the
@@ -523,11 +560,10 @@ class SlidingSyncHandler:
# or `limited` mean for clients that interpret them correctly. In future this
# behavior is almost certainly going to change.
#
from_bound = None
initial = True
ignore_timeline_bound = False
if from_token and not newly_joined and not state_reset_out_of_room:
room_status = previous_connection_state.rooms.have_sent_room(room_id)
if room_status.status == HaveSentRoomFlag.LIVE:
from_bound = from_token.stream_token.room_key
@@ -544,7 +580,6 @@ class SlidingSyncHandler:
log_kv({"sliding_sync.room_status": room_status})
if prev_room_sync_config is not None:
# Check if the timeline limit has increased, if so ignore the
# timeline bound and record the change (see "XXX: Odd behavior"
@@ -555,8 +590,6 @@ class SlidingSyncHandler:
):
ignore_timeline_bound = True
log_kv(
{
"sliding_sync.from_bound": from_bound,
@@ -732,12 +765,6 @@ class SlidingSyncHandler:
stripped_state.append(strip_event(invite_or_knock_event))
# Get the changes to current state in the token range from the
# `current_state_delta_stream` table.
#
@@ -856,6 +883,14 @@ class SlidingSyncHandler:
#
# Calculate the `StateFilter` based on the `required_state` for the room
required_state_filter = StateFilter.none()
# The requested `required_state_map` with the lazy membership expanded and
# `$ME` replaced with the user's ID. This allows us to see what membership we've
# sent down to the client in the next request.
#
# Make a copy so we can modify it. Still need to be careful to make a copy of
# the state key sets if we want to add/remove from them. We could make a deep
# copy but this saves us some work.
expanded_required_state_map = dict(room_sync_config.required_state_map)
if room_membership_for_user_at_to_token.membership not in (
Membership.INVITE,
Membership.KNOCK,
@@ -921,21 +956,48 @@ class SlidingSyncHandler:
):
lazy_load_room_members = True
# Everyone in the timeline is relevant
#
# FIXME: We probably also care about invite, ban, kick, targets, etc
# but the spec only mentions "senders".
timeline_membership: Set[str] = set()
if timeline_events is not None:
for timeline_event in timeline_events:
timeline_membership.add(timeline_event.sender)
# Update the required state filter so we pick up the new
# membership
for user_id in timeline_membership:
required_state_types.append(
(EventTypes.Member, user_id)
)
# Add an explicit entry for each user in the timeline
#
# Make a new set or copy of the state key set so we can
# modify it without affecting the original
# `required_state_map`
expanded_required_state_map[EventTypes.Member] = (
expanded_required_state_map.get(
EventTypes.Member, set()
)
| timeline_membership
)
elif state_key == StateValues.ME:
num_others += 1
required_state_types.append((state_type, user.to_string()))
# Replace `$ME` with the user's ID so we can deduplicate
# when someone requests the same state with `$ME` or with
# their user ID.
#
# Make a new set or copy of the state key set so we can
# modify it without affecting the original
# `required_state_map`
expanded_required_state_map[EventTypes.Member] = (
expanded_required_state_map.get(
EventTypes.Member, set()
)
| {user.to_string()}
)
else:
num_others += 1
required_state_types.append((state_type, state_key))
@@ -976,6 +1038,10 @@ class SlidingSyncHandler:
include_others=required_state_filter.include_others,
)
# The required state map to store in the room sync config, if it has
# changed.
changed_required_state_map: Optional[Mapping[str, AbstractSet[str]]] = None
# We can return all of the state that was requested if this was the first
# time we've sent the room down this connection.
room_state: StateMap[EventBase] = {}
@@ -989,6 +1055,29 @@ class SlidingSyncHandler:
else:
assert from_bound is not None
if prev_room_sync_config is not None:
# Check if there are any changes to the required state config
# that we need to handle.
changed_required_state_map, added_state_filter = (
_required_state_changes(
user.to_string(),
prev_required_state_map=prev_room_sync_config.required_state_map,
request_required_state_map=expanded_required_state_map,
state_deltas=room_state_delta_id_map,
)
)
if added_state_filter:
# Some state entries got added, so we pull out the current
# state for them. If we don't do this we'd only send down new deltas.
state_ids = await self.get_current_state_ids_at(
room_id=room_id,
room_membership_for_user_at_to_token=room_membership_for_user_at_to_token,
state_filter=added_state_filter,
to_token=to_token,
)
room_state_delta_id_map.update(state_ids)
events = await self.store.get_events(
state_filter.filter_state(room_state_delta_id_map).values()
)
@@ -1036,25 +1125,66 @@ class SlidingSyncHandler:
)
)
# Figure out the last bump event in the room. If the bump stamp hasn't
# changed we omit it from the response.
bump_stamp = None
always_return_bump_stamp = (
# We use the membership event position for any non-join
room_membership_for_user_at_to_token.membership != Membership.JOIN
# We didn't fetch any timeline events but we should still check for
# a bump_stamp that might be somewhere
or limited is None
# There might be a bump event somewhere before the timeline events
# that we fetched, that we didn't previously send down
or limited is True
# Always give the client some frame of reference if this is the
# first time they are seeing the room down the connection
or initial
)
# If we're joined to the room, we need to find the last bump event before the
# `to_token`
if room_membership_for_user_at_to_token.membership == Membership.JOIN:
# Try and get a bump stamp
new_bump_stamp = await self._get_bump_stamp(
room_id,
to_token,
timeline_events,
check_outside_timeline=always_return_bump_stamp,
)
if new_bump_stamp is not None:
bump_stamp = new_bump_stamp
if bump_stamp is None and always_return_bump_stamp:
# By default, just choose the membership event position for any non-join membership
bump_stamp = room_membership_for_user_at_to_token.event_pos.stream
if bump_stamp is not None and bump_stamp < 0:
# We never want to send down negative stream orderings, as you can't
# sensibly compare positive and negative stream orderings (they have
# different meanings).
#
# A negative bump stamp here can only happen if the stream ordering
# of the membership event is negative (and there are no further bump
# stamps), which can happen if the server leaves and deletes a room,
# and then rejoins it.
#
# To deal with this, we just set the bump stamp to zero, which will
# shove this room to the bottom of the list. This is OK as the
# moment a new message happens in the room it will get put into a
# sensible order again.
bump_stamp = 0
room_sync_required_state_map_to_persist: Mapping[str, AbstractSet[str]] = (
expanded_required_state_map
)
if changed_required_state_map:
room_sync_required_state_map_to_persist = changed_required_state_map
# Record the `room_sync_config` if we're `ignore_timeline_bound` (which means
# that the `timeline_limit` has increased)
unstable_expanded_timeline = False
if ignore_timeline_bound:
# FIXME: We signal the fact that we're sending down more events to
# the client by setting `unstable_expanded_timeline` to true (see
@@ -1063,7 +1193,7 @@ class SlidingSyncHandler:
new_connection_state.room_configs[room_id] = RoomSyncConfig(
timeline_limit=room_sync_config.timeline_limit,
required_state_map=room_sync_config.required_state_map,
required_state_map=room_sync_required_state_map_to_persist,
)
elif prev_room_sync_config is not None:
# If the result is `limited` then we need to record that the
@@ -1092,13 +1222,20 @@ class SlidingSyncHandler:
):
new_connection_state.room_configs[room_id] = RoomSyncConfig(
timeline_limit=room_sync_config.timeline_limit,
required_state_map=room_sync_required_state_map_to_persist,
)
elif changed_required_state_map is not None:
new_connection_state.room_configs[room_id] = RoomSyncConfig(
timeline_limit=room_sync_config.timeline_limit,
required_state_map=room_sync_required_state_map_to_persist,
)
else:
new_connection_state.room_configs[room_id] = RoomSyncConfig(
timeline_limit=room_sync_config.timeline_limit,
required_state_map=room_sync_required_state_map_to_persist,
)
set_tag(SynapseTags.RESULT_PREFIX + "initial", initial)
@@ -1128,14 +1265,23 @@ class SlidingSyncHandler:
@trace
async def _get_bump_stamp(
self,
room_id: str,
to_token: StreamToken,
timeline: List[EventBase],
check_outside_timeline: bool,
) -> Optional[int]:
"""Get a bump stamp for the room, if we have a bump event
"""Get a bump stamp for the room, if we have a bump event and it has
changed.
Args:
room_id
to_token: The upper bound of token to return
timeline: The list of events we have fetched.
check_outside_timeline: Whether we need to check for bump stamp for
events before the timeline if we didn't find a bump stamp in
the timeline events.
"""
# First check the timeline events we're returning to see if one of
@@ -1155,6 +1301,11 @@ class SlidingSyncHandler:
if new_bump_stamp > 0:
return new_bump_stamp
if not check_outside_timeline:
# If we are not a limited sync, then we know the bump stamp can't
# have changed.
return None
# We can quickly query for the latest bump event in the room using the
# sliding sync tables.
latest_room_bump_stamp = await self.store.get_latest_bump_stamp_for_room(
@@ -1171,8 +1322,8 @@ class SlidingSyncHandler:
# `SCHEMA_COMPAT_VERSION` and run the foreground update for
# `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots`
# (tracked by https://github.com/element-hq/synapse/issues/17623)
latest_room_bump_stamp is None
and await self.store.have_finished_sliding_sync_background_jobs()
):
return None
@@ -1214,3 +1365,231 @@ class SlidingSyncHandler:
return new_bump_event_pos.stream
return None
def _required_state_changes(
user_id: str,
*,
prev_required_state_map: Mapping[str, AbstractSet[str]],
request_required_state_map: Mapping[str, AbstractSet[str]],
state_deltas: StateMap[str],
) -> Tuple[Optional[Mapping[str, AbstractSet[str]]], StateFilter]:
"""Calculates the changes between the required state room config from the
previous requests compared with the current request.
This does two things. First, it calculates if we need to update the room
config due to changes to required state. Secondly, it works out which state
entries we need to pull from current state and return due to the state entry
now appearing in the required state when it previously wasn't (on top of the
state deltas).
This function tries to ensure to handle the case where a state entry is
added, removed and then added again to the required state. In that case we
only want to re-send that entry down sync if it has changed.
Returns:
A 2-tuple of updated required state config (or None if there is no update)
and the state filter to use to fetch extra current state that we need to
return.
"""
if prev_required_state_map == request_required_state_map:
# There has been no change. Return immediately.
return None, StateFilter.none()
prev_wildcard = prev_required_state_map.get(StateValues.WILDCARD, set())
request_wildcard = request_required_state_map.get(StateValues.WILDCARD, set())
# If we were previously fetching everything ("*", "*"), always update the effective
# room required state config to match the request. And since we were previously
# already fetching everything, we don't have to fetch anything now that they've
# narrowed.
if StateValues.WILDCARD in prev_wildcard:
return request_required_state_map, StateFilter.none()
# If an event type wildcard has been added or removed we don't try and do
# anything fancy, and instead always update the effective room required
# state config to match the request.
if request_wildcard - prev_wildcard:
# Some keys were added, so we need to fetch everything
return request_required_state_map, StateFilter.all()
if prev_wildcard - request_wildcard:
# Keys were only removed, so we don't have to fetch everything.
return request_required_state_map, StateFilter.none()
# Contains updates to the required state map compared with the previous room
# config. This has the same format as `RoomSyncConfig.required_state`
changes: Dict[str, AbstractSet[str]] = {}
# The set of types/state keys that we need to fetch and return to the
# client. Passed to `StateFilter.from_types(...)`
added: List[Tuple[str, Optional[str]]] = []
# Convert the list of state deltas to map from type to state_keys that have
# changed.
changed_types_to_state_keys: Dict[str, Set[str]] = {}
for event_type, state_key in state_deltas:
changed_types_to_state_keys.setdefault(event_type, set()).add(state_key)
# First we calculate what, if anything, has been *added*.
for event_type in (
prev_required_state_map.keys() | request_required_state_map.keys()
):
old_state_keys = prev_required_state_map.get(event_type, set())
request_state_keys = request_required_state_map.get(event_type, set())
changed_state_keys = changed_types_to_state_keys.get(event_type, set())
if old_state_keys == request_state_keys:
# No change to this type
continue
if not request_state_keys - old_state_keys:
# Nothing *added*, so we skip. Removals happen below.
continue
# We only remove state keys from the effective state if they've been
# removed from the request *and* the state has changed. This ensures
# that if a client removes and then re-adds a state key, we only send
# down the associated current state event if it's changed (rather than
# sending down the same event twice).
invalidated_state_keys = (
old_state_keys - request_state_keys
) & changed_state_keys
# Figure out which state keys we should remember sending down the connection
inheritable_previous_state_keys = (
# Retain the previous state_keys that we've sent down before.
# Wildcard and lazy state keys are not sticky from previous requests.
(old_state_keys - {StateValues.WILDCARD, StateValues.LAZY})
- invalidated_state_keys
)
# Always update changes to include the newly added keys (we've expanded the set
# of state keys), use the new requested set with whatever hasn't been
# invalidated from the previous set.
changes[event_type] = request_state_keys | inheritable_previous_state_keys
# Limit the number of state_keys we should remember sending down the connection
# for each (room_id, user_id). We don't want to store and pull out too much data
# in the database. This is a happy-medium between remembering nothing and
# everything. We can avoid sending redundant state down the connection most of
# the time given that most rooms don't have 100 members anyway and it takes a
# while to cycle through 100 members.
#
# Only remember up to (MAX_NUMBER_PREVIOUS_STATE_KEYS_TO_REMEMBER)
if len(changes[event_type]) > MAX_NUMBER_PREVIOUS_STATE_KEYS_TO_REMEMBER:
# Reset back to only the requested state keys
changes[event_type] = request_state_keys
# Skip if there isn't any room to fill in the rest with previous state keys
if len(request_state_keys) < MAX_NUMBER_PREVIOUS_STATE_KEYS_TO_REMEMBER:
# Fill the rest with previous state_keys. Ideally, we could sort
# these by recency but it's just a set so just pick an arbitrary
# subset (good enough).
changes[event_type] = changes[event_type] | set(
itertools.islice(
inheritable_previous_state_keys,
# Just taking the difference isn't perfect as there could be
# overlap in the keys between the requested and previous but we
# will decide to just take the easy route for now and avoid
# additional set operations to figure it out.
MAX_NUMBER_PREVIOUS_STATE_KEYS_TO_REMEMBER
- len(request_state_keys),
)
)
if StateValues.WILDCARD in old_state_keys:
# We were previously fetching everything for this type, so we don't need to
# fetch anything new.
continue
# Record the new state keys to fetch for this type.
if StateValues.WILDCARD in request_state_keys:
# If we have added a wildcard then we always just fetch everything.
added.append((event_type, None))
else:
for state_key in request_state_keys - old_state_keys:
if state_key == StateValues.ME:
added.append((event_type, user_id))
elif state_key == StateValues.LAZY:
# We handle lazy loading separately (outside this function),
# so don't need to explicitly add anything here.
#
# LAZY values should also be ignored for event types that are
# not membership.
pass
else:
added.append((event_type, state_key))
added_state_filter = StateFilter.from_types(added)
# Figure out what changes we need to apply to the effective required state
# config.
for event_type, changed_state_keys in changed_types_to_state_keys.items():
old_state_keys = prev_required_state_map.get(event_type, set())
request_state_keys = request_required_state_map.get(event_type, set())
if old_state_keys == request_state_keys:
# No change.
continue
# If we see the `user_id` as a state_key, also add "$ME" to the list of state
# that has changed to account for people requesting `required_state` with `$ME`
# or their user ID.
if user_id in changed_state_keys:
changed_state_keys.add(StateValues.ME)
# We only remove state keys from the effective state if they've been
# removed from the request *and* the state has changed. This ensures
# that if a client removes and then re-adds a state key, we only send
# down the associated current state event if it's changed (rather than
# sending down the same event twice).
invalidated_state_keys = (
old_state_keys - request_state_keys
) & changed_state_keys
# We've expanded the set of state keys, ... (already handled above)
if request_state_keys - old_state_keys:
continue
old_state_key_wildcard = StateValues.WILDCARD in old_state_keys
request_state_key_wildcard = StateValues.WILDCARD in request_state_keys
if old_state_key_wildcard != request_state_key_wildcard:
# If a state_key wildcard has been added or removed, we always update the
# effective room required state config to match the request.
changes[event_type] = request_state_keys
continue
if event_type == EventTypes.Member:
old_state_key_lazy = StateValues.LAZY in old_state_keys
request_state_key_lazy = StateValues.LAZY in request_state_keys
if old_state_key_lazy != request_state_key_lazy:
# If a "$LAZY" has been added or removed we always update the effective room
# required state config to match the request.
changes[event_type] = request_state_keys
continue
# At this point there are no wildcards and no additions to the set of
# state keys requested, only deletions.
#
# We only remove state keys from the effective state if they've been
# removed from the request *and* the state has changed. This ensures
# that if a client removes and then re-adds a state key, we only send
# down the associated current state event if it's changed (rather than
# sending down the same event twice).
if invalidated_state_keys:
changes[event_type] = old_state_keys - invalidated_state_keys
if changes:
# Update the required state config based on the changes.
new_required_state_map = dict(prev_required_state_map)
for event_type, state_keys in changes.items():
if state_keys:
new_required_state_map[event_type] = state_keys
else:
# Remove entries with empty state keys.
new_required_state_map.pop(event_type, None)
return new_required_state_map, added_state_filter
else:
return None, added_state_filter
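A small worked example of the add-then-fetch behaviour the docstring describes (the event type and user IDs are made up): a newly requested state key is pulled from current state via the returned filter, while keys that were removed but whose state did not change are remembered, so the same event is not re-sent if they are re-added.

prev = {"m.room.member": {"@a:example.com"}}
request = {"m.room.member": {"@a:example.com", "@b:example.com"}}
state_deltas = {("m.room.member", "@b:example.com"): "$some_event_id"}

added = request["m.room.member"] - prev["m.room.member"]  # {"@b:example.com"}
# _required_state_changes would return an updated map including @b and a
# StateFilter selecting ("m.room.member", "@b:example.com"), so the current
# membership event for @b is returned now rather than only future deltas.
print(added)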

View File

@@ -19,7 +19,6 @@ from typing import (
AbstractSet,
ChainMap,
Dict,
Mapping,
MutableMapping,
Optional,
@@ -50,7 +49,10 @@ from synapse.types.handlers.sliding_sync import (
SlidingSyncConfig,
SlidingSyncResult,
)
from synapse.util.async_helpers import (
concurrently_execute,
gather_optional_coroutines,
)
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -98,27 +100,29 @@ class SlidingSyncExtensionHandler:
if sync_config.extensions is None:
return SlidingSyncResult.Extensions()
to_device_coro = None
if sync_config.extensions.to_device is not None:
to_device_coro = self.get_to_device_extension_response(
sync_config=sync_config,
to_device_request=sync_config.extensions.to_device,
to_token=to_token,
)
e2ee_coro = None
if sync_config.extensions.e2ee is not None:
e2ee_coro = self.get_e2ee_extension_response(
sync_config=sync_config,
e2ee_request=sync_config.extensions.e2ee,
to_token=to_token,
from_token=from_token,
)
account_data_coro = None
if sync_config.extensions.account_data is not None:
account_data_coro = self.get_account_data_extension_response(
sync_config=sync_config,
previous_connection_state=previous_connection_state,
new_connection_state=new_connection_state,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
account_data_request=sync_config.extensions.account_data,
@@ -126,9 +130,9 @@ class SlidingSyncExtensionHandler:
from_token=from_token,
)
receipts_coro = None
if sync_config.extensions.receipts is not None:
receipts_coro = self.get_receipts_extension_response(
sync_config=sync_config,
previous_connection_state=previous_connection_state,
new_connection_state=new_connection_state,
@@ -140,9 +144,9 @@ class SlidingSyncExtensionHandler:
from_token=from_token,
)
typing_coro = None
if sync_config.extensions.typing is not None:
typing_coro = self.get_typing_extension_response(
sync_config=sync_config,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
@@ -152,6 +156,20 @@ class SlidingSyncExtensionHandler:
from_token=from_token,
)
(
to_device_response,
e2ee_response,
account_data_response,
receipts_response,
typing_response,
) = await gather_optional_coroutines(
to_device_coro,
e2ee_coro,
account_data_coro,
receipts_coro,
typing_coro,
)
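The refactor builds the five extension responses as optional coroutines and awaits them together rather than sequentially. `gather_optional_coroutines` lives in `synapse.util.async_helpers`; a minimal asyncio analogue of the pattern (a sketch, not Synapse's implementation) looks like this:

import asyncio
from typing import Any, Coroutine, Optional, Tuple

async def gather_optional(
    *coros: Optional[Coroutine[Any, Any, Any]],
) -> Tuple[Any, ...]:
    # Run only the non-None coroutines concurrently, but keep every
    # position in the result tuple (None stays None).
    results = iter(await asyncio.gather(*(c for c in coros if c is not None)))
    return tuple(next(results) if c is not None else None for c in coros)

async def main() -> None:
    async def work(x: int) -> int:
        await asyncio.sleep(0)
        return x

    print(await gather_optional(work(1), None, work(3)))  # (1, None, 3)

asyncio.run(main())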
return SlidingSyncResult.Extensions(
to_device=to_device_response,
e2ee=e2ee_response,
@@ -361,6 +379,8 @@ class SlidingSyncExtensionHandler:
async def get_account_data_extension_response(
self,
sync_config: SlidingSyncConfig,
previous_connection_state: "PerConnectionState",
new_connection_state: "MutablePerConnectionState",
actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: Set[str],
account_data_request: SlidingSyncConfig.Extensions.AccountDataExtension,
@@ -425,15 +445,7 @@ class SlidingSyncExtensionHandler:
# Fetch room account data
#
# Mapping from room_id to mapping of `type` to `content` of room
# account data events.
account_data_by_room_map: MutableMapping[str, Mapping[str, JsonMapping]] = {}
relevant_room_ids = self.find_relevant_room_ids_for_extension(
requested_lists=account_data_request.lists,
requested_room_ids=account_data_request.rooms,
@@ -441,9 +453,43 @@ class SlidingSyncExtensionHandler:
actual_room_ids=actual_room_ids,
)
if len(relevant_room_ids) > 0:
# We need to handle the different cases depending on if we have sent
# down account data previously or not, so we split the relevant
# rooms up into different collections based on status.
live_rooms = set()
previously_rooms: Dict[str, int] = {}
initial_rooms = set()
for room_id in relevant_room_ids:
if not from_token:
initial_rooms.add(room_id)
continue
room_status = previous_connection_state.account_data.have_sent_room(
room_id
)
if room_status.status == HaveSentRoomFlag.LIVE:
live_rooms.add(room_id)
elif room_status.status == HaveSentRoomFlag.PREVIOUSLY:
assert room_status.last_token is not None
previously_rooms[room_id] = room_status.last_token
elif room_status.status == HaveSentRoomFlag.NEVER:
initial_rooms.add(room_id)
else:
assert_never(room_status.status)
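The triage above decides how much account data each room needs; restated as a small standalone function (flag names mirror the code, the string returns are illustrative):

def classify(room_status_flag: str, has_from_token: bool) -> str:
    # No from_token always means a full snapshot, as does NEVER.
    if not has_from_token or room_status_flag == "NEVER":
        return "initial"     # fetch the room's full account data
    if room_status_flag == "LIVE":
        return "live"        # covered by the bulk since-from_token query
    if room_status_flag == "PREVIOUSLY":
        return "previously"  # fetch updates since the room's own last_token
    raise AssertionError(room_status_flag)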
# We fetch all room account data since the from_token. This is so
# that we can record which rooms have updates that haven't been sent
# down.
#
# Mapping from room_id to mapping of `type` to `content` of room account
# data events.
all_updates_since_the_from_token: Mapping[
str, Mapping[str, JsonMapping]
] = {}
if from_token is not None:
# TODO: This should take into account the `from_token` and `to_token`
all_updates_since_the_from_token = (
await self.store.get_updated_room_account_data_for_user(
user_id, from_token.stream_token.account_data_key
)
@@ -456,58 +502,109 @@ class SlidingSyncExtensionHandler:
user_id, from_token.stream_token.account_data_key
)
for room_id, tags in tags_by_room.items():
all_updates_since_the_from_token.setdefault(room_id, {})[
AccountDataTypes.TAG
] = {"tags": tags}
# For live rooms we just get the updates from `all_updates_since_the_from_token`
if live_rooms:
for room_id in all_updates_since_the_from_token.keys() & live_rooms:
account_data_by_room_map[room_id] = (
all_updates_since_the_from_token[room_id]
)
# For previously and initial rooms we query each room individually.
if previously_rooms or initial_rooms:
async def handle_previously(room_id: str) -> None:
# Either get updates or all account data in the room
# depending on if the room state is PREVIOUSLY or NEVER.
previous_token = previously_rooms.get(room_id)
if previous_token is not None:
room_account_data = await (
self.store.get_updated_room_account_data_for_user_for_room(
user_id=user_id,
room_id=room_id,
from_stream_id=previous_token,
to_stream_id=to_token.account_data_key,
)
)
# Add room tags
changed = await self.store.has_tags_changed_for_room(
user_id=user_id,
room_id=room_id,
from_stream_id=previous_token,
to_stream_id=to_token.account_data_key,
)
if changed:
# XXX: Ideally, this should take into account the `to_token`
# and return the set of tags at that time but we don't track
# changes to tags so we just have to return all tags for the
# room.
immutable_tag_map = await self.store.get_tags_for_room(
user_id, room_id
)
room_account_data[AccountDataTypes.TAG] = {
"tags": immutable_tag_map
}
# Only add an entry if there were any updates.
if room_account_data:
account_data_by_room_map[room_id] = room_account_data
else:
# TODO: This should take into account the `to_token`
immutable_room_account_data = (
await self.store.get_account_data_for_room(user_id, room_id)
)
# Add room tags
#
# XXX: Ideally, this should take into account the `to_token`
# and return the set of tags at that time but we don't track
# changes to tags so we just have to return all tags for the
# room.
immutable_tag_map = await self.store.get_tags_for_room(
user_id, room_id
)
account_data_by_room_map[room_id] = ChainMap(
{AccountDataTypes.TAG: {"tags": immutable_tag_map}}
if immutable_tag_map
else {},
# Cast is safe because `ChainMap` only mutates the top-most map,
# see https://github.com/python/typeshed/issues/8430
cast(
MutableMapping[str, JsonMapping],
immutable_room_account_data,
),
)
# We handle these rooms concurrently to speed it up.
await concurrently_execute(
handle_previously,
previously_rooms.keys() | initial_rooms,
limit=20,
)
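`concurrently_execute(..., limit=20)` caps how many per-room fetches run at once. A rough asyncio equivalent of that bounded fan-out, assuming nothing about Synapse's actual helper:

import asyncio
from typing import Awaitable, Callable, Iterable, TypeVar

T = TypeVar("T")

async def bounded_run(
    func: Callable[[T], Awaitable[None]], args: Iterable[T], limit: int
) -> None:
    sem = asyncio.Semaphore(limit)

    async def one(arg: T) -> None:
        async with sem:  # at most `limit` calls in flight at a time
            await func(arg)

    await asyncio.gather(*(one(a) for a in args))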
# Now record which rooms are now up to date, and which rooms have
# pending updates to send.
new_connection_state.account_data.record_sent_rooms(previously_rooms.keys())
new_connection_state.account_data.record_sent_rooms(initial_rooms)
missing_updates = (
all_updates_since_the_from_token.keys() - relevant_room_ids
)
if missing_updates:
# If we have missing updates then we must have had a from_token.
assert from_token is not None
new_connection_state.account_data.record_unsent_rooms(
missing_updates, from_token.stream_token.account_data_key
)
return SlidingSyncResult.Extensions.AccountDataExtension(
global_account_data_map=global_account_data_map,
account_data_by_room_map=account_data_by_room_map,
)
@trace
@@ -684,9 +781,10 @@ class SlidingSyncExtensionHandler:
room_id_to_receipt_map[room_id] = {"type": type, "content": content}
# Update the per-connection state to track which rooms we have sent
# all the receipts for.
new_connection_state.receipts.record_sent_rooms(previously_rooms.keys())
new_connection_state.receipts.record_sent_rooms(initial_rooms)
if from_token:
# Now find the set of rooms that may have receipts that we're not sending

View File

@@ -40,6 +40,7 @@ from synapse.api.constants import (
EventTypes,
Membership,
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.events import StrippedStateEvent
from synapse.events.utils import parse_stripped_state_event
from synapse.logging.opentracing import start_active_span, trace
@@ -55,7 +56,6 @@ from synapse.storage.roommember import (
)
from synapse.types import (
MutableStateMap,
PersistedEventPosition,
RoomStreamToken,
StateMap,
StrCollection,
@@ -80,6 +80,12 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
class Sentinel(enum.Enum):
# defining a sentinel in this way allows mypy to correctly handle the
# type of a dictionary lookup and subsequent type narrowing.
UNSET_SENTINEL = object()
# Helper definition for the types that we might return. We do this to avoid
# copying data between types (which can be expensive for many rooms).
RoomsForUserType = Union[RoomsForUserStateReset, RoomsForUser, RoomsForUserSlidingSync]
@@ -117,11 +123,18 @@ class SlidingSyncInterestedRooms:
newly_left_rooms: AbstractSet[str]
dm_room_ids: AbstractSet[str]
@staticmethod
def empty() -> "SlidingSyncInterestedRooms":
return SlidingSyncInterestedRooms(
lists={},
relevant_room_map={},
relevant_rooms_to_send_map={},
all_rooms=set(),
room_membership_for_user_map={},
newly_joined_rooms=set(),
newly_left_rooms=set(),
dm_room_ids=set(),
)
def filter_membership_for_sync(
@@ -181,6 +194,14 @@ class SlidingSyncRoomLists:
from_token: Optional[StreamToken],
) -> SlidingSyncInterestedRooms:
"""Fetch the set of rooms that match the request"""
has_lists = sync_config.lists is not None and len(sync_config.lists) > 0
has_room_subscriptions = (
sync_config.room_subscriptions is not None
and len(sync_config.room_subscriptions) > 0
)
if not has_lists and not has_room_subscriptions:
return SlidingSyncInterestedRooms.empty()
if await self.store.have_finished_sliding_sync_background_jobs():
return await self._compute_interested_rooms_new_tables(
@@ -220,19 +241,38 @@ class SlidingSyncRoomLists:
# include rooms that are outside the list ranges.
all_rooms: Set[str] = set()
# Note: this won't include rooms the user has left themselves. We add back
# `newly_left` rooms below. This is more efficient than fetching all rooms and
# then filtering out the old left rooms.
room_membership_for_user_map = await self.store.get_sliding_sync_rooms_for_user(
user_id
)
# Remove invites from ignored users
ignored_users = await self.store.ignored_users(user_id)
if ignored_users:
# TODO: It would be nice to avoid these copies
room_membership_for_user_map = dict(room_membership_for_user_map)
# Make a copy so we don't run into an error: `dictionary changed size during
# iteration`, when we remove items
for room_id in list(room_membership_for_user_map.keys()):
room_for_user_sliding_sync = room_membership_for_user_map[room_id]
if (
room_for_user_sliding_sync.membership == Membership.INVITE
and room_for_user_sliding_sync.sender in ignored_users
):
room_membership_for_user_map.pop(room_id, None)
changes = await self._get_rewind_changes_to_current_membership_to_token(
sync_config.user, room_membership_for_user_map, to_token=to_token
)
if changes:
# TODO: It would be nice to avoid these copies
room_membership_for_user_map = dict(room_membership_for_user_map)
for room_id, change in changes.items():
if change is None:
# Remove rooms that the user joined after the `to_token`
room_membership_for_user_map.pop(room_id)
room_membership_for_user_map.pop(room_id, None)
continue
existing_room = room_membership_for_user_map.get(room_id)
@@ -245,36 +285,11 @@ class SlidingSyncRoomLists:
event_id=change.event_id,
event_pos=change.event_pos,
room_version_id=change.room_version_id,
# We keep the state of the room though
has_known_state=existing_room.has_known_state,
room_type=existing_room.room_type,
is_encrypted=existing_room.is_encrypted,
)
(
newly_joined_room_ids,
@@ -284,44 +299,88 @@ class SlidingSyncRoomLists:
)
dm_room_ids = await self._get_dm_rooms_for_user(user_id)
# Add back `newly_left` rooms (rooms left in the from -> to token range).
#
# We do this because `get_sliding_sync_rooms_for_user(...)` doesn't include
# rooms that the user left themselves as it's more efficient to add them back
# here than to fetch all rooms and then filter out the old left rooms. The user
# only leaves a room once in a blue moon so this barely needs to run.
#
missing_newly_left_rooms = (
newly_left_room_map.keys() - room_membership_for_user_map.keys()
)
if missing_newly_left_rooms:
# TODO: It would be nice to avoid these copies
room_membership_for_user_map = dict(room_membership_for_user_map)
for room_id in missing_newly_left_rooms:
newly_left_room_for_user = newly_left_room_map[room_id]
# This should be a given
assert newly_left_room_for_user.membership == Membership.LEAVE
# Add back `newly_left` rooms
#
# Check for membership and state in the Sliding Sync tables as it's just
# another membership
newly_left_room_for_user_sliding_sync = (
await self.store.get_sliding_sync_room_for_user(user_id, room_id)
)
# If the membership exists, it's just a normal user left the room on
# their own
if newly_left_room_for_user_sliding_sync is not None:
room_membership_for_user_map[room_id] = (
newly_left_room_for_user_sliding_sync
)
change = changes.get(room_id)
if change is not None:
# Update room membership events to the point in time of the `to_token`
room_membership_for_user_map[room_id] = RoomsForUserSlidingSync(
room_id=room_id,
sender=change.sender,
membership=change.membership,
event_id=change.event_id,
event_pos=change.event_pos,
room_version_id=change.room_version_id,
# We keep the state of the room though
has_known_state=newly_left_room_for_user_sliding_sync.has_known_state,
room_type=newly_left_room_for_user_sliding_sync.room_type,
is_encrypted=newly_left_room_for_user_sliding_sync.is_encrypted,
)
# If we are `newly_left` from the room but can't find any membership,
# then we have been "state reset" out of the room
else:
# Get the state at the time. We can't read from the Sliding Sync
# tables because the user has no membership in the room according to
# the state (thanks to the state reset).
#
# Note: `room_type` never changes, so we can just get current room
# type
room_type = await self.store.get_room_type(room_id)
has_known_state = room_type is not ROOM_UNKNOWN_SENTINEL
if isinstance(room_type, StateSentinel):
room_type = None
# Get the encryption status at the time of the token
is_encrypted = await self.get_is_encrypted_for_room_at_token(
room_id,
newly_left_room_for_user.event_pos.to_room_stream_token(),
)
room_membership_for_user_map[room_id] = RoomsForUserSlidingSync(
room_id=room_id,
sender=newly_left_room_for_user.sender,
membership=newly_left_room_for_user.membership,
event_id=newly_left_room_for_user.event_id,
event_pos=newly_left_room_for_user.event_pos,
room_version_id=newly_left_room_for_user.room_version_id,
has_known_state=has_known_state,
room_type=room_type,
is_encrypted=is_encrypted,
)
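The `ROOM_UNKNOWN_SENTINEL` / `StateSentinel` dance above distinguishes "we could not determine the state" from "the room genuinely has no room type". A minimal, self-contained sketch of the pattern (toy definitions, not the real Synapse types):

class StateSentinel:
    """Marker type for state values we could not resolve."""

ROOM_UNKNOWN_SENTINEL = StateSentinel()

def resolve_room_type(raw_room_type):
    # `has_known_state` is False only when we hit the sentinel; a plain
    # `None` means "state known, but no room type set in m.room.create".
    has_known_state = raw_room_type is not ROOM_UNKNOWN_SENTINEL
    if isinstance(raw_room_type, StateSentinel):
        raw_room_type = None
    return has_known_state, raw_room_type

assert resolve_room_type(ROOM_UNKNOWN_SENTINEL) == (False, None)
assert resolve_room_type(None) == (True, None)
assert resolve_room_type("m.space") == (True, "m.space")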
if sync_config.lists:
sync_room_map = {
room_id: room_membership_for_user
for room_id, room_membership_for_user in room_membership_for_user_map.items()
if filter_membership_for_sync(
user_id=user_id,
room_membership_for_user=room_membership_for_user,
newly_left=room_id in newly_left_room_map,
)
}
sync_room_map = room_membership_for_user_map
with start_active_span("assemble_sliding_window_lists"):
for list_key, list_config in sync_config.lists.items():
# Apply filters
@@ -330,6 +389,7 @@ class SlidingSyncRoomLists:
filtered_sync_room_map = await self.filter_rooms_using_tables(
user_id,
sync_room_map,
previous_connection_state,
list_config.filters,
to_token,
dm_room_ids,
@@ -357,8 +417,18 @@ class SlidingSyncRoomLists:
ops: List[SlidingSyncResult.SlidingWindowList.Operation] = []
if list_config.ranges:
if list_config.ranges == [(0, len(filtered_sync_room_map) - 1)]:
# If we are asking for the full range, we don't need to sort the list.
# Optimization: If we are asking for the full range, we don't
# need to sort the list.
if (
# We're looking for a single range that covers the entire list
len(list_config.ranges) == 1
# Range starts at 0
and list_config.ranges[0][0] == 0
# And the range extends to the end of the list or more. Each
# side is inclusive.
and list_config.ranges[0][1]
>= len(filtered_sync_room_map) - 1
):
sorted_room_info: List[RoomsForUserType] = list(
filtered_sync_room_map.values()
)
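The expanded predicate still covers over-long client ranges; a quick check of the inclusive-bounds logic (toy values):

ranges = [(0, 19)]          # single range requested by the client
num_rooms = 15              # rooms in filtered_sync_room_map
skip_sort = (
    len(ranges) == 1
    and ranges[0][0] == 0
    # range ends are inclusive, so overshooting the list still counts
    and ranges[0][1] >= num_rooms - 1
)
assert skip_sort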
@@ -371,6 +441,10 @@ class SlidingSyncRoomLists:
Dict[str, RoomsForUserType], filtered_sync_room_map
),
to_token,
# We only need to sort the rooms up to the end
# of the largest range. Both sides of the range are
# inclusive, so we `+ 1`.
limit=max(range[1] + 1 for range in list_config.ranges),
)
for range in list_config.ranges:
@@ -413,21 +487,38 @@ class SlidingSyncRoomLists:
)
lists[list_key] = SlidingSyncResult.SlidingWindowList(
count=len(sorted_room_info),
count=len(filtered_sync_room_map),
ops=ops,
)
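The `count` change matters because with `limit`-based sorting, `sorted_room_info` may now be shorter than the full match set; the total shown to the client must come from the unsorted map. Roughly (toy values):

filtered_sync_room_map = {f"!room{i}:hs": object() for i in range(100)}
sorted_room_info = sorted(filtered_sync_room_map)[:20]  # sort limited to top 20
count = len(filtered_sync_room_map)  # 100 -- the client sees the true total
assert count == 100 and len(sorted_room_info) == 20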
if sync_config.room_subscriptions:
with start_active_span("assemble_room_subscriptions"):
# TODO: It would be nice to avoid these copies
room_membership_for_user_map = dict(room_membership_for_user_map)
# Find which rooms are partially stated and may need to be filtered out
# depending on the `required_state` requested (see below).
partial_state_rooms = await self.store.get_partial_rooms()
# Fetch any rooms that we have not already fetched from the database.
subscription_sliding_sync_rooms = (
await self.store.get_sliding_sync_room_for_user_batch(
user_id,
sync_config.room_subscriptions.keys()
- room_membership_for_user_map.keys(),
)
)
room_membership_for_user_map.update(subscription_sliding_sync_rooms)
for (
room_id,
room_subscription,
) in sync_config.room_subscriptions.items():
if room_id not in room_membership_for_user_map:
# Check if we have a membership for the room, but didn't pull it out
# above. This could be e.g. a leave that we don't pull out by
# default.
current_room_entry = room_membership_for_user_map.get(room_id)
if not current_room_entry:
# TODO: Handle rooms the user isn't in.
continue
@@ -444,8 +535,6 @@ class SlidingSyncRoomLists:
if room_id in partial_state_rooms:
continue
all_rooms.add(room_id)
# Update our `relevant_room_map` with the room we're going to display
# and need to fetch more info about.
existing_room_sync_config = relevant_room_map.get(room_id)
@@ -460,7 +549,7 @@ class SlidingSyncRoomLists:
# Filtered subset of `relevant_room_map` for rooms that may have updates
# (in the event stream)
relevant_rooms_to_send_map = await self._filter_relevant_room_to_send(
relevant_rooms_to_send_map = await self._filter_relevant_rooms_to_send(
previous_connection_state, from_token, relevant_room_map
)
@@ -517,6 +606,7 @@ class SlidingSyncRoomLists:
filtered_sync_room_map = await self.filter_rooms(
sync_config.user,
sync_room_map,
previous_connection_state,
list_config.filters,
to_token,
dm_room_ids,
@@ -647,7 +737,7 @@ class SlidingSyncRoomLists:
# Filtered subset of `relevant_room_map` for rooms that may have updates
# (in the event stream)
relevant_rooms_to_send_map = await self._filter_relevant_room_to_send(
relevant_rooms_to_send_map = await self._filter_relevant_rooms_to_send(
previous_connection_state, from_token, relevant_room_map
)
@@ -662,7 +752,7 @@ class SlidingSyncRoomLists:
dm_room_ids=dm_room_ids,
)
async def _filter_relevant_room_to_send(
async def _filter_relevant_rooms_to_send(
self,
previous_connection_state: PerConnectionState,
from_token: Optional[StreamToken],
@@ -926,8 +1016,38 @@ class SlidingSyncRoomLists:
excluded_rooms=self.rooms_to_exclude_globally,
)
# If the user has never joined any rooms before, we can just return an empty list
if not room_for_user_list:
# We filter out unknown room versions before we try and load any
# metadata about the room. They shouldn't go down sync anyway, and their
# metadata may be in a broken state.
room_for_user_list = [
room_for_user
for room_for_user in room_for_user_list
if room_for_user.room_version_id in KNOWN_ROOM_VERSIONS
]
# Remove invites from ignored users
ignored_users = await self.store.ignored_users(user_id)
if ignored_users:
room_for_user_list = [
room_for_user
for room_for_user in room_for_user_list
if not (
room_for_user.membership == Membership.INVITE
and room_for_user.sender in ignored_users
)
]
(
newly_joined_room_ids,
newly_left_room_map,
) = await self._get_newly_joined_and_left_rooms(
user_id, to_token=to_token, from_token=from_token
)
# If the user has never joined any rooms before, we can just return an empty
# list. We also have to check the `newly_left_room_map` in case someone was
# state reset out of all of the rooms they were in.
if not room_for_user_list and not newly_left_room_map:
return {}, set(), set()
# Since we fetched the users room list at some point in time after the
@@ -945,30 +1065,22 @@ class SlidingSyncRoomLists:
else:
rooms_for_user[room_id] = change_room_for_user
(
newly_joined_room_ids,
newly_left_room_ids,
) = await self._get_newly_joined_and_left_rooms(
user_id, to_token=to_token, from_token=from_token
)
# Ensure we have entries for rooms that the user has been "state reset"
# out of. These are rooms that appear in the `newly_left_rooms` map but
# aren't in the `rooms_for_user` map.
for room_id, left_event_pos in newly_left_room_ids.items():
for room_id, newly_left_room_for_user in newly_left_room_map.items():
# If we already know about the room, it's not a state reset
if room_id in rooms_for_user:
continue
rooms_for_user[room_id] = RoomsForUserStateReset(
room_id=room_id,
event_id=None,
event_pos=left_event_pos,
membership=Membership.LEAVE,
sender=None,
room_version_id=await self.store.get_room_version_id(room_id),
)
# This should be true if it's a state reset
assert newly_left_room_for_user.membership is Membership.LEAVE
assert newly_left_room_for_user.event_id is None
assert newly_left_room_for_user.sender is None
return rooms_for_user, newly_joined_room_ids, set(newly_left_room_ids)
rooms_for_user[room_id] = newly_left_room_for_user
return rooms_for_user, newly_joined_room_ids, set(newly_left_room_map)
@trace
async def _get_newly_joined_and_left_rooms(
@@ -976,7 +1088,7 @@ class SlidingSyncRoomLists:
user_id: str,
to_token: StreamToken,
from_token: Optional[StreamToken],
) -> Tuple[AbstractSet[str], Mapping[str, PersistedEventPosition]]:
) -> Tuple[AbstractSet[str], Mapping[str, RoomsForUserStateReset]]:
"""Fetch the sets of rooms that the user newly joined or left in the
given token range.
@@ -985,11 +1097,18 @@ class SlidingSyncRoomLists:
"current memberships" of the user.
Returns:
A 2-tuple of newly joined room IDs and a map of newly left room
IDs to the event position the leave happened at.
A 2-tuple of newly joined room IDs and a map of newly_left room
IDs to the `RoomsForUserStateReset` entry.
We're using `RoomsForUserStateReset` but that doesn't necessarily mean the
user was state reset out of the rooms. It's just that the `event_id`/`sender`
are optional, and we can't tell the difference between the user leaving the
room normally as the last local participant and being state reset out of the
room. To actually check for a state reset, you need to check whether a
membership still exists in the room.
"""
newly_joined_room_ids: Set[str] = set()
newly_left_room_map: Dict[str, PersistedEventPosition] = {}
newly_left_room_map: Dict[str, RoomsForUserStateReset] = {}
# We need to figure out the
#
@@ -1060,8 +1179,13 @@ class SlidingSyncRoomLists:
# 1) Figure out newly_left rooms (> `from_token` and <= `to_token`).
if last_membership_change_in_from_to_range.membership == Membership.LEAVE:
# 1) Mark this room as `newly_left`
newly_left_room_map[room_id] = (
last_membership_change_in_from_to_range.event_pos
newly_left_room_map[room_id] = RoomsForUserStateReset(
room_id=room_id,
sender=last_membership_change_in_from_to_range.sender,
membership=Membership.LEAVE,
event_id=last_membership_change_in_from_to_range.event_id,
event_pos=last_membership_change_in_from_to_range.event_pos,
room_version_id=await self.store.get_room_version_id(room_id),
)
# 2) Figure out `newly_joined`
@@ -1505,6 +1629,7 @@ class SlidingSyncRoomLists:
self,
user: UserID,
sync_room_map: Dict[str, RoomsForUserType],
previous_connection_state: PerConnectionState,
filters: SlidingSyncConfig.SlidingSyncList.Filters,
to_token: StreamToken,
dm_room_ids: AbstractSet[str],
@@ -1690,14 +1815,33 @@ class SlidingSyncRoomLists:
)
}
# Keep rooms that the user has been state reset out of but that we previously
# sent down the connection. We want to make sure these are sent down to the
# client regardless of filters, so that the client finds out about the state
# reset.
#
# We don't always have access to the state of a room after a state reset if
# no one else locally on the server is participating in it, so we patch these
# rooms back in manually.
state_reset_out_of_room_id_set = {
room_id
for room_id in sync_room_map.keys()
if sync_room_map[room_id].event_id is None
and previous_connection_state.rooms.have_sent_room(room_id).status
!= HaveSentRoomFlag.NEVER
}
# Assemble a new sync room map but only with the `filtered_room_id_set`
return {room_id: sync_room_map[room_id] for room_id in filtered_room_id_set}
return {
room_id: sync_room_map[room_id]
for room_id in filtered_room_id_set | state_reset_out_of_room_id_set
}
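The union guarantees state-reset rooms survive any filter; e.g. (toy IDs):

filtered_room_id_set = {"!kept:hs"}
state_reset_out_of_room_id_set = {"!reset:hs"}  # excluded by filters, but previously sent
assert "!reset:hs" in (filtered_room_id_set | state_reset_out_of_room_id_set)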
@trace
async def filter_rooms_using_tables(
self,
user_id: str,
sync_room_map: Mapping[str, RoomsForUserSlidingSync],
previous_connection_state: PerConnectionState,
filters: SlidingSyncConfig.SlidingSyncList.Filters,
to_token: StreamToken,
dm_room_ids: AbstractSet[str],
@@ -1839,23 +1983,47 @@ class SlidingSyncRoomLists:
)
}
# Keep rooms that the user has been state reset out of but that we previously
# sent down the connection. We want to make sure these are sent down to the
# client regardless of filters, so that the client finds out about the state
# reset.
#
# We don't always have access to the state of a room after a state reset if
# no one else locally on the server is participating in it, so we patch these
# rooms back in manually.
state_reset_out_of_room_id_set = {
room_id
for room_id in sync_room_map.keys()
if sync_room_map[room_id].event_id is None
and previous_connection_state.rooms.have_sent_room(room_id).status
!= HaveSentRoomFlag.NEVER
}
# Assemble a new sync room map but only with the `filtered_room_id_set`
return {room_id: sync_room_map[room_id] for room_id in filtered_room_id_set}
return {
room_id: sync_room_map[room_id]
for room_id in filtered_room_id_set | state_reset_out_of_room_id_set
}
@trace
async def sort_rooms(
self,
sync_room_map: Dict[str, RoomsForUserType],
to_token: StreamToken,
limit: Optional[int] = None,
) -> List[RoomsForUserType]:
"""
Sort by `stream_ordering` of the last event that the user should see in the
room. `stream_ordering` is unique so we get a stable sort.
If `limit` is specified then sort may return fewer entries, but will
always return at least the top N rooms. This is useful as we don't always
need to sort the full list, but are just interested in the top N.
Args:
sync_room_map: Dictionary of room IDs to sort along with membership
information in the room at the time of `to_token`.
to_token: We sort based on the events in the room at this token (<= `to_token`)
limit: The number of rooms that we need to return from the top of the list.
Returns:
A sorted list of room IDs by `stream_ordering` along with membership information.
@@ -1865,8 +2033,23 @@ class SlidingSyncRoomLists:
# user should see in the room (<= `to_token`)
last_activity_in_room_map: Dict[str, int] = {}
# Same as above, except for positions that we know are in the event
# stream cache.
cached_positions: Dict[str, int] = {}
earliest_cache_position = (
self.store._events_stream_cache.get_earliest_known_position()
)
for room_id, room_for_user in sync_room_map.items():
if room_for_user.membership != Membership.JOIN:
if room_for_user.membership == Membership.JOIN:
# For joined rooms check the stream change cache.
cached_position = (
self.store._events_stream_cache.get_max_pos_of_last_change(room_id)
)
if cached_position is not None:
cached_positions[room_id] = cached_position
else:
# If the user has left/been invited/knocked/been banned from a
# room, they shouldn't see anything past that point.
#
@@ -1876,6 +2059,48 @@ class SlidingSyncRoomLists:
# https://github.com/matrix-org/matrix-spec-proposals/pull/3575#discussion_r1653045932
last_activity_in_room_map[room_id] = room_for_user.event_pos.stream
# If the stream position is in range of the stream change cache
# we can include it.
if room_for_user.event_pos.stream > earliest_cache_position:
cached_positions[room_id] = room_for_user.event_pos.stream
# If we are only asked for the top N rooms, and we have enough from
# looking in the stream change cache, then we can return early. This
# is because the cache must include all entries above
# `.get_earliest_known_position()`.
if limit is not None and len(cached_positions) >= limit:
# ... but first we need to handle the case where the cached max
# position is greater than the to_token, in which case we do
# actually query the DB. This should happen rarely, so we can do it in
# a loop.
for room_id, position in list(cached_positions.items()):
if position > to_token.room_key.stream:
result = await self.store.get_last_event_pos_in_room_before_stream_ordering(
room_id, to_token.room_key
)
if (
result is not None
and result[1].stream > earliest_cache_position
):
# We have a stream position in the cached range.
cached_positions[room_id] = result[1].stream
else:
# No position in the range, so we remove the entry.
cached_positions.pop(room_id)
if limit is not None and len(cached_positions) >= limit:
return sorted(
(
room
for room in sync_room_map.values()
if room.room_id in cached_positions
),
# Sort by the last activity (stream_ordering) in the room
key=lambda room_info: cached_positions[room_info.room_id],
# We want descending order
reverse=True,
)
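The early return leans on the stream change cache invariant noted above: every change above `get_earliest_known_position()` is in the cache, so any room missing from `cached_positions` last changed at or below that position and cannot outrank a cached room. A toy illustration:

earliest_cache_position = 1000
cached_positions = {"!a:hs": 1500, "!b:hs": 1200}
# An uncached room's last activity is <= 1000, below both cached rooms,
# so with limit=2 the cached rooms are guaranteed to be the true top 2.
top_two = sorted(cached_positions, key=lambda r: cached_positions[r], reverse=True)[:2]
assert top_two == ["!a:hs", "!b:hs"]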
# For fully-joined rooms, we find the latest activity at/before the
# `to_token`.
joined_room_positions = (


@@ -143,6 +143,7 @@ class SyncConfig:
filter_collection: FilterCollection
is_guest: bool
device_id: Optional[str]
use_state_after: bool
@attr.s(slots=True, frozen=True, auto_attribs=True)
@@ -1141,6 +1142,7 @@ class SyncHandler:
since_token: Optional[StreamToken],
end_token: StreamToken,
full_state: bool,
joined: bool,
) -> MutableStateMap[EventBase]:
"""Works out the difference in state between the end of the previous sync and
the start of the timeline.
@@ -1155,6 +1157,7 @@ class SyncHandler:
the point just after their leave event.
full_state: Whether to force returning the full state.
`lazy_load_members` still applies when `full_state` is `True`.
joined: whether the user is currently joined to the room
Returns:
The state to return in the sync response for the room.
@@ -1230,11 +1233,12 @@ class SyncHandler:
if full_state:
state_ids = await self._compute_state_delta_for_full_sync(
room_id,
sync_config.user,
sync_config,
batch,
end_token,
members_to_fetch,
timeline_state,
joined,
)
else:
# If this is an initial sync then full_state should be set, and
@@ -1244,6 +1248,7 @@ class SyncHandler:
state_ids = await self._compute_state_delta_for_incremental_sync(
room_id,
sync_config,
batch,
since_token,
end_token,
@@ -1316,20 +1321,24 @@ class SyncHandler:
async def _compute_state_delta_for_full_sync(
self,
room_id: str,
syncing_user: UserID,
sync_config: SyncConfig,
batch: TimelineBatch,
end_token: StreamToken,
members_to_fetch: Optional[Set[str]],
timeline_state: StateMap[str],
joined: bool,
) -> StateMap[str]:
"""Calculate the state events to be included in a full sync response.
As with `_compute_state_delta_for_incremental_sync`, the result will include
the membership events for the senders of each event in `members_to_fetch`.
Note that whether this returns the state at the start or the end of the
batch depends on `sync_config.use_state_after` (c.f. MSC4222).
Args:
room_id: The room we are calculating for.
syncing_user: The user that is calling `/sync`.
sync_config: The sync configuration, including the user that is calling `/sync`.
batch: The timeline batch for the room that will be sent to the user.
end_token: Token of the end of the current batch. Normally this will be
the same as the global "now_token", but if the user has left the room,
@@ -1338,10 +1347,11 @@ class SyncHandler:
events in the timeline.
timeline_state: The contribution to the room state from state events in
`batch`. Only contains the last event for any given state key.
joined: whether the user is currently joined to the room
Returns:
A map from (type, state_key) to event_id, for each event that we believe
should be included in the `state` part of the sync response.
should be included in the `state` or `state_after` part of the sync response.
"""
if members_to_fetch is not None:
# Lazy-loading of membership events is enabled.
@@ -1359,7 +1369,7 @@ class SyncHandler:
# is no guarantee that our membership will be in the auth events of
# timeline events when the room is partial stated.
state_filter = StateFilter.from_lazy_load_member_list(
members_to_fetch.union((syncing_user.to_string(),))
members_to_fetch.union((sync_config.user.to_string(),))
)
# We are happy to use partial state to compute the `/sync` response.
@@ -1373,6 +1383,61 @@ class SyncHandler:
await_full_state = True
lazy_load_members = False
# Check whether we want to return the state at the start or at the end of
# the timeline. If at the end, we can just use the current state.
if sync_config.use_state_after:
# If we're getting the state at the end of the timeline, we can just
# use the current state of the room (and roll back any changes
# between when we fetched the current state and `end_token`).
#
# For rooms we're not joined to, there might be a very large number
# of deltas between `end_token` and "now", and so instead we fetch
# the state at the end of the timeline.
if joined:
state_ids = await self._state_storage_controller.get_current_state_ids(
room_id,
state_filter=state_filter,
await_full_state=await_full_state,
)
# Now roll back the state by looking at the state deltas between
# end_token and now.
deltas = await self.store.get_current_state_deltas_for_room(
room_id,
from_token=end_token.room_key,
to_token=self.store.get_room_max_token(),
)
if deltas:
mutable_state_ids = dict(state_ids)
# We iterate over the deltas backwards so that if there are
# multiple changes of the same type/state_key we'll
# correctly pick the earliest delta.
for delta in reversed(deltas):
if delta.prev_event_id:
mutable_state_ids[(delta.event_type, delta.state_key)] = (
delta.prev_event_id
)
elif (delta.event_type, delta.state_key) in mutable_state_ids:
mutable_state_ids.pop((delta.event_type, delta.state_key))
state_ids = mutable_state_ids
return state_ids
else:
# Just use state groups to get the state at the end of the
# timeline, i.e. the state at the leave/etc event.
state_at_timeline_end = (
await self._state_storage_controller.get_state_ids_at(
room_id,
stream_position=end_token,
state_filter=state_filter,
await_full_state=await_full_state,
)
)
return state_at_timeline_end
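A worked toy example of the reverse-delta rollback in the joined branch above (fake event IDs): the name changed twice after `end_token`, and iterating the deltas in reverse makes the earliest `prev_event_id` win, restoring the value as of `end_token`:

state = {("m.room.name", ""): "$v3"}            # current state
deltas = [                                       # stream order, oldest first
    ("m.room.name", "", "$v1", "$v2"),           # (type, state_key, prev_event_id, event_id)
    ("m.room.name", "", "$v2", "$v3"),
]
for etype, state_key, prev_event_id, _event_id in reversed(deltas):
    if prev_event_id:
        state[(etype, state_key)] = prev_event_id
    else:
        state.pop((etype, state_key), None)      # entry did not exist at end_token
assert state[("m.room.name", "")] == "$v1"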
state_at_timeline_end = await self._state_storage_controller.get_state_ids_at(
room_id,
stream_position=end_token,
@@ -1405,6 +1470,7 @@ class SyncHandler:
async def _compute_state_delta_for_incremental_sync(
self,
room_id: str,
sync_config: SyncConfig,
batch: TimelineBatch,
since_token: StreamToken,
end_token: StreamToken,
@@ -1419,8 +1485,12 @@ class SyncHandler:
(`compute_state_delta`) is responsible for keeping track of which membership
events we have already sent to the client, and hence ripping them out.
Note that whether this returns the state at the start or the end of the
batch depends on `sync_config.use_state_after` (c.f. MSC4222).
Args:
room_id: The room we are calculating for.
sync_config
batch: The timeline batch for the room that will be sent to the user.
since_token: Token of the end of the previous batch.
end_token: Token of the end of the current batch. Normally this will be
@@ -1433,7 +1503,7 @@ class SyncHandler:
Returns:
A map from (type, state_key) to event_id, for each event that we believe
should be included in the `state` part of the sync response.
should be included in the `state` or `state_after` part of the sync response.
"""
if members_to_fetch is not None:
# Lazy-loading is enabled. Only return the state that is needed.
@@ -1445,6 +1515,51 @@ class SyncHandler:
await_full_state = True
lazy_load_members = False
# Check whether we want to return the state at the start or at the end of
# the timeline. If at the end, we can just use the current state delta stream.
if sync_config.use_state_after:
delta_state_ids: MutableStateMap[str] = {}
if members_to_fetch:
# We're lazy-loading, so the client might need some more member
# events to understand the events in this timeline. So we always
# fish out all the member events corresponding to the timeline
# here. The caller will then dedupe any redundant ones.
member_ids = await self._state_storage_controller.get_current_state_ids(
room_id=room_id,
state_filter=StateFilter.from_types(
(EventTypes.Member, member) for member in members_to_fetch
),
await_full_state=await_full_state,
)
delta_state_ids.update(member_ids)
# We don't do LL filtering for incremental syncs - see
# https://github.com/vector-im/riot-web/issues/7211#issuecomment-419976346
# N.B. this slows down incr syncs as we are now processing way more
# state in the server than if we were LLing.
#
# i.e. we return all state deltas, including membership changes that
# we'd normally exclude due to LL.
deltas = await self.store.get_current_state_deltas_for_room(
room_id=room_id,
from_token=since_token.room_key,
to_token=end_token.room_key,
)
for delta in deltas:
if delta.event_id is None:
# There was a state reset and this state entry is no longer
# present, but we have no way of informing the client about
# this, so we just skip it for now.
continue
# Note that deltas are in stream ordering, so if there are
# multiple deltas for a given type/state_key we'll always pick
# the latest one.
delta_state_ids[(delta.event_type, delta.state_key)] = delta.event_id
return delta_state_ids
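The incremental path is the mirror image: forward iteration means the latest delta wins, and reset entries with `event_id=None` are dropped. A toy run:

delta_state_ids = {}
deltas = [
    ("m.room.topic", "", "$t1"),
    ("m.room.topic", "", "$t2"),   # later delta overwrites the earlier one
    ("m.room.name", "", None),     # state reset: silently skipped
]
for etype, state_key, event_id in deltas:
    if event_id is None:
        continue
    delta_state_ids[(etype, state_key)] = event_id
assert delta_state_ids == {("m.room.topic", ""): "$t2"}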
# For a non-gappy sync if the events in the timeline are simply a linear
# chain (i.e. no merging/branching of the graph), then we know the state
# delta between the end of the previous sync and start of the new one is
@@ -2867,6 +2982,7 @@ class SyncHandler:
since_token,
room_builder.end_token,
full_state=full_state,
joined=room_builder.rtype == "joined",
)
else:
# An out of band room won't have any state changes.


@@ -36,7 +36,6 @@ from typing import (
)
import attr
import multipart
import treq
from canonicaljson import encode_canonical_json
from netaddr import AddrFormatError, IPAddress, IPSet
@@ -93,6 +92,20 @@ from synapse.util.async_helpers import timeout_deferred
if TYPE_CHECKING:
from synapse.server import HomeServer
# Support both import names for the `python-multipart` (PyPI) library,
# which renamed its import package from `multipart` to `python_multipart`
# in 0.0.13 (though it still supports the old import name for
# compatibility). Note that the old `multipart` import name conflicts with
# the separate `multipart` (PyPI) library, so we should prefer importing
# from `python_multipart` when possible.
try:
from python_multipart import MultipartParser
if TYPE_CHECKING:
from python_multipart import multipart
except ImportError:
from multipart import MultipartParser # type: ignore[no-redef]
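If it's ever unclear which distribution satisfied the import at runtime, a quick probe (illustrative only, not part of the diff):

import importlib.util

# True once python-multipart >= 0.0.13 exposes the new package name.
has_new_name = importlib.util.find_spec("python_multipart") is not None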
logger = logging.getLogger(__name__)
outgoing_requests_counter = Counter("synapse_http_client_requests", "", ["method"])
@@ -1039,7 +1052,7 @@ class _MultipartParserProtocol(protocol.Protocol):
self.deferred = deferred
self.boundary = boundary
self.max_length = max_length
self.parser = None
self.parser: Optional[MultipartParser] = None
self.multipart_response = MultipartResponse()
self.has_redirect = False
self.in_json = False
@@ -1097,12 +1110,12 @@ class _MultipartParserProtocol(protocol.Protocol):
self.deferred.errback()
self.file_length += end - start
callbacks = {
callbacks: "multipart.MultipartCallbacks" = {
"on_header_field": on_header_field,
"on_header_value": on_header_value,
"on_part_data": on_part_data,
}
self.parser = multipart.MultipartParser(self.boundary, callbacks)
self.parser = MultipartParser(self.boundary, callbacks)
self.total_length += len(incoming_data)
if self.max_length is not None and self.total_length >= self.max_length:
@@ -1113,7 +1126,7 @@ class _MultipartParserProtocol(protocol.Protocol):
self.transport.abortConnection()
try:
self.parser.write(incoming_data) # type: ignore[attr-defined]
self.parser.write(incoming_data)
except Exception as e:
logger.warning(f"Exception writing to multipart parser: {e}")
self.deferred.errback()


@@ -19,7 +19,6 @@
#
#
import abc
import cgi
import codecs
import logging
import random
@@ -792,7 +791,7 @@ class MatrixFederationHttpClient:
url_str,
_flatten_response_never_received(e),
)
body = None
body = b""
exc = HttpResponseException(
response.code, response_phrase, body
@@ -1813,8 +1812,9 @@ def check_content_type_is(headers: Headers, expected_content_type: str) -> None:
)
c_type = content_type_headers[0].decode("ascii") # only the first header
val, options = cgi.parse_header(c_type)
if val != expected_content_type:
# Extract the 'essence' of the mimetype, removing any parameter
c_type_parsed = c_type.split(";", 1)[0].strip()
if c_type_parsed != expected_content_type:
raise RequestSendFailed(
RuntimeError(
f"Remote server sent Content-Type header of '{c_type}', not '{expected_content_type}'",


@@ -51,25 +51,17 @@ logger = logging.getLogger(__name__)
# "Hop-by-hop" headers (as opposed to "end-to-end" headers) as defined by RFC2616
# section 13.5.1 and referenced in RFC9110 section 7.6.1. These are meant to only be
# consumed by the immediate recipient and not be forwarded on.
HOP_BY_HOP_HEADERS = {
"Connection",
"Keep-Alive",
"Proxy-Authenticate",
"Proxy-Authorization",
"TE",
"Trailers",
"Transfer-Encoding",
"Upgrade",
HOP_BY_HOP_HEADERS_LOWERCASE = {
"connection",
"keep-alive",
"proxy-authenticate",
"proxy-authorization",
"te",
"trailers",
"transfer-encoding",
"upgrade",
}
if hasattr(Headers, "_canonicalNameCaps"):
# Twisted < 24.7.0rc1
_canonicalHeaderName = Headers()._canonicalNameCaps # type: ignore[attr-defined]
else:
# Twisted >= 24.7.0rc1
# But note that `_encodeName` still exists on prior versions,
# it just encodes differently
_canonicalHeaderName = Headers()._encodeName
assert all(header.lower() == header for header in HOP_BY_HOP_HEADERS_LOWERCASE)
def parse_connection_header_value(
@@ -92,12 +84,12 @@ def parse_connection_header_value(
Returns:
The set of header names that should not be copied over from the remote response.
The keys are capitalized in canonical capitalization.
The keys are lowercased.
"""
extra_headers_to_remove: Set[str] = set()
if connection_header_value:
extra_headers_to_remove = {
_canonicalHeaderName(connection_option.strip()).decode("ascii")
connection_option.decode("ascii").strip().lower()
for connection_option in connection_header_value.split(b",")
}
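Expected behaviour of the updated helper, matching the lowercased hop-by-hop set (a sketch):

# `Connection: close, X-Foo` now yields lowercased names:
assert parse_connection_header_value(b"close, X-Foo") == {"close", "x-foo"}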
@@ -194,7 +186,7 @@ class ProxyResource(_AsyncResource):
# The `Connection` header also defines which headers should not be copied over.
connection_header = response_headers.getRawHeaders(b"connection")
extra_headers_to_remove = parse_connection_header_value(
extra_headers_to_remove_lowercase = parse_connection_header_value(
connection_header[0] if connection_header else None
)
@@ -202,10 +194,10 @@ class ProxyResource(_AsyncResource):
for k, v in response_headers.getAllRawHeaders():
# Do not copy over any hop-by-hop headers. These are meant to only be
# consumed by the immediate recipient and not be forwarded on.
header_key = k.decode("ascii")
header_key_lowercase = k.decode("ascii").lower()
if (
header_key in HOP_BY_HOP_HEADERS
or header_key in extra_headers_to_remove
header_key_lowercase in HOP_BY_HOP_HEADERS_LOWERCASE
or header_key_lowercase in extra_headers_to_remove_lowercase
):
continue
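With both the static set and the `Connection`-derived set lowercased, the copy loop's comparison is case-insensitive regardless of how the remote capitalised its headers:

k = b"Transfer-Encoding"
header_key_lowercase = k.decode("ascii").lower()
assert header_key_lowercase in HOP_BY_HOP_HEADERS_LOWERCASE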

Some files were not shown because too many files have changed in this diff.