Compare commits

...

87 Commits

Author SHA1 Message Date
Eric Eastwood
7e1a2f5a96 Better label lowercase to avoid confusion 2024-11-01 11:40:04 -05:00
Eric Eastwood
150ce230cb Harden test 2024-11-01 11:35:55 -05:00
Eric Eastwood
842304df57 Add changelog 2024-11-01 11:30:34 -05:00
Eric Eastwood
d29cb8abed Remove usage on internal header encoding API 2024-11-01 11:24:22 -05:00
Travis Ralston
c705beebf7 Support & use stable endpoints for MSC4151 (#17374)
https://github.com/matrix-org/matrix-spec-proposals/pull/4151 has
finished FCP.

See https://github.com/element-hq/synapse/issues/17373 for unstable
endpoint removal

---------

Co-authored-by: Andrew Morgan <andrew@amorgan.xyz>
2024-10-31 09:55:30 +00:00
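For reference, reporting a room via the stable API introduced by MSC4151 looks roughly like this (a sketch; confirm the exact stable path and body against the MSC/spec):

```
POST /_matrix/client/v3/rooms/{roomId}/report
{
  "reason": "This room advertises spam."
}
```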
Jason Little
47fe6df013 Remove Generator in _prune_old_outbound_device_pokes (#17814)
Context: https://github.com/matrix-org/synapse/issues/15439
(https://github.com/element-hq/synapse/issues/15439)

Also see discussion in https://github.com/element-hq/synapse/pull/17813
2024-10-30 21:21:22 -05:00
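This and the following "Remove Generator" commits all fix the same class of bug (the changelog.d entries below call it "Avoid lost data on some database query retries"). A minimal sketch, not Synapse's actual code, of why a generator inside a retryable transaction silently loses data:

```python
import sqlite3

def insert_pokes(conn: sqlite3.Connection, rows) -> None:
    # `rows` is consumed here. If the transaction deadlocks and the whole
    # function is retried, a generator passed in from outside is already
    # exhausted, so the retry writes nothing -- data is silently lost.
    conn.executemany("INSERT INTO device_pokes VALUES (?)", rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE device_pokes (user_id TEXT)")

users = ["@a:example.com", "@b:example.com"]
gen = ((u,) for u in users)   # generator: single-use
lst = [(u,) for u in users]   # list: replays identically on every attempt

insert_pokes(conn, gen)  # first attempt inserts 2 rows...
insert_pokes(conn, gen)  # ...a simulated retry inserts 0 rows
insert_pokes(conn, lst)
insert_pokes(conn, lst)  # the list version inserts 2 rows on both attempts
print(conn.execute("SELECT count(*) FROM device_pokes").fetchone())  # (6,)
```

Replacing the generators with lists (or consuming them exactly once, inside the retry boundary) means a retried transaction sees the same rows every time.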
Jason Little
034d472688 Remove Generator in _purge_unreferenced_state_groups twice (#17815)
Context: https://github.com/matrix-org/synapse/issues/15439
(https://github.com/element-hq/synapse/issues/15439)

Also see discussion in https://github.com/element-hq/synapse/pull/17813
2024-10-30 20:16:49 -05:00
Jason Little
0c429fae1d Remove Generator in update_cached_last_access_time (#17816)
Context: https://github.com/matrix-org/synapse/issues/15439
(https://github.com/element-hq/synapse/issues/15439)

Also see discussion in https://github.com/element-hq/synapse/pull/17813
2024-10-30 20:16:24 -05:00
Jason Little
2e5fe3f187 Remove Generator in store_search_entries_txn (#17817)
Context: https://github.com/matrix-org/synapse/issues/15439
(https://github.com/element-hq/synapse/issues/15439)

Also see discussion in https://github.com/element-hq/synapse/pull/17813
2024-10-30 20:15:57 -05:00
Jason Little
af59a99933 Remove Generator from 4 places in PersistEventStore (#17818)
Context: https://github.com/matrix-org/synapse/issues/15439
(https://github.com/element-hq/synapse/issues/15439)

Also see discussion in https://github.com/element-hq/synapse/pull/17813
2024-10-30 20:14:36 -05:00
Jason Little
7987d5e638 Remove Generator in _quarantine_media_txn() (#17813) 2024-10-30 19:34:11 -05:00
Lama
3ae80b0de4 Check if user is in room before being able to tag it (#17839)
Fix #17819
2024-10-30 11:55:23 -05:00
dependabot[bot]
5c781b578d Bump ruff from 0.6.9 to 0.7.1 (#17868) 2024-10-30 11:57:36 +00:00
dependabot[bot]
418fbba8de Bump phonenumbers from 8.13.47 to 8.13.48 (#17880) 2024-10-30 11:56:20 +00:00
dependabot[bot]
6d65c3944b Bump python-multipart from 0.0.12 to 0.0.16 (#17879) 2024-10-30 11:56:12 +00:00
dependabot[bot]
330f170c0e Bump bytes from 1.7.2 to 1.8.0 (#17877) 2024-10-30 11:55:17 +00:00
dependabot[bot]
bf03361c86 Bump anyhow from 1.0.90 to 1.0.91 (#17876) 2024-10-30 11:54:59 +00:00
dependabot[bot]
3e750ab0d8 Bump serde from 1.0.210 to 1.0.213 (#17875) 2024-10-30 11:54:48 +00:00
dependabot[bot]
9cd3545bca Bump regex from 1.11.0 to 1.11.1 (#17874) 2024-10-30 11:54:38 +00:00
Erik Johnston
83513b75f7 Speed up sliding sync by computing extensions in parallel (#17884)
The main change here is to add a helper function
`gather_optional_coroutines`, which works in a similar way to
`yieldable_gather_results` but takes a set of coroutines rather than a
function.
2024-10-30 10:51:04 +00:00
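A rough sketch of the idea behind `gather_optional_coroutines` (the real helper in Synapse also has to preserve logcontexts, which this ignores):

```python
import asyncio
from typing import Any, Coroutine, Optional

async def gather_optional_coroutines(
    *coros: Optional[Coroutine[Any, Any, Any]],
) -> tuple:
    # Schedule every non-None coroutine up front so they run concurrently,
    # then collect results in the original order (None stays None).
    tasks = [asyncio.create_task(c) if c is not None else None for c in coros]
    return tuple((await t) if t is not None else None for t in tasks)

async def main() -> None:
    async def extension(name: str) -> str:
        await asyncio.sleep(0.1)  # stands in for a DB hit per extension
        return name

    # Extensions the client didn't ask for are simply None.
    results = await gather_optional_coroutines(
        extension("to_device"), None, extension("e2ee")
    )
    print(results)  # ('to_device', None, 'e2ee')

asyncio.run(main())  # finishes in ~0.1s, not ~0.2s, since both run at once
```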
Shay
58deef5eba Add admin handler to list of handlers used for background tasks (#17847)
Fixes #17823

While we're at it, this makes a change so that the redactions are sent as
the admin if the user is not a member of the server (otherwise these fail
with a "User must be our own" message).
2024-10-29 13:50:13 -05:00
Erik Johnston
d427403c67 Fix check for outdated Rust library (#17861)
This failed when installed with poetry, so let's properly detect
what's going on.
2024-10-29 17:06:15 +00:00
Till Faelligen
e9f9625d6b Merge branch 'master' into develop 2024-10-29 17:47:05 +01:00
Till Faelligen
4be3bd41fd Move announcements up 2024-10-29 17:05:22 +01:00
Till Faelligen
b3b1db4057 1.118.0 2024-10-29 15:30:10 +01:00
Erik Johnston
6c51f8649d Include the destination in the error of 'Destination mismatch' (#17830)
To help debug problems such as
https://github.com/element-hq/synapse/issues/17822
2024-10-29 10:09:25 +00:00
dependabot[bot]
69e9b75373 Bump types-setuptools from 75.1.0.20241014 to 75.2.0.20241019 (#17856)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 10:44:12 +01:00
dependabot[bot]
5d0514f29b Bump serde_json from 1.0.128 to 1.0.132 (#17857)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 10:43:40 +01:00
dependabot[bot]
4e5410fdae Bump types-psycopg2 from 2.9.21.20240819 to 2.9.21.20241019 (#17855)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 10:42:38 +01:00
dependabot[bot]
12d65a6778 Bump cryptography from 43.0.1 to 43.0.3 (#17853)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 10:40:58 +01:00
dependabot[bot]
1006c12eb2 Bump anyhow from 1.0.89 to 1.0.90 (#17858)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 10:35:37 +01:00
Andrew Morgan
57efc8c03e Add media tests for a CMYK JPEG image (#17786) 2024-10-23 18:26:01 +01:00
Andrew Morgan
46c885f5b5 fix spelling in changelog 2024-10-22 12:00:40 +01:00
Andrew Morgan
4b94a056bd 1.118.0rc1 2024-10-22 11:56:08 +01:00
Eric Eastwood
a5e16a4ab5 Sliding Sync: Reset forgotten status when membership changes (like rejoining a room) (#17835)
Reset `sliding_sync_membership_snapshots` -> `forgotten` status when
membership changes (like rejoining a room).

Fix https://github.com/element-hq/synapse/issues/17781

### What was the problem before?

Previously, if someone used `/forget` on one of their rooms, it would
update `sliding_sync_membership_snapshots` as expected, but when someone
rejoined the room (or had any membership change), the upsert didn't
overwrite and reset the `forgotten` status, so the room remained `forgotten`
and invisible down the Sliding Sync endpoint.
2024-10-22 11:06:46 +01:00
Quentin Gliech
80ad02e10e Ensure Python 3.13 and PostgreSQL 17 compatibility (#17752)
This adds Python 3.13.0 to the trial test matrix.

Also updates `cffi` and `zope.interface` in the locked dependencies to
make sure we have versions compatible with Python 3.13. For some
reason, they were not being picked up by dependabot.
2024-10-22 09:23:36 +00:00
dependabot[bot]
9512b84a72 Bump mypy from 1.10.1 to 1.11.2 (#17842)
Bumps [mypy](https://github.com/python/mypy) from 1.10.1 to 1.11.2.

Changelog (sourced from [mypy's changelog](https://github.com/python/mypy/blob/master/CHANGELOG.md)):

> ### Mypy 1.11.2
>
> - Alternative fix for a union-like literal string (Ivan Levkivskyi, PR [17639](https://redirect.github.com/python/mypy/pull/17639))
> - Unwrap `TypedDict` item types before storing (Ivan Levkivskyi, PR [17640](https://redirect.github.com/python/mypy/pull/17640))
>
> ### Acknowledgements
>
> Thanks to all mypy contributors who contributed to this release: Alex Waygood, Alexander Leopold Shon, Ali Hamdan, Anders Kaseorg, Ben Brown, Bénédikt Tran, bzoracler, Christoph Tyralla, Christopher Barber, dexterkennedy, gilesgc, GiorgosPapoutsakis, Ivan Levkivskyi, Jelle Zijlstra, Jukka Lehtosalo, Marc Mueller, Matthieu Devlin, Michael R. Crusoe, Nikita Sobolev, Seo Sanghyeon, Shantanu, sobolevn, Steven Troxler, Tadeu Manoel, Tamir Duberstein, Tushar Sadhwani, urnest, Valentin Stanciu. I'd also like to thank my employer, Dropbox, for supporting mypy development.
>
> ## Mypy 1.10
>
> We've just uploaded mypy 1.10 to the Python Package Index ([PyPI](https://pypi.org/project/mypy/)). Mypy is a static type checker for Python. This release includes new features, performance improvements and bug fixes. You can install it as follows:
>
> ```
> python3 -m pip install -U mypy
> ```
>
> You can read the full documentation for this release on [Read the Docs](http://mypy.readthedocs.io).
>
> ### Support TypeIs (PEP 742)
>
> Mypy now supports `TypeIs` ([PEP 742](https://peps.python.org/pep-0742/)), which allows
>
> ... (truncated)
Commits:

- [`789f02c`](789f02c83a) Bump version to 1.11.2
- [`917cc75`](917cc75fd6) An alternative fix for a union-like literal string ([#17639](https://redirect.github.com/python/mypy/issues/17639))
- [`7d805b3`](7d805b364e) Unwrap TypedDict item types before storing ([#17640](https://redirect.github.com/python/mypy/issues/17640))
- [`32675dd`](32675dddfa) Revert "Fix Literal strings containing pipe characters" ([#17638](https://redirect.github.com/python/mypy/issues/17638))
- [`778542b`](778542b93a) Revert "Fix `RawExpressionType.accept` crash with `--cache-fine-grained`" ([#1...](https://redirect.github.com/python/mypy/issues/1))
- [`14ab742`](14ab742dec) Bump version to 1.11.2+dev
- [`570b90a`](570b90a7a3) Bump version to 1.11
- [`b3a102e`](b3a102ef31) Fix `RawExpressionType.accept` crash with `--cache-fine-grained` ([#17588](https://redirect.github.com/python/mypy/issues/17588))
- [`aec04c7`](aec04c7448) Fix PEP 604 isinstance caching ([#17563](https://redirect.github.com/python/mypy/issues/17563))
- [`cb44e4d`](cb44e4d8f1) Fix `typing.TypeAliasType` being undefined on python < 3.12 ([#17558](https://redirect.github.com/python/mypy/issues/17558))
- Additional commits viewable in the [compare view](https://github.com/python/mypy/compare/v1.10.1...v1.11.2)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-17 15:05:00 +00:00
dependabot[bot]
22aa925523 Bump types-requests from 2.32.0.20240914 to 2.32.0.20241016 (#17841)
Bumps [types-requests](https://github.com/python/typeshed) from
2.32.0.20240914 to 2.32.0.20241016. See the full diff in the
[compare view](https://github.com/python/typeshed/commits).

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-17 14:52:18 +00:00
dependabot[bot]
0ab99369a1 Bump sentry-sdk from 2.16.0 to 2.17.0 (#17844)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from
2.16.0 to 2.17.0.

Release notes for 2.17.0 (sourced from [sentry-sdk's releases](https://github.com/getsentry/sentry-python/releases); the changelog entry repeats the same list verbatim):

- Add support for async calls in Anthropic and OpenAI integration ([#3497](https://redirect.github.com/getsentry/sentry-python/issues/3497)) by @vetyy
- Allow custom transaction names in ASGI ([#3664](https://redirect.github.com/getsentry/sentry-python/issues/3664)) by @sl0thentr0py
- Langchain: Handle case when parent span wasn't traced ([#3656](https://redirect.github.com/getsentry/sentry-python/issues/3656)) by @rbasoalto
- Fix Anthropic integration when using tool calls ([#3615](https://redirect.github.com/getsentry/sentry-python/issues/3615)) by @kwnath
- More defensive Django Spotlight middleware injection ([#3665](https://redirect.github.com/getsentry/sentry-python/issues/3665)) by @BYK
- Remove `ensure_integration_enabled_async` ([#3632](https://redirect.github.com/getsentry/sentry-python/issues/3632)) by @sentrivana
- Test with newer Falcon version ([#3644](https://redirect.github.com/getsentry/sentry-python/issues/3644), [#3653](https://redirect.github.com/getsentry/sentry-python/issues/3653), [#3662](https://redirect.github.com/getsentry/sentry-python/issues/3662)) by @sentrivana
- Fix mypy ([#3657](https://redirect.github.com/getsentry/sentry-python/issues/3657)) by @sentrivana
- Fix flaky transport test ([#3666](https://redirect.github.com/getsentry/sentry-python/issues/3666)) by @sentrivana
- Remove pin on `sphinx` ([#3650](https://redirect.github.com/getsentry/sentry-python/issues/3650)) by @sentrivana
- Bump `actions/checkout` from `4.2.0` to `4.2.1` ([#3651](https://redirect.github.com/getsentry/sentry-python/issues/3651)) by @dependabot

Commits:

- [`e44c9ee`](e44c9eeafd) Update CHANGELOG.md
- [`ee30db3`](ee30db346c) release: 2.17.0
- [`365d9cf`](365d9cf244) Fix flaky transport test ([#3666](https://redirect.github.com/getsentry/sentry-python/issues/3666))
- [`9ae5820`](9ae58209ee) Add support for async calls in Anthropic and OpenAI integration ([#3497](https://redirect.github.com/getsentry/sentry-python/issues/3497))
- [`891afee`](891afee6df) fix(spotlight): More defensive Django spotlight middleware injection ([#3665](https://redirect.github.com/getsentry/sentry-python/issues/3665))
- [`f493057`](f493057fde) Allow custom transaction names in asgi ([#3664](https://redirect.github.com/getsentry/sentry-python/issues/3664))
- [`e463034`](e463034c2c) tests: Falcon RC1 ([#3662](https://redirect.github.com/getsentry/sentry-python/issues/3662))
- [`deca5f2`](deca5f2f01) build(deps): Remove pin on sphinx ([#3650](https://redirect.github.com/getsentry/sentry-python/issues/3650))
- [`302457d`](302457dec2) build(deps): bump actions/checkout from 4.2.0 to 4.2.1 ([#3651](https://redirect.github.com/getsentry/sentry-python/issues/3651))
- [`846b8b2`](846b8b26aa) fix(langchain): handle case when parent span wasn't traced ([#3656](https://redirect.github.com/getsentry/sentry-python/issues/3656))
- Additional commits viewable in the [compare view](https://github.com/getsentry/sentry-python/compare/2.16.0...2.17.0)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-17 14:34:00 +00:00
dependabot[bot]
6ececb8f2a Bump psycopg2 from 2.9.9 to 2.9.10 (#17843)
Bumps [psycopg2](https://github.com/psycopg/psycopg2) from 2.9.9 to
2.9.10.

Changelog (sourced from [psycopg2's NEWS file](https://github.com/psycopg/psycopg2/blob/master/NEWS)):

> What's new in psycopg 2.9.10:
>
> - Add support for Python 3.13.
> - Receive notifications on commit ([#1728](https://github.com/psycopg/psycopg2/issues/1728)).
> - `~psycopg2.errorcodes` map and `~psycopg2.errors` classes updated to PostgreSQL 17.
> - Drop support for Python 3.7.
>
> What's new in psycopg 2.9.9:
>
> - Add support for Python 3.12.
> - Drop support for Python 3.6.
>
> What's new in psycopg 2.9.8:
>
> - Wheel package bundled with PostgreSQL 16 libpq in order to add support for recent features, such as `sslcertmode`.
>
> What's new in psycopg 2.9.7:
>
> - Fix propagation of exceptions raised during module initialization ([#1598](https://github.com/psycopg/psycopg2/issues/1598)).
> - Fix building when pg_config returns an empty string ([#1599](https://github.com/psycopg/psycopg2/issues/1599)).
> - Wheel package bundled with OpenSSL 1.1.1v.
>
> What's new in psycopg 2.9.6:
>
> - Package manylinux 2014 for aarch64 and ppc64le platforms, in order to include libpq 15 in the binary package ([#1396](https://github.com/psycopg/psycopg2/issues/1396)).
> - Wheel package bundled with OpenSSL 1.1.1t.
>
> What's new in psycopg 2.9.5:
>
> - Add support for Python 3.11.
> - Add support for rowcount in MERGE statements in binary packages ([#1497](https://github.com/psycopg/psycopg2/issues/1497)).
>
> ... (truncated)

See the full diff in the [compare view](https://github.com/psycopg/psycopg2/commits).

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-17 14:29:05 +00:00
Erik Johnston
2ce7a1edf7 Merge branch 'master' into develop 2024-10-15 15:01:48 +01:00
Erik Johnston
ec885ffd33 1.117.0 2024-10-15 10:46:33 +01:00
Tulir Asokan
11bc9a1b3a Implement MSC4210: Remove legacy mentions (#17783) 2024-10-14 14:24:28 +01:00
Andrew Morgan
c5b379de66 Enable the .org.matrix.msc4028.encrypted_event push rule by default (#17826)
Clients will still only see this rule if the corresponding experimental
feature, `msc4028_push_encrypted_events`, is also enabled.

This aligns the implementation with MSC4028, specifically [this
section](https://github.com/matrix-org/matrix-spec-proposals/blob/giomfo/push_encrypted_events/proposals/4028-push-all-encrypted-events-except-for-muted-rooms.md#unstable-prefix).
See https://github.com/element-hq/synapse/issues/16846 for context.
2024-10-14 13:49:43 +01:00
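For context, the experimental feature mentioned above is toggled in the homeserver config, along these lines (a sketch; check the config documentation for the exact shape):

```yaml
experimental_features:
  msc4028_push_encrypted_events: true
```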
Eric Eastwood
adda2a4613 Sliding Sync: Slight optimization when fetching state for the room (get_events_as_list(...)) (#17718)
Spawning from @kegsay [pointing
out](https://matrix.to/#/!cnVVNLKqgUzNTOFQkz:matrix.org/$ExOO7J8uPUQSyH-9Uxc_QCa8jlXX9uK4VRtkSC0EI3o?via=element.io&via=matrix.org&via=jki.re)
that the Sliding Sync endpoint doesn't handle a large room with a lot of
state well on initial sync (requesting all state via `required_state: [
["*","*"] ]`) (it just takes forever).

After investigating further, the slow part is just
`get_events_as_list(...)` fetching all of the current state ID's out for
the room (which can be 100k+ events for rooms with a lot of membership).
This is just a slow thing in Synapse in general and the same thing
happens in Sync v2 or the `/state` endpoint.


---

The only idea I had to improve things was to use `batch_iter` to only
try fetching a fixed amount at a time instead of working with large
maps, lists, and sets. This doesn't seem to have much effect though.

There is already a `batch_iter(event_ids, 200)` in
`_fetch_event_rows(...)` for when we actually have to touch the database
and that's inside a queue to deduplicate work.

I did notice one slight optimization to use `get_events_as_list(...)`
directly instead of `get_events(...)`. `get_events(...)` just turns the
result from `get_events_as_list(...)` into a dict and since we're just
iterating over the events, we don't need the dict/map.
2024-10-14 13:47:35 +01:00
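The `batch_iter` pattern mentioned above, sketched (Synapse ships its own helper; this is just the idiom):

```python
from itertools import islice
from typing import Iterable, Iterator, Tuple, TypeVar

T = TypeVar("T")

def batch_iter(iterable: Iterable[T], size: int) -> Iterator[Tuple[T, ...]]:
    # Yield fixed-size chunks until the source is exhausted, so callers never
    # materialise one huge list/set of 100k+ event IDs at once.
    it = iter(iterable)
    return iter(lambda: tuple(islice(it, size)), ())

# Mirrors the existing `batch_iter(event_ids, 200)` in `_fetch_event_rows(...)`.
event_ids = [f"$event{i}" for i in range(1000)]
for chunk in batch_iter(event_ids, 200):
    pass  # look up `chunk` in caches / the database
```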
Andrew Morgan
5d47138b46 Fix typo in target_cache_memory_usage docs (#17825) 2024-10-14 13:34:55 +01:00
Erik Johnston
d025b5ab50 Correctly handle changes to required state config in sliding sync (#17785)
Fixes https://github.com/element-hq/synapse/issues/17698

This handles `required_state` changes by checking if new state has been
added to the config, and if so fetching and returning that from the
current state.

This also takes care to ensure that, given a state entry S that is added,
removed and then re-added, we do *not* send S down a second time if
there have been no changes to S in the current state. This is fine for the
Rust SDK (as it just remembers all state), but we might decide not to keep
this behaviour in the MSC. If we decide to always send down S then it's
easy enough to rip out all the code.

---------

Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-10-14 13:31:22 +01:00
dependabot[bot]
ae6179b382 Bump mypy-zope from 1.0.5 to 1.0.7 (#17827) 2024-10-14 13:26:40 +01:00
dependabot[bot]
5dd6157972 Bump types-setuptools from 75.1.0.20240917 to 75.1.0.20241014 (#17828) 2024-10-14 13:26:23 +01:00
dependabot[bot]
1266138b66 Bump sentry-sdk from 2.15.0 to 2.16.0 (#17829) 2024-10-14 13:26:12 +01:00
Erik Johnston
24975eca4d Build debian packages for new Ubuntu versions (#17824)
c.f. https://wiki.ubuntu.com/Releases for the currently supported Ubuntu
releases.

Note: this removes support for 23.04 and 23.10, which are EOL.

Fixes #17811
2024-10-14 11:34:33 +01:00
Andrew Morgan
451a9dc7b9 Clarify when 3PID invite module callbacks are called (#17627)
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-10-14 11:31:49 +01:00
Erik Johnston
f6a3e5e1c2 Fix release script to check GH token (#17803)
The current logic didn't work.
2024-10-10 08:59:01 +00:00
Nathan
05576f0b4b Added display_name_claim in jwt_config which sets the user's display name upon registration (#17708) 2024-10-09 12:21:08 +00:00
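A plausible sketch of the new option in `jwt_config` (`display_name_claim` is the addition from this commit; the other keys and the claim name are illustrative):

```yaml
jwt_config:
  enabled: true
  secret: "provided-by-your-issuer"
  algorithm: "HS256"
  # New: which JWT claim to use as the display name at registration.
  display_name_claim: "name"
```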
Martin Weinelt
60aebdb27e Fix saving of non-RGB thumbnails as PNG (#17736) 2024-10-08 18:32:25 +01:00
Erik Johnston
b1b4b2944d Merge branch 'release-v1.117' into develop 2024-10-08 16:35:35 +01:00
Andrew Ferrazzutti
bdcc9fa388 Fix incorrectly documented config path argument (#17802) 2024-10-08 15:05:36 +01:00
Erik Johnston
6a0c21fabd Fix up changelog 2024-10-08 15:04:20 +01:00
dependabot[bot]
f40641c29b Bump sigstore/cosign-installer from 3.6.0 to 3.7.0 (#17798)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 14:56:15 +01:00
dependabot[bot]
1bb528ee44 Bump phonenumbers from 8.13.46 to 8.13.47 (#17797)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 14:41:27 +01:00
dependabot[bot]
165f4ca776 Bump sentry-sdk from 2.14.0 to 2.15.0 (#17795)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 14:41:03 +01:00
dependabot[bot]
475e192cbe Bump tomli from 2.0.1 to 2.0.2 (#17796)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 14:40:12 +01:00
dependabot[bot]
43040a4051 Bump ruff from 0.6.8 to 0.6.9 (#17794)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-08 14:39:19 +01:00
Erik Johnston
b3e2d10f39 1.117.0rc1 2024-10-08 14:37:36 +01:00
Shay
a5986ac229 Improvements to admin redact api (#17792)
- better validation on user input
- fix an early task completion
- when checking membership in rooms, check for rooms user has been
banned from as well
2024-10-08 14:23:21 +01:00
Andrew Ferrazzutti
006251a5d0 Add missing license header (#17799)
Co-authored-by: Erik Johnston <erik@matrix.org>
2024-10-08 12:01:44 +01:00
Erik Johnston
422f3ecec1 Sliding sync: omit bump stamp when it is unchanged (#17788)
This saves some DB lookups in rooms
2024-10-08 11:17:23 +01:00
Erik Johnston
4e90221d87 Sliding sync minor performance speed up using new table (#17787)
Use the new tables to work out which rooms have changed.

---------

Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-10-08 11:06:31 +01:00
Erik Johnston
e2610de208 Speed up sliding sync when there are many active subscriptions (#17789)
Two changes: a) use a batch lookup function instead of a loop, b) check
existing data to see if we already have what we need and only fetch what
we don't.
2024-10-08 10:35:15 +01:00
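A generic sketch of the two changes (hypothetical store methods, not Synapse's actual API):

```python
import asyncio

class FakeStore:
    async def get_room(self, room_id: str) -> dict:
        await asyncio.sleep(0.01)  # one round-trip per room
        return {"room_id": room_id}

    async def get_rooms(self, room_ids: list) -> dict:
        await asyncio.sleep(0.01)  # one round-trip for the whole batch
        return {r: {"room_id": r} for r in room_ids}

async def main() -> None:
    store = FakeStore()
    room_ids = [f"!r{i}:example.com" for i in range(50)]

    # (a) before: 50 sequential round-trips
    for room_id in room_ids:
        await store.get_room(room_id)

    # (a) after: a single batched call...
    # (b) ...and only for subscriptions we don't already have data for.
    cached = {room_ids[0]: {"room_id": room_ids[0]}}
    missing = [r for r in room_ids if r not in cached]
    cached.update(await store.get_rooms(missing))

asyncio.run(main())
```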
Andrew Morgan
e8c8924b81 Clarify test_forget_when_not_left docstring (#17628) 2024-10-07 16:34:32 +01:00
V02460
e8e0f0fad7 Add config option redis.password_path (#17717)
Adds the option to load the Redis password from a file, instead of
giving it in the config directly. The code is similar to how it’s done
for `registration_shared_secret_path`. I changed the example in the
documentation to represent the best practice regarding the handling of
secrets.

Reading secrets from files has the security advantage of separating the
secrets from the config. It also simplifies secrets management in
Kubernetes.
2024-10-07 09:46:51 +01:00
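A config sketch of the new option (the other `redis` keys are the pre-existing settings; the path is illustrative):

```yaml
redis:
  enabled: true
  host: localhost
  port: 6379
  # New: read the password from a file rather than inlining the secret.
  password_path: /run/secrets/synapse_redis_password
```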
Henrique
beb7a951f4 docs: add note about PYTHONMALLOC for accurate jemalloc memory tracking (#17709)
Added a note in the documentation suggesting that users may set
`PYTHONMALLOC=malloc` when using `jemalloc`. This allows jemalloc to
track memory usage more accurately by bypassing Python's internal
small-object allocator (`pymalloc`), helping to ensure that
`cache_autotuning` functions as expected.

This doc change aims to provide more clarity for users configuring
jemalloc with Synapse.


Based on:
4ac783549c/synapse/metrics/jemalloc.py (L198-L201)
2024-10-07 08:37:39 +00:00
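Concretely, alongside the `LD_PRELOAD` line the docs already suggest (see the documentation diff further down), the service environment, e.g. `/etc/default/matrix-synapse`, gains one more line:

```
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
PYTHONMALLOC=malloc
```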
dependabot[bot]
d34f827ed8 Bump python-multipart from 0.0.10 to 0.0.12 (#17772) 2024-10-07 09:14:30 +01:00
Andrew Ferrazzutti
9920417723 Don't say MSC4140 is supported when it's disabled (#17780) 2024-10-04 13:42:34 +01:00
Andrew Morgan
316d635906 Fix NAME attribute of ReplicationRemovePusherRestServlet (#17779) 2024-10-04 09:53:35 +01:00
Dirk Klimpel
8bbe66a9b9 explain load balancing for federation_sender_instances (#17776)
Adding information on how the load is distributed for
`federation_sender_instances`.

Thanks to @devonh for the information.

Source:
c2e5e9e67c/synapse/config/_base.py (L946-L989)

---------

Co-authored-by: Devon Hudson <devon.dmytro@gmail.com>
2024-10-03 22:01:33 +00:00
Andrew Morgan
d4e3ad04cd Merge branch 'master' into develop 2024-10-01 12:18:22 +01:00
Andrew Morgan
55c0391cc8 1.116.0 2024-10-01 11:14:13 +01:00
Erik Johnston
81e0f57800 Fix perf when streams don't change often (#17767)
There is a bug with the `StreamChangeCache` where it would incorrectly
return that all entities had changed if asked for entities changed
*since* the earliest stream position.

Note that for streams we use the inequalities: `$min_stream_id <
stream_id <= $max_stream_id`, i.e. when we ask the stream change cache
for all things that have changed since `$stream_id` we don't care for
events that happened *at* `$stream_id`.

Specifically: `_earliest_known_stream_pos` is the position at which we
know that we'll have entries for all changes since that point, we can
use the cache for any stream IDs that equal
`_earliest_known_stream_pos`.

`_earliest_known_stream_pos` is set in three places:
- On startup we set it either to:
  - the current maximum stream ID, with no prefilled values; or
  - the minimum of the latest N values we pulled from the DB
- When we evict items from the bottom, we set it to the stream ID of the
evicted items.

This was changed in https://github.com/matrix-org/synapse/pull/14435,
but I think we were overly conservative there.

---------

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2024-09-30 13:52:33 +01:00
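A toy sketch of the boundary condition being fixed (not the real `StreamChangeCache`): positions equal to `_earliest_known_stream_pos` are still fully covered by the cache, so they should be cache hits rather than "everything changed".

```python
class StreamChangeCacheSketch:
    def __init__(self, earliest_known_stream_pos: int) -> None:
        self._earliest_known_stream_pos = earliest_known_stream_pos
        self._entity_to_pos: dict = {}

    def entity_has_changed(self, entity: str, stream_pos: int) -> None:
        self._entity_to_pos[entity] = max(
            stream_pos, self._entity_to_pos.get(entity, 0)
        )

    def get_entities_changed(self, entities, stream_pos: int) -> set:
        # `>=` rather than `>`: we know about *all* changes since
        # `_earliest_known_stream_pos`, and the query is exclusive of
        # `stream_pos` itself, so equality is still answerable.
        if stream_pos >= self._earliest_known_stream_pos:
            return {
                e for e in entities if self._entity_to_pos.get(e, 0) > stream_pos
            }
        return set(entities)  # cache can't answer: assume everything changed
```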
Erik Johnston
ae4862c38f Optimise notifier mk2 (#17766)
Based on #17765.

Basically the idea is to reduce the overhead of calling
`ObservableDeferred` in a loop. The two gains are: a) just using a list
of deferreds rather than the machinery of `ObservableDeferred`, and b)
only calling `PreserveLoggingContext` once.

`PreserveLoggingContext` in particular is expensive to call a lot, as
each time it needs to call `get_thread_resource_usage` twice so that it
can update the CPU metrics of the log context.
2024-09-30 13:32:31 +01:00
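A simplified sketch of the shape of the optimisation: each user stream keeps a plain list of deferreds and fires them directly; in Synapse, the loop over streams would additionally sit inside a single `PreserveLoggingContext` block, entered once rather than once per stream.

```python
from twisted.internet import defer

class UserStreamSketch:
    """One syncing user's wait-for-notification handle."""

    def __init__(self) -> None:
        self._listeners: list = []

    def wait(self) -> defer.Deferred:
        d: defer.Deferred = defer.Deferred()
        self._listeners.append(d)
        return d

    def notify(self, token: int) -> None:
        # Swap the list out first so callbacks can safely re-register.
        listeners, self._listeners = self._listeners, []
        for d in listeners:
            d.callback(token)

stream = UserStreamSketch()
stream.wait().addCallback(lambda token: print("woken at", token))
stream.notify(42)  # prints "woken at 42"
```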
dependabot[bot]
602956ef64 Bump ruff from 0.6.7 to 0.6.8 (#17774) 2024-09-30 13:08:56 +01:00
dependabot[bot]
444b565c76 Bump phonenumbers from 8.13.45 to 8.13.46 (#17773) 2024-09-30 13:07:57 +01:00
dependabot[bot]
8068f31146 Bump regex from 1.10.6 to 1.11.0 (#17770) 2024-09-30 13:06:43 +01:00
Erik Johnston
5210565c12 Reduce overhead of sliding sync E2EE loops (#17771)
Mainly toning down logging and only calling
`get_membership_from_event_ids` if something has changed.
2024-09-30 13:00:14 +01:00
Erik Johnston
de955293cf Add fast path for sliding sync streams that only ask for extensions (#17768)
Principally useful for EX e2ee sliding sync connections.
2024-09-30 12:59:50 +01:00
Erik Johnston
93889eb2e7 Optimise notifier (#17765)
The notifier is quite inefficient when it has to wake up many user
streams all at once.

From a silly benchmark, this takes the time to notify 1M user streams
from ~30s to ~5s.
2024-09-30 12:58:13 +01:00
Erik Johnston
ece66ba61c Minor perf speed up for large accounts on SSS (#17751)
This works because instead of passing *all* rooms to `record_sent_rooms` we
only need to pass rooms that were previously not in the LIVE state.

This came from a py-spy where we were spending ~10% CPU calling these
functions. Note that `record_sent_rooms` is a no-op for rooms that are
already in the `LIVE` state, so we only need to call them for
`PREVIOUSLY` or `INITIAL` rooms.
2024-09-30 12:58:02 +01:00
113 changed files with 3695 additions and 609 deletions


@@ -53,7 +53,7 @@ if not IS_PR:
         "database": "sqlite",
         "extras": "all",
     }
-    for version in ("3.9", "3.10", "3.11", "3.12")
+    for version in ("3.9", "3.10", "3.11", "3.12", "3.13")
 )

 trial_postgres_tests = [
@@ -68,9 +68,9 @@ trial_postgres_tests = [
 if not IS_PR:
     trial_postgres_tests.append(
         {
-            "python-version": "3.12",
+            "python-version": "3.13",
             "database": "postgres",
-            "postgres-version": "16",
+            "postgres-version": "17",
             "extras": "all",
         }
     )


@@ -30,7 +30,7 @@ jobs:
         run: docker buildx inspect
       - name: Install Cosign
-        uses: sigstore/cosign-installer@v3.6.0
+        uses: sigstore/cosign-installer@v3.7.0
       - name: Checkout repository
         uses: actions/checkout@v4


@@ -1,3 +1,117 @@
# Synapse 1.118.0 (2024-10-29)
No significant changes since 1.118.0rc1.
### Python 3.8 support will be dropped in the next release
Python 3.8 is now [end-of-life](https://devguide.python.org/versions/). As per our [Deprecation Policy for Platform Dependencies](https://element-hq.github.io/synapse/latest/deprecation_policy.html#policy), Synapse will be dropping support for Python 3.8 in the next release; Synapse 1.119.0.
Synapse 1.118.x will be the final release to support Python 3.8. If you are running Synapse with Python 3.8, please upgrade before the 1.119.0 release, due in less than one month.
### Python 3.13 and PostgreSQL 17 support
On the other end of the spectrum, Synapse 1.118.0 is the first release to support [Python 3.13](https://www.python.org/downloads/release/python-3130/)! [PostgreSQL 17](https://www.postgresql.org/about/news/postgresql-17-released-2936/) is also supported as of this release.
# Synapse 1.118.0rc1 (2024-10-22)
### Features
- Added the `display_name_claim` option to the JWT configuration. This option allows specifying the claim key that contains the user's display name in the JWT payload. ([\#17708](https://github.com/element-hq/synapse/issues/17708))
- Implement [MSC4210](https://github.com/matrix-org/matrix-spec-proposals/pull/4210): Remove legacy mentions. Contributed by @tulir @ Beeper. ([\#17783](https://github.com/element-hq/synapse/issues/17783))
### Bugfixes
- Fix saving of PNG thumbnails, when the original image is in the CMYK color space. ([\#17736](https://github.com/element-hq/synapse/issues/17736))
- Fix bug with sliding sync where the server would not return state that was added to the `required_state` config. ([\#17785](https://github.com/element-hq/synapse/issues/17785), [\#17805](https://github.com/element-hq/synapse/issues/17805))
- Fix a bug in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync that would cause rooms to stay forgotten and hidden even after rejoining. ([\#17835](https://github.com/element-hq/synapse/issues/17835))
### Improved Documentation
- Clarify when the `user_may_invite` and `user_may_send_3pid_invite` module callbacks are called. ([\#17627](https://github.com/element-hq/synapse/issues/17627))
- Correct documentation to refer to the `--config-path` argument instead of `--config-file`. ([\#17802](https://github.com/element-hq/synapse/issues/17802))
- Fix typo in `target_cache_memory_usage` docs. ([\#17825](https://github.com/element-hq/synapse/issues/17825))
### Internal Changes
- Slight optimization when fetching state/events for Sliding Sync. ([\#17718](https://github.com/element-hq/synapse/issues/17718))
- Add Python 3.13 and Postgres 17 to the test matrix. ([\#17752](https://github.com/element-hq/synapse/issues/17752))
- Test github token before running release script steps. ([\#17803](https://github.com/element-hq/synapse/issues/17803))
- Build debian packages for new Ubuntu versions, and stop building for no longer supported versions. ([\#17824](https://github.com/element-hq/synapse/issues/17824))
- Enable the `.org.matrix.msc4028.encrypted_event` push rule by default in accordance with [MSC4028](https://github.com/matrix-org/matrix-spec-proposals/pull/4028). Note that the corresponding experimental feature must still be switched on for this push rule to have any effect. ([\#17826](https://github.com/element-hq/synapse/issues/17826))
- Fix some typing issues uncovered by upgrading mypy to 1.11.x. ([\#17842](https://github.com/element-hq/synapse/issues/17842))
### Updates to locked dependencies
* Bump mypy from 1.10.1 to 1.11.2. ([\#17842](https://github.com/element-hq/synapse/issues/17842))
* Bump mypy-zope from 1.0.5 to 1.0.7. ([\#17827](https://github.com/element-hq/synapse/issues/17827))
* Bump phonenumbers from 8.13.46 to 8.13.47. ([\#17797](https://github.com/element-hq/synapse/issues/17797))
* Bump psycopg2 from 2.9.9 to 2.9.10. ([\#17843](https://github.com/element-hq/synapse/issues/17843))
* Bump ruff from 0.6.8 to 0.6.9. ([\#17794](https://github.com/element-hq/synapse/issues/17794))
* Bump sentry-sdk from 2.14.0 to 2.15.0. ([\#17795](https://github.com/element-hq/synapse/issues/17795))
* Bump sentry-sdk from 2.15.0 to 2.16.0. ([\#17829](https://github.com/element-hq/synapse/issues/17829))
* Bump sentry-sdk from 2.16.0 to 2.17.0. ([\#17844](https://github.com/element-hq/synapse/issues/17844))
* Bump sigstore/cosign-installer from 3.6.0 to 3.7.0. ([\#17798](https://github.com/element-hq/synapse/issues/17798))
* Bump tomli from 2.0.1 to 2.0.2. ([\#17796](https://github.com/element-hq/synapse/issues/17796))
* Bump types-requests from 2.32.0.20240914 to 2.32.0.20241016. ([\#17841](https://github.com/element-hq/synapse/issues/17841))
* Bump types-setuptools from 75.1.0.20240917 to 75.1.0.20241014. ([\#17828](https://github.com/element-hq/synapse/issues/17828))
# Synapse 1.117.0 (2024-10-15)
No significant changes since 1.117.0rc1.
# Synapse 1.117.0rc1 (2024-10-08)
### Features
- Add config option `redis.password_path`. ([\#17717](https://github.com/element-hq/synapse/issues/17717))
### Bugfixes
- Fix a rare bug introduced in v1.29.0 where invalidating a user's access token from a worker could raise an error. ([\#17779](https://github.com/element-hq/synapse/issues/17779))
- In the response to `GET /_matrix/client/versions`, set the `unstable_features` flag for [MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140) to `false` when server configuration disables support for delayed events. ([\#17780](https://github.com/element-hq/synapse/issues/17780))
- Improve input validation and room membership checks in admin redaction API. ([\#17792](https://github.com/element-hq/synapse/issues/17792))
### Improved Documentation
- Clarify the docstring of `test_forget_when_not_left`. ([\#17628](https://github.com/element-hq/synapse/issues/17628))
- Add documentation note about PYTHONMALLOC for accurate jemalloc memory tracking. Contributed by @hensg. ([\#17709](https://github.com/element-hq/synapse/issues/17709))
- Remove spurious "TODO UPDATE ALL THIS" note in the Debian installation docs. ([\#17749](https://github.com/element-hq/synapse/issues/17749))
- Explain how load balancing works for `federation_sender_instances`. ([\#17776](https://github.com/element-hq/synapse/issues/17776))
### Internal Changes
- Minor performance increase for large accounts using sliding sync. ([\#17751](https://github.com/element-hq/synapse/issues/17751))
- Increase performance of the notifier when there are many syncing users. ([\#17765](https://github.com/element-hq/synapse/issues/17765), [\#17766](https://github.com/element-hq/synapse/issues/17766))
- Fix performance of streams that don't change often. ([\#17767](https://github.com/element-hq/synapse/issues/17767))
- Improve performance of sliding sync connections that do not ask for any rooms. ([\#17768](https://github.com/element-hq/synapse/issues/17768))
- Reduce overhead of sliding sync E2EE loops. ([\#17771](https://github.com/element-hq/synapse/issues/17771))
- Sliding sync minor performance speed up using new table. ([\#17787](https://github.com/element-hq/synapse/issues/17787))
- Sliding sync minor performance improvement by omitting unchanged data from incremental responses. ([\#17788](https://github.com/element-hq/synapse/issues/17788))
- Speed up sliding sync when there are many active subscriptions. ([\#17789](https://github.com/element-hq/synapse/issues/17789))
- Add missing license headers on new source files. ([\#17799](https://github.com/element-hq/synapse/issues/17799))
### Updates to locked dependencies
* Bump phonenumbers from 8.13.45 to 8.13.46. ([\#17773](https://github.com/element-hq/synapse/issues/17773))
* Bump python-multipart from 0.0.10 to 0.0.12. ([\#17772](https://github.com/element-hq/synapse/issues/17772))
* Bump regex from 1.10.6 to 1.11.0. ([\#17770](https://github.com/element-hq/synapse/issues/17770))
* Bump ruff from 0.6.7 to 0.6.8. ([\#17774](https://github.com/element-hq/synapse/issues/17774))
# Synapse 1.116.0 (2024-10-01)
No significant changes since 1.116.0rc2.
# Synapse 1.116.0rc2 (2024-09-26)
### Features

Cargo.lock (generated; 40 lines changed)

@@ -13,9 +13,9 @@ dependencies = [
 [[package]]
 name = "anyhow"
-version = "1.0.89"
+version = "1.0.91"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "86fdf8605db99b54d3cd748a44c6d04df638eb5dafb219b135d0149bd0db01f6"
+checksum = "c042108f3ed77fd83760a5fd79b53be043192bb3b9dba91d8c574c0ada7850c8"

 [[package]]
 name = "arc-swap"
@@ -67,9 +67,9 @@ checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c"
 [[package]]
 name = "bytes"
-version = "1.7.2"
+version = "1.8.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "428d9aa8fbc0670b7b8d6030a7fadd0f86151cae55e4dbbece15f3780a3dfaf3"
+checksum = "9ac0150caa2ae65ca5bd83f25c7de183dea78d4d366469f148435e2acfbad0da"

 [[package]]
 name = "cfg-if"
@@ -302,9 +302,9 @@ checksum = "5b40af805b3121feab8a3c29f04d8ad262fa8e0561883e7653e024ae4479e6de"
 [[package]]
 name = "proc-macro2"
-version = "1.0.82"
+version = "1.0.89"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8ad3d49ab951a01fbaafe34f2ec74122942fe18a3f9814c3268f1bb72042131b"
+checksum = "f139b0662de085916d1fb67d2b4169d1addddda1919e696f3252b740b629986e"
 dependencies = [
  "unicode-ident",
 ]
@@ -444,9 +444,9 @@ dependencies = [
 [[package]]
 name = "regex"
-version = "1.10.6"
+version = "1.11.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4219d74c6b67a3654a9fbebc4b419e22126d13d2f3c4a07ee0cb61ff79a79619"
+checksum = "b544ef1b4eac5dc2db33ea63606ae9ffcfac26c1416a2806ae0bf5f56b201191"
 dependencies = [
  "aho-corasick",
  "memchr",
@@ -456,9 +456,9 @@ dependencies = [
 [[package]]
 name = "regex-automata"
-version = "0.4.6"
+version = "0.4.8"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "86b83b8b9847f9bf95ef68afb0b8e6cdb80f498442f5179a29fad448fcc1eaea"
+checksum = "368758f23274712b504848e9d5a6f010445cc8b87a7cdb4d7cbee666c1288da3"
 dependencies = [
  "aho-corasick",
  "memchr",
@@ -467,9 +467,9 @@ dependencies = [
 [[package]]
 name = "regex-syntax"
-version = "0.8.3"
+version = "0.8.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "adad44e29e4c806119491a7f06f03de4d1af22c3a680dd47f1e6e179439d1f56"
+checksum = "2b15c43186be67a4fd63bee50d0303afffcef381492ebe2c5d87f324e1b8815c"

 [[package]]
 name = "ryu"
@@ -485,18 +485,18 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
 [[package]]
 name = "serde"
-version = "1.0.210"
+version = "1.0.213"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c8e3592472072e6e22e0a54d5904d9febf8508f65fb8552499a1abc7d1078c3a"
+checksum = "3ea7893ff5e2466df8d720bb615088341b295f849602c6956047f8f80f0e9bc1"
 dependencies = [
  "serde_derive",
 ]

 [[package]]
 name = "serde_derive"
-version = "1.0.210"
+version = "1.0.213"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "243902eda00fad750862fc144cea25caca5e20d615af0a81bee94ca738f1df1f"
+checksum = "7e85ad2009c50b58e87caa8cd6dac16bdf511bbfb7af6c33df902396aa480fa5"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -505,9 +505,9 @@ dependencies = [
 [[package]]
 name = "serde_json"
-version = "1.0.128"
+version = "1.0.132"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6ff5456707a1de34e7e37f2a6fd3d3f808c318259cbd01ab6377795054b483d8"
+checksum = "d726bfaff4b320266d395898905d0eba0345aae23b54aee3a737e260fd46db03"
 dependencies = [
  "itoa",
  "memchr",
@@ -551,9 +551,9 @@ checksum = "81cdd64d312baedb58e21336b31bc043b77e01cc99033ce76ef539f78e965ebc"
 [[package]]
 name = "syn"
-version = "2.0.61"
+version = "2.0.85"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c993ed8ccba56ae856363b1845da7266a7cb78e1d146c8a32d54b45a8b831fc9"
+checksum = "5023162dfcd14ef8f32034d8bcd4cc5ddc61ef7a247c024a33e24e1f24d21b56"
 dependencies = [
  "proc-macro2",
  "quote",


@@ -0,0 +1 @@
Support [MSC4151](https://github.com/matrix-org/matrix-spec-proposals/pull/4151)'s stable report room API.


@@ -1 +0,0 @@
Remove spurious "TODO UPDATE ALL THIS" note in the Debian installation docs.

changelog.d/17786.misc (new file)

@@ -0,0 +1 @@
Add a test for downloading and thumbnailing a CMYK JPEG.

changelog.d/17813.bugfix (new file)

@@ -0,0 +1 @@
Avoid lost data on some database query retries.

changelog.d/17814.bugfix (new file)

@@ -0,0 +1 @@
Avoid lost data on some database query retries.

changelog.d/17815.bugfix (new file)

@@ -0,0 +1 @@
Avoid lost data on some database query retries.

changelog.d/17816.bugfix (new file)

@@ -0,0 +1 @@
Avoid lost data on some database query retries.

changelog.d/17817.bugfix (new file)

@@ -0,0 +1 @@
Avoid lost data on some database query retries.

changelog.d/17818.bugfix (new file)

@@ -0,0 +1 @@
Avoid lost data on some database query retries.

changelog.d/17830.misc (new file)

@@ -0,0 +1 @@
Include the destination in the error of 'Destination mismatch' on federation requests.

changelog.d/17839.bugfix (new file)

@@ -0,0 +1 @@
Check if user has membership in a room before tagging it. Contributed by Lama Alosaimi.

changelog.d/17847.bugfix (new file)

@@ -0,0 +1,2 @@
Fix a bug in the admin redact endpoint where the background task would not run if a worker was specified in
the config option `run_background_tasks_on`.

changelog.d/17861.bugfix (new file)

@@ -0,0 +1 @@
Fix detection when the built Rust library was outdated when using source installations.

changelog.d/17884.misc (new file)

@@ -0,0 +1 @@
Minor speed-up of sliding sync by computing extensions results in parallel.

changelog.d/17894.misc (new file)

@@ -0,0 +1 @@
Remove usage of internal header encoding API.

debian/changelog

@@ -1,3 +1,33 @@
matrix-synapse-py3 (1.118.0) stable; urgency=medium

  * New Synapse release 1.118.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 29 Oct 2024 15:29:53 +0100

matrix-synapse-py3 (1.118.0~rc1) stable; urgency=medium

  * New Synapse release 1.118.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 22 Oct 2024 11:48:14 +0100

matrix-synapse-py3 (1.117.0) stable; urgency=medium

  * New Synapse release 1.117.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 15 Oct 2024 10:46:30 +0100

matrix-synapse-py3 (1.117.0~rc1) stable; urgency=medium

  * New Synapse release 1.117.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 08 Oct 2024 14:37:11 +0100

matrix-synapse-py3 (1.116.0) stable; urgency=medium

  * New Synapse release 1.116.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 01 Oct 2024 11:14:07 +0100

matrix-synapse-py3 (1.116.0~rc2) stable; urgency=medium

  * New Synapse release 1.116.0rc2.


@@ -1365,6 +1365,9 @@ _Added in Synapse 1.72.0._
## Redact all the events of a user
This endpoint allows an admin to redact the events of a given user. There are no restrictions on redactions for a
local user. By default, we puppet the user who sent the message to redact it themselves. Redactions for non-local users are issued using the admin user, and will fail in rooms where the admin user is not an admin or does not have the required power level to issue redactions.
The API is:
```
POST /_synapse/admin/v1/user/$user_id/redact
```
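As a sketch of a request body (the `rooms` and `reason` fields follow the admin API documentation for this endpoint; the room ID is illustrative), redacting a user's events in a single room might look like:
```json
{
  "rooms": ["!roomid:example.com"],
  "reason": "spam"
}
```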


@@ -76,8 +76,9 @@ _Changed in Synapse v1.62.0: `synapse.module_api.NOT_SPAM` and `synapse.module_a
async def user_may_invite(inviter: str, invitee: str, room_id: str) -> Union["synapse.module_api.NOT_SPAM", "synapse.module_api.errors.Codes", bool]
```
Called when processing an invitation. Both inviter and invitee are
represented by their Matrix user ID (e.g. `@alice:example.com`).
Called when processing an invitation, both when one is created locally and when
one is received over federation. Both inviter and invitee are represented by
their Matrix user ID (e.g. `@alice:example.com`).
The callback must return one of:
@@ -112,7 +113,9 @@ async def user_may_send_3pid_invite(
```
Called when processing an invitation using a third-party identifier (also called a 3PID,
e.g. an email address or a phone number).
e.g. an email address or a phone number). It is only called when a 3PID invite is created
locally - not when one is received in a room over federation. If the 3PID is already associated
with a Matrix ID, the spam check will go through the `user_may_invite` callback instead.
The inviter is represented by their Matrix user ID (e.g. `@alice:example.com`), and the
invitee is represented by its medium (e.g. "email") and its address
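
A minimal sketch of a module wiring up the `user_may_invite` callback through the standard module API; the `blocked_servers` config key and the blocklist rule are invented for illustration and are not part of Synapse.

```python
# Illustrative only: `blocked_servers` is a hypothetical module config key.
from typing import Union

import synapse.module_api
from synapse.module_api import ModuleApi, NOT_SPAM
from synapse.module_api.errors import Codes


class InviteBlocker:
    def __init__(self, config: dict, api: ModuleApi):
        self._blocked_servers = set(config.get("blocked_servers", []))
        # Register the callback described above.
        api.register_spam_checker_callbacks(user_may_invite=self.user_may_invite)

    async def user_may_invite(
        self, inviter: str, invitee: str, room_id: str
    ) -> Union["synapse.module_api.NOT_SPAM", Codes, bool]:
        # Matrix user IDs look like @alice:example.com; the part after the
        # first colon is the homeserver name.
        inviter_server = inviter.split(":", 1)[1]
        if inviter_server in self._blocked_servers:
            return Codes.FORBIDDEN
        return NOT_SPAM
```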


@@ -255,6 +255,8 @@ line to `/etc/default/matrix-synapse`:
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
*Note*: You may need to set `PYTHONMALLOC=malloc` to ensure that `jemalloc` can accurately calculate memory usage. By default, Python uses its internal small-object allocator, which may interfere with jemalloc's ability to track memory consumption correctly. This could prevent the [cache_autotuning](../configuration/config_documentation.md#caches-and-associated-values) feature from functioning as expected, as the Python allocator may not reach the memory threshold set by `max_cache_memory_usage`, thus not triggering the cache eviction process.
This made a significant difference on Python 2.7 - it's unclear how
much of an improvement it provides on Python 3.x.
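For example, both variables can be set together in `/etc/default/matrix-synapse` (the jemalloc path varies by distribution and architecture):
```
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
PYTHONMALLOC=malloc
```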


@@ -1434,7 +1434,7 @@ number of entries that can be stored.
Please see the [Config Conventions](#config-conventions) for information on how to specify memory size and cache expiry
durations.
* `max_cache_memory_usage` sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
They will continue to be evicted until the memory usage drops below the `target_memory_usage`, set in
They will continue to be evicted until the memory usage drops below the `target_cache_memory_usage`, set in
the setting below, or until the `min_cache_ttl` is hit. There is no default value for this option.
* `target_cache_memory_usage` sets a rough target for the desired memory usage of the caches. There is no default value
for this option.
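A sketch of how the two options sit together under the `caches` section of `homeserver.yaml` (the sizes are arbitrary examples):
```yaml
caches:
  max_cache_memory_usage: 1024M
  target_cache_memory_usage: 758M
```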
@@ -3722,6 +3722,8 @@ Additional sub-options for this setting include:
Required if `enabled` is set to true.
* `subject_claim`: Name of the claim containing a unique identifier for the user.
Optional, defaults to `sub`.
* `display_name_claim`: Name of the claim containing the display name for the user. Optional.
If provided, the display name will be set to the value of this claim upon first login.
* `issuer`: The issuer to validate the "iss" claim against. Optional. If provided the
"iss" claim will be required and validated for all JSON web tokens.
* `audiences`: A list of audiences to validate the "aud" claim against. Optional.
@@ -3736,6 +3738,7 @@ jwt_config:
secret: "provided-by-your-issuer"
algorithm: "provided-by-your-issuer"
subject_claim: "name_of_claim"
display_name_claim: "name_of_claim"
issuer: "provided-by-your-issuer"
audiences:
- "provided-by-your-issuer"
@@ -4368,7 +4371,13 @@ It is possible to scale the processes that handle sending outbound federation requests
by running a [`generic_worker`](../../workers.md#synapseappgeneric_worker) and adding its [`worker_name`](#worker_name) to
a `federation_sender_instances` map. Doing so will remove handling of this function from
the main process. Multiple workers can be added to this map, in which case the work is
balanced across them.
The way the load balancing works is that each outbound federation request is assigned
to a federation sender worker based on the hash of the destination server name. This
means that all requests being sent to the same destination will be processed by the same
worker instance. Multiple `federation_sender_instances` are useful when the homeserver
federates with many other servers.
This configuration setting must be shared between all workers handling federation
sending, and if changed all federation sender workers must be stopped at the same time
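A minimal sketch of the shared map (the worker names are illustrative):
```yaml
federation_sender_instances:
  - federation_sender1
  - federation_sender2
```
Since assignment is by destination hash, changing this list re-shards destinations across workers, which is why all sender workers must be stopped together when it changes.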
@@ -4518,6 +4527,9 @@ This setting has the following sub-options:
* `path`: The full path to a local Unix socket file. **If this is used, `host` and
`port` are ignored.** Defaults to `/tmp/redis.sock`.
* `password`: Optional password if configured on the Redis instance.
* `password_path`: Alternative to `password`, reading the password from an
external file. The file should be a plain text file, containing only the
password. Synapse reads the password from the given file once at startup.
* `dbid`: Optional Redis db id, for connecting to a specific Redis logical database.
* `use_tls`: Whether to use a TLS connection. Defaults to false.
* `certificate_file`: Optional path to the certificate file
@@ -4531,13 +4543,16 @@ This setting has the following sub-options:
_Changed in Synapse 1.85.0: Added path option to use a local Unix socket_
_Changed in Synapse 1.116.0: Added password\_path_
Example configuration:
```yaml
redis:
enabled: true
host: localhost
port: 6379
password: <secret_password>
password_path: <path_to_the_password_file>
# OR password: <secret_password>
dbid: <dbid>
#use_tls: True
#certificate_file: <path_to_the_certificate_file>


@@ -177,11 +177,11 @@ The following applies to Synapse installations that have been installed from sou
You can start the main Synapse process with Poetry by running the following command:
```console
poetry run synapse_homeserver --config-file [your homeserver.yaml]
poetry run synapse_homeserver --config-path [your homeserver.yaml]
```
For worker setups, you can run the following command:
```console
poetry run synapse_worker --config-file [your homeserver.yaml] --config-file [your worker.yaml]
poetry run synapse_worker --config-path [your homeserver.yaml] --config-path [your worker.yaml]
```
## Available worker applications

poetry.lock

@@ -147,75 +147,78 @@ files = [
[[package]]
name = "cffi"
version = "1.15.1"
version = "1.17.1"
description = "Foreign Function Interface for Python calling C code."
optional = false
python-versions = "*"
python-versions = ">=3.8"
files = [
{file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"},
{file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"},
{file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"},
{file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"},
{file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"},
{file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"},
{file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"},
{file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"},
{file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"},
{file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"},
{file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"},
{file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"},
{file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"},
{file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"},
{file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"},
{file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"},
{file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"},
{file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"},
{file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"},
{file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"},
{file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"},
{file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"},
{file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"},
{file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"},
{file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"},
{file = "cffi-1.17.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:df8b1c11f177bc2313ec4b2d46baec87a5f3e71fc8b45dab2ee7cae86d9aba14"},
{file = "cffi-1.17.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8f2cdc858323644ab277e9bb925ad72ae0e67f69e804f4898c070998d50b1a67"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:edae79245293e15384b51f88b00613ba9f7198016a5948b5dddf4917d4d26382"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45398b671ac6d70e67da8e4224a065cec6a93541bb7aebe1b198a61b58c7b702"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ad9413ccdeda48c5afdae7e4fa2192157e991ff761e7ab8fdd8926f40b160cc3"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5da5719280082ac6bd9aa7becb3938dc9f9cbd57fac7d2871717b1feb0902ab6"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bb1a08b8008b281856e5971307cc386a8e9c5b625ac297e853d36da6efe9c17"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:045d61c734659cc045141be4bae381a41d89b741f795af1dd018bfb532fd0df8"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:6883e737d7d9e4899a8a695e00ec36bd4e5e4f18fabe0aca0efe0a4b44cdb13e"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:6b8b4a92e1c65048ff98cfe1f735ef8f1ceb72e3d5f0c25fdb12087a23da22be"},
{file = "cffi-1.17.1-cp310-cp310-win32.whl", hash = "sha256:c9c3d058ebabb74db66e431095118094d06abf53284d9c81f27300d0e0d8bc7c"},
{file = "cffi-1.17.1-cp310-cp310-win_amd64.whl", hash = "sha256:0f048dcf80db46f0098ccac01132761580d28e28bc0f78ae0d58048063317e15"},
{file = "cffi-1.17.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a45e3c6913c5b87b3ff120dcdc03f6131fa0065027d0ed7ee6190736a74cd401"},
{file = "cffi-1.17.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:30c5e0cb5ae493c04c8b42916e52ca38079f1b235c2f8ae5f4527b963c401caf"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f75c7ab1f9e4aca5414ed4d8e5c0e303a34f4421f8a0d47a4d019ceff0ab6af4"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a1ed2dd2972641495a3ec98445e09766f077aee98a1c896dcb4ad0d303628e41"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:46bf43160c1a35f7ec506d254e5c890f3c03648a4dbac12d624e4490a7046cd1"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a24ed04c8ffd54b0729c07cee15a81d964e6fee0e3d4d342a27b020d22959dc6"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:610faea79c43e44c71e1ec53a554553fa22321b65fae24889706c0a84d4ad86d"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a9b15d491f3ad5d692e11f6b71f7857e7835eb677955c00cc0aefcd0669adaf6"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:de2ea4b5833625383e464549fec1bc395c1bdeeb5f25c4a3a82b5a8c756ec22f"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:fc48c783f9c87e60831201f2cce7f3b2e4846bf4d8728eabe54d60700b318a0b"},
{file = "cffi-1.17.1-cp311-cp311-win32.whl", hash = "sha256:85a950a4ac9c359340d5963966e3e0a94a676bd6245a4b55bc43949eee26a655"},
{file = "cffi-1.17.1-cp311-cp311-win_amd64.whl", hash = "sha256:caaf0640ef5f5517f49bc275eca1406b0ffa6aa184892812030f04c2abf589a0"},
{file = "cffi-1.17.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4"},
{file = "cffi-1.17.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93"},
{file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3"},
{file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8"},
{file = "cffi-1.17.1-cp312-cp312-win32.whl", hash = "sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65"},
{file = "cffi-1.17.1-cp312-cp312-win_amd64.whl", hash = "sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903"},
{file = "cffi-1.17.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e"},
{file = "cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd"},
{file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed"},
{file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9"},
{file = "cffi-1.17.1-cp313-cp313-win32.whl", hash = "sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d"},
{file = "cffi-1.17.1-cp313-cp313-win_amd64.whl", hash = "sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a"},
{file = "cffi-1.17.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:636062ea65bd0195bc012fea9321aca499c0504409f413dc88af450b57ffd03b"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c7eac2ef9b63c79431bc4b25f1cd649d7f061a28808cbc6c47b534bd789ef964"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e221cf152cff04059d011ee126477f0d9588303eb57e88923578ace7baad17f9"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:31000ec67d4221a71bd3f67df918b1f88f676f1c3b535a7eb473255fdc0b83fc"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6f17be4345073b0a7b8ea599688f692ac3ef23ce28e5df79c04de519dbc4912c"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2b1fac190ae3ebfe37b979cc1ce69c81f4e4fe5746bb401dca63a9062cdaf1"},
{file = "cffi-1.17.1-cp38-cp38-win32.whl", hash = "sha256:7596d6620d3fa590f677e9ee430df2958d2d6d6de2feeae5b20e82c00b76fbf8"},
{file = "cffi-1.17.1-cp38-cp38-win_amd64.whl", hash = "sha256:78122be759c3f8a014ce010908ae03364d00a1f81ab5c7f4a7a5120607ea56e1"},
{file = "cffi-1.17.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b2ab587605f4ba0bf81dc0cb08a41bd1c0a5906bd59243d56bad7668a6fc6c16"},
{file = "cffi-1.17.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:28b16024becceed8c6dfbc75629e27788d8a3f9030691a1dbf9821a128b22c36"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1d599671f396c4723d016dbddb72fe8e0397082b0a77a4fab8028923bec050e8"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ca74b8dbe6e8e8263c0ffd60277de77dcee6c837a3d0881d8c1ead7268c9e576"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f7f5baafcc48261359e14bcd6d9bff6d4b28d9103847c9e136694cb0501aef87"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:98e3969bcff97cae1b2def8ba499ea3d6f31ddfdb7635374834cf89a1a08ecf0"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cdf5ce3acdfd1661132f2a9c19cac174758dc2352bfe37d98aa7512c6b7178b3"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9755e4345d1ec879e3849e62222a18c7174d65a6a92d5b346b1863912168b595"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:f1e22e8c4419538cb197e4dd60acc919d7696e5ef98ee4da4e01d3f8cfa4cc5a"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:c03e868a0b3bc35839ba98e74211ed2b05d2119be4e8a0f224fba9384f1fe02e"},
{file = "cffi-1.17.1-cp39-cp39-win32.whl", hash = "sha256:e31ae45bc2e29f6b2abd0de1cc3b9d5205aa847cafaecb8af1476a609a2f6eb7"},
{file = "cffi-1.17.1-cp39-cp39-win_amd64.whl", hash = "sha256:d016c76bdd850f3c626af19b0542c9677ba156e4ee4fccfdd7848803533ef662"},
{file = "cffi-1.17.1.tar.gz", hash = "sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824"},
]
[package.dependencies]
@@ -357,38 +360,38 @@ files = [
[[package]]
name = "cryptography"
version = "43.0.1"
version = "43.0.3"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
optional = false
python-versions = ">=3.7"
files = [
{file = "cryptography-43.0.1-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:8385d98f6a3bf8bb2d65a73e17ed87a3ba84f6991c155691c51112075f9ffc5d"},
{file = "cryptography-43.0.1-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:27e613d7077ac613e399270253259d9d53872aaf657471473ebfc9a52935c062"},
{file = "cryptography-43.0.1-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:68aaecc4178e90719e95298515979814bda0cbada1256a4485414860bd7ab962"},
{file = "cryptography-43.0.1-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:de41fd81a41e53267cb020bb3a7212861da53a7d39f863585d13ea11049cf277"},
{file = "cryptography-43.0.1-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:f98bf604c82c416bc829e490c700ca1553eafdf2912a91e23a79d97d9801372a"},
{file = "cryptography-43.0.1-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:61ec41068b7b74268fa86e3e9e12b9f0c21fcf65434571dbb13d954bceb08042"},
{file = "cryptography-43.0.1-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:014f58110f53237ace6a408b5beb6c427b64e084eb451ef25a28308270086494"},
{file = "cryptography-43.0.1-cp37-abi3-win32.whl", hash = "sha256:2bd51274dcd59f09dd952afb696bf9c61a7a49dfc764c04dd33ef7a6b502a1e2"},
{file = "cryptography-43.0.1-cp37-abi3-win_amd64.whl", hash = "sha256:666ae11966643886c2987b3b721899d250855718d6d9ce41b521252a17985f4d"},
{file = "cryptography-43.0.1-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:ac119bb76b9faa00f48128b7f5679e1d8d437365c5d26f1c2c3f0da4ce1b553d"},
{file = "cryptography-43.0.1-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1bbcce1a551e262dfbafb6e6252f1ae36a248e615ca44ba302df077a846a8806"},
{file = "cryptography-43.0.1-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:58d4e9129985185a06d849aa6df265bdd5a74ca6e1b736a77959b498e0505b85"},
{file = "cryptography-43.0.1-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:d03a475165f3134f773d1388aeb19c2d25ba88b6a9733c5c590b9ff7bbfa2e0c"},
{file = "cryptography-43.0.1-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:511f4273808ab590912a93ddb4e3914dfd8a388fed883361b02dea3791f292e1"},
{file = "cryptography-43.0.1-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:80eda8b3e173f0f247f711eef62be51b599b5d425c429b5d4ca6a05e9e856baa"},
{file = "cryptography-43.0.1-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:38926c50cff6f533f8a2dae3d7f19541432610d114a70808f0926d5aaa7121e4"},
{file = "cryptography-43.0.1-cp39-abi3-win32.whl", hash = "sha256:a575913fb06e05e6b4b814d7f7468c2c660e8bb16d8d5a1faf9b33ccc569dd47"},
{file = "cryptography-43.0.1-cp39-abi3-win_amd64.whl", hash = "sha256:d75601ad10b059ec832e78823b348bfa1a59f6b8d545db3a24fd44362a1564cb"},
{file = "cryptography-43.0.1-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:ea25acb556320250756e53f9e20a4177515f012c9eaea17eb7587a8c4d8ae034"},
{file = "cryptography-43.0.1-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:c1332724be35d23a854994ff0b66530119500b6053d0bd3363265f7e5e77288d"},
{file = "cryptography-43.0.1-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:fba1007b3ef89946dbbb515aeeb41e30203b004f0b4b00e5e16078b518563289"},
{file = "cryptography-43.0.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:5b43d1ea6b378b54a1dc99dd8a2b5be47658fe9a7ce0a58ff0b55f4b43ef2b84"},
{file = "cryptography-43.0.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:88cce104c36870d70c49c7c8fd22885875d950d9ee6ab54df2745f83ba0dc365"},
{file = "cryptography-43.0.1-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:9d3cdb25fa98afdd3d0892d132b8d7139e2c087da1712041f6b762e4f807cc96"},
{file = "cryptography-43.0.1-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e710bf40870f4db63c3d7d929aa9e09e4e7ee219e703f949ec4073b4294f6172"},
{file = "cryptography-43.0.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:7c05650fe8023c5ed0d46793d4b7d7e6cd9c04e68eabe5b0aeea836e37bdcec2"},
{file = "cryptography-43.0.1.tar.gz", hash = "sha256:203e92a75716d8cfb491dc47c79e17d0d9207ccffcbcb35f598fbe463ae3444d"},
{file = "cryptography-43.0.3-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:bf7a1932ac4176486eab36a19ed4c0492da5d97123f1406cf15e41b05e787d2e"},
{file = "cryptography-43.0.3-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:63efa177ff54aec6e1c0aefaa1a241232dcd37413835a9b674b6e3f0ae2bfd3e"},
{file = "cryptography-43.0.3-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e1ce50266f4f70bf41a2c6dc4358afadae90e2a1e5342d3c08883df1675374f"},
{file = "cryptography-43.0.3-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:443c4a81bb10daed9a8f334365fe52542771f25aedaf889fd323a853ce7377d6"},
{file = "cryptography-43.0.3-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:74f57f24754fe349223792466a709f8e0c093205ff0dca557af51072ff47ab18"},
{file = "cryptography-43.0.3-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:9762ea51a8fc2a88b70cf2995e5675b38d93bf36bd67d91721c309df184f49bd"},
{file = "cryptography-43.0.3-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:81ef806b1fef6b06dcebad789f988d3b37ccaee225695cf3e07648eee0fc6b73"},
{file = "cryptography-43.0.3-cp37-abi3-win32.whl", hash = "sha256:cbeb489927bd7af4aa98d4b261af9a5bc025bd87f0e3547e11584be9e9427be2"},
{file = "cryptography-43.0.3-cp37-abi3-win_amd64.whl", hash = "sha256:f46304d6f0c6ab8e52770addfa2fc41e6629495548862279641972b6215451cd"},
{file = "cryptography-43.0.3-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:8ac43ae87929a5982f5948ceda07001ee5e83227fd69cf55b109144938d96984"},
{file = "cryptography-43.0.3-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:846da004a5804145a5f441b8530b4bf35afbf7da70f82409f151695b127213d5"},
{file = "cryptography-43.0.3-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0f996e7268af62598f2fc1204afa98a3b5712313a55c4c9d434aef49cadc91d4"},
{file = "cryptography-43.0.3-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:f7b178f11ed3664fd0e995a47ed2b5ff0a12d893e41dd0494f406d1cf555cab7"},
{file = "cryptography-43.0.3-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:c2e6fc39c4ab499049df3bdf567f768a723a5e8464816e8f009f121a5a9f4405"},
{file = "cryptography-43.0.3-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:e1be4655c7ef6e1bbe6b5d0403526601323420bcf414598955968c9ef3eb7d16"},
{file = "cryptography-43.0.3-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:df6b6c6d742395dd77a23ea3728ab62f98379eff8fb61be2744d4679ab678f73"},
{file = "cryptography-43.0.3-cp39-abi3-win32.whl", hash = "sha256:d56e96520b1020449bbace2b78b603442e7e378a9b3bd68de65c782db1507995"},
{file = "cryptography-43.0.3-cp39-abi3-win_amd64.whl", hash = "sha256:0c580952eef9bf68c4747774cde7ec1d85a6e61de97281f2dba83c7d2c806362"},
{file = "cryptography-43.0.3-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:d03b5621a135bffecad2c73e9f4deb1a0f977b9a8ffe6f8e002bf6c9d07b918c"},
{file = "cryptography-43.0.3-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:a2a431ee15799d6db9fe80c82b055bae5a752bef645bba795e8e52687c69efe3"},
{file = "cryptography-43.0.3-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:281c945d0e28c92ca5e5930664c1cefd85efe80e5c0d2bc58dd63383fda29f83"},
{file = "cryptography-43.0.3-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:f18c716be16bc1fea8e95def49edf46b82fccaa88587a45f8dc0ff6ab5d8e0a7"},
{file = "cryptography-43.0.3-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:4a02ded6cd4f0a5562a8887df8b3bd14e822a90f97ac5e544c162899bc467664"},
{file = "cryptography-43.0.3-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:53a583b6637ab4c4e3591a15bc9db855b8d9dee9a669b550f311480acab6eb08"},
{file = "cryptography-43.0.3-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:1ec0bcf7e17c0c5669d881b1cd38c4972fade441b27bda1051665faaa89bdcaa"},
{file = "cryptography-43.0.3-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:2ce6fae5bdad59577b44e4dfed356944fbf1d925269114c28be377692643b4ff"},
{file = "cryptography-43.0.3.tar.gz", hash = "sha256:315b9001266a492a6ff443b61238f956b214dbec9910a081ba5b6646a055a805"},
]
[package.dependencies]
@@ -401,7 +404,7 @@ nox = ["nox"]
pep8test = ["check-sdist", "click", "mypy", "ruff"]
sdist = ["build"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi", "cryptography-vectors (==43.0.1)", "pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
test = ["certifi", "cryptography-vectors (==43.0.3)", "pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
test-randomorder = ["pytest-randomly"]
[[package]]
@@ -1319,44 +1322,44 @@ files = [
[[package]]
name = "mypy"
version = "1.10.1"
version = "1.11.2"
description = "Optional static typing for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "mypy-1.10.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e36f229acfe250dc660790840916eb49726c928e8ce10fbdf90715090fe4ae02"},
{file = "mypy-1.10.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:51a46974340baaa4145363b9e051812a2446cf583dfaeba124af966fa44593f7"},
{file = "mypy-1.10.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:901c89c2d67bba57aaaca91ccdb659aa3a312de67f23b9dfb059727cce2e2e0a"},
{file = "mypy-1.10.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:0cd62192a4a32b77ceb31272d9e74d23cd88c8060c34d1d3622db3267679a5d9"},
{file = "mypy-1.10.1-cp310-cp310-win_amd64.whl", hash = "sha256:a2cbc68cb9e943ac0814c13e2452d2046c2f2b23ff0278e26599224cf164e78d"},
{file = "mypy-1.10.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:bd6f629b67bb43dc0d9211ee98b96d8dabc97b1ad38b9b25f5e4c4d7569a0c6a"},
{file = "mypy-1.10.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a1bbb3a6f5ff319d2b9d40b4080d46cd639abe3516d5a62c070cf0114a457d84"},
{file = "mypy-1.10.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b8edd4e9bbbc9d7b79502eb9592cab808585516ae1bcc1446eb9122656c6066f"},
{file = "mypy-1.10.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:6166a88b15f1759f94a46fa474c7b1b05d134b1b61fca627dd7335454cc9aa6b"},
{file = "mypy-1.10.1-cp311-cp311-win_amd64.whl", hash = "sha256:5bb9cd11c01c8606a9d0b83ffa91d0b236a0e91bc4126d9ba9ce62906ada868e"},
{file = "mypy-1.10.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:d8681909f7b44d0b7b86e653ca152d6dff0eb5eb41694e163c6092124f8246d7"},
{file = "mypy-1.10.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:378c03f53f10bbdd55ca94e46ec3ba255279706a6aacaecac52ad248f98205d3"},
{file = "mypy-1.10.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bacf8f3a3d7d849f40ca6caea5c055122efe70e81480c8328ad29c55c69e93e"},
{file = "mypy-1.10.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:701b5f71413f1e9855566a34d6e9d12624e9e0a8818a5704d74d6b0402e66c04"},
{file = "mypy-1.10.1-cp312-cp312-win_amd64.whl", hash = "sha256:3c4c2992f6ea46ff7fce0072642cfb62af7a2484efe69017ed8b095f7b39ef31"},
{file = "mypy-1.10.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:604282c886497645ffb87b8f35a57ec773a4a2721161e709a4422c1636ddde5c"},
{file = "mypy-1.10.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:37fd87cab83f09842653f08de066ee68f1182b9b5282e4634cdb4b407266bade"},
{file = "mypy-1.10.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8addf6313777dbb92e9564c5d32ec122bf2c6c39d683ea64de6a1fd98b90fe37"},
{file = "mypy-1.10.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:5cc3ca0a244eb9a5249c7c583ad9a7e881aa5d7b73c35652296ddcdb33b2b9c7"},
{file = "mypy-1.10.1-cp38-cp38-win_amd64.whl", hash = "sha256:1b3a2ffce52cc4dbaeee4df762f20a2905aa171ef157b82192f2e2f368eec05d"},
{file = "mypy-1.10.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:fe85ed6836165d52ae8b88f99527d3d1b2362e0cb90b005409b8bed90e9059b3"},
{file = "mypy-1.10.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c2ae450d60d7d020d67ab440c6e3fae375809988119817214440033f26ddf7bf"},
{file = "mypy-1.10.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6be84c06e6abd72f960ba9a71561c14137a583093ffcf9bbfaf5e613d63fa531"},
{file = "mypy-1.10.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:2189ff1e39db399f08205e22a797383613ce1cb0cb3b13d8bcf0170e45b96cc3"},
{file = "mypy-1.10.1-cp39-cp39-win_amd64.whl", hash = "sha256:97a131ee36ac37ce9581f4220311247ab6cba896b4395b9c87af0675a13a755f"},
{file = "mypy-1.10.1-py3-none-any.whl", hash = "sha256:71d8ac0b906354ebda8ef1673e5fde785936ac1f29ff6987c7483cfbd5a4235a"},
{file = "mypy-1.10.1.tar.gz", hash = "sha256:1f8f492d7db9e3593ef42d4f115f04e556130f2819ad33ab84551403e97dd4c0"},
{file = "mypy-1.11.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d42a6dd818ffce7be66cce644f1dff482f1d97c53ca70908dff0b9ddc120b77a"},
{file = "mypy-1.11.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:801780c56d1cdb896eacd5619a83e427ce436d86a3bdf9112527f24a66618fef"},
{file = "mypy-1.11.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:41ea707d036a5307ac674ea172875f40c9d55c5394f888b168033177fce47383"},
{file = "mypy-1.11.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:6e658bd2d20565ea86da7d91331b0eed6d2eee22dc031579e6297f3e12c758c8"},
{file = "mypy-1.11.2-cp310-cp310-win_amd64.whl", hash = "sha256:478db5f5036817fe45adb7332d927daa62417159d49783041338921dcf646fc7"},
{file = "mypy-1.11.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:75746e06d5fa1e91bfd5432448d00d34593b52e7e91a187d981d08d1f33d4385"},
{file = "mypy-1.11.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a976775ab2256aadc6add633d44f100a2517d2388906ec4f13231fafbb0eccca"},
{file = "mypy-1.11.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:cd953f221ac1379050a8a646585a29574488974f79d8082cedef62744f0a0104"},
{file = "mypy-1.11.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:57555a7715c0a34421013144a33d280e73c08df70f3a18a552938587ce9274f4"},
{file = "mypy-1.11.2-cp311-cp311-win_amd64.whl", hash = "sha256:36383a4fcbad95f2657642a07ba22ff797de26277158f1cc7bd234821468b1b6"},
{file = "mypy-1.11.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:e8960dbbbf36906c5c0b7f4fbf2f0c7ffb20f4898e6a879fcf56a41a08b0d318"},
{file = "mypy-1.11.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:06d26c277962f3fb50e13044674aa10553981ae514288cb7d0a738f495550b36"},
{file = "mypy-1.11.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:6e7184632d89d677973a14d00ae4d03214c8bc301ceefcdaf5c474866814c987"},
{file = "mypy-1.11.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:3a66169b92452f72117e2da3a576087025449018afc2d8e9bfe5ffab865709ca"},
{file = "mypy-1.11.2-cp312-cp312-win_amd64.whl", hash = "sha256:969ea3ef09617aff826885a22ece0ddef69d95852cdad2f60c8bb06bf1f71f70"},
{file = "mypy-1.11.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:37c7fa6121c1cdfcaac97ce3d3b5588e847aa79b580c1e922bb5d5d2902df19b"},
{file = "mypy-1.11.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4a8a53bc3ffbd161b5b2a4fff2f0f1e23a33b0168f1c0778ec70e1a3d66deb86"},
{file = "mypy-1.11.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2ff93107f01968ed834f4256bc1fc4475e2fecf6c661260066a985b52741ddce"},
{file = "mypy-1.11.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:edb91dded4df17eae4537668b23f0ff6baf3707683734b6a818d5b9d0c0c31a1"},
{file = "mypy-1.11.2-cp38-cp38-win_amd64.whl", hash = "sha256:ee23de8530d99b6db0573c4ef4bd8f39a2a6f9b60655bf7a1357e585a3486f2b"},
{file = "mypy-1.11.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:801ca29f43d5acce85f8e999b1e431fb479cb02d0e11deb7d2abb56bdaf24fd6"},
{file = "mypy-1.11.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:af8d155170fcf87a2afb55b35dc1a0ac21df4431e7d96717621962e4b9192e70"},
{file = "mypy-1.11.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:f7821776e5c4286b6a13138cc935e2e9b6fde05e081bdebf5cdb2bb97c9df81d"},
{file = "mypy-1.11.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:539c570477a96a4e6fb718b8d5c3e0c0eba1f485df13f86d2970c91f0673148d"},
{file = "mypy-1.11.2-cp39-cp39-win_amd64.whl", hash = "sha256:3f14cd3d386ac4d05c5a39a51b84387403dadbd936e17cb35882134d4f8f0d24"},
{file = "mypy-1.11.2-py3-none-any.whl", hash = "sha256:b499bc07dbdcd3de92b0a8b29fdf592c111276f6a12fe29c30f6c417dd546d12"},
{file = "mypy-1.11.2.tar.gz", hash = "sha256:7f9993ad3e0ffdc95c2a14b66dee63729f021968bff8ad911867579c65d13a79"},
]
[package.dependencies]
mypy-extensions = ">=1.0.0"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typing-extensions = ">=4.1.0"
typing-extensions = ">=4.6.0"
[package.extras]
dmypy = ["psutil (>=4.0)"]
@@ -1377,16 +1380,17 @@ files = [
[[package]]
name = "mypy-zope"
version = "1.0.5"
version = "1.0.7"
description = "Plugin for mypy to support zope interfaces"
optional = false
python-versions = "*"
files = [
{file = "mypy_zope-1.0.5.tar.gz", hash = "sha256:2440406d49c0e1199c1cd819c92a2c4957de65579c6abc8a081c927f4bdc8d49"},
{file = "mypy_zope-1.0.7-py3-none-any.whl", hash = "sha256:f19de249574319d81083b15f8a022c6b15583582f23340a860922141f1b651ca"},
{file = "mypy_zope-1.0.7.tar.gz", hash = "sha256:32a79ce78647c0bea61e7e0c0eb1233fcb97bb94e8950cca73f17d3419c602f7"},
]
[package.dependencies]
mypy = ">=1.0.0,<1.11.0"
mypy = ">=1.0.0,<1.12.0"
"zope.interface" = "*"
"zope.schema" = "*"
@@ -1447,13 +1451,13 @@ dev = ["jinja2"]
[[package]]
name = "phonenumbers"
version = "8.13.45"
version = "8.13.48"
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
optional = false
python-versions = "*"
files = [
{file = "phonenumbers-8.13.45-py2.py3-none-any.whl", hash = "sha256:bf05ec20fcd13f0d53e43a34ed7bd1c8be26a72b88fce4b8c64fca5b4641987a"},
{file = "phonenumbers-8.13.45.tar.gz", hash = "sha256:53679a95b6060fd5e15467759252c87933d8566d6a5be00995a579eb0e02435b"},
{file = "phonenumbers-8.13.48-py2.py3-none-any.whl", hash = "sha256:5c51939acefa390eb74119750afb10a85d3c628dc83fd62c52d6f532fcf5d205"},
{file = "phonenumbers-8.13.48.tar.gz", hash = "sha256:62d8df9b0f3c3c41571c6b396f044ddd999d61631534001b8be7fdf7ba1b18f3"},
]
[[package]]
@@ -1594,24 +1598,20 @@ twisted = ["twisted"]
[[package]]
name = "psycopg2"
version = "2.9.9"
version = "2.9.10"
description = "psycopg2 - Python-PostgreSQL Database Adapter"
optional = true
python-versions = ">=3.7"
python-versions = ">=3.8"
files = [
{file = "psycopg2-2.9.9-cp310-cp310-win32.whl", hash = "sha256:38a8dcc6856f569068b47de286b472b7c473ac7977243593a288ebce0dc89516"},
{file = "psycopg2-2.9.9-cp310-cp310-win_amd64.whl", hash = "sha256:426f9f29bde126913a20a96ff8ce7d73fd8a216cfb323b1f04da402d452853c3"},
{file = "psycopg2-2.9.9-cp311-cp311-win32.whl", hash = "sha256:ade01303ccf7ae12c356a5e10911c9e1c51136003a9a1d92f7aa9d010fb98372"},
{file = "psycopg2-2.9.9-cp311-cp311-win_amd64.whl", hash = "sha256:121081ea2e76729acfb0673ff33755e8703d45e926e416cb59bae3a86c6a4981"},
{file = "psycopg2-2.9.9-cp312-cp312-win32.whl", hash = "sha256:d735786acc7dd25815e89cc4ad529a43af779db2e25aa7c626de864127e5a024"},
{file = "psycopg2-2.9.9-cp312-cp312-win_amd64.whl", hash = "sha256:a7653d00b732afb6fc597e29c50ad28087dcb4fbfb28e86092277a559ae4e693"},
{file = "psycopg2-2.9.9-cp37-cp37m-win32.whl", hash = "sha256:5e0d98cade4f0e0304d7d6f25bbfbc5bd186e07b38eac65379309c4ca3193efa"},
{file = "psycopg2-2.9.9-cp37-cp37m-win_amd64.whl", hash = "sha256:7e2dacf8b009a1c1e843b5213a87f7c544b2b042476ed7755be813eaf4e8347a"},
{file = "psycopg2-2.9.9-cp38-cp38-win32.whl", hash = "sha256:ff432630e510709564c01dafdbe996cb552e0b9f3f065eb89bdce5bd31fabf4c"},
{file = "psycopg2-2.9.9-cp38-cp38-win_amd64.whl", hash = "sha256:bac58c024c9922c23550af2a581998624d6e02350f4ae9c5f0bc642c633a2d5e"},
{file = "psycopg2-2.9.9-cp39-cp39-win32.whl", hash = "sha256:c92811b2d4c9b6ea0285942b2e7cac98a59e166d59c588fe5cfe1eda58e72d59"},
{file = "psycopg2-2.9.9-cp39-cp39-win_amd64.whl", hash = "sha256:de80739447af31525feddeb8effd640782cf5998e1a4e9192ebdf829717e3913"},
{file = "psycopg2-2.9.9.tar.gz", hash = "sha256:d1454bde93fb1e224166811694d600e746430c006fbb031ea06ecc2ea41bf156"},
{file = "psycopg2-2.9.10-cp310-cp310-win32.whl", hash = "sha256:5df2b672140f95adb453af93a7d669d7a7bf0a56bcd26f1502329166f4a61716"},
{file = "psycopg2-2.9.10-cp310-cp310-win_amd64.whl", hash = "sha256:c6f7b8561225f9e711a9c47087388a97fdc948211c10a4bccbf0ba68ab7b3b5a"},
{file = "psycopg2-2.9.10-cp311-cp311-win32.whl", hash = "sha256:47c4f9875125344f4c2b870e41b6aad585901318068acd01de93f3677a6522c2"},
{file = "psycopg2-2.9.10-cp311-cp311-win_amd64.whl", hash = "sha256:0435034157049f6846e95103bd8f5a668788dd913a7c30162ca9503fdf542cb4"},
{file = "psycopg2-2.9.10-cp312-cp312-win32.whl", hash = "sha256:65a63d7ab0e067e2cdb3cf266de39663203d38d6a8ed97f5ca0cb315c73fe067"},
{file = "psycopg2-2.9.10-cp312-cp312-win_amd64.whl", hash = "sha256:4a579d6243da40a7b3182e0430493dbd55950c493d8c68f4eec0b302f6bbf20e"},
{file = "psycopg2-2.9.10-cp39-cp39-win32.whl", hash = "sha256:9d5b3b94b79a844a986d029eee38998232451119ad653aea42bb9220a8c5066b"},
{file = "psycopg2-2.9.10-cp39-cp39-win_amd64.whl", hash = "sha256:88138c8dedcbfa96408023ea2b0c369eda40fe5d75002c0964c78f46f11fa442"},
{file = "psycopg2-2.9.10.tar.gz", hash = "sha256:12ec0b40b0273f95296233e8750441339298e6a572f7039da5b260e3c8b60e11"},
]
[[package]]
@@ -1974,13 +1974,13 @@ six = ">=1.5"
[[package]]
name = "python-multipart"
version = "0.0.10"
version = "0.0.16"
description = "A streaming multipart parser for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "python_multipart-0.0.10-py3-none-any.whl", hash = "sha256:2b06ad9e8d50c7a8db80e3b56dab590137b323410605af2be20d62a5f1ba1dc8"},
{file = "python_multipart-0.0.10.tar.gz", hash = "sha256:46eb3c6ce6fdda5fb1a03c7e11d490e407c6930a2703fe7aef4da71c374688fa"},
{file = "python_multipart-0.0.16-py3-none-any.whl", hash = "sha256:c2759b7b976ef3937214dfb592446b59dfaa5f04682a076f78b117c94776d87a"},
{file = "python_multipart-0.0.16.tar.gz", hash = "sha256:8dee37b88dab9b59922ca173c35acb627cc12ec74019f5cd4578369c6df36554"},
]
[[package]]
@@ -2277,29 +2277,29 @@ files = [
[[package]]
name = "ruff"
version = "0.6.7"
version = "0.7.1"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
files = [
{file = "ruff-0.6.7-py3-none-linux_armv6l.whl", hash = "sha256:08277b217534bfdcc2e1377f7f933e1c7957453e8a79764d004e44c40db923f2"},
{file = "ruff-0.6.7-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:c6707a32e03b791f4448dc0dce24b636cbcdee4dd5607adc24e5ee73fd86c00a"},
{file = "ruff-0.6.7-py3-none-macosx_11_0_arm64.whl", hash = "sha256:533d66b7774ef224e7cf91506a7dafcc9e8ec7c059263ec46629e54e7b1f90ab"},
{file = "ruff-0.6.7-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:17a86aac6f915932d259f7bec79173e356165518859f94649d8c50b81ff087e9"},
{file = "ruff-0.6.7-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:b3f8822defd260ae2460ea3832b24d37d203c3577f48b055590a426a722d50ef"},
{file = "ruff-0.6.7-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9ba4efe5c6dbbb58be58dd83feedb83b5e95c00091bf09987b4baf510fee5c99"},
{file = "ruff-0.6.7-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:525201b77f94d2b54868f0cbe5edc018e64c22563da6c5c2e5c107a4e85c1c0d"},
{file = "ruff-0.6.7-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8854450839f339e1049fdbe15d875384242b8e85d5c6947bb2faad33c651020b"},
{file = "ruff-0.6.7-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2f0b62056246234d59cbf2ea66e84812dc9ec4540518e37553513392c171cb18"},
{file = "ruff-0.6.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6b1462fa56c832dc0cea5b4041cfc9c97813505d11cce74ebc6d1aae068de36b"},
{file = "ruff-0.6.7-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:02b083770e4cdb1495ed313f5694c62808e71764ec6ee5db84eedd82fd32d8f5"},
{file = "ruff-0.6.7-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:0c05fd37013de36dfa883a3854fae57b3113aaa8abf5dea79202675991d48624"},
{file = "ruff-0.6.7-py3-none-musllinux_1_2_i686.whl", hash = "sha256:f49c9caa28d9bbfac4a637ae10327b3db00f47d038f3fbb2195c4d682e925b14"},
{file = "ruff-0.6.7-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:a0e1655868164e114ba43a908fd2d64a271a23660195017c17691fb6355d59bb"},
{file = "ruff-0.6.7-py3-none-win32.whl", hash = "sha256:a939ca435b49f6966a7dd64b765c9df16f1faed0ca3b6f16acdf7731969deb35"},
{file = "ruff-0.6.7-py3-none-win_amd64.whl", hash = "sha256:590445eec5653f36248584579c06252ad2e110a5d1f32db5420de35fb0e1c977"},
{file = "ruff-0.6.7-py3-none-win_arm64.whl", hash = "sha256:b28f0d5e2f771c1fe3c7a45d3f53916fc74a480698c4b5731f0bea61e52137c8"},
{file = "ruff-0.6.7.tar.gz", hash = "sha256:44e52129d82266fa59b587e2cd74def5637b730a69c4542525dfdecfaae38bd5"},
{file = "ruff-0.7.1-py3-none-linux_armv6l.whl", hash = "sha256:cb1bc5ed9403daa7da05475d615739cc0212e861b7306f314379d958592aaa89"},
{file = "ruff-0.7.1-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:27c1c52a8d199a257ff1e5582d078eab7145129aa02721815ca8fa4f9612dc35"},
{file = "ruff-0.7.1-py3-none-macosx_11_0_arm64.whl", hash = "sha256:588a34e1ef2ea55b4ddfec26bbe76bc866e92523d8c6cdec5e8aceefeff02d99"},
{file = "ruff-0.7.1-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94fc32f9cdf72dc75c451e5f072758b118ab8100727168a3df58502b43a599ca"},
{file = "ruff-0.7.1-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:985818742b833bffa543a84d1cc11b5e6871de1b4e0ac3060a59a2bae3969250"},
{file = "ruff-0.7.1-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:32f1e8a192e261366c702c5fb2ece9f68d26625f198a25c408861c16dc2dea9c"},
{file = "ruff-0.7.1-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:699085bf05819588551b11751eff33e9ca58b1b86a6843e1b082a7de40da1565"},
{file = "ruff-0.7.1-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:344cc2b0814047dc8c3a8ff2cd1f3d808bb23c6658db830d25147339d9bf9ea7"},
{file = "ruff-0.7.1-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4316bbf69d5a859cc937890c7ac7a6551252b6a01b1d2c97e8fc96e45a7c8b4a"},
{file = "ruff-0.7.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:79d3af9dca4c56043e738a4d6dd1e9444b6d6c10598ac52d146e331eb155a8ad"},
{file = "ruff-0.7.1-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:c5c121b46abde94a505175524e51891f829414e093cd8326d6e741ecfc0a9112"},
{file = "ruff-0.7.1-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:8422104078324ea250886954e48f1373a8fe7de59283d747c3a7eca050b4e378"},
{file = "ruff-0.7.1-py3-none-musllinux_1_2_i686.whl", hash = "sha256:56aad830af8a9db644e80098fe4984a948e2b6fc2e73891538f43bbe478461b8"},
{file = "ruff-0.7.1-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:658304f02f68d3a83c998ad8bf91f9b4f53e93e5412b8f2388359d55869727fd"},
{file = "ruff-0.7.1-py3-none-win32.whl", hash = "sha256:b517a2011333eb7ce2d402652ecaa0ac1a30c114fbbd55c6b8ee466a7f600ee9"},
{file = "ruff-0.7.1-py3-none-win_amd64.whl", hash = "sha256:f38c41fcde1728736b4eb2b18850f6d1e3eedd9678c914dede554a70d5241307"},
{file = "ruff-0.7.1-py3-none-win_arm64.whl", hash = "sha256:19aa200ec824c0f36d0c9114c8ec0087082021732979a359d6f3c390a6ff2a37"},
{file = "ruff-0.7.1.tar.gz", hash = "sha256:9d8a41d4aa2dad1575adb98a82870cf5db5f76b2938cf2206c22c940034a36f4"},
]
[[package]]
@@ -2334,13 +2334,13 @@ doc = ["Sphinx", "sphinx-rtd-theme"]
[[package]]
name = "sentry-sdk"
version = "2.14.0"
version = "2.17.0"
description = "Python client for Sentry (https://sentry.io)"
optional = true
python-versions = ">=3.6"
files = [
{file = "sentry_sdk-2.14.0-py2.py3-none-any.whl", hash = "sha256:b8bc3dc51d06590df1291b7519b85c75e2ced4f28d9ea655b6d54033503b5bf4"},
{file = "sentry_sdk-2.14.0.tar.gz", hash = "sha256:1e0e2eaf6dad918c7d1e0edac868a7bf20017b177f242cefe2a6bcd47955961d"},
{file = "sentry_sdk-2.17.0-py2.py3-none-any.whl", hash = "sha256:625955884b862cc58748920f9e21efdfb8e0d4f98cca4ab0d3918576d5b606ad"},
{file = "sentry_sdk-2.17.0.tar.gz", hash = "sha256:dd0a05352b78ffeacced73a94e86f38b32e2eae15fff5f30ca5abb568a72eacf"},
]
[package.dependencies]
@@ -2363,6 +2363,7 @@ falcon = ["falcon (>=1.4)"]
fastapi = ["fastapi (>=0.79.0)"]
flask = ["blinker (>=1.1)", "flask (>=0.11)", "markupsafe"]
grpcio = ["grpcio (>=1.21.1)", "protobuf (>=3.8.0)"]
http2 = ["httpcore[http2] (==1.*)"]
httpx = ["httpx (>=0.16.0)"]
huey = ["huey (>=2)"]
huggingface-hub = ["huggingface-hub (>=0.22)"]
@@ -2535,13 +2536,13 @@ twisted = ["twisted"]
[[package]]
name = "tomli"
version = "2.0.1"
version = "2.0.2"
description = "A lil' TOML parser"
optional = false
python-versions = ">=3.7"
python-versions = ">=3.8"
files = [
{file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
{file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
{file = "tomli-2.0.2-py3-none-any.whl", hash = "sha256:2ebe24485c53d303f690b0ec092806a085f07af5a5aa1464f3931eec36caaa38"},
{file = "tomli-2.0.2.tar.gz", hash = "sha256:d46d457a85337051c36524bc5349dd91b1877838e2979ac5ced3e710ed8a60ed"},
]
[[package]]
@@ -2782,13 +2783,13 @@ files = [
[[package]]
name = "types-psycopg2"
version = "2.9.21.20240819"
version = "2.9.21.20241019"
description = "Typing stubs for psycopg2"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-psycopg2-2.9.21.20240819.tar.gz", hash = "sha256:4ed6b47464d6374fa64e5e3b234cea0f710e72123a4596d67ab50b7415a84666"},
{file = "types_psycopg2-2.9.21.20240819-py3-none-any.whl", hash = "sha256:c9192311c27d7ad561eef705f1b2df1074f2cdcf445a98a6a2fcaaaad43278cf"},
{file = "types-psycopg2-2.9.21.20241019.tar.gz", hash = "sha256:bca89b988d2ebd19bcd08b177d22a877ea8b841decb10ed130afcf39404612fa"},
{file = "types_psycopg2-2.9.21.20241019-py3-none-any.whl", hash = "sha256:44d091e67732d16a941baae48cd7b53bf91911bc36888652447cf1ef0c1fb3f6"},
]
[[package]]
@@ -2819,13 +2820,13 @@ files = [
[[package]]
name = "types-requests"
version = "2.32.0.20240914"
version = "2.32.0.20241016"
description = "Typing stubs for requests"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-requests-2.32.0.20240914.tar.gz", hash = "sha256:2850e178db3919d9bf809e434eef65ba49d0e7e33ac92d588f4a5e295fffd405"},
{file = "types_requests-2.32.0.20240914-py3-none-any.whl", hash = "sha256:59c2f673eb55f32a99b2894faf6020e1a9f4a402ad0f192bfee0b64469054310"},
{file = "types-requests-2.32.0.20241016.tar.gz", hash = "sha256:0d9cad2f27515d0e3e3da7134a1b6f28fb97129d86b867f24d9c726452634d95"},
{file = "types_requests-2.32.0.20241016-py3-none-any.whl", hash = "sha256:4195d62d6d3e043a4eaaf08ff8a62184584d2e8684e9d2aa178c7915a7da3747"},
]
[package.dependencies]
@@ -2833,13 +2834,13 @@ urllib3 = ">=2"
[[package]]
name = "types-setuptools"
version = "75.1.0.20240917"
version = "75.2.0.20241019"
description = "Typing stubs for setuptools"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-setuptools-75.1.0.20240917.tar.gz", hash = "sha256:12f12a165e7ed383f31def705e5c0fa1c26215dd466b0af34bd042f7d5331f55"},
{file = "types_setuptools-75.1.0.20240917-py3-none-any.whl", hash = "sha256:06f78307e68d1bbde6938072c57b81cf8a99bc84bd6dc7e4c5014730b097dc0c"},
{file = "types-setuptools-75.2.0.20241019.tar.gz", hash = "sha256:86ea31b5f6df2c6b8f2dc8ae3f72b213607f62549b6fa2ed5866e5299f968694"},
{file = "types_setuptools-75.2.0.20241019-py3-none-any.whl", hash = "sha256:2e48ff3acd4919471e80d5e3f049cce5c177e108d5d36d2d4cee3fa4d4104258"},
]
[[package]]
@@ -3030,50 +3031,57 @@ test = ["zope.testrunner"]
[[package]]
name = "zope-interface"
version = "6.0"
version = "7.1.0"
description = "Interfaces for Python"
optional = false
python-versions = ">=3.7"
python-versions = ">=3.8"
files = [
{file = "zope.interface-6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f299c020c6679cb389814a3b81200fe55d428012c5e76da7e722491f5d205990"},
{file = "zope.interface-6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ee4b43f35f5dc15e1fec55ccb53c130adb1d11e8ad8263d68b1284b66a04190d"},
{file = "zope.interface-6.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5a158846d0fca0a908c1afb281ddba88744d403f2550dc34405c3691769cdd85"},
{file = "zope.interface-6.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f72f23bab1848edb7472309e9898603141644faec9fd57a823ea6b4d1c4c8995"},
{file = "zope.interface-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:48f4d38cf4b462e75fac78b6f11ad47b06b1c568eb59896db5b6ec1094eb467f"},
{file = "zope.interface-6.0-cp310-cp310-win_amd64.whl", hash = "sha256:87b690bbee9876163210fd3f500ee59f5803e4a6607d1b1238833b8885ebd410"},
{file = "zope.interface-6.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f2363e5fd81afb650085c6686f2ee3706975c54f331b426800b53531191fdf28"},
{file = "zope.interface-6.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:af169ba897692e9cd984a81cb0f02e46dacdc07d6cf9fd5c91e81f8efaf93d52"},
{file = "zope.interface-6.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fa90bac61c9dc3e1a563e5babb3fd2c0c1c80567e815442ddbe561eadc803b30"},
{file = "zope.interface-6.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:89086c9d3490a0f265a3c4b794037a84541ff5ffa28bb9c24cc9f66566968464"},
{file = "zope.interface-6.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:809fe3bf1a91393abc7e92d607976bbb8586512913a79f2bf7d7ec15bd8ea518"},
{file = "zope.interface-6.0-cp311-cp311-win_amd64.whl", hash = "sha256:0ec9653825f837fbddc4e4b603d90269b501486c11800d7c761eee7ce46d1bbb"},
{file = "zope.interface-6.0-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:790c1d9d8f9c92819c31ea660cd43c3d5451df1df61e2e814a6f99cebb292788"},
{file = "zope.interface-6.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b39b8711578dcfd45fc0140993403b8a81e879ec25d53189f3faa1f006087dca"},
{file = "zope.interface-6.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eba51599370c87088d8882ab74f637de0c4f04a6d08a312dce49368ba9ed5c2a"},
{file = "zope.interface-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6ee934f023f875ec2cfd2b05a937bd817efcc6c4c3f55c5778cbf78e58362ddc"},
{file = "zope.interface-6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:042f2381118b093714081fd82c98e3b189b68db38ee7d35b63c327c470ef8373"},
{file = "zope.interface-6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:dfbbbf0809a3606046a41f8561c3eada9db811be94138f42d9135a5c47e75f6f"},
{file = "zope.interface-6.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:424d23b97fa1542d7be882eae0c0fc3d6827784105264a8169a26ce16db260d8"},
{file = "zope.interface-6.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e538f2d4a6ffb6edfb303ce70ae7e88629ac6e5581870e66c306d9ad7b564a58"},
{file = "zope.interface-6.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:12175ca6b4db7621aedd7c30aa7cfa0a2d65ea3a0105393e05482d7a2d367446"},
{file = "zope.interface-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4c3d7dfd897a588ec27e391edbe3dd320a03684457470415870254e714126b1f"},
{file = "zope.interface-6.0-cp38-cp38-win_amd64.whl", hash = "sha256:b3f543ae9d3408549a9900720f18c0194ac0fe810cecda2a584fd4dca2eb3bb8"},
{file = "zope.interface-6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d0583b75f2e70ec93f100931660328965bb9ff65ae54695fb3fa0a1255daa6f2"},
{file = "zope.interface-6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:23ac41d52fd15dd8be77e3257bc51bbb82469cf7f5e9a30b75e903e21439d16c"},
{file = "zope.interface-6.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99856d6c98a326abbcc2363827e16bd6044f70f2ef42f453c0bd5440c4ce24e5"},
{file = "zope.interface-6.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1592f68ae11e557b9ff2bc96ac8fc30b187e77c45a3c9cd876e3368c53dc5ba8"},
{file = "zope.interface-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4407b1435572e3e1610797c9203ad2753666c62883b921318c5403fb7139dec2"},
{file = "zope.interface-6.0-cp39-cp39-win_amd64.whl", hash = "sha256:5171eb073474a5038321409a630904fd61f12dd1856dd7e9d19cd6fe092cbbc5"},
{file = "zope.interface-6.0.tar.gz", hash = "sha256:aab584725afd10c710b8f1e6e208dbee2d0ad009f57d674cb9d1b3964037275d"},
{file = "zope.interface-7.1.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2bd9e9f366a5df08ebbdc159f8224904c1c5ce63893984abb76954e6fbe4381a"},
{file = "zope.interface-7.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:661d5df403cd3c5b8699ac480fa7f58047a3253b029db690efa0c3cf209993ef"},
{file = "zope.interface-7.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:91b6c30689cfd87c8f264acb2fc16ad6b3c72caba2aec1bf189314cf1a84ca33"},
{file = "zope.interface-7.1.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2b6a4924f5bad9fe21d99f66a07da60d75696a136162427951ec3cb223a5570d"},
{file = "zope.interface-7.1.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:80a3c00b35f6170be5454b45abe2719ea65919a2f09e8a6e7b1362312a872cd3"},
{file = "zope.interface-7.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:b936d61dbe29572fd2cfe13e30b925e5383bed1aba867692670f5a2a2eb7b4e9"},
{file = "zope.interface-7.1.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0ac20581fc6cd7c754f6dff0ae06fedb060fa0e9ea6309d8be8b2701d9ea51c4"},
{file = "zope.interface-7.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:848b6fa92d7c8143646e64124ed46818a0049a24ecc517958c520081fd147685"},
{file = "zope.interface-7.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ec1ef1fdb6f014d5886b97e52b16d0f852364f447d2ab0f0c6027765777b6667"},
{file = "zope.interface-7.1.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3bcff5c09d0215f42ba64b49205a278e44413d9bf9fa688fd9e42bfe472b5f4f"},
{file = "zope.interface-7.1.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:07add15de0cc7e69917f7d286b64d54125c950aeb43efed7a5ea7172f000fbc1"},
{file = "zope.interface-7.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:9940d5bc441f887c5f375ec62bcf7e7e495a2d5b1da97de1184a88fb567f06af"},
{file = "zope.interface-7.1.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:f245d039f72e6f802902375755846f5de1ee1e14c3e8736c078565599bcab621"},
{file = "zope.interface-7.1.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6159e767d224d8f18deff634a1d3722e68d27488c357f62ebeb5f3e2f5288b1f"},
{file = "zope.interface-7.1.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e956b1fd7f3448dd5e00f273072e73e50dfafcb35e4227e6d5af208075593c9"},
{file = "zope.interface-7.1.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ff115ef91c0eeac69cd92daeba36a9d8e14daee445b504eeea2b1c0b55821984"},
{file = "zope.interface-7.1.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bec001798ab62c3fc5447162bf48496ae9fba02edc295a9e10a0b0c639a6452e"},
{file = "zope.interface-7.1.0-cp312-cp312-win_amd64.whl", hash = "sha256:124149e2d42067b9c6597f4dafdc7a0983d0163868f897b7bb5dc850b14f9a87"},
{file = "zope.interface-7.1.0-cp313-cp313-macosx_10_9_x86_64.whl", hash = "sha256:9733a9a0f94ef53d7aa64661811b20875b5bc6039034c6e42fb9732170130573"},
{file = "zope.interface-7.1.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:5fcf379b875c610b5a41bc8a891841533f98de0520287d7f85e25386cd10d3e9"},
{file = "zope.interface-7.1.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d0a45b5af9f72c805ee668d1479480ca85169312211bed6ed18c343e39307d5f"},
{file = "zope.interface-7.1.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4af4a12b459a273b0b34679a5c3dc5e34c1847c3dd14a628aa0668e19e638ea2"},
{file = "zope.interface-7.1.0-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a735f82d2e3ed47ca01a20dfc4c779b966b16352650a8036ab3955aad151ed8a"},
{file = "zope.interface-7.1.0-cp313-cp313-win_amd64.whl", hash = "sha256:5501e772aff595e3c54266bc1bfc5858e8f38974ce413a8f1044aae0f32a83a3"},
{file = "zope.interface-7.1.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ec59fe53db7d32abb96c6d4efeed84aab4a7c38c62d7a901a9b20c09dd936e7a"},
{file = "zope.interface-7.1.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e53c291debef523b09e1fe3dffe5f35dde164f1c603d77f770b88a1da34b7ed6"},
{file = "zope.interface-7.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:711eebc77f2092c6a8b304bad0b81a6ce3cf5490b25574e7309fbc07d881e3af"},
{file = "zope.interface-7.1.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4a00ead2e24c76436e1b457a5132d87f83858330f6c923640b7ef82d668525d1"},
{file = "zope.interface-7.1.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5e28ea0bc4b084fc93a483877653a033062435317082cdc6388dec3438309faf"},
{file = "zope.interface-7.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:27cfb5205d68b12682b6e55ab8424662d96e8ead19550aad0796b08dd2c9a45e"},
{file = "zope.interface-7.1.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9e3e48f3dea21c147e1b10c132016cb79af1159facca9736d231694ef5a740a8"},
{file = "zope.interface-7.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a99240b1d02dc469f6afbe7da1bf617645e60290c272968f4e53feec18d7dce8"},
{file = "zope.interface-7.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cc8a318162123eddbdf22fcc7b751288ce52e4ad096d3766ff1799244352449d"},
{file = "zope.interface-7.1.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b7b25db127db3e6b597c5f74af60309c4ad65acd826f89609662f0dc33a54728"},
{file = "zope.interface-7.1.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2a29ac607e970b5576547f0e3589ec156e04de17af42839eedcf478450687317"},
{file = "zope.interface-7.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:a14c9decf0eb61e0892631271d500c1e306c7b6901c998c7035e194d9150fdd1"},
{file = "zope_interface-7.1.0.tar.gz", hash = "sha256:3f005869a1a05e368965adb2075f97f8ee9a26c61898a9e52a9764d93774f237"},
]
[package.dependencies]
setuptools = "*"
[package.extras]
docs = ["Sphinx", "repoze.sphinx.autointerface"]
test = ["coverage (>=5.0.3)", "zope.event", "zope.testing"]
testing = ["coverage (>=5.0.3)", "zope.event", "zope.testing"]
docs = ["Sphinx", "furo", "repoze.sphinx.autointerface"]
test = ["coverage[toml]", "zope.event", "zope.testing"]
testing = ["coverage[toml]", "zope.event", "zope.testing"]
[[package]]
name = "zope-schema"
@@ -3114,4 +3122,4 @@ user-search = ["pyicu"]
[metadata]
lock-version = "2.0"
python-versions = "^3.8.0"
content-hash = "93c267fac3428b764f954e6faa17937b9c97b1ed2bdafc41dd8f6cb5d2ce085b"
content-hash = "aa1f6d97809596c23a6d160c0c5804971dad0ba49e34b137bbfb79df038fe6f0"


@@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.116.0rc2"
version = "1.118.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"
@@ -320,7 +320,7 @@ all = [
# failing on new releases. Keeping lower bounds loose here means that dependabot
# can bump versions without having to update the content-hash in the lockfile.
# This helps prevents merge conflicts when running a batch of dependabot updates.
ruff = "0.6.7"
ruff = "0.7.1"
# Type checking only works with the pydantic.v1 compat module from pydantic v2
pydantic = "^2"


@@ -60,6 +60,7 @@ fn bench_match_exact(b: &mut Bencher) {
true,
vec![],
false,
false,
)
.unwrap();
@@ -105,6 +106,7 @@ fn bench_match_word(b: &mut Bencher) {
true,
vec![],
false,
false,
)
.unwrap();
@@ -150,6 +152,7 @@ fn bench_match_word_miss(b: &mut Bencher) {
true,
vec![],
false,
false,
)
.unwrap();
@@ -195,6 +198,7 @@ fn bench_eval_message(b: &mut Bencher) {
true,
vec![],
false,
false,
)
.unwrap();
@@ -205,6 +209,7 @@ fn bench_eval_message(b: &mut Bencher) {
false,
false,
false,
false,
);
b.iter(|| eval.run(&rules, Some("bob"), Some("person")));


@@ -81,7 +81,7 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
))]),
actions: Cow::Borrowed(&[Action::Notify]),
default: true,
default_enabled: false,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.suppress_notices"),


@@ -105,6 +105,9 @@ pub struct PushRuleEvaluator {
/// If MSC3931 (room version feature flags) is enabled. Usually controlled by the same
/// flag as MSC1767 (extensible events core).
msc3931_enabled: bool,
/// If MSC4210 (remove legacy mentions) is enabled.
msc4210_enabled: bool,
}
#[pymethods]
@@ -122,6 +125,7 @@ impl PushRuleEvaluator {
related_event_match_enabled,
room_version_feature_flags,
msc3931_enabled,
msc4210_enabled,
))]
pub fn py_new(
flattened_keys: BTreeMap<String, JsonValue>,
@@ -133,6 +137,7 @@ impl PushRuleEvaluator {
related_event_match_enabled: bool,
room_version_feature_flags: Vec<String>,
msc3931_enabled: bool,
msc4210_enabled: bool,
) -> Result<Self, Error> {
let body = match flattened_keys.get("content.body") {
Some(JsonValue::Value(SimpleJsonValue::Str(s))) => s.clone().into_owned(),
@@ -150,6 +155,7 @@ impl PushRuleEvaluator {
related_event_match_enabled,
room_version_feature_flags,
msc3931_enabled,
msc4210_enabled,
})
}
@@ -176,7 +182,8 @@ impl PushRuleEvaluator {
// For backwards-compatibility the legacy mention rules are disabled
// if the event contains the 'm.mentions' property.
if self.has_mentions
// Additionally, MSC4210 always disables the legacy rules.
if (self.has_mentions || self.msc4210_enabled)
&& (rule_id == "global/override/.m.rule.contains_display_name"
|| rule_id == "global/content/.m.rule.contains_user_name"
|| rule_id == "global/override/.m.rule.roomnotif")
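For reference, a minimal Python sketch of the same suppression predicate (names are illustrative, not Synapse's internal API):

LEGACY_MENTION_RULES = {
    "global/override/.m.rule.contains_display_name",
    "global/content/.m.rule.contains_user_name",
    "global/override/.m.rule.roomnotif",
}

def skips_legacy_mention_rule(rule_id: str, has_mentions: bool, msc4210_enabled: bool) -> bool:
    # The legacy rules are skipped if the event carries an explicit
    # m.mentions property, or unconditionally when MSC4210 is enabled.
    return (has_mentions or msc4210_enabled) and rule_id in LEGACY_MENTION_RULES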
@@ -526,6 +533,7 @@ fn push_rule_evaluator() {
true,
vec![],
true,
false,
)
.unwrap();
@@ -555,6 +563,7 @@ fn test_requires_room_version_supports_condition() {
false,
flags,
true,
false,
)
.unwrap();
@@ -582,7 +591,7 @@ fn test_requires_room_version_supports_condition() {
};
let rules = PushRules::new(vec![custom_rule]);
result = evaluator.run(
&FilteredPushRules::py_new(rules, BTreeMap::new(), true, false, true, false),
&FilteredPushRules::py_new(rules, BTreeMap::new(), true, false, true, false, false),
None,
None,
);


@@ -534,6 +534,7 @@ pub struct FilteredPushRules {
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc4028_push_encrypted_events: bool,
msc4210_enabled: bool,
}
#[pymethods]
@@ -546,6 +547,7 @@ impl FilteredPushRules {
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc4028_push_encrypted_events: bool,
msc4210_enabled: bool,
) -> Self {
Self {
push_rules,
@@ -554,6 +556,7 @@ impl FilteredPushRules {
msc3381_polls_enabled,
msc3664_enabled,
msc4028_push_encrypted_events,
msc4210_enabled,
}
}
@@ -596,6 +599,14 @@ impl FilteredPushRules {
return false;
}
if self.msc4210_enabled
&& (rule.rule_id == "global/override/.m.rule.contains_display_name"
|| rule.rule_id == "global/content/.m.rule.contains_user_name"
|| rule.rule_id == "global/override/.m.rule.roomnotif")
{
return false;
}
true
})
.map(|r| {


@@ -32,8 +32,8 @@ DISTS = (
"debian:sid", # (EOL not specified yet) (our EOL forced by Python 3.11 is 2027-10-24)
"ubuntu:focal", # 20.04 LTS (EOL 2025-04) (our EOL forced by Python 3.8 is 2024-10-14)
"ubuntu:jammy", # 22.04 LTS (EOL 2027-04) (our EOL forced by Python 3.10 is 2026-10-04)
"ubuntu:lunar", # 23.04 (EOL 2024-01) (our EOL forced by Python 3.11 is 2027-10-24)
"ubuntu:mantic", # 23.10 (EOL 2024-07) (our EOL forced by Python 3.11 is 2027-10-24)
"ubuntu:noble", # 24.04 LTS (EOL 2029-06)
"ubuntu:oracular", # 24.10 (EOL 2025-07)
"debian:trixie", # (EOL not specified yet)
)


@@ -360,7 +360,7 @@ def is_cacheable(
# For a type alias, check if the underlying real type is cachable.
return is_cacheable(mypy.types.get_proper_type(rt), signature, verbose)
elif isinstance(rt, UninhabitedType) and rt.is_noreturn:
elif isinstance(rt, UninhabitedType):
# There is no return value, just consider it cachable. This is only used
# in tests.
return True, None


@@ -40,7 +40,7 @@ import commonmark
import git
from click.exceptions import ClickException
from git import GitCommandError, Repo
from github import Github
from github import BadCredentialsException, Github
from packaging import version
@@ -323,10 +323,8 @@ def tag(gh_token: Optional[str]) -> None:
def _tag(gh_token: Optional[str]) -> None:
"""Tags the release and generates a draft GitHub release"""
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
# Test that the GH Token is valid before continuing.
check_valid_gh_token(gh_token)
# Make sure we're in a git repo.
repo = get_repo_and_check_clean_checkout()
@@ -469,10 +467,8 @@ def upload(gh_token: Optional[str]) -> None:
def _upload(gh_token: Optional[str]) -> None:
"""Upload release to pypi."""
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
# Test that the GH Token is valid before continuing.
check_valid_gh_token(gh_token)
current_version = get_package_version()
tag_name = f"v{current_version}"
@@ -569,10 +565,8 @@ def wait_for_actions(gh_token: Optional[str]) -> None:
def _wait_for_actions(gh_token: Optional[str]) -> None:
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
# Test that the GH Token is valid before continuing.
check_valid_gh_token(gh_token)
# Find out the version and tag name.
current_version = get_package_version()
@@ -806,6 +800,22 @@ def get_repo_and_check_clean_checkout(
return repo
def check_valid_gh_token(gh_token: Optional[str]) -> None:
"""Check that a github token is valid, if supplied"""
if not gh_token:
# No github token supplied, so nothing to do.
return
try:
gh = Github(gh_token)
# We need to look up the name to trigger a request.
_name = gh.get_user().name
except BadCredentialsException as e:
raise click.ClickException(f"Github credentials are bad: {e}")
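A hedged usage sketch (sourcing the token from a GH_TOKEN environment variable is illustrative, not part of the script):

import os

check_valid_gh_token(os.environ.get("GH_TOKEN"))  # no-op when no token is supplied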
def find_ref(repo: git.Repo, ref_name: str) -> Optional[git.HEAD]:
"""Find the branch/ref, looking first locally then in the remote."""
if ref_name in repo.references:


@@ -338,7 +338,7 @@ class MSC3861DelegatedAuth(BaseAuth):
logger.exception("Failed to introspect token")
raise SynapseError(503, "Unable to introspect the access token")
logger.info(f"Introspection result: {introspection_result!r}")
logger.debug("Introspection result: %r", introspection_result)
# TODO: introspection verification should be more extensive, especially:
# - verify the audience


@@ -3,7 +3,7 @@
#
# Copyright 2020 The Matrix.org Foundation C.I.C.
# Copyright 2016 OpenMarket Ltd
# Copyright (C) 2023 New Vector, Ltd
# Copyright (C) 2023-2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as


@@ -447,3 +447,6 @@ class ExperimentalConfig(Config):
# MSC4151: Report room API (Client-Server API)
self.msc4151_enabled: bool = experimental.get("msc4151_enabled", False)
# MSC4210: Remove legacy mentions
self.msc4210_enabled: bool = experimental.get("msc4210_enabled", False)


@@ -38,6 +38,7 @@ class JWTConfig(Config):
self.jwt_algorithm = jwt_config["algorithm"]
self.jwt_subject_claim = jwt_config.get("subject_claim", "sub")
self.jwt_display_name_claim = jwt_config.get("display_name_claim")
# The issuer and audiences are optional; if provided, it is asserted
# that the claims exist on the JWT.
@@ -49,5 +50,6 @@ class JWTConfig(Config):
self.jwt_secret = None
self.jwt_algorithm = None
self.jwt_subject_claim = None
self.jwt_display_name_claim = None
self.jwt_issuer = None
self.jwt_audiences = None


@@ -21,10 +21,15 @@
from typing import Any
from synapse.config._base import Config
from synapse.config._base import Config, ConfigError, read_file
from synapse.types import JsonDict
from synapse.util.check_dependencies import check_requirements
CONFLICTING_PASSWORD_OPTS_ERROR = """\
You have configured both `redis.password` and `redis.password_path`.
These are mutually incompatible.
"""
class RedisConfig(Config):
section = "redis"
@@ -43,6 +48,17 @@ class RedisConfig(Config):
self.redis_path = redis_config.get("path", None)
self.redis_dbid = redis_config.get("dbid", None)
self.redis_password = redis_config.get("password")
redis_password_path = redis_config.get("password_path")
if redis_password_path:
if self.redis_password:
raise ConfigError(CONFLICTING_PASSWORD_OPTS_ERROR)
self.redis_password = read_file(
redis_password_path,
(
"redis",
"password_path",
),
).strip()
self.redis_use_tls = redis_config.get("use_tls", False)
self.redis_certificate = redis_config.get("certificate_file", None)
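A minimal sketch of the mutual-exclusion logic this hunk adds, lifted out of the Config plumbing (assumes a plain dict and simplifies the read_file error handling):

from typing import Optional

def resolve_redis_password(redis_config: dict) -> Optional[str]:
    password = redis_config.get("password")
    password_path = redis_config.get("password_path")
    if password and password_path:
        # Mirrors CONFLICTING_PASSWORD_OPTS_ERROR above.
        raise ValueError("redis.password and redis.password_path are mutually incompatible")
    if password_path:
        with open(password_path) as f:
            return f.read().strip()
    return password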


@@ -113,7 +113,7 @@ class Authenticator:
):
raise AuthenticationError(
HTTPStatus.UNAUTHORIZED,
"Destination mismatch in auth header",
f"Destination mismatch in auth header, received: {destination!r}",
Codes.UNAUTHORIZED,
)
if (


@@ -73,6 +73,8 @@ class AdminHandler:
self._redact_all_events, REDACT_ALL_EVENTS_ACTION_NAME
)
self.hs = hs
async def get_redact_task(self, redact_id: str) -> Optional[ScheduledTask]:
"""Get the current status of an active redaction process
@@ -423,8 +425,10 @@ class AdminHandler:
user_id = task.params.get("user_id")
assert user_id is not None
# puppet the user if they're ours, otherwise use admin to redact
requester = create_requester(
user_id, authenticated_entity=admin.user.to_string()
user_id if self.hs.is_mine_id(user_id) else admin.user.to_string(),
authenticated_entity=admin.user.to_string(),
)
reason = task.params.get("reason")
@@ -443,8 +447,8 @@ class AdminHandler:
["m.room.member", "m.room.message"],
)
if not event_ids:
# there's nothing to redact
return TaskStatus.COMPLETE, result, None
# nothing to redact in this room
continue
events = await self._store.get_events_as_list(event_ids)
for event in events:


@@ -1,3 +1,17 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import logging
from typing import TYPE_CHECKING, List, Optional, Set, Tuple


@@ -48,6 +48,7 @@ from synapse.metrics.background_process_metrics import (
wrap_as_background_process,
)
from synapse.storage.databases.main.client_ips import DeviceLastConnectionInfo
from synapse.storage.databases.main.roommember import EventIdMembership
from synapse.storage.databases.main.state_deltas import StateDelta
from synapse.types import (
DeviceListUpdates,
@@ -222,7 +223,6 @@ class DeviceWorkerHandler:
return changed
@trace
@measure_func("device.get_user_ids_changed")
@cancellable
async def get_user_ids_changed(
self, user_id: str, from_token: StreamToken
@@ -290,9 +290,11 @@ class DeviceWorkerHandler:
memberships_to_fetch.add(delta.prev_event_id)
# Fetch all the memberships for the membership events
event_id_to_memberships = await self.store.get_membership_from_event_ids(
memberships_to_fetch
)
event_id_to_memberships: Mapping[str, Optional[EventIdMembership]] = {}
if memberships_to_fetch:
event_id_to_memberships = await self.store.get_membership_from_event_ids(
memberships_to_fetch
)
joined_invited_knocked = (
Membership.JOIN,
@@ -349,7 +351,6 @@ class DeviceWorkerHandler:
return device_list_updates
@measure_func("_generate_sync_entry_for_device_list")
async def generate_sync_entry_for_device_list(
self,
user_id: str,


@@ -18,7 +18,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Optional, Tuple
from authlib.jose import JsonWebToken, JWTClaims
from authlib.jose.errors import BadSignatureError, InvalidClaimError, JoseError
@@ -36,11 +36,12 @@ class JwtHandler:
self.jwt_secret = hs.config.jwt.jwt_secret
self.jwt_subject_claim = hs.config.jwt.jwt_subject_claim
self.jwt_display_name_claim = hs.config.jwt.jwt_display_name_claim
self.jwt_algorithm = hs.config.jwt.jwt_algorithm
self.jwt_issuer = hs.config.jwt.jwt_issuer
self.jwt_audiences = hs.config.jwt.jwt_audiences
def validate_login(self, login_submission: JsonDict) -> str:
def validate_login(self, login_submission: JsonDict) -> Tuple[str, Optional[str]]:
"""
Authenticates the user for the /login API
@@ -49,7 +50,8 @@ class JwtHandler:
(including 'type' and other relevant fields)
Returns:
The user ID that is logging in.
A tuple of (user_id, display_name) of the user that is logging in.
If the JWT does not contain a display name, the second element of the tuple will be None.
Raises:
LoginError if there was an authentication problem.
@@ -109,4 +111,10 @@ class JwtHandler:
if user is None:
raise LoginError(403, "Invalid JWT", errcode=Codes.FORBIDDEN)
return UserID(user, self.hs.hostname).to_string()
default_display_name = None
if self.jwt_display_name_claim:
display_name_claim = claims.get(self.jwt_display_name_claim)
if display_name_claim is not None:
default_display_name = display_name_claim
return UserID(user, self.hs.hostname).to_string(), default_display_name
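A hedged caller-side sketch of the new return shape (the handler variable is illustrative):

user_id, default_display_name = jwt_handler.validate_login(login_submission)
if default_display_name is None:
    # No display_name_claim configured, or the claim was absent from the JWT.
    pass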


@@ -1190,6 +1190,26 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
origin_server_ts=origin_server_ts,
)
async def check_for_any_membership_in_room(
self, *, user_id: str, room_id: str
) -> None:
"""
Check if the user has any membership in the room and raise an error if not.
Args:
user_id: The user to check.
room_id: The room to check.
Raises:
AuthError if the user doesn't have any membership in the room.
"""
result = await self.store.get_local_current_membership_for_user_in_room(
user_id=user_id, room_id=room_id
)
if result is None or result == (None, None):
raise AuthError(403, f"User {user_id} has no membership in room {room_id}")
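A hedged example of guarding a request with this check (the handler wiring and identifiers are illustrative):

try:
    await room_member_handler.check_for_any_membership_in_room(
        user_id="@alice:example.org", room_id="!room:example.org"
    )
except AuthError:
    ...  # reject the request: no membership found for the user in the room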
async def _should_perform_remote_join(
self,
user_id: str,


@@ -14,7 +14,7 @@
import logging
from itertools import chain
from typing import TYPE_CHECKING, Dict, List, Mapping, Optional, Set, Tuple
from typing import TYPE_CHECKING, AbstractSet, Dict, List, Mapping, Optional, Set, Tuple
from prometheus_client import Histogram
from typing_extensions import assert_never
@@ -49,6 +49,7 @@ from synapse.types import (
Requester,
SlidingSyncStreamToken,
StateMap,
StrCollection,
StreamKeyType,
StreamToken,
)
@@ -293,7 +294,6 @@ class SlidingSyncHandler:
# to record rooms as having updates even if there might not actually
# be anything new for the user (e.g. due to event filters, events
# having happened after the user left, etc).
unsent_room_ids = []
if from_token:
# The set of rooms that the client (may) care about, but aren't
# in any list range (or subscribed to).
@@ -305,15 +305,24 @@ class SlidingSyncHandler:
# TODO: Replace this with something faster. When we land the
# sliding sync tables that record the most recent event
# positions we can use that.
missing_event_map_by_room = (
await self.store.get_room_events_stream_for_rooms(
room_ids=missing_rooms,
from_key=to_token.room_key,
to_key=from_token.stream_token.room_key,
limit=1,
unsent_room_ids: StrCollection
if await self.store.have_finished_sliding_sync_background_jobs():
unsent_room_ids = await (
self.store.get_rooms_that_have_updates_since_sliding_sync_table(
room_ids=missing_rooms,
from_key=from_token.stream_token.room_key,
)
)
)
unsent_room_ids = list(missing_event_map_by_room)
else:
missing_event_map_by_room = (
await self.store.get_room_events_stream_for_rooms(
room_ids=missing_rooms,
from_key=to_token.room_key,
to_key=from_token.stream_token.room_key,
limit=1,
)
)
unsent_room_ids = list(missing_event_map_by_room)
new_connection_state.rooms.record_unsent_rooms(
unsent_room_ids, from_token.stream_token.room_key
@@ -443,13 +452,11 @@ class SlidingSyncHandler:
to_token=to_token,
)
event_map = await self.store.get_events(list(state_ids.values()))
events = await self.store.get_events_as_list(list(state_ids.values()))
state_map = {}
for key, event_id in state_ids.items():
event = event_map.get(event_id)
if event:
state_map[key] = event
for event in events:
state_map[(event.type, event.state_key)] = event
return state_map
@@ -513,6 +520,8 @@ class SlidingSyncHandler:
state_reset_out_of_room = True
prev_room_sync_config = previous_connection_state.room_configs.get(room_id)
# Determine whether we should limit the timeline to the token range.
#
# We should return historical messages (before token range) in the
@@ -541,7 +550,6 @@ class SlidingSyncHandler:
# or `limited` mean for clients that interpret them correctly. In future this
# behavior is almost certainly going to change.
#
# TODO: Also handle changes to `required_state`
from_bound = None
initial = True
ignore_timeline_bound = False
@@ -562,7 +570,6 @@ class SlidingSyncHandler:
log_kv({"sliding_sync.room_status": room_status})
prev_room_sync_config = previous_connection_state.room_configs.get(room_id)
if prev_room_sync_config is not None:
# Check if the timeline limit has increased, if so ignore the
# timeline bound and record the change (see "XXX: Odd behavior"
@@ -573,8 +580,6 @@ class SlidingSyncHandler:
):
ignore_timeline_bound = True
# TODO: Check for changes in `required_state``
log_kv(
{
"sliding_sync.from_bound": from_bound,
@@ -988,6 +993,10 @@ class SlidingSyncHandler:
include_others=required_state_filter.include_others,
)
# The required state map to store in the room sync config, if it has
# changed.
changed_required_state_map: Optional[Mapping[str, AbstractSet[str]]] = None
# We can return all of the state that was requested if this was the first
# time we've sent the room down this connection.
room_state: StateMap[EventBase] = {}
@@ -1001,6 +1010,29 @@ class SlidingSyncHandler:
else:
assert from_bound is not None
if prev_room_sync_config is not None:
# Check if there are any changes to the required state config
# that we need to handle.
changed_required_state_map, added_state_filter = (
_required_state_changes(
user.to_string(),
previous_room_config=prev_room_sync_config,
room_sync_config=room_sync_config,
state_deltas=room_state_delta_id_map,
)
)
if added_state_filter:
# Some state entries got added, so we pull out the current
# state for them. If we don't do this we'd only send down new deltas.
state_ids = await self.get_current_state_ids_at(
room_id=room_id,
room_membership_for_user_at_to_token=room_membership_for_user_at_to_token,
state_filter=added_state_filter,
to_token=to_token,
)
room_state_delta_id_map.update(state_ids)
events = await self.store.get_events(
state_filter.filter_state(room_state_delta_id_map).values()
)
@@ -1048,22 +1080,42 @@ class SlidingSyncHandler:
)
)
# Figure out the last bump event in the room
#
# By default, just choose the membership event position for any non-join membership
bump_stamp = room_membership_for_user_at_to_token.event_pos.stream
# Figure out the last bump event in the room. If the bump stamp hasn't
# changed we omit it from the response.
bump_stamp = None
always_return_bump_stamp = (
# We use the membership event position for any non-join
room_membership_for_user_at_to_token.membership != Membership.JOIN
# We didn't fetch any timeline events but we should still check for
# a bump_stamp that might be somewhere
or limited is None
# There might be a bump event somewhere before the timeline events
# that we fetched, that we didn't previously send down
or limited is True
# Always give the client some frame of reference if this is the
# first time they are seeing the room down the connection
or initial
)
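Restated as a standalone predicate (a sketch mirroring the conditions above; the string membership value stands in for Membership.JOIN):

from typing import Optional

def always_return_bump_stamp(membership: str, limited: Optional[bool], initial: bool) -> bool:
    return (
        membership != "join"  # non-joins fall back to the membership position
        or limited is None    # no timeline events were fetched, so look elsewhere
        or limited is True    # a bump event may predate the fetched timeline
        or initial            # first time the room is sent down this connection
    )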
# If we're joined to the room, we need to find the last bump event before the
# `to_token`
if room_membership_for_user_at_to_token.membership == Membership.JOIN:
# Try and get a bump stamp, if not we just fall back to the
# membership token.
# Try and get a bump stamp
new_bump_stamp = await self._get_bump_stamp(
room_id, to_token, timeline_events
room_id,
to_token,
timeline_events,
check_outside_timeline=always_return_bump_stamp,
)
if new_bump_stamp is not None:
bump_stamp = new_bump_stamp
if bump_stamp < 0:
if bump_stamp is None and always_return_bump_stamp:
# By default, just choose the membership event position for any non-join membership
bump_stamp = room_membership_for_user_at_to_token.event_pos.stream
if bump_stamp is not None and bump_stamp < 0:
# We never want to send down negative stream orderings, as you can't
# sensibly compare positive and negative stream orderings (they have
# different meanings).
@@ -1079,10 +1131,13 @@ class SlidingSyncHandler:
# sensible order again.
bump_stamp = 0
unstable_expanded_timeline = False
prev_room_sync_config = previous_connection_state.room_configs.get(room_id)
room_sync_required_state_map_to_persist = room_sync_config.required_state_map
if changed_required_state_map:
room_sync_required_state_map_to_persist = changed_required_state_map
# Record the `room_sync_config` if we're `ignore_timeline_bound` (which means
# that the `timeline_limit` has increased)
unstable_expanded_timeline = False
if ignore_timeline_bound:
# FIXME: We signal the fact that we're sending down more events to
# the client by setting `unstable_expanded_timeline` to true (see
@@ -1091,7 +1146,7 @@ class SlidingSyncHandler:
new_connection_state.room_configs[room_id] = RoomSyncConfig(
timeline_limit=room_sync_config.timeline_limit,
required_state_map=room_sync_config.required_state_map,
required_state_map=room_sync_required_state_map_to_persist,
)
elif prev_room_sync_config is not None:
# If the result is `limited` then we need to record that the
@@ -1120,10 +1175,14 @@ class SlidingSyncHandler:
):
new_connection_state.room_configs[room_id] = RoomSyncConfig(
timeline_limit=room_sync_config.timeline_limit,
required_state_map=room_sync_config.required_state_map,
required_state_map=room_sync_required_state_map_to_persist,
)
# TODO: Record changes in required_state.
elif changed_required_state_map is not None:
new_connection_state.room_configs[room_id] = RoomSyncConfig(
timeline_limit=room_sync_config.timeline_limit,
required_state_map=room_sync_required_state_map_to_persist,
)
else:
new_connection_state.room_configs[room_id] = room_sync_config
@@ -1156,14 +1215,23 @@ class SlidingSyncHandler:
@trace
async def _get_bump_stamp(
self, room_id: str, to_token: StreamToken, timeline: List[EventBase]
self,
room_id: str,
to_token: StreamToken,
timeline: List[EventBase],
check_outside_timeline: bool,
) -> Optional[int]:
"""Get a bump stamp for the room, if we have a bump event
"""Get a bump stamp for the room, if we have a bump event and it has
changed.
Args:
room_id
to_token: The upper bound of token to return
timeline: The list of events we have fetched.
limited: If the timeline was limited.
check_outside_timeline: Whether to check for a bump stamp in events
before the timeline if we didn't find one in the timeline events.
"""
# First check the timeline events we're returning to see if one of
@@ -1183,6 +1251,11 @@ class SlidingSyncHandler:
if new_bump_stamp > 0:
return new_bump_stamp
if not check_outside_timeline:
# If this is not a limited sync, then we know the bump stamp can't
# have changed.
return None
# We can quickly query for the latest bump event in the room using the
# sliding sync tables.
latest_room_bump_stamp = await self.store.get_latest_bump_stamp_for_room(
@@ -1242,3 +1315,185 @@ class SlidingSyncHandler:
return new_bump_event_pos.stream
return None
def _required_state_changes(
user_id: str,
*,
previous_room_config: "RoomSyncConfig",
room_sync_config: RoomSyncConfig,
state_deltas: StateMap[str],
) -> Tuple[Optional[Mapping[str, AbstractSet[str]]], StateFilter]:
"""Calculates the changes between the required state room config from the
previous requests compared with the current request.
This does two things. First, it calculates if we need to update the room
config due to changes to required state. Secondly, it works out which state
entries we need to pull from current state and return due to the state entry
now appearing in the required state when it previously wasn't (on top of the
state deltas).
This function tries to ensure to handle the case where a state entry is
added, removed and then added again to the required state. In that case we
only want to re-send that entry down sync if it has changed.
Returns:
A 2-tuple of updated required state config (or None if there is no update)
and the state filter to use to fetch extra current state that we need to
return.
"""
prev_required_state_map = previous_room_config.required_state_map
request_required_state_map = room_sync_config.required_state_map
if prev_required_state_map == request_required_state_map:
# There has been no change. Return immediately.
return None, StateFilter.none()
prev_wildcard = prev_required_state_map.get(StateValues.WILDCARD, set())
request_wildcard = request_required_state_map.get(StateValues.WILDCARD, set())
# If we were previously fetching everything ("*", "*"), always update the effective
# room required state config to match the request. And since we were previously
# already fetching everything, we don't have to fetch anything now that they've
# narrowed.
if StateValues.WILDCARD in prev_wildcard:
return request_required_state_map, StateFilter.none()
# If an event type wildcard has been added or removed we don't try and do
# anything fancy, and instead always update the effective room required
# state config to match the request.
if request_wildcard - prev_wildcard:
# Some keys were added, so we need to fetch everything
return request_required_state_map, StateFilter.all()
if prev_wildcard - request_wildcard:
# Keys were only removed, so we don't have to fetch everything.
return request_required_state_map, StateFilter.none()
# Contains updates to the required state map compared with the previous room
# config. This has the same format as `RoomSyncConfig.required_state_map`
changes: Dict[str, AbstractSet[str]] = {}
# The set of types/state keys that we need to fetch and return to the
# client. Passed to `StateFilter.from_types(...)`
added: List[Tuple[str, Optional[str]]] = []
# First we calculate what, if anything, has been *added*.
for event_type in (
prev_required_state_map.keys() | request_required_state_map.keys()
):
old_state_keys = prev_required_state_map.get(event_type, set())
request_state_keys = request_required_state_map.get(event_type, set())
if old_state_keys == request_state_keys:
# No change to this type
continue
if not request_state_keys - old_state_keys:
# Nothing *added*, so we skip. Removals happen below.
continue
# Always update changes to include the newly added keys
changes[event_type] = request_state_keys
if StateValues.WILDCARD in old_state_keys:
# We were previously fetching everything for this type, so we don't need to
# fetch anything new.
continue
# Record the new state keys to fetch for this type.
if StateValues.WILDCARD in request_state_keys:
# If we have added a wildcard then we always just fetch everything.
added.append((event_type, None))
else:
for state_key in request_state_keys - old_state_keys:
if state_key == StateValues.ME:
added.append((event_type, user_id))
elif state_key == StateValues.LAZY:
# We handle lazy loading separately (outside this function),
# so don't need to explicitly add anything here.
#
# LAZY values should also be ignored for event types that are
# not membership.
pass
else:
added.append((event_type, state_key))
added_state_filter = StateFilter.from_types(added)
# Convert the list of state deltas to a map from type to state_keys that have
# changed.
changed_types_to_state_keys: Dict[str, Set[str]] = {}
for event_type, state_key in state_deltas:
changed_types_to_state_keys.setdefault(event_type, set()).add(state_key)
# Figure out what changes we need to apply to the effective required state
# config.
for event_type, changed_state_keys in changed_types_to_state_keys.items():
old_state_keys = prev_required_state_map.get(event_type, set())
request_state_keys = request_required_state_map.get(event_type, set())
if old_state_keys == request_state_keys:
# No change.
continue
if request_state_keys - old_state_keys:
# We've expanded the set of state keys, so we just clobber the
# current set with the new set.
#
# We could also ensure that we keep entries where the state hasn't
# changed but is no longer in the requested required state; however,
# that's a sufficiently rare edge case that we can ignore it (as it's
# only a performance optimization).
changes[event_type] = request_state_keys
continue
old_state_key_wildcard = StateValues.WILDCARD in old_state_keys
request_state_key_wildcard = StateValues.WILDCARD in request_state_keys
if old_state_key_wildcard != request_state_key_wildcard:
# If a state_key wildcard has been added or removed, we always update the
# effective room required state config to match the request.
changes[event_type] = request_state_keys
continue
if event_type == EventTypes.Member:
old_state_key_lazy = StateValues.LAZY in old_state_keys
request_state_key_lazy = StateValues.LAZY in request_state_keys
if old_state_key_lazy != request_state_key_lazy:
# If a "$LAZY" has been added or removed we always update the effective room
# required state config to match the request.
changes[event_type] = request_state_keys
continue
# Handle "$ME" values by adding "$ME" if the state key matches the user
# ID.
if user_id in changed_state_keys:
changed_state_keys.add(StateValues.ME)
# At this point there are no wildcards and no additions to the set of
# state keys requested, only deletions.
#
# We only remove state keys from the effective state if they've been
# removed from the request *and* the state has changed. This ensures
# that if a client removes and then re-adds a state key, we only send
# down the associated current state event if it's changed (rather than
# sending down the same event twice).
invalidated = (old_state_keys - request_state_keys) & changed_state_keys
if invalidated:
changes[event_type] = old_state_keys - invalidated
if changes:
# Update the required state config based on the changes.
new_required_state_map = dict(prev_required_state_map)
for event_type, state_keys in changes.items():
if state_keys:
new_required_state_map[event_type] = state_keys
else:
# Remove entries with empty state keys.
new_required_state_map.pop(event_type, None)
return new_required_state_map, added_state_filter
else:
return None, added_state_filter
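A worked toy example of the invalidation rule above (values are illustrative):

old_state_keys = {""}        # previously required, e.g. ("m.room.topic", "")
request_state_keys = set()   # dropped from the new request
changed_state_keys = {""}    # the topic changed within the token range

invalidated = (old_state_keys - request_state_keys) & changed_state_keys
# invalidated == {""}: the entry is removed from the effective config, so if
# the client re-adds it later, the (changed) event is fetched and sent again.
# Had the topic not changed, the entry would be kept, and a remove-then-re-add
# would not re-send an unchanged event.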


@@ -49,7 +49,10 @@ from synapse.types.handlers.sliding_sync import (
SlidingSyncConfig,
SlidingSyncResult,
)
from synapse.util.async_helpers import concurrently_execute
from synapse.util.async_helpers import (
concurrently_execute,
gather_optional_coroutines,
)
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -97,26 +100,26 @@ class SlidingSyncExtensionHandler:
if sync_config.extensions is None:
return SlidingSyncResult.Extensions()
to_device_response = None
to_device_coro = None
if sync_config.extensions.to_device is not None:
to_device_response = await self.get_to_device_extension_response(
to_device_coro = self.get_to_device_extension_response(
sync_config=sync_config,
to_device_request=sync_config.extensions.to_device,
to_token=to_token,
)
e2ee_response = None
e2ee_coro = None
if sync_config.extensions.e2ee is not None:
e2ee_response = await self.get_e2ee_extension_response(
e2ee_coro = self.get_e2ee_extension_response(
sync_config=sync_config,
e2ee_request=sync_config.extensions.e2ee,
to_token=to_token,
from_token=from_token,
)
account_data_response = None
account_data_coro = None
if sync_config.extensions.account_data is not None:
account_data_response = await self.get_account_data_extension_response(
account_data_coro = self.get_account_data_extension_response(
sync_config=sync_config,
previous_connection_state=previous_connection_state,
new_connection_state=new_connection_state,
@@ -127,9 +130,9 @@ class SlidingSyncExtensionHandler:
from_token=from_token,
)
receipts_response = None
receipts_coro = None
if sync_config.extensions.receipts is not None:
receipts_response = await self.get_receipts_extension_response(
receipts_coro = self.get_receipts_extension_response(
sync_config=sync_config,
previous_connection_state=previous_connection_state,
new_connection_state=new_connection_state,
@@ -141,9 +144,9 @@ class SlidingSyncExtensionHandler:
from_token=from_token,
)
typing_response = None
typing_coro = None
if sync_config.extensions.typing is not None:
typing_response = await self.get_typing_extension_response(
typing_coro = self.get_typing_extension_response(
sync_config=sync_config,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
@@ -153,6 +156,20 @@ class SlidingSyncExtensionHandler:
from_token=from_token,
)
(
to_device_response,
e2ee_response,
account_data_response,
receipts_response,
typing_response,
) = await gather_optional_coroutines(
to_device_coro,
e2ee_coro,
account_data_coro,
receipts_coro,
typing_coro,
)
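For intuition, a hedged asyncio analogue of such a helper (Synapse's version is Twisted- and logcontext-aware; this is not its implementation):

import asyncio
from typing import Any, Coroutine, Optional, Tuple

async def gather_optional(*coros: Optional[Coroutine[Any, Any, Any]]) -> Tuple[Any, ...]:
    # Schedule the non-None coroutines up front so they run concurrently;
    # a None placeholder stays None in the corresponding result slot.
    tasks = [asyncio.ensure_future(c) if c is not None else None for c in coros]
    return tuple([await t if t is not None else None for t in tasks])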
return SlidingSyncResult.Extensions(
to_device=to_device_response,
e2ee=e2ee_response,
@@ -572,7 +589,8 @@ class SlidingSyncExtensionHandler:
# Now record which rooms are up to date, and which rooms have
# pending updates to send.
new_connection_state.account_data.record_sent_rooms(relevant_room_ids)
new_connection_state.account_data.record_sent_rooms(previously_rooms.keys())
new_connection_state.account_data.record_sent_rooms(initial_rooms)
missing_updates = (
all_updates_since_the_from_token.keys() - relevant_room_ids
)
@@ -763,9 +781,10 @@ class SlidingSyncExtensionHandler:
room_id_to_receipt_map[room_id] = {"type": type, "content": content}
# Now we update the per-connection state to track which receipts we have
# and haven't sent down.
new_connection_state.receipts.record_sent_rooms(relevant_room_ids)
# Update the per-connection state to track which rooms we have sent
# all the receipts for.
new_connection_state.receipts.record_sent_rooms(previously_rooms.keys())
new_connection_state.receipts.record_sent_rooms(initial_rooms)
if from_token:
# Now find the set of rooms that may have receipts that we're not sending


@@ -123,6 +123,19 @@ class SlidingSyncInterestedRooms:
newly_left_rooms: AbstractSet[str]
dm_room_ids: AbstractSet[str]
@staticmethod
def empty() -> "SlidingSyncInterestedRooms":
return SlidingSyncInterestedRooms(
lists={},
relevant_room_map={},
relevant_rooms_to_send_map={},
all_rooms=set(),
room_membership_for_user_map={},
newly_joined_rooms=set(),
newly_left_rooms=set(),
dm_room_ids=set(),
)
def filter_membership_for_sync(
*,
@@ -181,6 +194,14 @@ class SlidingSyncRoomLists:
from_token: Optional[StreamToken],
) -> SlidingSyncInterestedRooms:
"""Fetch the set of rooms that match the request"""
has_lists = sync_config.lists is not None and len(sync_config.lists) > 0
has_room_subscriptions = (
sync_config.room_subscriptions is not None
and len(sync_config.room_subscriptions) > 0
)
if not has_lists and not has_room_subscriptions:
return SlidingSyncInterestedRooms.empty()
if await self.store.have_finished_sliding_sync_background_jobs():
return await self._compute_interested_rooms_new_tables(
@@ -479,6 +500,16 @@ class SlidingSyncRoomLists:
# depending on the `required_state` requested (see below).
partial_state_rooms = await self.store.get_partial_rooms()
# Fetch any rooms that we have not already fetched from the database.
subscription_sliding_sync_rooms = (
await self.store.get_sliding_sync_room_for_user_batch(
user_id,
sync_config.room_subscriptions.keys()
- room_membership_for_user_map.keys(),
)
)
room_membership_for_user_map.update(subscription_sliding_sync_rooms)
for (
room_id,
room_subscription,
@@ -486,17 +517,11 @@ class SlidingSyncRoomLists:
# Check if we have a membership for the room, but didn't pull it out
# above. This could be e.g. a leave that we don't pull out by
# default.
current_room_entry = (
await self.store.get_sliding_sync_room_for_user(
user_id, room_id
)
)
current_room_entry = room_membership_for_user_map.get(room_id)
if not current_room_entry:
# TODO: Handle rooms the user isn't in.
continue
room_membership_for_user_map[room_id] = current_room_entry
all_rooms.add(room_id)
# Take the superset of the `RoomSyncConfig` for each room.


@@ -1039,7 +1039,7 @@ class _MultipartParserProtocol(protocol.Protocol):
self.deferred = deferred
self.boundary = boundary
self.max_length = max_length
self.parser = None
self.parser: Optional[multipart.MultipartParser] = None
self.multipart_response = MultipartResponse()
self.has_redirect = False
self.in_json = False
@@ -1097,7 +1097,7 @@ class _MultipartParserProtocol(protocol.Protocol):
self.deferred.errback()
self.file_length += end - start
callbacks = {
callbacks: "multipart.multipart.MultipartCallbacks" = {
"on_header_field": on_header_field,
"on_header_value": on_header_value,
"on_part_data": on_part_data,
@@ -1113,7 +1113,7 @@ class _MultipartParserProtocol(protocol.Protocol):
self.transport.abortConnection()
try:
self.parser.write(incoming_data) # type: ignore[attr-defined]
self.parser.write(incoming_data)
except Exception as e:
logger.warning(f"Exception writing to multipart parser: {e}")
self.deferred.errback()


@@ -51,25 +51,17 @@ logger = logging.getLogger(__name__)
# "Hop-by-hop" headers (as opposed to "end-to-end" headers) as defined by RFC2616
# section 13.5.1 and referenced in RFC9110 section 7.6.1. These are meant to only be
# consumed by the immediate recipient and not be forwarded on.
HOP_BY_HOP_HEADERS = {
"Connection",
"Keep-Alive",
"Proxy-Authenticate",
"Proxy-Authorization",
"TE",
"Trailers",
"Transfer-Encoding",
"Upgrade",
HOP_BY_HOP_HEADERS_LOWERCASE = {
"connection",
"keep-alive",
"proxy-authenticate",
"proxy-authorization",
"te",
"trailers",
"transfer-encoding",
"upgrade",
}
if hasattr(Headers, "_canonicalNameCaps"):
# Twisted < 24.7.0rc1
_canonicalHeaderName = Headers()._canonicalNameCaps # type: ignore[attr-defined]
else:
# Twisted >= 24.7.0rc1
# But note that `_encodeName` still exists on prior versions,
# it just encodes differently
_canonicalHeaderName = Headers()._encodeName
assert all(header.lower() == header for header in HOP_BY_HOP_HEADERS_LOWERCASE)
def parse_connection_header_value(
@@ -92,12 +84,12 @@ def parse_connection_header_value(
Returns:
The set of header names that should not be copied over from the remote response.
The keys are capitalized in canonical capitalization.
The keys are lowercased.
"""
extra_headers_to_remove: Set[str] = set()
if connection_header_value:
extra_headers_to_remove = {
_canonicalHeaderName(connection_option.strip()).decode("ascii")
connection_option.decode("ascii").strip().lower()
for connection_option in connection_header_value.split(b",")
}
@@ -194,7 +186,7 @@ class ProxyResource(_AsyncResource):
# The `Connection` header also defines which headers should not be copied over.
connection_header = response_headers.getRawHeaders(b"connection")
extra_headers_to_remove = parse_connection_header_value(
extra_headers_to_remove_lowercase = parse_connection_header_value(
connection_header[0] if connection_header else None
)
@@ -202,10 +194,10 @@ class ProxyResource(_AsyncResource):
for k, v in response_headers.getAllRawHeaders():
# Do not copy over any hop-by-hop headers. These are meant to only be
# consumed by the immediate recipient and not be forwarded on.
header_key = k.decode("ascii")
header_key_lowercase = k.decode("ascii").lower()
if (
header_key in HOP_BY_HOP_HEADERS
or header_key in extra_headers_to_remove
header_key_lowercase in HOP_BY_HOP_HEADERS_LOWERCASE
or header_key_lowercase in extra_headers_to_remove_lowercase
):
continue
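For example, under the new lowercase convention (an illustrative call):

parse_connection_header_value(b"Keep-Alive, X-Custom")
# -> {"keep-alive", "x-custom"}; both names are then matched case-insensitively
# against the response headers in the loop above.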


@@ -37,6 +37,7 @@ import warnings
from types import TracebackType
from typing import (
TYPE_CHECKING,
Any,
Awaitable,
Callable,
Optional,
@@ -850,6 +851,45 @@ def run_in_background(
return d
def run_coroutine_in_background(
coroutine: typing.Coroutine[Any, Any, R],
) -> "defer.Deferred[R]":
"""Run the coroutine, ensuring that the current context is restored after
return from the function, and that the sentinel context is set once the
deferred returned by the function completes.
Useful for wrapping coroutines that you don't yield or await on (for
instance because you want to pass them to `defer.gatherResults()`).
This is a special case of `run_in_background` where we can accept a
coroutine directly rather than a function. We can do this because coroutines
do not run until called, and so calling an async function without awaiting
cannot change the log contexts.
"""
current = current_context()
d = defer.ensureDeferred(coroutine)
# The function may have reset the context before returning, so
# we need to restore it now.
ctx = set_current_context(current)
# The original context will be restored when the deferred
# completes, but there is nothing waiting for it, so it will
# get leaked into the reactor or some other function which
# wasn't expecting it. We therefore need to reset the context
# here.
#
# (If this feels asymmetric, consider it this way: we are
# effectively forking a new thread of execution. We are
# probably currently within a ``with LoggingContext()`` block,
# which is supposed to have a single entry and exit point. But
# by spawning off another deferred, we are effectively
# adding a new exit point.)
d.addBoth(_set_context_cb, ctx)
return d
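
A hedged usage sketch of the helper above (`do_work` is hypothetical):

# Wrap a coroutine you are not awaiting directly, so it can be combined with
# other deferreds without leaking the current log context.
async def do_work() -> int:  # hypothetical coroutine
    return 42

d = run_coroutine_in_background(do_work())
# `d` is a Deferred and can now be passed to e.g.
# defer.gatherResults([d, ...], consumeErrors=True)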
T = TypeVar("T")


@@ -206,7 +206,7 @@ class Thumbnailer:
def _encode_image(self, output_image: Image.Image, output_type: str) -> BytesIO:
output_bytes_io = BytesIO()
fmt = self.FORMATS[output_type]
if fmt == "JPEG":
if fmt == "JPEG" or fmt == "PNG" and output_image.mode == "CMYK":
output_image = output_image.convert("RGB")
output_image.save(output_bytes_io, fmt, quality=80)
return output_bytes_io
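
A minimal standalone sketch of why the extra condition is needed, assuming only Pillow: its PNG encoder cannot write CMYK images, so they must be converted to RGB first (JPEG output already required the same conversion).

from io import BytesIO

from PIL import Image

image = Image.new("CMYK", (32, 32))
if image.mode == "CMYK":
    # Without this conversion, saving as PNG raises
    # OSError: cannot write mode CMYK as PNG
    image = image.convert("RGB")
buf = BytesIO()
image.save(buf, "PNG")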


@@ -41,6 +41,7 @@ import attr
from prometheus_client import Counter
from twisted.internet import defer
from twisted.internet.defer import Deferred
from synapse.api.constants import EduTypes, EventTypes, HistoryVisibility, Membership
from synapse.api.errors import AuthError
@@ -52,6 +53,7 @@ from synapse.logging.opentracing import log_kv, start_active_span
from synapse.metrics import LaterGauge
from synapse.streams.config import PaginationConfig
from synapse.types import (
ISynapseReactor,
JsonDict,
MultiWriterStreamToken,
PersistedEventPosition,
@@ -61,8 +63,11 @@ from synapse.types import (
StreamToken,
UserID,
)
from synapse.util.async_helpers import ObservableDeferred, timeout_deferred
from synapse.util.async_helpers import (
timeout_deferred,
)
from synapse.util.metrics import Measure
from synapse.util.stringutils import shortstr
from synapse.visibility import filter_events_for_client
if TYPE_CHECKING:
@@ -89,18 +94,6 @@ def count(func: Callable[[T], bool], it: Iterable[T]) -> int:
return n
class _NotificationListener:
"""This represents a single client connection to the events stream.
The events stream handler will have yielded to the deferred, so to
notify the handler it is sufficient to resolve the deferred.
"""
__slots__ = ["deferred"]
def __init__(self, deferred: "defer.Deferred"):
self.deferred = deferred
class _NotifierUserStream:
"""This represents a user connected to the event stream.
It tracks the most recent stream token for that user.
@@ -113,59 +106,49 @@ class _NotifierUserStream:
def __init__(
self,
reactor: ISynapseReactor,
user_id: str,
rooms: StrCollection,
current_token: StreamToken,
time_now_ms: int,
):
self.reactor = reactor
self.user_id = user_id
self.rooms = set(rooms)
self.current_token = current_token
# The last token for which we should wake up any streams that have a
# token that comes before it. This gets updated every time we get poked.
# We start it at the current token since if we get any streams
# that have a token from before we have no idea whether they should be
# woken up or not, so lets just wake them up.
self.last_notified_token = current_token
self.current_token = current_token
self.last_notified_ms = time_now_ms
self.notify_deferred: ObservableDeferred[StreamToken] = ObservableDeferred(
defer.Deferred()
)
# Set of listeners that we need to wake up when there has been a change.
self.listeners: Set[Deferred[StreamToken]] = set()
def notify(
def update_and_fetch_deferreds(
self,
stream_key: StreamKeyType,
stream_id: Union[int, RoomStreamToken, MultiWriterStreamToken],
current_token: StreamToken,
time_now_ms: int,
) -> None:
"""Notify any listeners for this user of a new event from an
event source.
) -> Collection["Deferred[StreamToken]"]:
"""Update the stream for this user because of a new event from an
event source, and return the set of deferreds to wake up.
Args:
stream_key: The stream the event came from.
stream_id: The new id for the stream the event came from.
current_token: The new current token.
time_now_ms: The current time in milliseconds.
Returns:
The set of deferreds that need to be called.
"""
self.current_token = self.current_token.copy_and_advance(stream_key, stream_id)
self.last_notified_token = self.current_token
self.current_token = current_token
self.last_notified_ms = time_now_ms
notify_deferred = self.notify_deferred
log_kv(
{
"notify": self.user_id,
"stream": stream_key,
"stream_id": stream_id,
"listeners": self.count_listeners(),
}
)
listeners = self.listeners
self.listeners = set()
users_woken_by_stream_counter.labels(stream_key).inc()
with PreserveLoggingContext():
self.notify_deferred = ObservableDeferred(defer.Deferred())
notify_deferred.callback(self.current_token)
return listeners
def remove(self, notifier: "Notifier") -> None:
"""Remove this listener from all the indexes in the Notifier
@@ -179,9 +162,9 @@ class _NotifierUserStream:
notifier.user_to_user_stream.pop(self.user_id)
def count_listeners(self) -> int:
return len(self.notify_deferred.observers())
return len(self.listeners)
def new_listener(self, token: StreamToken) -> _NotificationListener:
def new_listener(self, token: StreamToken) -> "Deferred[StreamToken]":
"""Returns a deferred that is resolved when there is a new token
greater than the given token.
@@ -191,10 +174,17 @@ class _NotifierUserStream:
"""
# Immediately wake up the stream if something has already happened
# since their last token.
if self.last_notified_token != token:
return _NotificationListener(defer.succeed(self.current_token))
else:
return _NotificationListener(self.notify_deferred.observe())
if token != self.current_token:
return defer.succeed(self.current_token)
# Create a new deferred and add it to the set of listeners. We add a
# cancel handler to remove it from the set again, to handle timeouts.
deferred: "Deferred[StreamToken]" = Deferred(
canceller=lambda d: self.listeners.discard(d)
)
self.listeners.add(deferred)
return deferred
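
A simplified sketch (outside Synapse) of the listener bookkeeping above: plain deferreds held in a set, with a canceller that drops them again when a waiter is cancelled, e.g. on timeout.

from typing import Set

from twisted.internet import defer

listeners: Set["defer.Deferred[str]"] = set()

def new_listener() -> "defer.Deferred[str]":
    # The canceller keeps the set tidy if the waiter times out.
    d: "defer.Deferred[str]" = defer.Deferred(canceller=lambda d: listeners.discard(d))
    listeners.add(d)
    return d

def notify_all(token: str) -> None:
    # Swap the set out first so callbacks can add new listeners safely.
    woken = list(listeners)
    listeners.clear()
    for d in woken:
        d.callback(token)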
@attr.s(slots=True, frozen=True, auto_attribs=True)
@@ -247,6 +237,7 @@ class Notifier:
# List of callbacks to be notified when a lock is released
self._lock_released_callback: List[Callable[[str, str, str], None]] = []
self.reactor = hs.get_reactor()
self.clock = hs.get_clock()
self.appservice_handler = hs.get_application_service_handler()
self._pusher_pool = hs.get_pusherpool()
@@ -342,14 +333,25 @@ class Notifier:
# Wake up all related user stream notifiers
user_streams = self.room_to_user_streams.get(room_id, set())
time_now_ms = self.clock.time_msec()
current_token = self.event_sources.get_current_token()
listeners: List["Deferred[StreamToken]"] = []
for user_stream in user_streams:
try:
user_stream.notify(
StreamKeyType.UN_PARTIAL_STATED_ROOMS, new_token, time_now_ms
listeners.extend(
user_stream.update_and_fetch_deferreds(current_token, time_now_ms)
)
except Exception:
logger.exception("Failed to notify listener")
with PreserveLoggingContext():
for listener in listeners:
listener.callback(current_token)
users_woken_by_stream_counter.labels(StreamKeyType.UN_PARTIAL_STATED_ROOMS).inc(
len(user_streams)
)
# Poke the replication so that other workers also see the write to
# the un-partial-stated rooms stream.
self.notify_replication()
@@ -519,12 +521,16 @@ class Notifier:
rooms = rooms or []
with Measure(self.clock, "on_new_event"):
user_streams = set()
user_streams: Set[_NotifierUserStream] = set()
log_kv(
{
"waking_up_explicit_users": len(users),
"waking_up_explicit_rooms": len(rooms),
"users": shortstr(users),
"rooms": shortstr(rooms),
"stream": stream_key,
"stream_id": new_token,
}
)
@@ -544,12 +550,27 @@ class Notifier:
)
time_now_ms = self.clock.time_msec()
current_token = self.event_sources.get_current_token()
listeners: List["Deferred[StreamToken]"] = []
for user_stream in user_streams:
try:
user_stream.notify(stream_key, new_token, time_now_ms)
listeners.extend(
user_stream.update_and_fetch_deferreds(
current_token, time_now_ms
)
)
except Exception:
logger.exception("Failed to notify listener")
# We resolve all these deferreds in one go so that we only need to
# call `PreserveLoggingContext` once, as it has a bunch of overhead
# (to calculate performance stats)
with PreserveLoggingContext():
for listener in listeners:
listener.callback(current_token)
users_woken_by_stream_counter.labels(stream_key).inc(len(user_streams))
self.notify_replication()
# Notify appservices.
@@ -586,6 +607,7 @@ class Notifier:
if room_ids is None:
room_ids = await self.store.get_rooms_for_user(user_id)
user_stream = _NotifierUserStream(
reactor=self.reactor,
user_id=user_id,
rooms=room_ids,
current_token=current_token,
@@ -608,8 +630,8 @@ class Notifier:
# Now we wait for the _NotifierUserStream to be told there
# is a new token.
listener = user_stream.new_listener(prev_token)
listener.deferred = timeout_deferred(
listener.deferred,
listener = timeout_deferred(
listener,
(end_time - now) / 1000.0,
self.hs.get_reactor(),
)
@@ -622,7 +644,7 @@ class Notifier:
)
with PreserveLoggingContext():
await listener.deferred
await listener
log_kv(
{


@@ -436,6 +436,7 @@ class BulkPushRuleEvaluator:
self._related_event_match_enabled,
event.room_version.msc3931_push_features,
self.hs.config.experimental.msc1767_enabled, # MSC3931 flag
self.hs.config.experimental.msc4210_enabled,
)
for uid, rules in rules_by_user.items():


@@ -1,7 +1,7 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2023 New Vector, Ltd
# Copyright (C) 2023-2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as


@@ -1,3 +1,17 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import logging
from typing import TYPE_CHECKING, Dict, Optional, Tuple


@@ -48,7 +48,7 @@ class ReplicationRemovePusherRestServlet(ReplicationEndpoint):
"""
NAME = "add_user_account_data"
NAME = "remove_pusher"
PATH_ARGS = ("user_id",)
CACHE = False


@@ -27,7 +27,7 @@ from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union
import attr
from synapse._pydantic_compat import StrictBool
from synapse._pydantic_compat import StrictBool, StrictInt, StrictStr
from synapse.api.constants import Direction, UserTypes
from synapse.api.errors import Codes, NotFoundError, SynapseError
from synapse.http.servlet import (
@@ -1421,40 +1421,39 @@ class RedactUser(RestServlet):
self._store = hs.get_datastores().main
self.admin_handler = hs.get_admin_handler()
class PostBody(RequestBodyModel):
rooms: List[StrictStr]
reason: Optional[StrictStr]
limit: Optional[StrictInt]
async def on_POST(
self, request: SynapseRequest, user_id: str
) -> Tuple[int, JsonDict]:
requester = await self._auth.get_user_by_req(request)
await assert_user_is_admin(self._auth, requester)
body = parse_json_object_from_request(request, allow_empty_body=True)
rooms = body.get("rooms")
if rooms is None:
# parse provided user id to check that it is valid
UserID.from_string(user_id)
body = parse_and_validate_json_object_from_request(request, self.PostBody)
limit = body.limit
if limit and limit <= 0:
raise SynapseError(
HTTPStatus.BAD_REQUEST, "Must provide a value for rooms."
HTTPStatus.BAD_REQUEST,
"If limit is provided it must be a non-negative integer greater than 0.",
)
reason = body.get("reason")
if reason:
if not isinstance(reason, str):
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"If a reason is provided it must be a string.",
)
limit = body.get("limit")
if limit:
if not isinstance(limit, int) or limit <= 0:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"If limit is provided it must be a non-negative integer greater than 0.",
)
rooms = body.rooms
if not rooms:
rooms = await self._store.get_rooms_for_user(user_id)
current_rooms = list(await self._store.get_rooms_for_user(user_id))
banned_rooms = list(
await self._store.get_rooms_user_currently_banned_from(user_id)
)
rooms = current_rooms + banned_rooms
redact_id = await self.admin_handler.start_redact_events(
user_id, list(rooms), requester.serialize(), reason, limit
user_id, rooms, requester.serialize(), body.reason, limit
)
return HTTPStatus.OK, {"redact_id": redact_id}
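
A minimal sketch of the validation swap above, using plain pydantic strict types (this `PostBody` is illustrative, not Synapse's `RequestBodyModel`):

from typing import List, Optional

from pydantic import BaseModel, StrictInt, StrictStr

class PostBody(BaseModel):
    rooms: List[StrictStr]
    reason: Optional[StrictStr] = None
    limit: Optional[StrictInt] = None

body = PostBody(rooms=["!r:hs"], limit=5)
assert body.limit == 5
# A non-integer limit now fails at parse time, replacing the manual
# isinstance checks that the diff removes.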


@@ -1,3 +1,17 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# This module contains REST servlets to do with delayed events: /delayed_events/<paths>
import logging


@@ -363,6 +363,7 @@ class LoginRestServlet(RestServlet):
login_submission: JsonDict,
callback: Optional[Callable[[LoginResponse], Awaitable[None]]] = None,
create_non_existent_users: bool = False,
default_display_name: Optional[str] = None,
ratelimit: bool = True,
auth_provider_id: Optional[str] = None,
should_issue_refresh_token: bool = False,
@@ -410,7 +411,8 @@ class LoginRestServlet(RestServlet):
canonical_uid = await self.auth_handler.check_user_exists(user_id)
if not canonical_uid:
canonical_uid = await self.registration_handler.register_user(
localpart=UserID.from_string(user_id).localpart
localpart=UserID.from_string(user_id).localpart,
default_display_name=default_display_name,
)
user_id = canonical_uid
@@ -546,11 +548,14 @@ class LoginRestServlet(RestServlet):
Returns:
The body of the JSON response.
"""
user_id = self.hs.get_jwt_handler().validate_login(login_submission)
user_id, default_display_name = self.hs.get_jwt_handler().validate_login(
login_submission
)
return await self._complete_login(
user_id,
login_submission,
create_non_existent_users=True,
default_display_name=default_display_name,
should_issue_refresh_token=should_issue_refresh_token,
request_info=request_info,
)


@@ -20,11 +20,13 @@
#
import logging
import re
from http import HTTPStatus
from typing import TYPE_CHECKING, Tuple
from synapse._pydantic_compat import StrictStr
from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError
from synapse.api.urls import CLIENT_API_PREFIX
from synapse.http.server import HttpServer
from synapse.http.servlet import (
RestServlet,
@@ -105,18 +107,17 @@ class ReportEventRestServlet(RestServlet):
class ReportRoomRestServlet(RestServlet):
"""This endpoint lets clients report a room for abuse.
Whilst MSC4151 is not yet merged, this unstable endpoint is enabled on matrix.org
for content moderation purposes, and therefore backwards compatibility should be
carefully considered when changing anything on this endpoint.
More details on the MSC: https://github.com/matrix-org/matrix-spec-proposals/pull/4151
Introduced by MSC4151: https://github.com/matrix-org/matrix-spec-proposals/pull/4151
"""
PATTERNS = client_patterns(
"/org.matrix.msc4151/rooms/(?P<room_id>[^/]*)/report$",
releases=[],
v1=False,
unstable=True,
# Cast the Iterable to a list so that we can `append` below.
PATTERNS = list(
client_patterns(
"/rooms/(?P<room_id>[^/]*)/report$",
releases=("v3",),
unstable=False,
v1=False,
)
)
def __init__(self, hs: "HomeServer"):
@@ -126,6 +127,16 @@ class ReportRoomRestServlet(RestServlet):
self.clock = hs.get_clock()
self.store = hs.get_datastores().main
# TODO: Remove the unstable variant after 2-3 releases
# https://github.com/element-hq/synapse/issues/17373
if hs.config.experimental.msc4151_enabled:
self.PATTERNS.append(
re.compile(
f"^{CLIENT_API_PREFIX}/unstable/org.matrix.msc4151"
"/rooms/(?P<room_id>[^/]*)/report$"
)
)
class PostBody(RequestBodyModel):
reason: StrictStr
@@ -153,6 +164,4 @@ class ReportRoomRestServlet(RestServlet):
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
ReportEventRestServlet(hs).register(http_server)
if hs.config.experimental.msc4151_enabled:
ReportRoomRestServlet(hs).register(http_server)
ReportRoomRestServlet(hs).register(http_server)


@@ -1010,11 +1010,13 @@ class SlidingSyncRestServlet(RestServlet):
serialized_rooms: Dict[str, JsonDict] = {}
for room_id, room_result in rooms.items():
serialized_rooms[room_id] = {
"bump_stamp": room_result.bump_stamp,
"notification_count": room_result.notification_count,
"highlight_count": room_result.highlight_count,
}
if room_result.bump_stamp is not None:
serialized_rooms[room_id]["bump_stamp"] = room_result.bump_stamp
if room_result.joined_count is not None:
serialized_rooms[room_id]["joined_count"] = room_result.joined_count


@@ -78,6 +78,7 @@ class TagServlet(RestServlet):
super().__init__()
self.auth = hs.get_auth()
self.handler = hs.get_account_data_handler()
self.room_member_handler = hs.get_room_member_handler()
async def on_PUT(
self, request: SynapseRequest, user_id: str, room_id: str, tag: str
@@ -85,6 +86,12 @@ class TagServlet(RestServlet):
requester = await self.auth.get_user_by_req(request)
if user_id != requester.user.to_string():
raise AuthError(403, "Cannot add tags for other users.")
# Check if the user has any membership in the room and raise an error if not.
# Although it's not harmful for users to tag rooms they aren't in, it's
# superfluous data we don't need to track or allow.
await self.room_member_handler.check_for_any_membership_in_room(
user_id=user_id, room_id=room_id
)
body = parse_json_object_from_request(request)


@@ -172,7 +172,7 @@ class VersionsRestServlet(RestServlet):
)
),
# MSC4140: Delayed events
"org.matrix.msc4140": True,
"org.matrix.msc4140": bool(self.config.server.max_event_delay_ms),
# MSC4151: Report room API (Client-Server API)
"org.matrix.msc4151": self.config.experimental.msc4151_enabled,
# Simplified sliding sync


@@ -249,6 +249,7 @@ class HomeServer(metaclass=abc.ABCMeta):
"""
REQUIRED_ON_BACKGROUND_TASK_STARTUP = [
"admin",
"account_validity",
"auth",
"deactivate_account",


@@ -1,3 +1,17 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import logging
from typing import List, NewType, Optional, Tuple


@@ -1422,7 +1422,7 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
DELETE FROM device_lists_outbound_last_success
WHERE destination = ? AND user_id = ?
"""
txn.execute_batch(sql, ((row[0], row[1]) for row in rows))
txn.execute_batch(sql, [(row[0], row[1]) for row in rows])
logger.info("Pruned %d device list outbound pokes", count)
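
A minimal illustration (not Synapse code) of the generator pitfall these changes avoid: a generator is exhausted after one pass, so any consumer that iterates the argument sequence twice silently sees nothing the second time.

rows = [("dest1", "@a:hs"), ("dest2", "@b:hs")]

gen = ((row[0], row[1]) for row in rows)
assert list(gen) == [("dest1", "@a:hs"), ("dest2", "@b:hs")]
assert list(gen) == []  # already consumed

args = [(row[0], row[1]) for row in rows]
assert list(args) == list(args)  # a list can be iterated repeatedly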


@@ -1686,7 +1686,7 @@ class PersistEventsStore:
"""
txn.execute_batch(
sql,
(
[
(
stream_id,
self._instance_name,
@@ -1699,17 +1699,17 @@ class PersistEventsStore:
state_key,
)
for etype, state_key in itertools.chain(to_delete, to_insert)
),
],
)
# Now we actually update the current_state_events table
txn.execute_batch(
"DELETE FROM current_state_events"
" WHERE room_id = ? AND type = ? AND state_key = ?",
(
[
(room_id, etype, state_key)
for etype, state_key in itertools.chain(to_delete, to_insert)
),
],
)
# We include the membership in the current state table, hence we do
@@ -1799,11 +1799,11 @@ class PersistEventsStore:
txn.execute_batch(
"DELETE FROM local_current_membership"
" WHERE room_id = ? AND user_id = ?",
(
[
(room_id, state_key)
for etype, state_key in itertools.chain(to_delete, to_insert)
if etype == EventTypes.Member and self.is_mine_id(state_key)
),
],
)
if to_insert:
@@ -1863,10 +1863,10 @@ class PersistEventsStore:
txn.execute_batch(
f"""
INSERT INTO sliding_sync_membership_snapshots
(room_id, user_id, sender, membership_event_id, membership, event_stream_ordering, event_instance_name
(room_id, user_id, sender, membership_event_id, membership, forgotten, event_stream_ordering, event_instance_name
{("," + ", ".join(sliding_sync_snapshot_keys)) if sliding_sync_snapshot_keys else ""})
VALUES (
?, ?, ?, ?, ?,
?, ?, ?, ?, ?, ?,
(SELECT stream_ordering FROM events WHERE event_id = ?),
(SELECT COALESCE(instance_name, 'master') FROM events WHERE event_id = ?)
{("," + ", ".join("?" for _ in sliding_sync_snapshot_values)) if sliding_sync_snapshot_values else ""}
@@ -1876,6 +1876,7 @@ class PersistEventsStore:
sender = EXCLUDED.sender,
membership_event_id = EXCLUDED.membership_event_id,
membership = EXCLUDED.membership,
forgotten = EXCLUDED.forgotten,
event_stream_ordering = EXCLUDED.event_stream_ordering
{("," + ", ".join(f"{key} = EXCLUDED.{key}" for key in sliding_sync_snapshot_keys)) if sliding_sync_snapshot_keys else ""}
""",
@@ -1886,6 +1887,9 @@ class PersistEventsStore:
membership_info.sender,
membership_info.membership_event_id,
membership_info.membership,
# Since this is a new membership, it isn't forgotten anymore (which
# matches how Synapse currently thinks about the forgotten status)
0,
# XXX: We do not use `membership_info.membership_event_stream_ordering` here
# because it is an unreliable value. See XXX note above.
membership_info.membership_event_id,
@@ -2901,6 +2905,9 @@ class PersistEventsStore:
"sender": event.sender,
"membership_event_id": event.event_id,
"membership": event.membership,
# Since this is a new membership, it isn't forgotten anymore (which
# matches how Synapse currently thinks about the forgotten status)
"forgotten": 0,
"event_stream_ordering": event.internal_metadata.stream_ordering,
"event_instance_name": event.internal_metadata.instance_name,
}
@@ -3201,7 +3208,7 @@ class PersistEventsStore:
if notifiable_events:
txn.execute_batch(
sql,
(
[
(
event.room_id,
event.internal_metadata.stream_ordering,
@@ -3209,18 +3216,18 @@ class PersistEventsStore:
event.event_id,
)
for event in notifiable_events
),
],
)
# Now we delete the staging area for *all* events that were being
# persisted.
txn.execute_batch(
"DELETE FROM event_push_actions_staging WHERE event_id = ?",
(
[
(event.event_id,)
for event, _ in all_events_and_contexts
if event.internal_metadata.is_notifiable()
),
],
)
def _remove_push_actions_for_event_id_txn(


@@ -304,6 +304,12 @@ class EventsBackgroundUpdatesStore(StreamWorkerStore, StateDeltasStore, SQLBaseS
_BackgroundUpdates.SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_BG_UPDATE,
self._sliding_sync_membership_snapshots_bg_update,
)
# Add a background update to fix data integrity issue in the
# `sliding_sync_membership_snapshots` -> `forgotten` column
self.db_pool.updates.register_background_update_handler(
_BackgroundUpdates.SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_FIX_FORGOTTEN_COLUMN_BG_UPDATE,
self._sliding_sync_membership_snapshots_fix_forgotten_column_bg_update,
)
# We want this to run on the main database at startup before we start processing
# events.
@@ -2429,6 +2435,118 @@ class EventsBackgroundUpdatesStore(StreamWorkerStore, StateDeltasStore, SQLBaseS
return len(memberships_to_update_rows)
async def _sliding_sync_membership_snapshots_fix_forgotten_column_bg_update(
self, progress: JsonDict, batch_size: int
) -> int:
"""
Background update to update the `sliding_sync_membership_snapshots` ->
`forgotten` column to be in sync with the `room_memberships` table.
Because of previously flawed code (now fixed), any room that someone has
forgotten and subsequently re-joined, or had any new membership in, has
fallen out of sync; we need to update the column to match the
`room_memberships` table.
"""
last_event_stream_ordering = progress.get(
"last_event_stream_ordering", -(1 << 31)
)
def _txn(
txn: LoggingTransaction,
) -> int:
"""
Returns:
The number of rows updated.
"""
# To simplify things, we can just recheck any row in
# `sliding_sync_membership_snapshots` with `forgotten=1`
txn.execute(
"""
SELECT
s.room_id,
s.user_id,
s.membership_event_id,
s.event_stream_ordering,
m.forgotten
FROM sliding_sync_membership_snapshots AS s
INNER JOIN room_memberships AS m ON (s.membership_event_id = m.event_id)
WHERE s.event_stream_ordering > ?
AND s.forgotten = 1
ORDER BY s.event_stream_ordering ASC
LIMIT ?
""",
(last_event_stream_ordering, batch_size),
)
memberships_to_update_rows = cast(
List[Tuple[str, str, str, int, int]],
txn.fetchall(),
)
if not memberships_to_update_rows:
return 0
# Assemble the values to update
#
# (room_id, user_id)
key_values: List[Tuple[str, str]] = []
# (forgotten,)
value_values: List[Tuple[int]] = []
for (
room_id,
user_id,
_membership_event_id,
_event_stream_ordering,
forgotten,
) in memberships_to_update_rows:
key_values.append(
(
room_id,
user_id,
)
)
value_values.append((forgotten,))
# Update all of the rows in one go
self.db_pool.simple_update_many_txn(
txn,
table="sliding_sync_membership_snapshots",
key_names=("room_id", "user_id"),
key_values=key_values,
value_names=("forgotten",),
value_values=value_values,
)
# Update the progress
(
_room_id,
_user_id,
_membership_event_id,
event_stream_ordering,
_forgotten,
) = memberships_to_update_rows[-1]
self.db_pool.updates._background_update_progress_txn(
txn,
_BackgroundUpdates.SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_FIX_FORGOTTEN_COLUMN_BG_UPDATE,
{
"last_event_stream_ordering": event_stream_ordering,
},
)
return len(memberships_to_update_rows)
num_rows = await self.db_pool.runInteraction(
"_sliding_sync_membership_snapshots_fix_forgotten_column_bg_update",
_txn,
)
if not num_rows:
await self.db_pool.updates._end_background_update(
_BackgroundUpdates.SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_FIX_FORGOTTEN_COLUMN_BG_UPDATE
)
return num_rows
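
For reference, an illustrative sketch of the batched-update shape used above (assuming, as with Synapse's `simple_update_many_txn`, that `key_values[i]` selects the row that receives `value_values[i]`):

key_names = ("room_id", "user_id")
key_values = [("!room1:hs", "@a:hs"), ("!room2:hs", "@b:hs")]
value_names = ("forgotten",)
value_values = [(1,), (0,)]

for keys, values in zip(key_values, value_values):
    # Each row identified by key_names/keys gets the matching update.
    print(dict(zip(key_names, keys)), "<-", dict(zip(value_names, values)))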
def _resolve_stale_data_in_sliding_sync_tables(
txn: LoggingTransaction,


@@ -61,7 +61,13 @@ from synapse.logging.context import (
current_context,
make_deferred_yieldable,
)
from synapse.logging.opentracing import start_active_span, tag_args, trace
from synapse.logging.opentracing import (
SynapseTags,
set_tag,
start_active_span,
tag_args,
trace,
)
from synapse.metrics.background_process_metrics import (
run_as_background_process,
wrap_as_background_process,
@@ -525,6 +531,7 @@ class EventsWorkerStore(SQLBaseStore):
return event
@trace
async def get_events(
self,
event_ids: Collection[str],
@@ -556,6 +563,11 @@ class EventsWorkerStore(SQLBaseStore):
Returns:
A mapping from event_id to event.
"""
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
str(len(event_ids)),
)
events = await self.get_events_as_list(
event_ids,
redact_behaviour=redact_behaviour,
@@ -603,6 +615,10 @@ class EventsWorkerStore(SQLBaseStore):
Note that the returned list may be smaller than the list of event
IDs if not all events could be fetched.
"""
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
str(len(event_ids)),
)
if not event_ids:
return []
@@ -723,10 +739,11 @@ class EventsWorkerStore(SQLBaseStore):
return events
@trace
@cancellable
async def get_unredacted_events_from_cache_or_db(
self,
event_ids: Iterable[str],
event_ids: Collection[str],
allow_rejected: bool = False,
) -> Dict[str, EventCacheEntry]:
"""Fetch a bunch of events from the cache or the database.
@@ -748,6 +765,11 @@ class EventsWorkerStore(SQLBaseStore):
Returns:
map from event id to result
"""
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
str(len(event_ids)),
)
# Shortcut: check if we have any events in the *in memory* cache - this function
# may be called repeatedly for the same event so at this point we cannot reach
# out to any external cache for performance reasons. The external cache is
@@ -936,7 +958,7 @@ class EventsWorkerStore(SQLBaseStore):
events, update_metrics=update_metrics
)
missing_event_ids = (e for e in events if e not in event_map)
missing_event_ids = [e for e in events if e not in event_map]
event_map.update(
await self._get_events_from_external_cache(
events=missing_event_ids,
@@ -946,8 +968,9 @@ class EventsWorkerStore(SQLBaseStore):
return event_map
@trace
async def _get_events_from_external_cache(
self, events: Iterable[str], update_metrics: bool = True
self, events: Collection[str], update_metrics: bool = True
) -> Dict[str, EventCacheEntry]:
"""Fetch events from any configured external cache.
@@ -957,6 +980,10 @@ class EventsWorkerStore(SQLBaseStore):
events: list of event_ids to fetch
update_metrics: Whether to update the cache hit ratio metrics
"""
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "events.length",
str(len(events)),
)
event_map = {}
for event_id in events:
@@ -1222,6 +1249,7 @@ class EventsWorkerStore(SQLBaseStore):
with PreserveLoggingContext():
self.hs.get_reactor().callFromThread(fire_errback, e)
@trace
async def _get_events_from_db(
self, event_ids: Collection[str]
) -> Dict[str, EventCacheEntry]:
@@ -1240,6 +1268,11 @@ class EventsWorkerStore(SQLBaseStore):
map from event id to result. May return extra events which
weren't asked for.
"""
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
str(len(event_ids)),
)
fetched_event_ids: Set[str] = set()
fetched_events: Dict[str, _EventRow] = {}


@@ -729,10 +729,10 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
txn.execute_batch(
sql,
(
[
(time_ms, media_origin, media_id)
for media_origin, media_id in remote_media
),
],
)
sql = (
@@ -740,7 +740,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
" WHERE media_id = ?"
)
txn.execute_batch(sql, ((time_ms, media_id) for media_id in local_media))
txn.execute_batch(sql, [(time_ms, media_id) for media_id in local_media])
await self.db_pool.runInteraction(
"update_cached_last_access_time", update_cache_txn


@@ -109,6 +109,7 @@ def _load_rules(
msc3664_enabled=experimental_config.msc3664_enabled,
msc3381_polls_enabled=experimental_config.msc3381_polls_enabled,
msc4028_push_encrypted_events=experimental_config.msc4028_push_encrypted_events,
msc4210_enabled=experimental_config.msc4210_enabled,
)
return filtered_rules


@@ -1175,7 +1175,7 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
SET quarantined_by = ?
WHERE media_origin = ? AND media_id = ?
""",
((quarantined_by, origin, media_id) for origin, media_id in remote_mxcs),
[(quarantined_by, origin, media_id) for origin, media_id in remote_mxcs],
)
total_media_quarantined += txn.rowcount if txn.rowcount > 0 else 0


@@ -711,6 +711,27 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
return {row[0] for row in txn}
async def get_rooms_user_currently_banned_from(
self, user_id: str
) -> FrozenSet[str]:
"""Returns a set of room_ids the user is currently banned from.
For a remote user, this only returns rooms that this server is currently
participating in.
"""
room_ids = await self.db_pool.simple_select_onecol(
table="current_state_events",
keyvalues={
"type": EventTypes.Member,
"membership": Membership.BAN,
"state_key": user_id,
},
retcol="room_id",
desc="get_rooms_user_currently_banned_from",
)
return frozenset(room_ids)
@cached(max_entries=500000, iterable=True)
async def get_rooms_for_user(self, user_id: str) -> FrozenSet[str]:
"""Returns a set of room_ids the user is currently joined to.
@@ -1354,6 +1375,7 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
keyvalues={"user_id": user_id, "room_id": room_id},
updatevalues={"forgotten": 1},
)
# Handle updating the `sliding_sync_membership_snapshots` table
self.db_pool.simple_update_txn(
txn,
table="sliding_sync_membership_snapshots",
@@ -1499,6 +1521,57 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
"get_sliding_sync_room_for_user", get_sliding_sync_room_for_user_txn
)
async def get_sliding_sync_room_for_user_batch(
self, user_id: str, room_ids: StrCollection
) -> Dict[str, RoomsForUserSlidingSync]:
"""Get the sliding sync room entry for the given user and rooms."""
if not room_ids:
return {}
def get_sliding_sync_room_for_user_batch_txn(
txn: LoggingTransaction,
) -> Dict[str, RoomsForUserSlidingSync]:
clause, args = make_in_list_sql_clause(
self.database_engine, "m.room_id", room_ids
)
sql = f"""
SELECT m.room_id, m.sender, m.membership, m.membership_event_id,
r.room_version,
m.event_instance_name, m.event_stream_ordering,
m.has_known_state,
COALESCE(j.room_type, m.room_type),
COALESCE(j.is_encrypted, m.is_encrypted)
FROM sliding_sync_membership_snapshots AS m
INNER JOIN rooms AS r USING (room_id)
LEFT JOIN sliding_sync_joined_rooms AS j ON (j.room_id = m.room_id AND m.membership = 'join')
WHERE m.forgotten = 0
AND {clause}
AND user_id = ?
"""
args.append(user_id)
txn.execute(sql, args)
return {
row[0]: RoomsForUserSlidingSync(
room_id=row[0],
sender=row[1],
membership=row[2],
event_id=row[3],
room_version_id=row[4],
event_pos=PersistedEventPosition(row[5], row[6]),
has_known_state=bool(row[7]),
room_type=row[8],
is_encrypted=row[9],
)
for row in txn
}
return await self.db_pool.runInteraction(
"get_sliding_sync_room_for_user_batch",
get_sliding_sync_room_for_user_batch_txn,
)
class RoomMemberBackgroundUpdateStore(SQLBaseStore):
def __init__(


@@ -94,7 +94,7 @@ class SearchWorkerStore(SQLBaseStore):
VALUES (?,?,?,to_tsvector('english', ?),?,?)
"""
args1 = (
args1 = [
(
entry.event_id,
entry.room_id,
@@ -104,7 +104,7 @@ class SearchWorkerStore(SQLBaseStore):
entry.origin_server_ts,
)
for entry in entries
)
]
txn.execute_batch(sql, args1)


@@ -386,8 +386,8 @@ class SlidingSyncStore(SQLBaseStore):
required_state_map: Dict[int, Dict[str, Set[str]]] = {}
for row in rows:
state = required_state_map[row[0]] = {}
for event_type, state_keys in db_to_json(row[1]):
state[event_type] = set(state_keys)
for event_type, state_key in db_to_json(row[1]):
state.setdefault(event_type, set()).add(state_key)
# Get all the room configs, looking up the required state from the map
# above.
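
A standalone sketch of the corrected accumulation, assuming the stored JSON is a flat list of (event_type, state_key) pairs:

from typing import Dict, Set

pairs = [("m.room.member", "@a:hs"), ("m.room.member", "@b:hs"), ("m.room.name", "")]
state: Dict[str, Set[str]] = {}
for event_type, state_key in pairs:
    # Accumulate state keys per event type instead of overwriting.
    state.setdefault(event_type, set()).add(state_key)

assert state == {"m.room.member": {"@a:hs", "@b:hs"}, "m.room.name": {""}}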


@@ -751,6 +751,48 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
if self._events_stream_cache.has_entity_changed(room_id, from_id)
}
async def get_rooms_that_have_updates_since_sliding_sync_table(
self,
room_ids: StrCollection,
from_key: RoomStreamToken,
) -> StrCollection:
"""Return the rooms that probably have had updates since the given
token (changes that are > `from_key`)."""
# If the stream change cache is valid for the stream token, we can just
# use the result of that.
if from_key.stream >= self._events_stream_cache.get_earliest_known_position():
return self._events_stream_cache.get_entities_changed(
room_ids, from_key.stream
)
def get_rooms_that_have_updates_since_sliding_sync_table_txn(
txn: LoggingTransaction,
) -> StrCollection:
sql = """
SELECT room_id
FROM sliding_sync_joined_rooms
WHERE {clause}
AND event_stream_ordering > ?
"""
results: Set[str] = set()
for batch in batch_iter(room_ids, 1000):
clause, args = make_in_list_sql_clause(
self.database_engine, "room_id", batch
)
args.append(from_key.stream)
txn.execute(sql.format(clause=clause), args)
results.update(row[0] for row in txn)
return results
return await self.db_pool.runInteraction(
"get_rooms_that_have_updates_since_sliding_sync_table",
get_rooms_that_have_updates_since_sliding_sync_table_txn,
)
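
A minimal sketch of the chunked "IN (...)" pattern above; `batch_iter` and the clause builder here are simplified stand-ins for Synapse's helpers.

from typing import Iterable, Iterator, List, Tuple

def batch_iter(iterable: Iterable[str], size: int) -> Iterator[List[str]]:
    batch: List[str] = []
    for item in iterable:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def in_list_clause(column: str, values: List[str]) -> Tuple[str, List[str]]:
    placeholders = ", ".join("?" for _ in values)
    return f"{column} IN ({placeholders})", list(values)

for chunk in batch_iter([f"!room{i}:hs" for i in range(2500)], 1000):
    clause, args = in_list_clause("room_id", chunk)
    # The real query then runs per chunk:
    # txn.execute(sql.format(clause=clause), args + [from_key.stream])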
async def paginate_room_events_by_stream_ordering(
self,
*,


@@ -804,11 +804,11 @@ class StateGroupDataStore(StateBackgroundUpdateStore, SQLBaseStore):
logger.info("[purge] removing redundant state groups")
txn.execute_batch(
"DELETE FROM state_groups_state WHERE state_group = ?",
((sg,) for sg in state_groups_to_delete),
[(sg,) for sg in state_groups_to_delete],
)
txn.execute_batch(
"DELETE FROM state_groups WHERE id = ?",
((sg,) for sg in state_groups_to_delete),
[(sg,) for sg in state_groups_to_delete],
)
@trace


@@ -153,6 +153,8 @@ Changes in SCHEMA_VERSION = 87
Changes in SCHEMA_VERSION = 88
- MSC4140: Add `delayed_events` table that keeps track of events that are to
be posted in response to a resettable timeout or an on-demand action.
- Add background update to fix data integrity issue in the
`sliding_sync_membership_snapshots` -> `forgotten` column
"""


@@ -1,3 +1,16 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2024 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
CREATE TABLE delayed_events (
delay_id TEXT NOT NULL,
user_localpart TEXT NOT NULL,


@@ -0,0 +1,21 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2024 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
-- Add a background update to update the `sliding_sync_membership_snapshots` ->
-- `forgotten` column to be in sync with the `room_memberships` table.
--
-- For any room that someone has forgotten and subsequently re-joined, or had any
-- new membership in, we need to go and update the column to match the
-- `room_memberships` table as it has fallen out of sync.
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
(8802, 'sliding_sync_membership_snapshots_fix_forgotten_column_bg_update', '{}');


@@ -48,6 +48,7 @@ class FilteredPushRules:
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc4028_push_encrypted_events: bool,
msc4210_enabled: bool,
): ...
def rules(self) -> Collection[Tuple[PushRule, bool]]: ...
@@ -65,6 +66,7 @@ class PushRuleEvaluator:
related_event_match_enabled: bool,
room_version_feature_flags: Tuple[str, ...],
msc3931_enabled: bool,
msc4210_enabled: bool,
): ...
def run(
self,


@@ -158,6 +158,7 @@ class SlidingSyncResult:
name changes to mark the room as unread and bump it to the top. For
encrypted rooms, we just have to consider any activity as a bump because we
can't see the content and the client has to figure it out for themselves.
This may not be included if there hasn't been a change.
joined_count: The number of users with membership of join, including the client's
own user ID. (same as sync `v2 m.joined_member_count`)
invited_count: The number of users with membership of invite. (same as sync v2
@@ -193,7 +194,7 @@ class SlidingSyncResult:
limited: Optional[bool]
# Only optional because it won't be included for invite/knock rooms with `stripped_state`
num_live: Optional[int]
bump_stamp: int
bump_stamp: Optional[int]
joined_count: Optional[int]
invited_count: Optional[int]
notification_count: int


@@ -616,6 +616,13 @@ class StateFilter:
return False
def __bool__(self) -> bool:
"""Returns true if this state filter will match any state, or false if
this is the empty filter"""
if self.include_others:
return True
return bool(self.types)
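
Usage of the new `__bool__`, assuming the change above is applied:

from synapse.types.state import StateFilter

assert bool(StateFilter.none()) is False  # the empty filter matches nothing
assert bool(StateFilter.all()) is True    # include_others matches something
assert bool(StateFilter.from_types([("m.room.name", "")])) is True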
_ALL_STATE_FILTER = StateFilter(types=immutabledict(), include_others=True)
_ALL_NON_MEMBER_STATE_FILTER = StateFilter(


@@ -45,3 +45,6 @@ class _BackgroundUpdates:
SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_BG_UPDATE = (
"sliding_sync_membership_snapshots_bg_update"
)
SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_FIX_FORGOTTEN_COLUMN_BG_UPDATE = (
"sliding_sync_membership_snapshots_fix_forgotten_column_bg_update"
)


@@ -51,7 +51,7 @@ from typing import (
)
import attr
from typing_extensions import Concatenate, Literal, ParamSpec
from typing_extensions import Concatenate, Literal, ParamSpec, Unpack
from twisted.internet import defer
from twisted.internet.defer import CancelledError
@@ -61,6 +61,7 @@ from twisted.python.failure import Failure
from synapse.logging.context import (
PreserveLoggingContext,
make_deferred_yieldable,
run_coroutine_in_background,
run_in_background,
)
from synapse.util import Clock
@@ -344,6 +345,7 @@ T1 = TypeVar("T1")
T2 = TypeVar("T2")
T3 = TypeVar("T3")
T4 = TypeVar("T4")
T5 = TypeVar("T5")
@overload
@@ -402,6 +404,112 @@ def gather_results( # type: ignore[misc]
return deferred.addCallback(tuple)
@overload
async def gather_optional_coroutines(
*coroutines: Unpack[Tuple[Optional[Coroutine[Any, Any, T1]]]],
) -> Tuple[Optional[T1]]: ...
@overload
async def gather_optional_coroutines(
*coroutines: Unpack[
Tuple[
Optional[Coroutine[Any, Any, T1]],
Optional[Coroutine[Any, Any, T2]],
]
],
) -> Tuple[Optional[T1], Optional[T2]]: ...
@overload
async def gather_optional_coroutines(
*coroutines: Unpack[
Tuple[
Optional[Coroutine[Any, Any, T1]],
Optional[Coroutine[Any, Any, T2]],
Optional[Coroutine[Any, Any, T3]],
]
],
) -> Tuple[Optional[T1], Optional[T2], Optional[T3]]: ...
@overload
async def gather_optional_coroutines(
*coroutines: Unpack[
Tuple[
Optional[Coroutine[Any, Any, T1]],
Optional[Coroutine[Any, Any, T2]],
Optional[Coroutine[Any, Any, T3]],
Optional[Coroutine[Any, Any, T4]],
]
],
) -> Tuple[Optional[T1], Optional[T2], Optional[T3], Optional[T4]]: ...
@overload
async def gather_optional_coroutines(
*coroutines: Unpack[
Tuple[
Optional[Coroutine[Any, Any, T1]],
Optional[Coroutine[Any, Any, T2]],
Optional[Coroutine[Any, Any, T3]],
Optional[Coroutine[Any, Any, T4]],
Optional[Coroutine[Any, Any, T5]],
]
],
) -> Tuple[Optional[T1], Optional[T2], Optional[T3], Optional[T4], Optional[T5]]: ...
async def gather_optional_coroutines(
*coroutines: Unpack[Tuple[Optional[Coroutine[Any, Any, T1]], ...]],
) -> Tuple[Optional[T1], ...]:
"""Helper function that allows waiting on multiple coroutines at once.
The return value is a tuple of the return values of the coroutines in order.
If `None` is passed instead of a coroutine, it will be ignored and `None`
is returned in the corresponding slot of the tuple.
Note: For typechecking we need to have an explicit overload for each
distinct number of coroutines passed in. If you see type problems, it's
likely because you're using many arguments and you need to add a new
overload above.
"""
try:
results = await make_deferred_yieldable(
defer.gatherResults(
[
run_coroutine_in_background(coroutine)
for coroutine in coroutines
if coroutine is not None
],
consumeErrors=True,
)
)
results_iter = iter(results)
return tuple(
next(results_iter) if coroutine is not None else None
for coroutine in coroutines
)
except defer.FirstError as dfe:
# unwrap the error from defer.gatherResults.
# The raised exception's traceback only includes func() etc if
# the 'await' happens before the exception is thrown - ie if the failure
# happens *asynchronously* - otherwise Twisted throws away the traceback as it
# could be large.
#
# We could maybe reconstruct a fake traceback from Failure.frames. Or maybe
# we could throw Twisted into the fires of Mordor.
# suppress exception chaining, because the FirstError doesn't tell us anything
# very interesting.
assert isinstance(dfe.subFailure.value, BaseException)
raise dfe.subFailure.value from None
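
A usage sketch with hypothetical coroutines: results come back in argument order, with `None` preserved for slots where no coroutine was passed.

async def fetch_a() -> int:  # hypothetical
    return 1

async def use_it() -> None:
    a, b = await gather_optional_coroutines(fetch_a(), None)
    assert a == 1 and b is None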
@attr.s(slots=True, auto_attribs=True)
class _LinearizerEntry:
# The number of things executing.


@@ -142,9 +142,9 @@ class StreamChangeCache:
"""
assert isinstance(stream_pos, int)
# _cache is not valid at or before the earliest known stream position, so
# _cache is not valid before the earliest known stream position, so
# return that the entity has changed.
if stream_pos <= self._earliest_known_stream_pos:
if stream_pos < self._earliest_known_stream_pos:
self.metrics.inc_misses()
return True
@@ -186,7 +186,7 @@ class StreamChangeCache:
This will be all entities if the given stream position is at or earlier
than the earliest known stream position.
"""
if not self._cache or stream_pos <= self._earliest_known_stream_pos:
if not self._cache or stream_pos < self._earliest_known_stream_pos:
self.metrics.inc_misses()
return set(entities)
@@ -238,9 +238,9 @@ class StreamChangeCache:
"""
assert isinstance(stream_pos, int)
# _cache is not valid at or before the earliest known stream position, so
# _cache is not valid before the earliest known stream position, so
# return that an entity has changed.
if stream_pos <= self._earliest_known_stream_pos:
if stream_pos < self._earliest_known_stream_pos:
self.metrics.inc_misses()
return True
@@ -270,9 +270,9 @@ class StreamChangeCache:
"""
assert isinstance(stream_pos, int)
# _cache is not valid at or before the earliest known stream position, so
# _cache is not valid before the earliest known stream position, so
# return None to mark that it is unknown if an entity has changed.
if stream_pos <= self._earliest_known_stream_pos:
if stream_pos < self._earliest_known_stream_pos:
return AllEntitiesChangedResult(None)
changed_entities: List[EntityType] = []
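
An illustrative boundary check matching the change above: only positions strictly before the earliest known position force the pessimistic "assume changed" path; the boundary position itself is now answerable by the cache.

earliest_known_stream_pos = 100

def must_assume_changed(stream_pos: int) -> bool:
    return stream_pos < earliest_known_stream_pos

assert must_assume_changed(99) is True    # before the cache window
assert must_assume_changed(100) is False  # exactly at the boundary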


@@ -19,9 +19,12 @@
#
#
import json
import os
import sys
import urllib.parse
from hashlib import blake2b
from importlib.metadata import Distribution, PackageNotFoundError
from typing import Optional
import synapse
from synapse.synapse_rust import get_rust_file_digest
@@ -32,22 +35,17 @@ def check_rust_lib_up_to_date() -> None:
be rebuilt.
"""
if not _dist_is_editable():
return
synapse_dir = os.path.dirname(synapse.__file__)
synapse_root = os.path.abspath(os.path.join(synapse_dir, ".."))
# Double check we've not gone into site-packages...
if os.path.basename(synapse_root) == "site-packages":
return
# ... and it looks like the root of a python project.
if not os.path.exists("pyproject.toml"):
return
# Get the location of the editable install.
synapse_root = get_synapse_source_directory()
if synapse_root is None:
return None
# Get the hash of all Rust source files
hash = _hash_rust_files_in_directory(os.path.join(synapse_root, "rust", "src"))
rust_path = os.path.join(synapse_root, "rust", "src")
if not os.path.exists(rust_path):
return None
hash = _hash_rust_files_in_directory(rust_path)
if hash != get_rust_file_digest():
raise Exception("Rust module outdated. Please rebuild using `poetry install`")
@@ -82,10 +80,55 @@ def _hash_rust_files_in_directory(directory: str) -> str:
return hasher.hexdigest()
def _dist_is_editable() -> bool:
"""Is distribution an editable install?"""
for path_item in sys.path:
egg_link = os.path.join(path_item, "matrix-synapse.egg-link")
if os.path.isfile(egg_link):
return True
return False
def get_synapse_source_directory() -> Optional[str]:
"""Try and find the source directory of synapse for editable installs (like
those used in development).
Returns None if not an editable install (or otherwise can't find the source
directory).
"""
# Try and find the installed matrix-synapse package.
try:
package = Distribution.from_name("matrix-synapse")
except PackageNotFoundError:
# The package is not found, so it's not installed and so must be being
# pulled out from a local directory (usually the current one).
synapse_dir = os.path.dirname(synapse.__file__)
synapse_root = os.path.abspath(os.path.join(synapse_dir, ".."))
# Double check we've not gone into site-packages...
if os.path.basename(synapse_root) == "site-packages":
return None
# ... and it looks like the root of a python project.
if not os.path.exists("pyproject.toml"):
return None
return synapse_root
# Read the `direct_url.json` metadata for the package. This won't exist for
# packages installed via a repository/etc.
# c.f. https://packaging.python.org/en/latest/specifications/direct-url/
direct_url_json = package.read_text("direct_url.json")
if direct_url_json is None:
return None
# c.f. https://packaging.python.org/en/latest/specifications/direct-url/ for
# the format
direct_url_dict: dict = json.loads(direct_url_json)
# `url` must exist as a key, and point to where we fetched the repo from.
project_url = urllib.parse.urlparse(direct_url_dict["url"])
# If it's not a local file then we must have built the rust libs either a)
# after we downloaded the package, or b) when the downloaded wheel was built.
if project_url.scheme != "file":
return None
# And finally, if it's not an editable install then the files can't have
# changed since we installed the package.
if not direct_url_dict.get("dir_info", {}).get("editable", False):
return None
return project_url.path
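
A standalone sketch of the PEP 610 `direct_url.json` lookup used above, for an arbitrary package (the helper name here is illustrative):

import json
from importlib.metadata import Distribution, PackageNotFoundError
from typing import Optional

def editable_install_url(package_name: str) -> Optional[str]:
    try:
        dist = Distribution.from_name(package_name)
    except PackageNotFoundError:
        return None
    raw = dist.read_text("direct_url.json")
    if raw is None:
        return None  # installed from an index, not from a direct URL
    info = json.loads(raw)
    if not info.get("dir_info", {}).get("editable", False):
        return None
    return info["url"]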


@@ -90,6 +90,10 @@ if __name__ == "__main__":
if runner.args.worker:
if runner.args.log:
# sys.__stdout__ can technically be None, just exit if it's the case
if not sys.__stdout__:
exit(1)
globalLogBeginner.beginLoggingTo(
[textFileLogObserver(sys.__stdout__)], redirectStandardIO=False
)


@@ -19,13 +19,23 @@
# [This file includes modifications made by New Vector Limited]
#
#
import tempfile
from typing import Callable
import yaml
from parameterized import parameterized
from synapse.config import ConfigError
from synapse.config._base import RootConfig
from synapse.config.homeserver import HomeServerConfig
from tests.config.utils import ConfigFileTestCase
try:
import hiredis
except ImportError:
hiredis = None # type: ignore
class ConfigLoadingFileTestCase(ConfigFileTestCase):
def test_load_fails_if_server_name_missing(self) -> None:
@@ -116,3 +126,49 @@ class ConfigLoadingFileTestCase(ConfigFileTestCase):
self.add_lines_to_config(["trust_identity_server_for_password_resets: true"])
with self.assertRaises(ConfigError):
HomeServerConfig.load_config("", ["-c", self.config_file])
@parameterized.expand(
[
"turn_shared_secret_path: /does/not/exist",
"registration_shared_secret_path: /does/not/exist",
*["redis:\n enabled: true\n password_path: /does/not/exist"]
* (hiredis is not None),
]
)
def test_secret_files_missing(self, config_str: str) -> None:
self.generate_config()
self.add_lines_to_config(["", config_str])
with self.assertRaises(ConfigError):
HomeServerConfig.load_config("", ["-c", self.config_file])
@parameterized.expand(
[
(
"turn_shared_secret_path: {}",
lambda c: c.voip.turn_shared_secret,
),
(
"registration_shared_secret_path: {}",
lambda c: c.registration.registration_shared_secret,
),
*[
(
"redis:\n enabled: true\n password_path: {}",
lambda c: c.redis.redis_password,
)
]
* (hiredis is not None),
]
)
def test_secret_files_existing(
self, config_line: str, get_secret: Callable[[RootConfig], str]
) -> None:
self.generate_config_and_remove_lines_containing("registration_shared_secret")
with tempfile.NamedTemporaryFile(buffering=0) as secret_file:
secret_file.write(b"53C237")
self.add_lines_to_config(["", config_line.format(secret_file.name)])
config = HomeServerConfig.load_config("", ["-c", self.config_file])
self.assertEqual(get_secret(config), "53C237")


@@ -380,7 +380,7 @@ class RoomMemberMasterHandlerTestCase(HomeserverTestCase):
)
def test_forget_when_not_left(self) -> None:
"""Tests that a user cannot not forgets a room that has not left."""
"""Tests that a user cannot forget a room that they are still in."""
self.get_failure(self.handler.forget(self.alice_ID, self.room_id), SynapseError)
def test_nonlocal_room_user_action(self) -> None:


@@ -18,9 +18,10 @@
#
#
import logging
from typing import AbstractSet, Dict, Optional, Tuple
from typing import AbstractSet, Dict, Mapping, Optional, Set, Tuple
from unittest.mock import patch
import attr
from parameterized import parameterized
from twisted.test.proto_helpers import MemoryReactor
@@ -35,15 +36,18 @@ from synapse.handlers.sliding_sync import (
RoomsForUserType,
RoomSyncConfig,
StateValues,
_required_state_changes,
)
from synapse.rest import admin
from synapse.rest.client import knock, login, room
from synapse.server import HomeServer
from synapse.storage.util.id_generators import MultiWriterIdGenerator
from synapse.types import JsonDict, StreamToken, UserID
from synapse.types import JsonDict, StateMap, StreamToken, UserID
from synapse.types.handlers.sliding_sync import SlidingSyncConfig
from synapse.types.state import StateFilter
from synapse.util import Clock
from tests import unittest
from tests.replication._base import BaseMultiWorkerStreamTestCase
from tests.unittest import HomeserverTestCase, TestCase
@@ -3213,3 +3217,689 @@ class SortRoomsTestCase(HomeserverTestCase):
# We only care about the *latest* event in the room.
[room_id1, room_id2],
)
@attr.s(slots=True, auto_attribs=True, frozen=True)
class RequiredStateChangesTestParameters:
previous_required_state_map: Dict[str, Set[str]]
request_required_state_map: Dict[str, Set[str]]
state_deltas: StateMap[str]
expected_with_state_deltas: Tuple[
Optional[Mapping[str, AbstractSet[str]]], StateFilter
]
expected_without_state_deltas: Tuple[
Optional[Mapping[str, AbstractSet[str]]], StateFilter
]
class RequiredStateChangesTestCase(unittest.TestCase):
"""Test cases for `_required_state_changes`"""
@parameterized.expand(
[
(
"simple_no_change",
"""Test no change to required state""",
RequiredStateChangesTestParameters(
previous_required_state_map={"type1": {"state_key"}},
request_required_state_map={"type1": {"state_key"}},
state_deltas={("type1", "state_key"): "$event_id"},
# No changes
expected_with_state_deltas=(None, StateFilter.none()),
expected_without_state_deltas=(None, StateFilter.none()),
),
),
(
"simple_add_type",
"""Test adding a type to the config""",
RequiredStateChangesTestParameters(
previous_required_state_map={"type1": {"state_key"}},
request_required_state_map={
"type1": {"state_key"},
"type2": {"state_key"},
},
state_deltas={("type2", "state_key"): "$event_id"},
expected_with_state_deltas=(
# We've added a type so we should persist the changed required state
# config.
{"type1": {"state_key"}, "type2": {"state_key"}},
# We should see the new type added
StateFilter.from_types([("type2", "state_key")]),
),
expected_without_state_deltas=(
{"type1": {"state_key"}, "type2": {"state_key"}},
StateFilter.from_types([("type2", "state_key")]),
),
),
),
(
"simple_add_type_from_nothing",
"""Test adding a type to the config when previously requesting nothing""",
RequiredStateChangesTestParameters(
previous_required_state_map={},
request_required_state_map={
"type1": {"state_key"},
"type2": {"state_key"},
},
state_deltas={("type2", "state_key"): "$event_id"},
expected_with_state_deltas=(
# We've added a type so we should persist the changed required state
# config.
{"type1": {"state_key"}, "type2": {"state_key"}},
# We should see the new types added
StateFilter.from_types(
[("type1", "state_key"), ("type2", "state_key")]
),
),
expected_without_state_deltas=(
{"type1": {"state_key"}, "type2": {"state_key"}},
StateFilter.from_types(
[("type1", "state_key"), ("type2", "state_key")]
),
),
),
),
(
"simple_add_state_key",
"""Test adding a state key to the config""",
RequiredStateChangesTestParameters(
previous_required_state_map={"type": {"state_key1"}},
request_required_state_map={"type": {"state_key1", "state_key2"}},
state_deltas={("type", "state_key2"): "$event_id"},
expected_with_state_deltas=(
# We've added a key so we should persist the changed required state
# config.
{"type": {"state_key1", "state_key2"}},
# We should see the new state_keys added
StateFilter.from_types([("type", "state_key2")]),
),
expected_without_state_deltas=(
{"type": {"state_key1", "state_key2"}},
StateFilter.from_types([("type", "state_key2")]),
),
),
),
(
"simple_remove_type",
"""
Test that removing a type from the config when there is a matching state
delta *does* cause the persisted required state config to change, and that
removing a type when there are no matching state deltas does *not* cause
the persisted required state config to change
""",
RequiredStateChangesTestParameters(
previous_required_state_map={
"type1": {"state_key"},
"type2": {"state_key"},
},
request_required_state_map={"type1": {"state_key"}},
state_deltas={("type2", "state_key"): "$event_id"},
expected_with_state_deltas=(
# Remove `type2` since there's been a change to that state,
# (persist the change to required state). That way next time,
# they request `type2`, we see that we haven't sent it before
# and send the new state. (we should still keep track that we've
# sent `type1` before).
{"type1": {"state_key"}},
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
expected_without_state_deltas=(
# `type2` is no longer requested but since that state hasn't
# changed, nothing should change (we should still keep track
# that we've sent `type2` before).
None,
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
),
),
(
"simple_remove_type_to_nothing",
"""
Test removing a type from the config and no longer requesting any state
""",
RequiredStateChangesTestParameters(
previous_required_state_map={
"type1": {"state_key"},
"type2": {"state_key"},
},
request_required_state_map={},
state_deltas={("type2", "state_key"): "$event_id"},
expected_with_state_deltas=(
# Remove `type2` since there's been a change to that state,
# (persist the change to required state). That way next time,
# they request `type2`, we see that we haven't sent it before
# and send the new state. (we should still keep track that we've
# sent `type1` before).
{"type1": {"state_key"}},
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
expected_without_state_deltas=(
# `type2` is no longer requested but since that state hasn't
# changed, nothing should change (we should still keep track
# that we've sent `type2` before).
None,
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
),
),
(
"simple_remove_state_key",
"""
Test removing a state_key from the config
""",
RequiredStateChangesTestParameters(
previous_required_state_map={"type": {"state_key1", "state_key2"}},
request_required_state_map={"type": {"state_key1"}},
state_deltas={("type", "state_key2"): "$event_id"},
expected_with_state_deltas=(
# Remove `(type, state_key2)` since there's been a change
# to that state (persist the change to required state).
# That way next time, they request `(type, state_key2)`, we see
# that we haven't sent it before and send the new state. (we
# should still keep track that we've sent `(type, state_key1)`
# before).
{"type": {"state_key1"}},
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
expected_without_state_deltas=(
# `(type, state_key2)` is no longer requested but since that
# state hasn't changed, nothing should change (we should still
# keep track that we've sent `(type, state_key1)` and `(type,
# state_key2)` before).
None,
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
),
),
(
"type_wildcards_add",
"""
Test adding a wildcard type causes the persisted required state config
to change and we request everything.
If an event type wildcard has been added or removed we don't try to do
anything fancy, and instead always update the effective room required
state config to match the request.
""",
RequiredStateChangesTestParameters(
previous_required_state_map={"type1": {"state_key2"}},
request_required_state_map={
"type1": {"state_key2"},
StateValues.WILDCARD: {"state_key"},
},
state_deltas={
("other_type", "state_key"): "$event_id",
},
# We've added a wildcard, so we persist the change and request everything
expected_with_state_deltas=(
{"type1": {"state_key2"}, StateValues.WILDCARD: {"state_key"}},
StateFilter.all(),
),
expected_without_state_deltas=(
{"type1": {"state_key2"}, StateValues.WILDCARD: {"state_key"}},
StateFilter.all(),
),
),
),
(
"type_wildcards_remove",
"""
Test removing a wildcard type causes the persisted required state config
to change and request nothing.
If an event type wildcard has been added or removed we don't try to do
anything fancy, and instead always update the effective room required
state config to match the request.
""",
RequiredStateChangesTestParameters(
previous_required_state_map={
"type1": {"state_key2"},
StateValues.WILDCARD: {"state_key"},
},
request_required_state_map={"type1": {"state_key2"}},
state_deltas={
("other_type", "state_key"): "$event_id",
},
# We've removed a type wildcard, so we persist the change but don't request anything
expected_with_state_deltas=(
{"type1": {"state_key2"}},
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
expected_without_state_deltas=(
{"type1": {"state_key2"}},
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
),
),
(
"state_key_wildcards_add",
"""Test adding a wildcard state_key""",
RequiredStateChangesTestParameters(
previous_required_state_map={"type1": {"state_key"}},
request_required_state_map={
"type1": {"state_key"},
"type2": {StateValues.WILDCARD},
},
state_deltas={("type2", "state_key"): "$event_id"},
# We've added a wildcard state_key, so we persist the change and
# request all of the state for that type
expected_with_state_deltas=(
{"type1": {"state_key"}, "type2": {StateValues.WILDCARD}},
StateFilter.from_types([("type2", None)]),
),
expected_without_state_deltas=(
{"type1": {"state_key"}, "type2": {StateValues.WILDCARD}},
StateFilter.from_types([("type2", None)]),
),
),
),
(
"state_key_wildcards_remove",
"""Test removing a wildcard state_key""",
RequiredStateChangesTestParameters(
previous_required_state_map={
"type1": {"state_key"},
"type2": {StateValues.WILDCARD},
},
request_required_state_map={"type1": {"state_key"}},
state_deltas={("type2", "state_key"): "$event_id"},
# We've removed a state_key wildcard, so we persist the change and
# request nothing
expected_with_state_deltas=(
{"type1": {"state_key"}},
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
# We've removed a state_key wildcard but there have been no matching
# state changes, so no changes are needed; keep the persisted required
# state map as-is.
expected_without_state_deltas=(
None,
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
),
),
(
"state_key_remove_some",
"""
Test that removing state keys works when only some of the state keys have
changed
""",
RequiredStateChangesTestParameters(
previous_required_state_map={
"type1": {"state_key1", "state_key2", "state_key3"}
},
request_required_state_map={"type1": {"state_key1"}},
state_deltas={("type1", "state_key3"): "$event_id"},
expected_with_state_deltas=(
# We've removed some state keys from the type, but only state_key3 was
# changed so only that one should be removed.
{"type1": {"state_key1", "state_key2"}},
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
expected_without_state_deltas=(
# No changes needed; keep the persisted required state map as-is
None,
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
),
),
(
"state_key_me_add",
"""
Test adding state keys work when using "$ME"
""",
RequiredStateChangesTestParameters(
previous_required_state_map={},
request_required_state_map={"type1": {StateValues.ME}},
state_deltas={("type1", "@user:test"): "$event_id"},
expected_with_state_deltas=(
# We've added a type so we should persist the changed required state
# config.
{"type1": {StateValues.ME}},
# We should see the new state_keys added
StateFilter.from_types([("type1", "@user:test")]),
),
expected_without_state_deltas=(
{"type1": {StateValues.ME}},
StateFilter.from_types([("type1", "@user:test")]),
),
),
),
(
"state_key_me_remove",
"""
Test removing state keys work when using "$ME"
""",
RequiredStateChangesTestParameters(
previous_required_state_map={"type1": {StateValues.ME}},
request_required_state_map={},
state_deltas={("type1", "@user:test"): "$event_id"},
expected_with_state_deltas=(
# Remove `type1` since there's been a change to that state,
# (persist the change to required state). That way next time,
# they request `type1`, we see that we haven't sent it before
# and send the new state. (if we were tracking that we sent any
# other state, we should still keep track that).
{},
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
expected_without_state_deltas=(
# `type1` is no longer requested but since that state hasn't
# changed, nothing should change (we should still keep track
# that we've sent `type1` before).
None,
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
),
),
(
"state_key_user_id_add",
"""
Test that adding state keys works when using your own user ID
""",
RequiredStateChangesTestParameters(
previous_required_state_map={},
request_required_state_map={"type1": {"@user:test"}},
state_deltas={("type1", "@user:test"): "$event_id"},
expected_with_state_deltas=(
# We've added a type so we should persist the changed required state
# config.
{"type1": {"@user:test"}},
# We should see the new state_keys added
StateFilter.from_types([("type1", "@user:test")]),
),
expected_without_state_deltas=(
{"type1": {"@user:test"}},
StateFilter.from_types([("type1", "@user:test")]),
),
),
),
(
"state_key_me_remove",
"""
Test that removing state keys works when using your own user ID
""",
RequiredStateChangesTestParameters(
previous_required_state_map={"type1": {"@user:test"}},
request_required_state_map={},
state_deltas={("type1", "@user:test"): "$event_id"},
expected_with_state_deltas=(
# Remove `type1` since there's been a change to that state,
# (persist the change to required state). That way next time,
# they request `type1`, we see that we haven't sent it before
# and send the new state. (if we were tracking that we sent any
# other state, we should still keep track that).
{},
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
expected_without_state_deltas=(
# `type1` is no longer requested but since that state hasn't
# changed, nothing should change (we should still keep track
# that we've sent `type1` before).
None,
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
),
),
(
"state_key_lazy_add",
"""
Test adding state keys work when using "$LAZY"
""",
RequiredStateChangesTestParameters(
previous_required_state_map={},
request_required_state_map={EventTypes.Member: {StateValues.LAZY}},
state_deltas={(EventTypes.Member, "@user:test"): "$event_id"},
expected_with_state_deltas=(
# If a "$LAZY" has been added or removed we always update the
# required state to what was requested for simplicity.
{EventTypes.Member: {StateValues.LAZY}},
StateFilter.none(),
),
expected_without_state_deltas=(
{EventTypes.Member: {StateValues.LAZY}},
StateFilter.none(),
),
),
),
(
"state_key_lazy_remove",
"""
Test removing state keys work when using "$LAZY"
""",
RequiredStateChangesTestParameters(
previous_required_state_map={EventTypes.Member: {StateValues.LAZY}},
request_required_state_map={},
state_deltas={(EventTypes.Member, "@user:test"): "$event_id"},
expected_with_state_deltas=(
# If a "$LAZY" has been added or removed we always update the
# required state to what was requested for simplicity.
{},
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
expected_without_state_deltas=(
# `EventTypes.Member` is no longer requested but since that
# state hasn't changed, nothing should change (we should still
# keep track that we've sent `EventTypes.Member` before).
None,
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
),
),
(
"type_wildcard_with_state_key_wildcard_to_explicit_state_keys",
"""
Test switching from a wildcard ("*", "*") to explicit state keys
""",
RequiredStateChangesTestParameters(
previous_required_state_map={
StateValues.WILDCARD: {StateValues.WILDCARD}
},
request_required_state_map={
StateValues.WILDCARD: {"state_key1", "state_key2", "state_key3"}
},
state_deltas={("type1", "state_key1"): "$event_id"},
# If we were previously fetching everything ("*", "*"), always update the effective
# room required state config to match the request. And since we we're previously
# already fetching everything, we don't have to fetch anything now that they've
# narrowed.
expected_with_state_deltas=(
{
StateValues.WILDCARD: {
"state_key1",
"state_key2",
"state_key3",
}
},
StateFilter.none(),
),
expected_without_state_deltas=(
{
StateValues.WILDCARD: {
"state_key1",
"state_key2",
"state_key3",
}
},
StateFilter.none(),
),
),
),
(
"type_wildcard_with_explicit_state_keys_to_wildcard_state_key",
"""
Test switching from explicit to wildcard state keys ("*", "*")
""",
RequiredStateChangesTestParameters(
previous_required_state_map={
StateValues.WILDCARD: {"state_key1", "state_key2", "state_key3"}
},
request_required_state_map={
StateValues.WILDCARD: {StateValues.WILDCARD}
},
state_deltas={("type1", "state_key1"): "$event_id"},
# We've added a wildcard, so we persist the change and request everything
expected_with_state_deltas=(
{StateValues.WILDCARD: {StateValues.WILDCARD}},
StateFilter.all(),
),
expected_without_state_deltas=(
{StateValues.WILDCARD: {StateValues.WILDCARD}},
StateFilter.all(),
),
),
),
(
"state_key_wildcard_to_explicit_state_keys",
"""Test switching from a wildcard to explicit state keys with a concrete type""",
RequiredStateChangesTestParameters(
previous_required_state_map={"type1": {StateValues.WILDCARD}},
request_required_state_map={
"type1": {"state_key1", "state_key2", "state_key3"}
},
state_deltas={("type1", "state_key1"): "$event_id"},
# If a state_key wildcard has been added or removed, we always
# update the effective room required state config to match the
# request. And since we were previously already fetching
# everything, we don't have to fetch anything now that they've
# narrowed.
expected_with_state_deltas=(
{
"type1": {
"state_key1",
"state_key2",
"state_key3",
}
},
StateFilter.none(),
),
expected_without_state_deltas=(
{
"type1": {
"state_key1",
"state_key2",
"state_key3",
}
},
StateFilter.none(),
),
),
),
(
"state_key_wildcard_to_explicit_state_keys",
"""Test switching from a wildcard to explicit state keys with a concrete type""",
RequiredStateChangesTestParameters(
previous_required_state_map={
"type1": {"state_key1", "state_key2", "state_key3"}
},
request_required_state_map={"type1": {StateValues.WILDCARD}},
state_deltas={("type1", "state_key1"): "$event_id"},
# If a state_key wildcard has been added or removed, we always
# update the effective room required state config to match the
# request. And we need to request all of the state for that type
# because we previously only sent down a few keys.
expected_with_state_deltas=(
{"type1": {StateValues.WILDCARD}},
StateFilter.from_types([("type1", None)]),
),
expected_without_state_deltas=(
{"type1": {StateValues.WILDCARD}},
StateFilter.from_types([("type1", None)]),
),
),
),
]
)
def test_xxx(
self,
_test_label: str,
_test_description: str,
test_parameters: RequiredStateChangesTestParameters,
) -> None:
# Without `state_deltas`
changed_required_state_map, added_state_filter = _required_state_changes(
user_id="@user:test",
previous_room_config=RoomSyncConfig(
timeline_limit=0,
required_state_map=test_parameters.previous_required_state_map,
),
room_sync_config=RoomSyncConfig(
timeline_limit=0,
required_state_map=test_parameters.request_required_state_map,
),
state_deltas={},
)
self.assertEqual(
changed_required_state_map,
test_parameters.expected_without_state_deltas[0],
"changed_required_state_map does not match (without state_deltas)",
)
self.assertEqual(
added_state_filter,
test_parameters.expected_without_state_deltas[1],
"added_state_filter does not match (without state_deltas)",
)
# With `state_deltas`
changed_required_state_map, added_state_filter = _required_state_changes(
user_id="@user:test",
previous_room_config=RoomSyncConfig(
timeline_limit=0,
required_state_map=test_parameters.previous_required_state_map,
),
room_sync_config=RoomSyncConfig(
timeline_limit=0,
required_state_map=test_parameters.request_required_state_map,
),
state_deltas=test_parameters.state_deltas,
)
self.assertEqual(
changed_required_state_map,
test_parameters.expected_with_state_deltas[0],
"changed_required_state_map does not match (with state_deltas)",
)
self.assertEqual(
added_state_filter,
test_parameters.expected_with_state_deltas[1],
"added_state_filter does not match (with state_deltas)",
)
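
The table of cases above is dense, so here is an illustrative, self-contained sketch of the non-wildcard rules it encodes. This is not Synapse's actual `_required_state_changes` (which returns a `StateFilter` and also handles `$ME`, `$LAZY` and wildcards); the function name and the plain-set return type are mine.

from typing import Dict, Optional, Set, Tuple

StateKey = Tuple[str, str]  # (event type, state key)


def required_state_changes_sketch(
    previous: Dict[str, Set[str]],
    request: Dict[str, Set[str]],
    state_deltas: Set[StateKey],
) -> Tuple[Optional[Dict[str, Set[str]]], Set[StateKey]]:
    # Anything newly requested must be fetched (and recorded as sent).
    added = {
        (typ, sk)
        for typ, sks in request.items()
        for sk in sks - previous.get(typ, set())
    }
    # Start from the union: we keep tracking state we've already sent.
    new_map = {
        typ: previous.get(typ, set()) | request.get(typ, set())
        for typ in previous.keys() | request.keys()
    }
    # Entries that were retracted *and* have since changed are dropped, so
    # that re-requesting them later sends the new state again.
    changed = False
    for typ, sk in state_deltas:
        if sk in new_map.get(typ, set()) and sk not in request.get(typ, set()):
            new_map[typ].discard(sk)
            if not new_map[typ]:
                del new_map[typ]
            changed = True
    if not added and not changed:
        # Nothing to persist and nothing extra to fetch.
        return None, added
    return new_map, added

For example, in the `simple_remove_type` case above: with a delta on `("type2", "state_key")` the sketch drops `type2` from the persisted map; without one, it returns `None`, i.e. keep tracking everything already sent.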


@@ -903,12 +903,18 @@ class FederationClientProxyTests(BaseMultiWorkerStreamTestCase):
headers=Headers(
{
"Content-Type": ["application/json"],
"Connection": ["close, X-Foo, X-Bar"],
# Define some hop-by-hop headers (try with varying casing to
# make sure we still match-up the headers)
"Connection": ["close, X-fOo, X-Bar", "X-baz"],
# Should be removed because it's defined in the `Connection` header
"X-Foo": ["foo"],
"X-Bar": ["bar"],
# (not in canonical case)
"x-baZ": ["baz"],
# Should be removed because it's a hop-by-hop header
"Proxy-Authorization": "abcdef",
# Should be removed because it's a hop-by-hop header (not in canonical case)
"transfer-EnCoDiNg": "abcdef",
}
),
)
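
What the header soup above exercises: headers named in the `Connection` header, plus the fixed hop-by-hop set from RFC 9112 (`Proxy-Authorization`, `Transfer-Encoding`, ...), must be stripped by the proxy, and the matching has to be case-insensitive. A minimal sketch of that filtering, assuming a plain dict mapping header names to lists of values rather than Synapse's actual internal helper:

# Fixed hop-by-hop headers from RFC 9112, lowercased for comparison.
HOP_BY_HOP_HEADERS = {
    "connection", "keep-alive", "proxy-authenticate", "proxy-authorization",
    "te", "trailers", "transfer-encoding", "upgrade",
}


def strip_hop_by_hop(headers: dict) -> dict:
    """Drop hop-by-hop headers, matching names case-insensitively."""
    # Headers listed in any `Connection` value are also hop-by-hop.
    listed = {
        token.strip().lower()
        for name, values in headers.items()
        if name.lower() == "connection"
        for value in values
        for token in value.split(",")
    }
    to_remove = HOP_BY_HOP_HEADERS | listed
    return {
        name: values
        for name, values in headers.items()
        if name.lower() not in to_remove
    }

The lowercased parse of the `Connection` value here is exactly what the `ProxyTests` table in the next file asserts.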


@@ -30,19 +30,19 @@ from tests.unittest import TestCase
class ProxyTests(TestCase):
@parameterized.expand(
[
[b"close, X-Foo, X-Bar", {"Close", "X-Foo", "X-Bar"}],
[b"close, X-Foo, X-Bar", {"close", "x-foo", "x-bar"}],
# No whitespace
[b"close,X-Foo,X-Bar", {"Close", "X-Foo", "X-Bar"}],
[b"close,X-Foo,X-Bar", {"close", "x-foo", "x-bar"}],
# More whitespace
[b"close, X-Foo, X-Bar", {"Close", "X-Foo", "X-Bar"}],
[b"close, X-Foo, X-Bar", {"close", "x-foo", "x-bar"}],
# "close" directive in not the first position
[b"X-Foo, X-Bar, close", {"X-Foo", "X-Bar", "Close"}],
[b"X-Foo, X-Bar, close", {"x-foo", "x-bar", "close"}],
# Normalizes header capitalization
[b"keep-alive, x-fOo, x-bAr", {"Keep-Alive", "X-Foo", "X-Bar"}],
[b"keep-alive, x-fOo, x-bAr", {"keep-alive", "x-foo", "x-bar"}],
# Handles header names with whitespace
[
b"keep-alive, x foo, x bar",
{"Keep-Alive", "X foo", "X bar"},
{"keep-alive", "x foo", "x bar"},
],
]
)


@@ -60,7 +60,7 @@ from synapse.util import Clock
from tests import unittest
from tests.server import FakeChannel
from tests.test_utils import SMALL_CMYK_JPEG, SMALL_PNG
from tests.unittest import override_config
from tests.utils import default_config
@@ -187,6 +187,68 @@ small_png_with_transparency = TestImage(
# different versions of Pillow.
)
small_cmyk_jpeg = TestImage(
SMALL_CMYK_JPEG,
b"image/jpeg",
b".jpeg",
# These values were sourced simply by looking at what the tests produced at
# the time of writing. If this changes, the tests will fail.
unhexlify(
b"ffd8ffe000104a46494600010100000100010000ffdb00430006"
b"040506050406060506070706080a100a0a09090a140e0f0c1017"
b"141818171416161a1d251f1a1b231c1616202c20232627292a29"
b"191f2d302d283025282928ffdb0043010707070a080a130a0a13"
b"281a161a28282828282828282828282828282828282828282828"
b"2828282828282828282828282828282828282828282828282828"
b"2828ffc00011080020002003012200021101031101ffc4001f00"
b"0001050101010101010000000000000000010203040506070809"
b"0a0bffc400b5100002010303020403050504040000017d010203"
b"00041105122131410613516107227114328191a1082342b1c115"
b"52d1f02433627282090a161718191a25262728292a3435363738"
b"393a434445464748494a535455565758595a636465666768696a"
b"737475767778797a838485868788898a92939495969798999aa2"
b"a3a4a5a6a7a8a9aab2b3b4b5b6b7b8b9bac2c3c4c5c6c7c8c9ca"
b"d2d3d4d5d6d7d8d9dae1e2e3e4e5e6e7e8e9eaf1f2f3f4f5f6f7"
b"f8f9faffc4001f01000301010101010101010100000000000001"
b"02030405060708090a0bffc400b5110002010204040304070504"
b"0400010277000102031104052131061241510761711322328108"
b"144291a1b1c109233352f0156272d10a162434e125f11718191a"
b"262728292a35363738393a434445464748494a53545556575859"
b"5a636465666768696a737475767778797a82838485868788898a"
b"92939495969798999aa2a3a4a5a6a7a8a9aab2b3b4b5b6b7b8b9"
b"bac2c3c4c5c6c7c8c9cad2d3d4d5d6d7d8d9dae2e3e4e5e6e7e8"
b"e9eaf2f3f4f5f6f7f8f9faffda000c03010002110311003f00fa"
b"a68a28a0028a28a0028a28a0028a28a00fffd9"
),
unhexlify(
b"ffd8ffe000104a46494600010100000100010000ffdb00430006"
b"040506050406060506070706080a100a0a09090a140e0f0c1017"
b"141818171416161a1d251f1a1b231c1616202c20232627292a29"
b"191f2d302d283025282928ffdb0043010707070a080a130a0a13"
b"281a161a28282828282828282828282828282828282828282828"
b"2828282828282828282828282828282828282828282828282828"
b"2828ffc00011080001000103012200021101031101ffc4001f00"
b"0001050101010101010000000000000000010203040506070809"
b"0a0bffc400b5100002010303020403050504040000017d010203"
b"00041105122131410613516107227114328191a1082342b1c115"
b"52d1f02433627282090a161718191a25262728292a3435363738"
b"393a434445464748494a535455565758595a636465666768696a"
b"737475767778797a838485868788898a92939495969798999aa2"
b"a3a4a5a6a7a8a9aab2b3b4b5b6b7b8b9bac2c3c4c5c6c7c8c9ca"
b"d2d3d4d5d6d7d8d9dae1e2e3e4e5e6e7e8e9eaf1f2f3f4f5f6f7"
b"f8f9faffc4001f01000301010101010101010100000000000001"
b"02030405060708090a0bffc400b5110002010204040304070504"
b"0400010277000102031104052131061241510761711322328108"
b"144291a1b1c109233352f0156272d10a162434e125f11718191a"
b"262728292a35363738393a434445464748494a53545556575859"
b"5a636465666768696a737475767778797a82838485868788898a"
b"92939495969798999aa2a3a4a5a6a7a8a9aab2b3b4b5b6b7b8b9"
b"bac2c3c4c5c6c7c8c9cad2d3d4d5d6d7d8d9dae2e3e4e5e6e7e8"
b"e9eaf2f3f4f5f6f7f8f9faffda000c03010002110311003f00fa"
b"a68a28a00fffd9"
),
)
small_lossless_webp = TestImage(
unhexlify(
b"524946461a000000574542505650384c0d0000002f0000001007" b"1011118888fe0700"


@@ -20,6 +20,7 @@
#
import json
from typing import Any
from parameterized import parameterized
@@ -52,6 +53,7 @@ class OEmbedTests(HomeserverTestCase):
def test_version(self) -> None:
"""Accept versions that are similar to 1.0 as a string or int (or missing)."""
version: Any
for version in ("1.0", 1.0, 1):
result = self.parse_response({"version": version})
# An empty Open Graph response is an error, ensure the URL is included.
@@ -69,6 +71,7 @@ class OEmbedTests(HomeserverTestCase):
def test_cache_age(self) -> None:
"""Ensure a cache-age is parsed properly."""
cache_age: Any
# Correct-ish cache ages are allowed.
for cache_age in ("1", 1.0, 1):
result = self.parse_response({"cache_age": cache_age})


@@ -149,6 +149,7 @@ class PushRuleEvaluatorTestCase(unittest.TestCase):
content: JsonMapping,
*,
related_events: Optional[JsonDict] = None,
msc4210: bool = False,
) -> PushRuleEvaluator:
event = FrozenEvent(
{
@@ -174,6 +175,7 @@ class PushRuleEvaluatorTestCase(unittest.TestCase):
related_event_match_enabled=True,
room_version_feature_flags=event.room_version.msc3931_push_features,
msc3931_enabled=True,
msc4210_enabled=msc4210,
)
def test_display_name(self) -> None:
@@ -452,6 +454,7 @@ class PushRuleEvaluatorTestCase(unittest.TestCase):
{"value": False},
"incorrect values should not match",
)
value: Any
for value in ("foobaz", 1, 1.1, None, [], {}):
self._assert_not_matches(
condition,
@@ -492,6 +495,7 @@ class PushRuleEvaluatorTestCase(unittest.TestCase):
{"value": None},
"exact value should match",
)
value: Any
for value in ("foobaz", True, False, 1, 1.1, [], {}):
self._assert_not_matches(
condition,
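
The loops above encode a subtle requirement: the exact-value-match conditions are type-strict, so `False` must not match `1`, and `None` matches only `None`. Python's `==` treats `True == 1`, so a naive comparison would over-match. An illustrative predicate (the real evaluator lives in Synapse's Rust module):

from typing import Any


def exact_value_matches(expected: Any, actual: Any) -> bool:
    """Type-strict equality: unlike plain `==`, False != 0 and True != 1."""
    # bools are a subclass of int in Python, so compare bool-ness first.
    if isinstance(expected, bool) != isinstance(actual, bool):
        return False
    return expected == actual

The `value: Any` annotations added in this diff presumably exist so mypy accepts the heterogeneous loop variables.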


@@ -23,6 +23,7 @@ import hashlib
import hmac
import json
import os
import time
import urllib.parse
from binascii import unhexlify
from http import HTTPStatus
@@ -56,6 +57,7 @@ from synapse.types import JsonDict, UserID, create_requester
from synapse.util import Clock
from tests import unittest
from tests.replication._base import BaseMultiWorkerStreamTestCase
from tests.test_utils import SMALL_PNG
from tests.unittest import override_config
@@ -5127,7 +5129,6 @@ class UserRedactionTestCase(unittest.HomeserverTestCase):
"""
Test that request to redact events in all rooms user is member of is successful
"""
# join rooms, send some messages
originals = []
for rm in [self.rm1, self.rm2, self.rm3]:
@@ -5288,19 +5289,26 @@ class UserRedactionTestCase(unittest.HomeserverTestCase):
self.assertEqual(len(matched), len(rm2_originals))
def test_admin_redact_works_if_user_kicked_or_banned(self) -> None:
originals1 = []
originals2 = []
for rm in [self.rm1, self.rm2, self.rm3]:
join = self.helper.join(rm, self.bad_user, tok=self.bad_user_tok)
originals.append(join["event_id"])
if rm in [self.rm1, self.rm3]:
originals1.append(join["event_id"])
else:
originals2.append(join["event_id"])
for i in range(5):
event = {"body": f"hello{i}", "msgtype": "m.text"}
res = self.helper.send_event(
rm, "m.room.message", event, tok=self.bad_user_tok
)
originals.append(res["event_id"])
if rm in [self.rm1, self.rm3]:
originals1.append(res["event_id"])
else:
originals2.append(res["event_id"])
# kick user from rooms 1 and 3
for r in [self.rm1, self.rm3]:
channel = self.make_request(
"POST",
f"/_matrix/client/r0/rooms/{r}/kick",
@@ -5330,32 +5338,165 @@ class UserRedactionTestCase(unittest.HomeserverTestCase):
failed_redactions = channel2.json_body.get("failed_redactions")
self.assertEqual(failed_redactions, {})
# double check
for rm in [self.rm1, self.rm3]:
filter = json.dumps({"types": [EventTypes.Redaction]})
channel3 = self.make_request(
"GET",
f"rooms/{rm}/messages?filter={filter}&limit=50",
access_token=self.admin_tok,
)
self.assertEqual(channel3.code, 200)
matches = []
for event in channel3.json_body["chunk"]:
for event_id in originals1:
if (
event["type"] == "m.room.redaction"
and event["redacts"] == event_id
):
matches.append((event_id, event))
# we expect 6 redaction events per room (the join plus 5 messages)
self.assertEqual(len(matches), 6)
# ban user from room 2
channel4 = self.make_request(
"POST",
f"/_matrix/client/r0/rooms/{self.rm2}/ban",
content={"reason": "being a bummer", "user_id": self.bad_user},
access_token=self.admin_tok,
)
self.assertEqual(channel4.code, HTTPStatus.OK, channel4.result)
# make a request to redact all of the user's messages
channel5 = self.make_request(
"POST",
f"/_synapse/admin/v1/user/{self.bad_user}/redact",
content={"rooms": [self.rm2]},
content={"rooms": []},
access_token=self.admin_tok,
)
self.assertEqual(channel5.code, 200)
id2 = channel5.json_body.get("redact_id")
# check that there were no failed redactions
channel6 = self.make_request(
"GET",
f"/_synapse/admin/v1/user/redact_status/{id2}",
access_token=self.admin_tok,
)
self.assertEqual(channel6.code, 200)
self.assertEqual(channel6.json_body.get("status"), "complete")
failed_redactions = channel6.json_body.get("failed_redactions")
self.assertEqual(failed_redactions, {})
# double check messages in room 2 were redacted
filter = json.dumps({"types": [EventTypes.Redaction]})
channel7 = self.make_request(
"GET",
f"rooms/{self.rm2}/messages?filter={filter}&limit=50",
access_token=self.admin_tok,
)
self.assertEqual(channel7.code, 200)
matches = []
for event in channel7.json_body["chunk"]:
for event_id in originals2:
if event["type"] == "m.room.redaction" and event["redacts"] == event_id:
matches.append((event_id, event))
# we expect 6 redaction events (the join plus 5 messages)
self.assertEqual(len(matches), 6)
class UserRedactionBackgroundTaskTestCase(BaseMultiWorkerStreamTestCase):
servlets = [
synapse.rest.admin.register_servlets,
login.register_servlets,
admin.register_servlets,
room.register_servlets,
sync.register_servlets,
]
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.admin = self.register_user("thomas", "pass", True)
self.admin_tok = self.login("thomas", "pass")
self.bad_user = self.register_user("teresa", "pass")
self.bad_user_tok = self.login("teresa", "pass")
# create rooms - room versions 11+ store the `redacts` key in content while
# earlier ones don't, so we use a mix of room versions
self.rm1 = self.helper.create_room_as(
self.admin, tok=self.admin_tok, room_version="7"
)
self.rm2 = self.helper.create_room_as(self.admin, tok=self.admin_tok)
self.rm3 = self.helper.create_room_as(
self.admin, tok=self.admin_tok, room_version="11"
)
@override_config({"run_background_tasks_on": "worker1"})
def test_redact_messages_all_rooms(self) -> None:
"""
Test that redact task successfully runs when `run_background_tasks_on` is specified
"""
self.make_worker_hs(
"synapse.app.generic_worker",
extra_config={
"worker_name": "worker1",
"run_background_tasks_on": "worker1",
"redis": {"enabled": True},
},
)
# join rooms, send some messages
original_event_ids = set()
for rm in [self.rm1, self.rm2, self.rm3]:
join = self.helper.join(rm, self.bad_user, tok=self.bad_user_tok)
original_event_ids.add(join["event_id"])
for i in range(15):
event = {"body": f"hello{i}", "msgtype": "m.text"}
res = self.helper.send_event(
rm, "m.room.message", event, tok=self.bad_user_tok, expect_code=200
)
original_event_ids.add(res["event_id"])
# redact all events in all rooms
channel = self.make_request(
"POST",
f"/_synapse/admin/v1/user/{self.bad_user}/redact",
content={"rooms": []},
access_token=self.admin_tok,
)
self.assertEqual(channel.code, 200)
id = channel.json_body.get("redact_id")
timeout_s = 10
start_time = time.time()
redact_result = ""
while redact_result != "complete":
if start_time + timeout_s < time.time():
self.fail("Timed out waiting for redactions.")
channel2 = self.make_request(
"GET",
f"/_synapse/admin/v1/user/redact_status/{id}",
access_token=self.admin_tok,
)
redact_result = channel2.json_body["status"]
if redact_result == "failed":
self.fail("Redaction task failed.")
redaction_ids = set()
for rm in [self.rm1, self.rm2, self.rm3]:
filter = json.dumps({"types": [EventTypes.Redaction]})
channel = self.make_request(
"GET",
f"rooms/{rm}/messages?filter={filter}&limit=50",
access_token=self.admin_tok,
)
self.assertEqual(channel.code, 200)
for event in channel.json_body["chunk"]:
if event["type"] == "m.room.redaction":
redaction_ids.add(event["redacts"])
self.assertIncludes(redaction_ids, original_event_ids, exact=True)
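
The `prepare` above mixes room versions deliberately: in room versions 11 and later the `redacts` event ID moves into the event's `content`, while earlier versions carry it as a top-level key. A version-agnostic reader, sketched against the Matrix event schema (the helper name is mine):

from typing import Any, Mapping, Optional


def get_redacts(event: Mapping[str, Any]) -> Optional[str]:
    """Return the redacted event ID, wherever the room version put it."""
    # Room versions 11+ store `redacts` in content; older versions store
    # it at the top level of the event.
    return event.get("content", {}).get("redacts") or event.get("redacts")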


@@ -1096,6 +1096,92 @@ class SlidingSyncRoomsMetaTestCase(SlidingSyncBase):
self.assertGreater(response_body["rooms"][room_id]["bump_stamp"], 0)
def test_rooms_bump_stamp_no_change_incremental(self) -> None:
"""Test that the bump stamp is omitted if there has been no change"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
room_id1 = self.helper.create_room_as(
user1_id,
tok=user1_tok,
)
# Make the Sliding Sync request
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [],
"timeline_limit": 100,
}
}
}
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
# Initial sync so we expect to see a bump stamp
self.assertIn("bump_stamp", response_body["rooms"][room_id1])
# Send an event that is not in the bump events list
self.helper.send_event(
room_id1, type="org.matrix.test", content={}, tok=user1_tok
)
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
# There hasn't been a change to the bump stamp, so it is omitted
self.assertNotIn("bump_stamp", response_body["rooms"][room_id1])
def test_rooms_bump_stamp_change_incremental(self) -> None:
"""Test that the bump stamp is included if there has been a change, even
if its not in the timeline"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
room_id1 = self.helper.create_room_as(
user1_id,
tok=user1_tok,
)
# Make the Sliding Sync request
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [],
"timeline_limit": 2,
}
}
}
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
# Initial sync so we expect to see a bump stamp
self.assertIn("bump_stamp", response_body["rooms"][room_id1])
first_bump_stamp = response_body["rooms"][room_id1]["bump_stamp"]
# Send a bump event at the start.
self.helper.send(room_id1, "test", tok=user1_tok)
# Send events that are not in the bump events list to fill the timeline
for _ in range(5):
self.helper.send_event(
room_id1, type="org.matrix.test", content={}, tok=user1_tok
)
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
# There was a bump event in the timeline gap, so we should see the bump
# stamp be updated.
self.assertIn("bump_stamp", response_body["rooms"][room_id1])
second_bump_stamp = response_body["rooms"][room_id1]["bump_stamp"]
self.assertGreater(second_bump_stamp, first_bump_stamp)
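
Taken together, these two tests make `bump_stamp` a delta field from the client's point of view: absent on an incremental sync means unchanged. A hypothetical client-side cache update:

def apply_bump_stamp(cache: dict, room_id: str, room: dict) -> int:
    """Fold one sliding sync room response into a cached bump stamp."""
    if "bump_stamp" in room:
        cache[room_id] = room["bump_stamp"]  # changed since the last sync
    # Absent means unchanged: keep whatever we already had.
    return cache[room_id]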
def test_rooms_bump_stamp_invites(self) -> None:
"""
Test that `bump_stamp` is present and points to the membership event,


@@ -862,3 +862,264 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
exact=True,
message=f"Expected only fully-stated rooms to show up for test_key={list_key}.",
)
def test_rooms_required_state_expand(self) -> None:
"""Test that when we expand the required state argument we get the
expanded state, and not just the changes to the new expanded."""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
# Create a room with a room name.
room_id1 = self.helper.create_room_as(
user1_id, tok=user1_tok, extra_content={"name": "Foo"}
)
# Only request the state event to begin with
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
],
"timeline_limit": 1,
}
}
}
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)
self._assertRequiredStateIncludes(
response_body["rooms"][room_id1]["required_state"],
{
state_map[(EventTypes.Create, "")],
},
exact=True,
)
# Send a message so the room comes down sync.
self.helper.send(room_id1, "msg", tok=user1_tok)
# Update the sliding sync requests to include the room name
sync_body["lists"]["foo-list"]["required_state"] = [
[EventTypes.Create, ""],
[EventTypes.Name, ""],
]
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
# We should see the room name, even though there haven't been any
# changes.
self._assertRequiredStateIncludes(
response_body["rooms"][room_id1]["required_state"],
{
state_map[(EventTypes.Name, "")],
},
exact=True,
)
# Send a message so the room comes down sync.
self.helper.send(room_id1, "msg", tok=user1_tok)
# We should not see any state changes.
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
self.assertIsNone(response_body["rooms"][room_id1].get("required_state"))
def test_rooms_required_state_expand_retract_expand(self) -> None:
"""Test that when expanding, retracting and then expanding the required
state, we get the changes that happened."""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
# Create a room with a room name.
room_id1 = self.helper.create_room_as(
user1_id, tok=user1_tok, extra_content={"name": "Foo"}
)
# Only request the state event to begin with
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
],
"timeline_limit": 1,
}
}
}
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)
self._assertRequiredStateIncludes(
response_body["rooms"][room_id1]["required_state"],
{
state_map[(EventTypes.Create, "")],
},
exact=True,
)
# Send a message so the room comes down sync.
self.helper.send(room_id1, "msg", tok=user1_tok)
# Update the sliding sync requests to include the room name
sync_body["lists"]["foo-list"]["required_state"] = [
[EventTypes.Create, ""],
[EventTypes.Name, ""],
]
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
# We should see the room name, even though there haven't been any
# changes.
self._assertRequiredStateIncludes(
response_body["rooms"][room_id1]["required_state"],
{
state_map[(EventTypes.Name, "")],
},
exact=True,
)
# Update the room name
self.helper.send_state(
room_id1, "m.room.name", {"name": "Bar"}, state_key="", tok=user1_tok
)
# Update the sliding sync requests to exclude the room name again
sync_body["lists"]["foo-list"]["required_state"] = [
[EventTypes.Create, ""],
]
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
# We should not see the updated room name in state (though it will be in
# the timeline).
self.assertIsNone(response_body["rooms"][room_id1].get("required_state"))
# Send a message so the room comes down sync.
self.helper.send(room_id1, "msg", tok=user1_tok)
# Update the sliding sync requests to include the room name again
sync_body["lists"]["foo-list"]["required_state"] = [
[EventTypes.Create, ""],
[EventTypes.Name, ""],
]
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
# We should see the *new* room name, even though there haven't been any
# changes.
state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)
self._assertRequiredStateIncludes(
response_body["rooms"][room_id1]["required_state"],
{
state_map[(EventTypes.Name, "")],
},
exact=True,
)
def test_rooms_required_state_expand_deduplicate(self) -> None:
"""Test that when expanding, retracting and then expanding the required
state, we don't get the state down again if it hasn't changed"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
# Create a room with a room name.
room_id1 = self.helper.create_room_as(
user1_id, tok=user1_tok, extra_content={"name": "Foo"}
)
# Only request the state event to begin with
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
],
"timeline_limit": 1,
}
}
}
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)
self._assertRequiredStateIncludes(
response_body["rooms"][room_id1]["required_state"],
{
state_map[(EventTypes.Create, "")],
},
exact=True,
)
# Send a message so the room comes down sync.
self.helper.send(room_id1, "msg", tok=user1_tok)
# Update the sliding sync requests to include the room name
sync_body["lists"]["foo-list"]["required_state"] = [
[EventTypes.Create, ""],
[EventTypes.Name, ""],
]
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
# We should see the room name, even though there haven't been any
# changes.
self._assertRequiredStateIncludes(
response_body["rooms"][room_id1]["required_state"],
{
state_map[(EventTypes.Name, "")],
},
exact=True,
)
# Send a message so the room comes down sync.
self.helper.send(room_id1, "msg", tok=user1_tok)
# Update the sliding sync requests to exclude the room name again
sync_body["lists"]["foo-list"]["required_state"] = [
[EventTypes.Create, ""],
]
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
# We should not see any state updates
self.assertIsNone(response_body["rooms"][room_id1].get("required_state"))
# Send a message so the room comes down sync.
self.helper.send(room_id1, "msg", tok=user1_tok)
# Update the sliding sync requests to include the room name again
sync_body["lists"]["foo-list"]["required_state"] = [
[EventTypes.Create, ""],
[EventTypes.Name, ""],
]
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
# We should not see the room name again, as we have already sent that
# down.
self.assertIsNone(response_body["rooms"][room_id1].get("required_state"))
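
Together these three tests pin down the dedupe rule for `required_state`: expanding the filter sends the current value of newly requested entries once; retracting sends nothing; re-expanding resends only entries that changed while retracted. A toy model of the server-side bookkeeping (the names are mine, not the handler's):

from typing import Dict, Set, Tuple

StateKey = Tuple[str, str]  # (event type, state key)


def state_to_send(
    requested: Set[StateKey],
    already_sent: Dict[StateKey, str],
    current_state: Dict[StateKey, str],
) -> Dict[StateKey, str]:
    """Return the state events to put in `required_state` this sync."""
    to_send = {}
    for key in requested:
        event_id = current_state.get(key)
        if event_id is not None and already_sent.get(key) != event_id:
            to_send[key] = event_id  # newly requested, or changed meanwhile
            already_sent[key] = event_id  # record what we sent
    return to_send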

Some files were not shown because too many files have changed in this diff.