Compare commits


21 Commits

Author SHA1 Message Date
copilot-swe-agent[bot]
0f6c30aefd Add test to verify limit parameter defaults to 100 for thread subscriptions extension
Co-authored-by: reivilibre <38398653+reivilibre@users.noreply.github.com>
2025-12-01 18:03:41 +00:00
copilot-swe-agent[bot]
918b4fba17 Initial plan 2025-12-01 17:52:16 +00:00
dependabot[bot]
3cf21bc649 Bump rpds-py from 0.29.0 to 0.30.0 (#19247) 2025-12-01 16:55:36 +00:00
dependabot[bot]
e0e7a44fe9 Bump pyopenssl from 25.1.0 to 25.3.0 (#19248) 2025-12-01 16:55:16 +00:00
dependabot[bot]
c09298eeaf Bump pydantic from 2.12.4 to 2.12.5 (#19250)
Bumps [pydantic](https://github.com/pydantic/pydantic) from 2.12.4 to 2.12.5.

**Release notes** (sourced from [pydantic's releases](https://github.com/pydantic/pydantic/releases)):

## v2.12.5 (2025-11-26)

This is the fifth 2.12 patch release, addressing an issue with the `MISSING` sentinel and providing several documentation improvements.

The next 2.13 minor release will be published in a couple of weeks, and will include a new *polymorphic serialization* feature addressing the remaining unexpected changes to the *serialize as any* behavior.

- Fix pickle error when using `model_construct()` on a model with `MISSING` as a default value by [@ornariece](https://github.com/ornariece) in [#12522](https://redirect.github.com/pydantic/pydantic/pull/12522).
- Several updates to the documentation by [@Viicos](https://github.com/Viicos).

**Full Changelog**: https://github.com/pydantic/pydantic/compare/v2.12.4...v2.12.5

The entry in [pydantic's changelog](https://github.com/pydantic/pydantic/blob/main/HISTORY.md) ([GitHub release](https://github.com/pydantic/pydantic/releases/tag/v2.12.5)) repeats the release notes above verbatim.

**Commits**:
- `bd2d0dd` Prepare release v2.12.5
- `7d0302e` Document security implications when using `create_model()`
- `e9ef980` Fix typo in Standard Library Types documentation
- `f2c20c0` Add `pydantic-docs` dev dependency, make use of versioning blocks
- `a76c1aa` Update documentation about JSON Schema
- `8cbc72c` Add documentation about custom `__init__()`
- `99eba59` Add additional test for `FieldInfo.get_default()`
- `c710769` Special case `MISSING` sentinel in `smart_deepcopy()`
- `20a9d77` Do not delete mock validator/serializer in `rebuild_dataclass()`
- `c86515a` Update parts of the model and `revalidate_instances` documentation
- See full diff in the [compare view](https://github.com/pydantic/pydantic/compare/v2.12.4...v2.12.5)

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pydantic&package-manager=pip&previous-version=2.12.4&new-version=2.12.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

---

**Dependabot commands and options** (trigger by commenting on this PR):
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 16:45:41 +00:00
dependabot[bot]
38588f9462 Bump Swatinem/rust-cache from 2.8.1 to 2.8.2 (#19244)
Bumps [Swatinem/rust-cache](https://github.com/swatinem/rust-cache) from 2.8.1 to 2.8.2.

**Release notes** (sourced from [Swatinem/rust-cache's releases](https://github.com/swatinem/rust-cache/releases)):

## v2.8.2

What's Changed:
- ci: address lint findings, add zizmor workflow by [@woodruffw](https://github.com/woodruffw) in [Swatinem/rust-cache#262](https://redirect.github.com/Swatinem/rust-cache/pull/262)
- feat: Implement ability to disable adding job ID + rust environment hashes to cache names by [@Ryan-Brice](https://github.com/Ryan-Brice) in [Swatinem/rust-cache#279](https://redirect.github.com/Swatinem/rust-cache/pull/279)
- Don't overwrite env for cargo-metadata call by [@MaeIsBad](https://github.com/MaeIsBad) in [Swatinem/rust-cache#285](https://redirect.github.com/Swatinem/rust-cache/pull/285)

New Contributors: @woodruffw (#262), @Ryan-Brice (#279) and @MaeIsBad (#285) made their first contributions.

**Full Changelog**: https://github.com/Swatinem/rust-cache/compare/v2.8.1...v2.8.2

**Changelog** (sourced from [Swatinem/rust-cache's CHANGELOG.md](https://github.com/Swatinem/rust-cache/blob/master/CHANGELOG.md)):

- 2.8.2: Don't overwrite env for cargo-metadata call
- 2.8.1: Set empty `CARGO_ENCODED_RUSTFLAGS` when retrieving metadata; various dependency updates
- 2.8.0: Add support for `warpbuild` cache provider; add new `cache-workspace-crates` feature
- 2.7.8: Include CPU arch in the cache key
- 2.7.7: Also cache `cargo install` metadata
- 2.7.6: Allow opting out of caching $CARGO_HOME/bin; add runner OS in cache key; add an option to do lookup-only of the cache
- 2.7.5: Support Cargo.lock format cargo-lock v4; only run macOsWorkaround() on macOS
- 2.7.3: Work around upstream problem that causes cache saving to hang for minutes
- 2.7.2: Only key by `Cargo.toml` and `Cargo.lock` files of workspace members
- 2.7.1: Update toml parser to fix parsing errors
- 2.7.0: Properly cache `trybuild` tests

... (truncated)

**Commits**:
- `779680d` 2.8.2
- `2ea64ef` Bump smol-toml from 1.4.2 to 1.5.2 in the prd-minor group ([#287](https://redirect.github.com/swatinem/rust-cache/issues/287))
- `8930d9c` Bump the actions group with 3 updates ([#288](https://redirect.github.com/swatinem/rust-cache/issues/288))
- `c071727` Bump `@actions/io` from 1.1.3 to 2.0.0 in the prd-major group ([#281](https://redirect.github.com/swatinem/rust-cache/issues/281))
- `f2a41b7` Bump `@types/node` from 24.9.0 to 24.10.0 in the dev-minor group ([#282](https://redirect.github.com/swatinem/rust-cache/issues/282))
- `e306f83` Don't overwrite env for cargo-metadata call ([#285](https://redirect.github.com/swatinem/rust-cache/issues/285))
- `c911900` Merge pull request [#284](https://redirect.github.com/swatinem/rust-cache/issues/284) from Swatinem/dependabot/github_actions/actions-baeb0...
- `3aaed55` Bump the actions group with 2 updates
- `972b315` Merge pull request [#283](https://redirect.github.com/swatinem/rust-cache/issues/283) from Swatinem/dependabot/github_actions/actions-b360d...
- `07caf06` Bump taiki-e/install-action from 2.62.45 to 2.62.49 in the actions group
- Additional commits viewable in the compare view (f13886b937...779680da71)

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=Swatinem/rust-cache&package-manager=github_actions&previous-version=2.8.1&new-version=2.8.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 16:25:31 +00:00
Andre Klärner
c20dd888bd Document how merging config files works - see #11203 (#19243)
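For context, the merging documented here is shallow: when several files are passed via `--config-path`, later files replace whole top-level keys rather than being merged recursively (see the changelog entry below). A minimal illustrative sketch, not Synapse's actual loader:

```python
import yaml


def shallow_merge(paths: list[str]) -> dict:
    """Later files win per top-level key; nested dicts are replaced, not merged."""
    merged: dict = {}
    for path in paths:
        with open(path) as f:
            merged.update(yaml.safe_load(f) or {})
    return merged


# e.g. shallow_merge(["homeserver.yaml", "overrides.yaml"])
```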
---------

Signed-off-by: Andre Klärner <kandre@ak-online.be>
Co-authored-by: Olivier 'reivilibre' <olivier@librepush.net>
2025-12-01 16:05:07 +00:00
Devon Hudson
d435cfc125 Add mention of future deprecations to release script (#19239)
Small improvement to the release script to prompt the user to consider
upcoming deprecations that should be mentioned in the changelog.

---------

Co-authored-by: Olivier 'reivilibre' <oliverw@element.io>
2025-12-01 15:47:36 +00:00
dependabot[bot]
58dd25976c Bump http from 1.3.1 to 1.4.0 (#19249)
Bumps [http](https://github.com/hyperium/http) from 1.3.1 to 1.4.0.
**Release notes** (sourced from [http's releases](https://github.com/hyperium/http/releases)):

## v1.4.0

Highlights:
- Add `StatusCode::EARLY_HINTS` constant for 103 Early Hints.
- Make `StatusCode::from_u16` now a `const fn`.
- Make `Authority::from_static` now a `const fn`.
- Make `PathAndQuery::from_static` now a `const fn`.
- MSRV increased to 1.57 (allows legible const fn panic messages).

What's Changed:
- Updated Rand dependency to v0.9.1 by [@FarzadMohtasham](https://github.com/FarzadMohtasham) in [hyperium/http#763](https://redirect.github.com/hyperium/http/pull/763)
- Fix compilation on latest nightly by [@akonradi-signal](https://github.com/akonradi-signal) in [hyperium/http#769](https://redirect.github.com/hyperium/http/pull/769)
- Avoid unnecessary .expect()s for empty HeaderMap by [@akonradi-signal](https://github.com/akonradi-signal) in [hyperium/http#768](https://redirect.github.com/hyperium/http/pull/768)
- feat: show types in `Extensions` debug output by [@crepererum](https://github.com/crepererum) in [hyperium/http#773](https://redirect.github.com/hyperium/http/pull/773)
- Docs: Clarify the `HeaderMap` documentation by [@Sol-Ell](https://github.com/Sol-Ell) in [hyperium/http#774](https://redirect.github.com/hyperium/http/pull/774)
- style: update format for tests by [@seanmonstar](https://github.com/seanmonstar) in [hyperium/http#782](https://redirect.github.com/hyperium/http/pull/782)
- Make `StatusCode::from_u16` const by [@coolreader18](https://github.com/coolreader18) in [hyperium/http#761](https://redirect.github.com/hyperium/http/pull/761)
- docs: Fix typo 'an' to 'and' in http::status module documentation by [@zxzxovo](https://github.com/zxzxovo) in [hyperium/http#784](https://redirect.github.com/hyperium/http/pull/784)
- fix: Prevent panic in try_reserve/try_with_capacity on capacity overflow by [@AriajSarkar](https://github.com/AriajSarkar) in [hyperium/http#787](https://redirect.github.com/hyperium/http/pull/787)
- fix: Add reserve() to Extend impl for (Option<…>, T)) by [@AriajSarkar](https://github.com/AriajSarkar) in [hyperium/http#788](https://redirect.github.com/hyperium/http/pull/788)
- chore: minor improvement for docs by [@claudecodering](https://github.com/claudecodering) in [hyperium/http#790](https://redirect.github.com/hyperium/http/pull/790)
- chore: bump MSRV to 1.57 by [@seanmonstar](https://github.com/seanmonstar) in [hyperium/http#793](https://redirect.github.com/hyperium/http/pull/793)
- Add EARLY_HINTS status code by [@mdevino](https://github.com/mdevino) in [hyperium/http#758](https://redirect.github.com/hyperium/http/pull/758)
- refactor(header): use better panic message in const HeaderName and HeaderValue by [@seanmonstar](https://github.com/seanmonstar) in [hyperium/http#797](https://redirect.github.com/hyperium/http/pull/797)
- docs: remove unnecessary extern crate sentence by [@tottoto](https://github.com/tottoto) in [hyperium/http#799](https://redirect.github.com/hyperium/http/pull/799)
- chore(ci): update to actions/checkout@v5 by [@tottoto](https://github.com/tottoto) in [hyperium/http#800](https://redirect.github.com/hyperium/http/pull/800)
- feat(uri): make `Authority/PathAndQuery::from_static` const by [@WaterWhisperer](https://github.com/WaterWhisperer) in [hyperium/http#786](https://redirect.github.com/hyperium/http/pull/786)
- refactor(header): inline FNV hasher to reduce dependencies by [@seanmonstar](https://github.com/seanmonstar) in [hyperium/http#796](https://redirect.github.com/hyperium/http/pull/796)
- v1.4.0 by [@seanmonstar](https://github.com/seanmonstar) in [hyperium/http#803](https://redirect.github.com/hyperium/http/pull/803)

New Contributors: @FarzadMohtasham (#763), @akonradi-signal (#769), @crepererum (#773), @Sol-Ell (#774), @coolreader18 (#761), @zxzxovo (#784), @AriajSarkar (#787), @claudecodering (#790), @mdevino (#758) and @WaterWhisperer (#786) made their first contributions.

**Full Changelog**: https://github.com/hyperium/http/compare/v1.3.1...v1.4.0

The entry in [http's changelog](https://github.com/hyperium/http/blob/master/CHANGELOG.md) (1.4.0, November 24, 2025) repeats the highlights above.

**Commits**:
- `b9625d8` v1.4.0
- `50b009c` refactor(header): inline FNV hasher to reduce dependencies ([#796](https://redirect.github.com/hyperium/http/issues/796))
- `b370d36` feat(uri): make `Authority/PathAndQuery::from_static` const ([#786](https://redirect.github.com/hyperium/http/issues/786))
- `0d74251` chore(ci): update to actions/checkout@v5 ([#800](https://redirect.github.com/hyperium/http/issues/800))
- `a760767` docs: remove unnecessary extern crate sentence ([#799](https://redirect.github.com/hyperium/http/issues/799))
- `fb1d457` refactor(header): use better panic message in const HeaderName and HeaderValu...
- `20dbd6e` feat(status): Add 103 EARLY_HINTS status code ([#758](https://redirect.github.com/hyperium/http/issues/758))
- `e7a7337` chore: bump MSRV to 1.57
- `1888e28` tests: downgrade rand back to 0.8 for now
- `918bbc3` chore: minor improvement for docs ([#790](https://redirect.github.com/hyperium/http/issues/790))
- Additional commits viewable in the [compare view](https://github.com/hyperium/http/compare/v1.3.1...v1.4.0)

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=http&package-manager=cargo&previous-version=1.3.1&new-version=1.4.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 15:41:40 +00:00
dependabot[bot]
bf6163c8bf Bump docker/metadata-action from 5.9.0 to 5.10.0 (#19246)
Bumps [docker/metadata-action](https://github.com/docker/metadata-action) from 5.9.0 to 5.10.0.

**Release notes** (sourced from [docker/metadata-action's releases](https://github.com/docker/metadata-action/releases)):

## v5.10.0

- Bump `@docker/actions-toolkit` from 0.66.0 to 0.68.0 in [docker/metadata-action#559](https://redirect.github.com/docker/metadata-action/pull/559) [docker/metadata-action#569](https://redirect.github.com/docker/metadata-action/pull/569)
- Bump js-yaml from 3.14.1 to 3.14.2 in [docker/metadata-action#564](https://redirect.github.com/docker/metadata-action/pull/564)

**Full Changelog**: https://github.com/docker/metadata-action/compare/v5.9.0...v5.10.0

**Commits**:
- `c299e40` Merge pull request [#569](https://redirect.github.com/docker/metadata-action/issues/569) from docker/dependabot/npm_and_yarn/docker/actions-to...
- `f015d79` chore: update generated content
- `121bcc2` chore(deps): Bump `@docker/actions-toolkit` from 0.67.0 to 0.68.0
- `f7b6bf4` Merge pull request [#564](https://redirect.github.com/docker/metadata-action/issues/564) from docker/dependabot/npm_and_yarn/js-yaml-3.14.2
- `0b95c6b` Merge pull request [#565](https://redirect.github.com/docker/metadata-action/issues/565) from docker/dependabot/github_actions/actions/checkout-6
- `17f70d7` Merge pull request [#568](https://redirect.github.com/docker/metadata-action/issues/568) from motoki317/docs/fix-to-24h-schedule-pattern
- `afd7e6d` docs(README): Fix date format from 12h to 24h in schedule pattern
- `602aff8` chore(deps): Bump actions/checkout from 5 to 6
- `aecb1a4` chore(deps): Bump js-yaml from 3.14.1 to 3.14.2
- `8d8c7c1` Merge pull request [#559](https://redirect.github.com/docker/metadata-action/issues/559) from docker/dependabot/npm_and_yarn/docker/actions-to...
- Additional commits viewable in the compare view (318604b99e...c299e40c65)

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/metadata-action&package-manager=github_actions&previous-version=5.9.0&new-version=5.10.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 15:38:50 +00:00
dependabot[bot]
b4ee0bf71e Bump actions/setup-python from 6.0.0 to 6.1.0 (#19245)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 6.0.0 to 6.1.0.

**Release notes** (sourced from [actions/setup-python's releases](https://github.com/actions/setup-python/releases)):

## v6.1.0

Enhancements:
- Add support for `pip-install` input by [@gowridurgad](https://github.com/gowridurgad) in [actions/setup-python#1201](https://redirect.github.com/actions/setup-python/pull/1201)
- Add graalpy early-access and windows builds by [@timfel](https://github.com/timfel) in [actions/setup-python#880](https://redirect.github.com/actions/setup-python/pull/880)

Dependency and Documentation updates:
- Enhanced wording and updated example usage for `allow-prereleases` by [@yarikoptic](https://github.com/yarikoptic) in [actions/setup-python#979](https://redirect.github.com/actions/setup-python/pull/979)
- Upgrade urllib3 from 1.26.19 to 2.5.0 and document breaking changes in v6 by [@dependabot](https://github.com/dependabot) in [actions/setup-python#1139](https://redirect.github.com/actions/setup-python/pull/1139)
- Upgrade typescript from 5.4.2 to 5.9.3 and Documentation update by [@dependabot](https://github.com/dependabot) in [actions/setup-python#1094](https://redirect.github.com/actions/setup-python/pull/1094)
- Upgrade actions/publish-action from 0.3.0 to 0.4.0 & Documentation update for pip-install input by [@dependabot](https://github.com/dependabot) in [actions/setup-python#1199](https://redirect.github.com/actions/setup-python/pull/1199)
- Upgrade requests from 2.32.2 to 2.32.4 by [@dependabot](https://github.com/dependabot) in [actions/setup-python#1130](https://redirect.github.com/actions/setup-python/pull/1130)
- Upgrade prettier from 3.5.3 to 3.6.2 by [@dependabot](https://github.com/dependabot) in [actions/setup-python#1234](https://redirect.github.com/actions/setup-python/pull/1234)
- Upgrade `@types/node` from 24.1.0 to 24.9.1 and update macos-13 to macos-15-intel by [@dependabot](https://github.com/dependabot) in [actions/setup-python#1235](https://redirect.github.com/actions/setup-python/pull/1235)

New Contributors: @yarikoptic made their first contribution in [actions/setup-python#979](https://redirect.github.com/actions/setup-python/pull/979).

**Full Changelog**: https://github.com/actions/setup-python/compare/v6...v6.1.0

**Commits**:
- `83679a8` Bump `@types/node` from 24.1.0 to 24.9.1 and update macos-13 to macos-15-intel ...
- `bfc4944` Bump prettier from 3.5.3 to 3.6.2 ([#1234](https://redirect.github.com/actions/setup-python/issues/1234))
- `97aeb3e` Bump requests from 2.32.2 to 2.32.4 in /__tests__/data ([#1130](https://redirect.github.com/actions/setup-python/issues/1130))
- `443da59` Bump actions/publish-action from 0.3.0 to 0.4.0 & Documentation update for pi...
- `cfd55ca` graalpy: add graalpy early-access and windows builds ([#880](https://redirect.github.com/actions/setup-python/issues/880))
- `bba65e5` Bump typescript from 5.4.2 to 5.9.3 and update docs/advanced-usage.md ([#1094](https://redirect.github.com/actions/setup-python/issues/1094))
- `18566f8` Improve wording and "fix example" (remove 3.13) on testing against pre-releas...
- `2e3e4b1` Add support for pip-install input ([#1201](https://redirect.github.com/actions/setup-python/issues/1201))
- `4267e28` Bump urllib3 from 1.26.19 to 2.5.0 in /__tests__/data and document breaking c...
- See full diff in the compare view (e797f83bcb...83679a892e)

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/setup-python&package-manager=github_actions&previous-version=6.0.0&new-version=6.1.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 15:38:19 +00:00
Devon Hudson
119f02e3b3 Return 400 when canonical_alias content invalid (#19240)
Fixes #19198

Returns HTTP 400 when `alias` or `alt_alias` values inside `m.room.canonical_alias` `content` are not strings.
Previously this resulted in HTTP 500 errors: Synapse assumed the values
were strings and raised an exception as soon as it tried to treat them
as such when they weren't.
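The shape of the fix is a type check at validation time that surfaces a 400 instead of crashing later. An illustrative sketch with a hypothetical helper, not the actual Synapse code:

```python
from typing import Any


# Simplified stand-in for synapse.api.errors.SynapseError
class SynapseError(Exception):
    def __init__(self, code: int, msg: str) -> None:
        super().__init__(msg)
        self.code = code


def validate_canonical_alias_content(content: dict[str, Any]) -> None:
    """Reject non-string alias values up front with a 400."""
    alias = content.get("alias")
    if alias is not None and not isinstance(alias, str):
        raise SynapseError(400, "'alias' must be a string")
    for alt in content.get("alt_aliases") or []:
        if not isinstance(alt, str):
            raise SynapseError(400, "'alt_aliases' must be a list of strings")
```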

With the changes implemented (screenshots of the new 400 responses):
![Screenshot from 2025-11-28 16-48-06](https://github.com/user-attachments/assets/1333a4b3-7b4f-435f-bbff-f48870bc4d96)
![Screenshot from 2025-11-28 16-47-42](https://github.com/user-attachments/assets/5928abf8-88a2-4bd9-9420-9a1f743f66f5)

2025-12-01 15:24:26 +00:00
Erik Johnston
1bddd25a85 Port Clock functions to use Duration class (#19229)
This changes the arguments of clock functions to be `Duration` and
converts call sites and constants into `Duration`. There are still some
more functions that should be converted (e.g. `timeout_deferred`), but
we leave that to another PR.

We also change `.as_secs()` to return a float, as rounding to whole
seconds broke things subtly. The only reason to keep it (it's the same
as `timedelta.total_seconds()`) is for symmetry with `as_millis()`.

Follows on from https://github.com/element-hq/synapse/pull/19223
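For a sense of the API, a rough sketch of such a type (illustrative only; Synapse's actual `Duration` class may differ):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Duration:
    """A span of time, stored internally as integer milliseconds."""

    millis: int

    @classmethod
    def seconds(cls, n: float) -> "Duration":
        return cls(int(n * 1000))

    def as_millis(self) -> int:
        return self.millis

    def as_secs(self) -> float:
        # A float, not a rounded int: rounding broke callers subtly.
        # Mirrors timedelta.total_seconds(); kept for symmetry with as_millis().
        return self.millis / 1000
```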
2025-12-01 13:55:06 +00:00
Erik Johnston
d143276bda Fix rust source check when using .egg-info (#19251)
We have checks to try and catch the case where Synapse is being run from
a source directory, but the compiled Rust code is out-of-date. This
commonly happens when Synapse is updated without running `poetry
install` (or equivalent).

These checks did not correctly handle `.egg-info` installs, and so were
not run.

Currently, the `.egg-info` directory is created automatically by poetry
(due to using setuptools to build Rust).
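Roughly, the check has to recognise both kinds of install metadata before it can compare source timestamps against the compiled extension. A simplified sketch of the idea (hypothetical paths and logic, not the actual check):

```python
from pathlib import Path


def find_install_metadata(src_root: Path) -> Path | None:
    """A source install may leave *.dist-info or *.egg-info behind;
    a check that only recognises one of them silently skips the other."""
    for pattern in ("*.dist-info", "*.egg-info"):
        found = sorted(src_root.glob(pattern))
        if found:
            return found[0]
    return None


def rust_build_outdated(src_root: Path, compiled_mtime: float) -> bool:
    """Heuristic: is any Rust source newer than the compiled extension?"""
    return any(
        p.stat().st_mtime > compiled_mtime
        for p in src_root.glob("rust/src/**/*.rs")
    )
```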
2025-12-01 13:34:21 +00:00
Andrew Morgan
034c5e625c Move call invite filtering logic to filter_events_for_client (#17782) 2025-11-28 17:41:56 +00:00
Andrew Morgan
778897a4e9 Add a unit test that ensures that deleting a device purges the associated refresh token (#19230) 2025-11-28 17:01:15 +00:00
Erik Johnston
78ec3043d6 Use sqlglot to properly check SQL delta files (#19224)
Rather than using dodgy regexes which keep breaking.

Also fixes a regression where it looks like we didn't fail CI if the
delta was in the wrong place.
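For illustration, sqlglot parses each delta into a proper AST, so malformed SQL fails loudly instead of slipping past a regex. A minimal sketch of the idea, not the actual checker:

```python
import sqlglot

delta = """
ALTER TABLE events ADD COLUMN thread_id TEXT;
CREATE INDEX ev_thread ON events (thread_id);
"""

# One AST per statement; invalid SQL raises sqlglot.errors.ParseError.
for statement in sqlglot.parse(delta, dialect="postgres"):
    print(type(statement).__name__, "->", statement.sql(dialect="postgres"))
```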
2025-11-28 15:49:15 +00:00
Andrew Morgan
566670c363 Move RestartDelayedEventServlet to workers (#19207) 2025-11-27 16:44:17 +00:00
Andrew Morgan
52089f1f79 Prevent lint-newsfile job activating when fixing dependabot PR branches (#19220) 2025-11-27 16:15:06 +00:00
Andrew Morgan
703464c1f7 Fix case where get_partial_current_state_deltas could return >100 rows (#18960) 2025-11-26 17:17:04 +00:00
Richard van der Hoff
c928347779 Implement MSC4380: Invite blocking (#19203)
MSC4380 aims to be a simplified implementation of MSC4155; the hope is
that we can get it specced and rolled out rapidly, so that we can
resolve the fact that `matrix.org` has enabled MSC4155.

The implementation leans heavily on what's already there for MSC4155.

It has its own `experimental_features` flag. If both MSC4155 and MSC4380
are enabled, and a user has both configurations set, then we prioritise
the MSC4380 one.

Contributed wearing my 🎩 Spec Core Team hat.
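A sketch of that prioritisation rule (hypothetical names; the real config plumbing lives in the MSC4155/MSC4380 handlers):

```python
from typing import Any, Optional


def effective_invite_blocking_config(
    msc4155_enabled: bool,
    msc4380_enabled: bool,
    msc4155_config: Optional[dict[str, Any]],
    msc4380_config: Optional[dict[str, Any]],
) -> Optional[dict[str, Any]]:
    """If both MSCs are enabled and the user has set both, MSC4380 wins."""
    if msc4380_enabled and msc4380_config is not None:
        return msc4380_config
    if msc4155_enabled and msc4155_config is not None:
        return msc4155_config
    return None
```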
2025-11-26 16:12:14 +00:00
145 changed files with 1610 additions and 573 deletions

@@ -123,7 +123,7 @@ jobs:
        uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
      - name: Calculate docker image tag
-        uses: docker/metadata-action@318604b99e75e41977312d83839a89be02ca4893 # v5.9.0
+        uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
        with:
          images: ${{ matrix.repository }}
          flavor: |

@@ -24,7 +24,7 @@ jobs:
          mdbook-version: '0.4.17'
      - name: Setup python
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"

@@ -64,7 +64,7 @@ jobs:
        run: echo 'window.SYNAPSE_VERSION = "${{ needs.pre.outputs.branch-version }}";' > ./docs/website_files/version.js
      - name: Setup python
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"

@@ -25,7 +25,7 @@ jobs:
        with:
          toolchain: ${{ env.RUST_VERSION }}
          components: clippy, rustfmt
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - name: Setup Poetry
        uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0

@@ -47,7 +47,7 @@ jobs:
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      # The dev dependencies aren't exposed in the wheel metadata (at least with current
      # poetry-core versions), so we install with poetry.
@@ -83,7 +83,7 @@ jobs:
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - run: sudo apt-get -qq install xmlsec1
      - name: Set up PostgreSQL ${{ matrix.postgres-version }}
@@ -93,7 +93,7 @@ jobs:
            -e POSTGRES_PASSWORD=postgres \
            -e POSTGRES_INITDB_ARGS="--lc-collate C --lc-ctype C --encoding UTF8" \
            postgres:${{ matrix.postgres-version }}
-      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"
      - run: pip install .[all,test]
@@ -158,7 +158,7 @@ jobs:
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - name: Ensure sytest runs `pip install`
        # Delete the lockfile so sytest will `pip install` rather than `poetry install`

@@ -17,7 +17,7 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
-      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: '3.x'
      - run: pip install tomli

@@ -55,7 +55,7 @@ jobs:
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Work out labels for complement image
        id: meta
-        uses: docker/metadata-action@318604b99e75e41977312d83839a89be02ca4893 # v5.9.0
+        uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
        with:
          images: ghcr.io/${{ github.repository }}/complement-synapse
          tags: |

@@ -28,7 +28,7 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
-      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"
      - id: set-distros
@@ -74,7 +74,7 @@ jobs:
            ${{ runner.os }}-buildx-
      - name: Set up python
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"
@@ -134,7 +134,7 @@ jobs:
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
-      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          # setup-python@v4 doesn't impose a default python version. Need to use 3.x
          # here, because `python` on osx points to Python 2.7.
@@ -171,7 +171,7 @@ jobs:
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
-      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.10"

@@ -15,7 +15,7 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
-      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"
      - name: Install check-jsonschema
@@ -41,7 +41,7 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
-      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"
      - name: Install PyYAML

@@ -91,7 +91,7 @@ jobs:
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
        with:
          python-version: "3.x"
@@ -107,17 +107,17 @@ jobs:
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
-      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"
-      - run: "pip install 'click==8.1.1' 'GitPython>=3.1.20'"
+      - run: "pip install 'click==8.1.1' 'GitPython>=3.1.20' 'sqlglot>=28.0.0'"
      - run: scripts-dev/check_schema_delta.py --force-colors
  check-lockfile:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
-      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"
      - run: .ci/scripts/check_lockfile.py
@@ -157,7 +157,7 @@ jobs:
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - name: Setup Poetry
        uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
@@ -192,14 +192,15 @@ jobs:
        run: scripts-dev/check_line_terminators.sh
  lint-newsfile:
-    if: ${{ (github.base_ref == 'develop' || contains(github.base_ref, 'release-')) && github.actor != 'dependabot[bot]' }}
+    # Only run on pull_request events, targeting develop/release branches, and skip when the PR author is dependabot[bot].
+    if: ${{ github.event_name == 'pull_request' && (github.base_ref == 'develop' || contains(github.base_ref, 'release-')) && github.event.pull_request.user.login != 'dependabot[bot]' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          fetch-depth: 0
-      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"
      - run: "pip install 'towncrier>=18.6.0rc1'"
@@ -220,7 +221,7 @@ jobs:
        with:
          components: clippy
          toolchain: ${{ env.RUST_VERSION }}
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - run: cargo clippy -- -D warnings
@@ -239,7 +240,7 @@ jobs:
        with:
          toolchain: nightly-2025-04-23
          components: clippy
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - run: cargo clippy --all-features -- -D warnings
@@ -256,7 +257,7 @@ jobs:
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - name: Setup Poetry
        uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
@@ -295,7 +296,7 @@ jobs:
          # `.rustfmt.toml`.
          toolchain: nightly-2025-04-23
          components: rustfmt
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - run: cargo fmt --check
@@ -307,7 +308,7 @@ jobs:
    if: ${{ needs.changes.outputs.linting_readme == 'true' }}
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
-      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"
      - run: "pip install rstcheck"
@@ -355,7 +356,7 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
-      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"
      - id: get-matrix
@@ -393,7 +394,7 @@ jobs:
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
        with:
@@ -437,7 +438,7 @@ jobs:
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      # There aren't wheels for some of the older deps, so we need to install
      # their build dependencies
@@ -446,7 +447,7 @@ jobs:
          sudo apt-get -qq install build-essential libffi-dev python3-dev \
            libxml2-dev libxslt-dev xmlsec1 zlib1g-dev libjpeg-dev libwebp-dev
-      - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: '3.10'
@@ -554,7 +555,7 @@ jobs:
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - name: Run SyTest
        run: /bootstrap.sh synapse
@@ -700,7 +701,7 @@ jobs:
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - name: Prepare Complement's Prerequisites
        run: synapse/.ci/scripts/setup_complement_prerequisites.sh
@@ -734,7 +735,7 @@ jobs:
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - run: cargo test
@@ -754,7 +755,7 @@ jobs:
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: nightly-2022-12-01
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - run: cargo bench --no-run

@@ -49,7 +49,7 @@ jobs:
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
        with:
@@ -77,7 +77,7 @@ jobs:
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
        with:
@@ -123,7 +123,7 @@ jobs:
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
-      - uses: Swatinem/rust-cache@f13886b937689c021905a6b90929199931d60db1 # v2.8.1
+      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
      - name: Patch dependencies
        # Note: The poetry commands want to create a virtualenv in /src/.venv/,

Cargo.lock generated

@@ -374,12 +374,11 @@ checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
[[package]]
name = "http"
version = "1.3.1"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f4a85d31aea989eead29a3aaf9e1115a180df8282431156e533de47660892565"
checksum = "e3ba2a386d7f85a81f119ad7498ebe444d2e22c2af0b86b069416ace48b3311a"
dependencies = [
"bytes",
"fnv",
"itoa",
]

changelog.d/17782.misc Normal file

@@ -0,0 +1 @@
Improve event filtering for Simplified Sliding Sync.

changelog.d/18960.bugfix Normal file

@@ -0,0 +1 @@
Fix a bug in the database function for fetching state deltas that could result in unnecessarily long query times.


@@ -0,0 +1 @@
Add experimental implementation of [MSC4380](https://github.com/matrix-org/matrix-spec-proposals/pull/4380) (invite blocking).


@@ -0,0 +1 @@
Allow restarting delayed event timeouts on workers.

changelog.d/19220.misc Normal file

@@ -0,0 +1 @@
Prevent the changelog check CI from running on @dependabot's PRs, even when a human has modified the branch.

changelog.d/19224.misc Normal file

@@ -0,0 +1 @@
Improve robustness of the SQL schema linting in CI.

changelog.d/19229.misc Normal file

@@ -0,0 +1 @@
Move towards using a dedicated `Duration` type.

changelog.d/19230.misc Normal file

@@ -0,0 +1 @@
Add a unit test ensuring associated refresh tokens are erased when a device is deleted.

changelog.d/19239.misc Normal file

@@ -0,0 +1 @@
Prompt the user in the release script to consider adding future deprecations to the changelog.

changelog.d/19240.bugfix Normal file

@@ -0,0 +1 @@
Fix a bug where invalid `canonical_alias` content would return a 500 error instead of a 400.

changelog.d/19243.doc Normal file

@@ -0,0 +1 @@
Document in the `--config-path` help that multiple config files are merged shallowly, with top-level keys in later files overwriting ones in earlier files.

changelog.d/19251.misc Normal file

@@ -0,0 +1 @@
Fix the check for outdated compiled Rust code when using a source checkout with `.egg-info`.


@@ -196,6 +196,7 @@ WORKERS_CONFIG: dict[str, dict[str, Any]] = {
"^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload",
"^/_matrix/client/(api/v1|r0|v3|unstable)/keys/device_signing/upload$",
"^/_matrix/client/(api/v1|r0|v3|unstable)/keys/signatures/upload$",
"^/_matrix/client/unstable/org.matrix.msc4140/delayed_events(/.*/restart)?$",
],
"shared_extra_conf": {},
"worker_extra_conf": "",


@@ -119,6 +119,14 @@ stacking them up. You can monitor the currently running background updates with
# Upgrading to v1.144.0
## Worker support for unstable MSC4140 `/restart` endpoint
The following unstable endpoint pattern may now be routed to worker processes:
```
^/_matrix/client/unstable/org.matrix.msc4140/delayed_events/.*/restart$
```
## Unstable mutual rooms endpoint is now behind an experimental feature flag
The unstable mutual rooms endpoint from


@@ -285,10 +285,13 @@ information.
# User directory search requests
^/_matrix/client/(r0|v3|unstable)/user_directory/search$
# Unstable MSC4140 support
^/_matrix/client/unstable/org.matrix.msc4140/delayed_events(/.*/restart)?$
Additionally, the following REST endpoints can be handled for GET requests:
# Push rules requests
^/_matrix/client/(api/v1|r0|v3|unstable)/pushrules/
^/_matrix/client/unstable/org.matrix.msc4140/delayed_events
# Account data requests
^/_matrix/client/(r0|v3|unstable)/.*/tags
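
A quick sanity check (not part of this changeset) that the single MSC4140 pattern above covers both the base `delayed_events` endpoint and its `/restart` variant, with the pattern copied verbatim from the documentation:

```
import re

# Pattern copied verbatim from the worker documentation above.
pattern = re.compile(
    r"^/_matrix/client/unstable/org.matrix.msc4140/delayed_events(/.*/restart)?$"
)

assert pattern.match("/_matrix/client/unstable/org.matrix.msc4140/delayed_events")
assert pattern.match(
    "/_matrix/client/unstable/org.matrix.msc4140/delayed_events/event1/restart"
)
# Other sub-paths are not covered by this pattern.
assert not pattern.match(
    "/_matrix/client/unstable/org.matrix.msc4140/delayed_events/event1"
)
```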

poetry.lock generated

@@ -1799,14 +1799,14 @@ files = [
[[package]]
name = "pydantic"
version = "2.12.4"
version = "2.12.5"
description = "Data validation using Python type hints"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pydantic-2.12.4-py3-none-any.whl", hash = "sha256:92d3d202a745d46f9be6df459ac5a064fdaa3c1c4cd8adcfa332ccf3c05f871e"},
{file = "pydantic-2.12.4.tar.gz", hash = "sha256:0f8cb9555000a4b5b617f66bfd2566264c4984b27589d3b845685983e8ea85ac"},
{file = "pydantic-2.12.5-py3-none-any.whl", hash = "sha256:e561593fccf61e8a20fc46dfc2dfe075b8be7d0188df33f221ad1f0139180f9d"},
{file = "pydantic-2.12.5.tar.gz", hash = "sha256:4d351024c75c0f085a9febbb665ce8c0c6ec5d30e903bdb6394b7ede26aebb49"},
]
[package.dependencies]
@@ -2066,18 +2066,18 @@ tests = ["hypothesis (>=3.27.0)", "pytest (>=3.2.1,!=3.3.0)"]
[[package]]
name = "pyopenssl"
version = "25.1.0"
version = "25.3.0"
description = "Python wrapper module around the OpenSSL library"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "pyopenssl-25.1.0-py3-none-any.whl", hash = "sha256:2b11f239acc47ac2e5aca04fd7fa829800aeee22a2eb30d744572a157bd8a1ab"},
{file = "pyopenssl-25.1.0.tar.gz", hash = "sha256:8d031884482e0c67ee92bf9a4d8cceb08d92aba7136432ffb0703c5280fc205b"},
{file = "pyopenssl-25.3.0-py3-none-any.whl", hash = "sha256:1fda6fc034d5e3d179d39e59c1895c9faeaf40a79de5fc4cbbfbe0d36f4a77b6"},
{file = "pyopenssl-25.3.0.tar.gz", hash = "sha256:c981cb0a3fd84e8602d7afc209522773b94c1c2446a3c710a75b06fe1beae329"},
]
[package.dependencies]
cryptography = ">=41.0.5,<46"
cryptography = ">=45.0.7,<47"
typing-extensions = {version = ">=4.9", markers = "python_version < \"3.13\" and python_version >= \"3.8\""}
[package.extras]
@@ -2356,127 +2356,127 @@ jupyter = ["ipywidgets (>=7.5.1,<9)"]
[[package]]
name = "rpds-py"
version = "0.29.0"
version = "0.30.0"
description = "Python bindings to Rust's persistent data structures (rpds)"
optional = false
python-versions = ">=3.10"
groups = ["main", "dev"]
files = [
{file = "rpds_py-0.29.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:4ae4b88c6617e1b9e5038ab3fccd7bac0842fdda2b703117b2aa99bc85379113"},
{file = "rpds_py-0.29.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:7d9128ec9d8cecda6f044001fde4fb71ea7c24325336612ef8179091eb9596b9"},
{file = "rpds_py-0.29.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d37812c3da8e06f2bb35b3cf10e4a7b68e776a706c13058997238762b4e07f4f"},
{file = "rpds_py-0.29.0-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:66786c3fb1d8de416a7fa8e1cb1ec6ba0a745b2b0eee42f9b7daa26f1a495545"},
{file = "rpds_py-0.29.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b58f5c77f1af888b5fd1876c9a0d9858f6f88a39c9dd7c073a88e57e577da66d"},
{file = "rpds_py-0.29.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:799156ef1f3529ed82c36eb012b5d7a4cf4b6ef556dd7cc192148991d07206ae"},
{file = "rpds_py-0.29.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:453783477aa4f2d9104c4b59b08c871431647cb7af51b549bbf2d9eb9c827756"},
{file = "rpds_py-0.29.0-cp310-cp310-manylinux_2_31_riscv64.whl", hash = "sha256:24a7231493e3c4a4b30138b50cca089a598e52c34cf60b2f35cebf62f274fdea"},
{file = "rpds_py-0.29.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:7033c1010b1f57bb44d8067e8c25aa6fa2e944dbf46ccc8c92b25043839c3fd2"},
{file = "rpds_py-0.29.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:0248b19405422573621172ab8e3a1f29141362d13d9f72bafa2e28ea0cdca5a2"},
{file = "rpds_py-0.29.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:f9f436aee28d13b9ad2c764fc273e0457e37c2e61529a07b928346b219fcde3b"},
{file = "rpds_py-0.29.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:24a16cb7163933906c62c272de20ea3c228e4542c8c45c1d7dc2b9913e17369a"},
{file = "rpds_py-0.29.0-cp310-cp310-win32.whl", hash = "sha256:1a409b0310a566bfd1be82119891fefbdce615ccc8aa558aff7835c27988cbef"},
{file = "rpds_py-0.29.0-cp310-cp310-win_amd64.whl", hash = "sha256:c5523b0009e7c3c1263471b69d8da1c7d41b3ecb4cb62ef72be206b92040a950"},
{file = "rpds_py-0.29.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:9b9c764a11fd637e0322a488560533112837f5334ffeb48b1be20f6d98a7b437"},
{file = "rpds_py-0.29.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3fd2164d73812026ce970d44c3ebd51e019d2a26a4425a5dcbdfa93a34abc383"},
{file = "rpds_py-0.29.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4a097b7f7f7274164566ae90a221fd725363c0e9d243e2e9ed43d195ccc5495c"},
{file = "rpds_py-0.29.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7cdc0490374e31cedefefaa1520d5fe38e82fde8748cbc926e7284574c714d6b"},
{file = "rpds_py-0.29.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:89ca2e673ddd5bde9b386da9a0aac0cab0e76f40c8f0aaf0d6311b6bbf2aa311"},
{file = "rpds_py-0.29.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a5d9da3ff5af1ca1249b1adb8ef0573b94c76e6ae880ba1852f033bf429d4588"},
{file = "rpds_py-0.29.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8238d1d310283e87376c12f658b61e1ee23a14c0e54c7c0ce953efdbdc72deed"},
{file = "rpds_py-0.29.0-cp311-cp311-manylinux_2_31_riscv64.whl", hash = "sha256:2d6fb2ad1c36f91c4646989811e84b1ea5e0c3cf9690b826b6e32b7965853a63"},
{file = "rpds_py-0.29.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:534dc9df211387547267ccdb42253aa30527482acb38dd9b21c5c115d66a96d2"},
{file = "rpds_py-0.29.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d456e64724a075441e4ed648d7f154dc62e9aabff29bcdf723d0c00e9e1d352f"},
{file = "rpds_py-0.29.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:a738f2da2f565989401bd6fd0b15990a4d1523c6d7fe83f300b7e7d17212feca"},
{file = "rpds_py-0.29.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:a110e14508fd26fd2e472bb541f37c209409876ba601cf57e739e87d8a53cf95"},
{file = "rpds_py-0.29.0-cp311-cp311-win32.whl", hash = "sha256:923248a56dd8d158389a28934f6f69ebf89f218ef96a6b216a9be6861804d3f4"},
{file = "rpds_py-0.29.0-cp311-cp311-win_amd64.whl", hash = "sha256:539eb77eb043afcc45314d1be09ea6d6cafb3addc73e0547c171c6d636957f60"},
{file = "rpds_py-0.29.0-cp311-cp311-win_arm64.whl", hash = "sha256:bdb67151ea81fcf02d8f494703fb728d4d34d24556cbff5f417d74f6f5792e7c"},
{file = "rpds_py-0.29.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:a0891cfd8db43e085c0ab93ab7e9b0c8fee84780d436d3b266b113e51e79f954"},
{file = "rpds_py-0.29.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3897924d3f9a0361472d884051f9a2460358f9a45b1d85a39a158d2f8f1ad71c"},
{file = "rpds_py-0.29.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2a21deb8e0d1571508c6491ce5ea5e25669b1dd4adf1c9d64b6314842f708b5d"},
{file = "rpds_py-0.29.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:9efe71687d6427737a0a2de9ca1c0a216510e6cd08925c44162be23ed7bed2d5"},
{file = "rpds_py-0.29.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:40f65470919dc189c833e86b2c4bd21bd355f98436a2cef9e0a9a92aebc8e57e"},
{file = "rpds_py-0.29.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:def48ff59f181130f1a2cb7c517d16328efac3ec03951cca40c1dc2049747e83"},
{file = "rpds_py-0.29.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ad7bd570be92695d89285a4b373006930715b78d96449f686af422debb4d3949"},
{file = "rpds_py-0.29.0-cp312-cp312-manylinux_2_31_riscv64.whl", hash = "sha256:5a572911cd053137bbff8e3a52d31c5d2dba51d3a67ad902629c70185f3f2181"},
{file = "rpds_py-0.29.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d583d4403bcbf10cffc3ab5cee23d7643fcc960dff85973fd3c2d6c86e8dbb0c"},
{file = "rpds_py-0.29.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:070befbb868f257d24c3bb350dbd6e2f645e83731f31264b19d7231dd5c396c7"},
{file = "rpds_py-0.29.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:fc935f6b20b0c9f919a8ff024739174522abd331978f750a74bb68abd117bd19"},
{file = "rpds_py-0.29.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:8c5a8ecaa44ce2d8d9d20a68a2483a74c07f05d72e94a4dff88906c8807e77b0"},
{file = "rpds_py-0.29.0-cp312-cp312-win32.whl", hash = "sha256:ba5e1aeaf8dd6d8f6caba1f5539cddda87d511331714b7b5fc908b6cfc3636b7"},
{file = "rpds_py-0.29.0-cp312-cp312-win_amd64.whl", hash = "sha256:b5f6134faf54b3cb83375db0f113506f8b7770785be1f95a631e7e2892101977"},
{file = "rpds_py-0.29.0-cp312-cp312-win_arm64.whl", hash = "sha256:b016eddf00dca7944721bf0cd85b6af7f6c4efaf83ee0b37c4133bd39757a8c7"},
{file = "rpds_py-0.29.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:1585648d0760b88292eecab5181f5651111a69d90eff35d6b78aa32998886a61"},
{file = "rpds_py-0.29.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:521807963971a23996ddaf764c682b3e46459b3c58ccd79fefbe16718db43154"},
{file = "rpds_py-0.29.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a8896986efaa243ab713c69e6491a4138410f0fe36f2f4c71e18bd5501e8014"},
{file = "rpds_py-0.29.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:1d24564a700ef41480a984c5ebed62b74e6ce5860429b98b1fede76049e953e6"},
{file = "rpds_py-0.29.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e6596b93c010d386ae46c9fba9bfc9fc5965fa8228edeac51576299182c2e31c"},
{file = "rpds_py-0.29.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5cc58aac218826d054c7da7f95821eba94125d88be673ff44267bb89d12a5866"},
{file = "rpds_py-0.29.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:de73e40ebc04dd5d9556f50180395322193a78ec247e637e741c1b954810f295"},
{file = "rpds_py-0.29.0-cp313-cp313-manylinux_2_31_riscv64.whl", hash = "sha256:295ce5ac7f0cf69a651ea75c8f76d02a31f98e5698e82a50a5f4d4982fbbae3b"},
{file = "rpds_py-0.29.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1ea59b23ea931d494459c8338056fe7d93458c0bf3ecc061cd03916505369d55"},
{file = "rpds_py-0.29.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f49d41559cebd608042fdcf54ba597a4a7555b49ad5c1c0c03e0af82692661cd"},
{file = "rpds_py-0.29.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:05a2bd42768ea988294ca328206efbcc66e220d2d9b7836ee5712c07ad6340ea"},
{file = "rpds_py-0.29.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:33ca7bdfedd83339ca55da3a5e1527ee5870d4b8369456b5777b197756f3ca22"},
{file = "rpds_py-0.29.0-cp313-cp313-win32.whl", hash = "sha256:20c51ae86a0bb9accc9ad4e6cdeec58d5ebb7f1b09dd4466331fc65e1766aae7"},
{file = "rpds_py-0.29.0-cp313-cp313-win_amd64.whl", hash = "sha256:6410e66f02803600edb0b1889541f4b5cc298a5ccda0ad789cc50ef23b54813e"},
{file = "rpds_py-0.29.0-cp313-cp313-win_arm64.whl", hash = "sha256:56838e1cd9174dc23c5691ee29f1d1be9eab357f27efef6bded1328b23e1ced2"},
{file = "rpds_py-0.29.0-cp313-cp313t-macosx_10_12_x86_64.whl", hash = "sha256:37d94eadf764d16b9a04307f2ab1d7af6dc28774bbe0535c9323101e14877b4c"},
{file = "rpds_py-0.29.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:d472cf73efe5726a067dce63eebe8215b14beabea7c12606fd9994267b3cfe2b"},
{file = "rpds_py-0.29.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:72fdfd5ff8992e4636621826371e3ac5f3e3b8323e9d0e48378e9c13c3dac9d0"},
{file = "rpds_py-0.29.0-cp313-cp313t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2549d833abdf8275c901313b9e8ff8fba57e50f6a495035a2a4e30621a2f7cc4"},
{file = "rpds_py-0.29.0-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4448dad428f28a6a767c3e3b80cde3446a22a0efbddaa2360f4bb4dc836d0688"},
{file = "rpds_py-0.29.0-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:115f48170fd4296a33938d8c11f697f5f26e0472e43d28f35624764173a60e4d"},
{file = "rpds_py-0.29.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8e5bb73ffc029820f4348e9b66b3027493ae00bca6629129cd433fd7a76308ee"},
{file = "rpds_py-0.29.0-cp313-cp313t-manylinux_2_31_riscv64.whl", hash = "sha256:b1581fcde18fcdf42ea2403a16a6b646f8eb1e58d7f90a0ce693da441f76942e"},
{file = "rpds_py-0.29.0-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:16e9da2bda9eb17ea318b4c335ec9ac1818e88922cbe03a5743ea0da9ecf74fb"},
{file = "rpds_py-0.29.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:28fd300326dd21198f311534bdb6d7e989dd09b3418b3a91d54a0f384c700967"},
{file = "rpds_py-0.29.0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:2aba991e041d031c7939e1358f583ae405a7bf04804ca806b97a5c0e0af1ea5e"},
{file = "rpds_py-0.29.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:7f437026dbbc3f08c99cc41a5b2570c6e1a1ddbe48ab19a9b814254128d4ea7a"},
{file = "rpds_py-0.29.0-cp313-cp313t-win32.whl", hash = "sha256:6e97846e9800a5d0fe7be4d008f0c93d0feeb2700da7b1f7528dabafb31dfadb"},
{file = "rpds_py-0.29.0-cp313-cp313t-win_amd64.whl", hash = "sha256:f49196aec7c4b406495f60e6f947ad71f317a765f956d74bbd83996b9edc0352"},
{file = "rpds_py-0.29.0-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:394d27e4453d3b4d82bb85665dc1fcf4b0badc30fc84282defed71643b50e1a1"},
{file = "rpds_py-0.29.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:55d827b2ae95425d3be9bc9a5838b6c29d664924f98146557f7715e331d06df8"},
{file = "rpds_py-0.29.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fc31a07ed352e5462d3ee1b22e89285f4ce97d5266f6d1169da1142e78045626"},
{file = "rpds_py-0.29.0-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c4695dd224212f6105db7ea62197144230b808d6b2bba52238906a2762f1d1e7"},
{file = "rpds_py-0.29.0-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fcae1770b401167f8b9e1e3f566562e6966ffa9ce63639916248a9e25fa8a244"},
{file = "rpds_py-0.29.0-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:90f30d15f45048448b8da21c41703b31c61119c06c216a1bf8c245812a0f0c17"},
{file = "rpds_py-0.29.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:44a91e0ab77bdc0004b43261a4b8cd6d6b451e8d443754cfda830002b5745b32"},
{file = "rpds_py-0.29.0-cp314-cp314-manylinux_2_31_riscv64.whl", hash = "sha256:4aa195e5804d32c682e453b34474f411ca108e4291c6a0f824ebdc30a91c973c"},
{file = "rpds_py-0.29.0-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:7971bdb7bf4ee0f7e6f67fa4c7fbc6019d9850cc977d126904392d363f6f8318"},
{file = "rpds_py-0.29.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:8ae33ad9ce580c7a47452c3b3f7d8a9095ef6208e0a0c7e4e2384f9fc5bf8212"},
{file = "rpds_py-0.29.0-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:c661132ab2fb4eeede2ef69670fd60da5235209874d001a98f1542f31f2a8a94"},
{file = "rpds_py-0.29.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:bb78b3a0d31ac1bde132c67015a809948db751cb4e92cdb3f0b242e430b6ed0d"},
{file = "rpds_py-0.29.0-cp314-cp314-win32.whl", hash = "sha256:f475f103488312e9bd4000bc890a95955a07b2d0b6e8884aef4be56132adbbf1"},
{file = "rpds_py-0.29.0-cp314-cp314-win_amd64.whl", hash = "sha256:b9cf2359a4fca87cfb6801fae83a76aedf66ee1254a7a151f1341632acf67f1b"},
{file = "rpds_py-0.29.0-cp314-cp314-win_arm64.whl", hash = "sha256:9ba8028597e824854f0f1733d8b964e914ae3003b22a10c2c664cb6927e0feb9"},
{file = "rpds_py-0.29.0-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:e71136fd0612556b35c575dc2726ae04a1669e6a6c378f2240312cf5d1a2ab10"},
{file = "rpds_py-0.29.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:76fe96632d53f3bf0ea31ede2f53bbe3540cc2736d4aec3b3801b0458499ef3a"},
{file = "rpds_py-0.29.0-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9459a33f077130dbb2c7c3cea72ee9932271fb3126404ba2a2661e4fe9eb7b79"},
{file = "rpds_py-0.29.0-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5c9546cfdd5d45e562cc0444b6dddc191e625c62e866bf567a2c69487c7ad28a"},
{file = "rpds_py-0.29.0-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:12597d11d97b8f7e376c88929a6e17acb980e234547c92992f9f7c058f1a7310"},
{file = "rpds_py-0.29.0-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28de03cf48b8a9e6ec10318f2197b83946ed91e2891f651a109611be4106ac4b"},
{file = "rpds_py-0.29.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fd7951c964069039acc9d67a8ff1f0a7f34845ae180ca542b17dc1456b1f1808"},
{file = "rpds_py-0.29.0-cp314-cp314t-manylinux_2_31_riscv64.whl", hash = "sha256:c07d107b7316088f1ac0177a7661ca0c6670d443f6fe72e836069025e6266761"},
{file = "rpds_py-0.29.0-cp314-cp314t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1de2345af363d25696969befc0c1688a6cb5e8b1d32b515ef84fc245c6cddba3"},
{file = "rpds_py-0.29.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:00e56b12d2199ca96068057e1ae7f9998ab6e99cda82431afafd32f3ec98cca9"},
{file = "rpds_py-0.29.0-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:3919a3bbecee589300ed25000b6944174e07cd20db70552159207b3f4bbb45b8"},
{file = "rpds_py-0.29.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:e7fa2ccc312bbd91e43aa5e0869e46bc03278a3dddb8d58833150a18b0f0283a"},
{file = "rpds_py-0.29.0-cp314-cp314t-win32.whl", hash = "sha256:97c817863ffc397f1e6a6e9d2d89fe5408c0a9922dac0329672fb0f35c867ea5"},
{file = "rpds_py-0.29.0-cp314-cp314t-win_amd64.whl", hash = "sha256:2023473f444752f0f82a58dfcbee040d0a1b3d1b3c2ec40e884bd25db6d117d2"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:acd82a9e39082dc5f4492d15a6b6c8599aa21db5c35aaf7d6889aea16502c07d"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:715b67eac317bf1c7657508170a3e011a1ea6ccb1c9d5f296e20ba14196be6b3"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3b1b87a237cb2dba4db18bcfaaa44ba4cd5936b91121b62292ff21df577fc43"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:1c3c3e8101bb06e337c88eb0c0ede3187131f19d97d43ea0e1c5407ea74c0cbf"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2b8e54d6e61f3ecd3abe032065ce83ea63417a24f437e4a3d73d2f85ce7b7cfe"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3fbd4e9aebf110473a420dea85a238b254cf8a15acb04b22a5a6b5ce8925b760"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:80fdf53d36e6c72819993e35d1ebeeb8e8fc688d0c6c2b391b55e335b3afba5a"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-manylinux_2_31_riscv64.whl", hash = "sha256:ea7173df5d86f625f8dde6d5929629ad811ed8decda3b60ae603903839ac9ac0"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:76054d540061eda273274f3d13a21a4abdde90e13eaefdc205db37c05230efce"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:9f84c549746a5be3bc7415830747a3a0312573afc9f95785eb35228bb17742ec"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-musllinux_1_2_i686.whl", hash = "sha256:0ea962671af5cb9a260489e311fa22b2e97103e3f9f0caaea6f81390af96a9ed"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:f7728653900035fb7b8d06e1e5900545d8088efc9d5d4545782da7df03ec803f"},
{file = "rpds_py-0.29.0.tar.gz", hash = "sha256:fe55fe686908f50154d1dc599232016e50c243b438c3b7432f24e2895b0e5359"},
{file = "rpds_py-0.30.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:679ae98e00c0e8d68a7fda324e16b90fd5260945b45d3b824c892cec9eea3288"},
{file = "rpds_py-0.30.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:4cc2206b76b4f576934f0ed374b10d7ca5f457858b157ca52064bdfc26b9fc00"},
{file = "rpds_py-0.30.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:389a2d49eded1896c3d48b0136ead37c48e221b391c052fba3f4055c367f60a6"},
{file = "rpds_py-0.30.0-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:32c8528634e1bf7121f3de08fa85b138f4e0dc47657866630611b03967f041d7"},
{file = "rpds_py-0.30.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f207f69853edd6f6700b86efb84999651baf3789e78a466431df1331608e5324"},
{file = "rpds_py-0.30.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:67b02ec25ba7a9e8fa74c63b6ca44cf5707f2fbfadae3ee8e7494297d56aa9df"},
{file = "rpds_py-0.30.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0c0e95f6819a19965ff420f65578bacb0b00f251fefe2c8b23347c37174271f3"},
{file = "rpds_py-0.30.0-cp310-cp310-manylinux_2_31_riscv64.whl", hash = "sha256:a452763cc5198f2f98898eb98f7569649fe5da666c2dc6b5ddb10fde5a574221"},
{file = "rpds_py-0.30.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e0b65193a413ccc930671c55153a03ee57cecb49e6227204b04fae512eb657a7"},
{file = "rpds_py-0.30.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:858738e9c32147f78b3ac24dc0edb6610000e56dc0f700fd5f651d0a0f0eb9ff"},
{file = "rpds_py-0.30.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:da279aa314f00acbb803da1e76fa18666778e8a8f83484fba94526da5de2cba7"},
{file = "rpds_py-0.30.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:7c64d38fb49b6cdeda16ab49e35fe0da2e1e9b34bc38bd78386530f218b37139"},
{file = "rpds_py-0.30.0-cp310-cp310-win32.whl", hash = "sha256:6de2a32a1665b93233cde140ff8b3467bdb9e2af2b91079f0333a0974d12d464"},
{file = "rpds_py-0.30.0-cp310-cp310-win_amd64.whl", hash = "sha256:1726859cd0de969f88dc8673bdd954185b9104e05806be64bcd87badbe313169"},
{file = "rpds_py-0.30.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:a2bffea6a4ca9f01b3f8e548302470306689684e61602aa3d141e34da06cf425"},
{file = "rpds_py-0.30.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:dc4f992dfe1e2bc3ebc7444f6c7051b4bc13cd8e33e43511e8ffd13bf407010d"},
{file = "rpds_py-0.30.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:422c3cb9856d80b09d30d2eb255d0754b23e090034e1deb4083f8004bd0761e4"},
{file = "rpds_py-0.30.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:07ae8a593e1c3c6b82ca3292efbe73c30b61332fd612e05abee07c79359f292f"},
{file = "rpds_py-0.30.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:12f90dd7557b6bd57f40abe7747e81e0c0b119bef015ea7726e69fe550e394a4"},
{file = "rpds_py-0.30.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:99b47d6ad9a6da00bec6aabe5a6279ecd3c06a329d4aa4771034a21e335c3a97"},
{file = "rpds_py-0.30.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:33f559f3104504506a44bb666b93a33f5d33133765b0c216a5bf2f1e1503af89"},
{file = "rpds_py-0.30.0-cp311-cp311-manylinux_2_31_riscv64.whl", hash = "sha256:946fe926af6e44f3697abbc305ea168c2c31d3e3ef1058cf68f379bf0335a78d"},
{file = "rpds_py-0.30.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:495aeca4b93d465efde585977365187149e75383ad2684f81519f504f5c13038"},
{file = "rpds_py-0.30.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d9a0ca5da0386dee0655b4ccdf46119df60e0f10da268d04fe7cc87886872ba7"},
{file = "rpds_py-0.30.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:8d6d1cc13664ec13c1b84241204ff3b12f9bb82464b8ad6e7a5d3486975c2eed"},
{file = "rpds_py-0.30.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:3896fa1be39912cf0757753826bc8bdc8ca331a28a7c4ae46b7a21280b06bb85"},
{file = "rpds_py-0.30.0-cp311-cp311-win32.whl", hash = "sha256:55f66022632205940f1827effeff17c4fa7ae1953d2b74a8581baaefb7d16f8c"},
{file = "rpds_py-0.30.0-cp311-cp311-win_amd64.whl", hash = "sha256:a51033ff701fca756439d641c0ad09a41d9242fa69121c7d8769604a0a629825"},
{file = "rpds_py-0.30.0-cp311-cp311-win_arm64.whl", hash = "sha256:47b0ef6231c58f506ef0b74d44e330405caa8428e770fec25329ed2cb971a229"},
{file = "rpds_py-0.30.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:a161f20d9a43006833cd7068375a94d035714d73a172b681d8881820600abfad"},
{file = "rpds_py-0.30.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6abc8880d9d036ecaafe709079969f56e876fcf107f7a8e9920ba6d5a3878d05"},
{file = "rpds_py-0.30.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ca28829ae5f5d569bb62a79512c842a03a12576375d5ece7d2cadf8abe96ec28"},
{file = "rpds_py-0.30.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a1010ed9524c73b94d15919ca4d41d8780980e1765babf85f9a2f90d247153dd"},
{file = "rpds_py-0.30.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f8d1736cfb49381ba528cd5baa46f82fdc65c06e843dab24dd70b63d09121b3f"},
{file = "rpds_py-0.30.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d948b135c4693daff7bc2dcfc4ec57237a29bd37e60c2fabf5aff2bbacf3e2f1"},
{file = "rpds_py-0.30.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:47f236970bccb2233267d89173d3ad2703cd36a0e2a6e92d0560d333871a3d23"},
{file = "rpds_py-0.30.0-cp312-cp312-manylinux_2_31_riscv64.whl", hash = "sha256:2e6ecb5a5bcacf59c3f912155044479af1d0b6681280048b338b28e364aca1f6"},
{file = "rpds_py-0.30.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:a8fa71a2e078c527c3e9dc9fc5a98c9db40bcc8a92b4e8858e36d329f8684b51"},
{file = "rpds_py-0.30.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:73c67f2db7bc334e518d097c6d1e6fed021bbc9b7d678d6cc433478365d1d5f5"},
{file = "rpds_py-0.30.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:5ba103fb455be00f3b1c2076c9d4264bfcb037c976167a6047ed82f23153f02e"},
{file = "rpds_py-0.30.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:7cee9c752c0364588353e627da8a7e808a66873672bcb5f52890c33fd965b394"},
{file = "rpds_py-0.30.0-cp312-cp312-win32.whl", hash = "sha256:1ab5b83dbcf55acc8b08fc62b796ef672c457b17dbd7820a11d6c52c06839bdf"},
{file = "rpds_py-0.30.0-cp312-cp312-win_amd64.whl", hash = "sha256:a090322ca841abd453d43456ac34db46e8b05fd9b3b4ac0c78bcde8b089f959b"},
{file = "rpds_py-0.30.0-cp312-cp312-win_arm64.whl", hash = "sha256:669b1805bd639dd2989b281be2cfd951c6121b65e729d9b843e9639ef1fd555e"},
{file = "rpds_py-0.30.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:f83424d738204d9770830d35290ff3273fbb02b41f919870479fab14b9d303b2"},
{file = "rpds_py-0.30.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:e7536cd91353c5273434b4e003cbda89034d67e7710eab8761fd918ec6c69cf8"},
{file = "rpds_py-0.30.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2771c6c15973347f50fece41fc447c054b7ac2ae0502388ce3b6738cd366e3d4"},
{file = "rpds_py-0.30.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:0a59119fc6e3f460315fe9d08149f8102aa322299deaa5cab5b40092345c2136"},
{file = "rpds_py-0.30.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:76fec018282b4ead0364022e3c54b60bf368b9d926877957a8624b58419169b7"},
{file = "rpds_py-0.30.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:692bef75a5525db97318e8cd061542b5a79812d711ea03dbc1f6f8dbb0c5f0d2"},
{file = "rpds_py-0.30.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9027da1ce107104c50c81383cae773ef5c24d296dd11c99e2629dbd7967a20c6"},
{file = "rpds_py-0.30.0-cp313-cp313-manylinux_2_31_riscv64.whl", hash = "sha256:9cf69cdda1f5968a30a359aba2f7f9aa648a9ce4b580d6826437f2b291cfc86e"},
{file = "rpds_py-0.30.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:a4796a717bf12b9da9d3ad002519a86063dcac8988b030e405704ef7d74d2d9d"},
{file = "rpds_py-0.30.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:5d4c2aa7c50ad4728a094ebd5eb46c452e9cb7edbfdb18f9e1221f597a73e1e7"},
{file = "rpds_py-0.30.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:ba81a9203d07805435eb06f536d95a266c21e5b2dfbf6517748ca40c98d19e31"},
{file = "rpds_py-0.30.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:945dccface01af02675628334f7cf49c2af4c1c904748efc5cf7bbdf0b579f95"},
{file = "rpds_py-0.30.0-cp313-cp313-win32.whl", hash = "sha256:b40fb160a2db369a194cb27943582b38f79fc4887291417685f3ad693c5a1d5d"},
{file = "rpds_py-0.30.0-cp313-cp313-win_amd64.whl", hash = "sha256:806f36b1b605e2d6a72716f321f20036b9489d29c51c91f4dd29a3e3afb73b15"},
{file = "rpds_py-0.30.0-cp313-cp313-win_arm64.whl", hash = "sha256:d96c2086587c7c30d44f31f42eae4eac89b60dabbac18c7669be3700f13c3ce1"},
{file = "rpds_py-0.30.0-cp313-cp313t-macosx_10_12_x86_64.whl", hash = "sha256:eb0b93f2e5c2189ee831ee43f156ed34e2a89a78a66b98cadad955972548be5a"},
{file = "rpds_py-0.30.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:922e10f31f303c7c920da8981051ff6d8c1a56207dbdf330d9047f6d30b70e5e"},
{file = "rpds_py-0.30.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cdc62c8286ba9bf7f47befdcea13ea0e26bf294bda99758fd90535cbaf408000"},
{file = "rpds_py-0.30.0-cp313-cp313t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:47f9a91efc418b54fb8190a6b4aa7813a23fb79c51f4bb84e418f5476c38b8db"},
{file = "rpds_py-0.30.0-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1f3587eb9b17f3789ad50824084fa6f81921bbf9a795826570bda82cb3ed91f2"},
{file = "rpds_py-0.30.0-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:39c02563fc592411c2c61d26b6c5fe1e51eaa44a75aa2c8735ca88b0d9599daa"},
{file = "rpds_py-0.30.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:51a1234d8febafdfd33a42d97da7a43f5dcb120c1060e352a3fbc0c6d36e2083"},
{file = "rpds_py-0.30.0-cp313-cp313t-manylinux_2_31_riscv64.whl", hash = "sha256:eb2c4071ab598733724c08221091e8d80e89064cd472819285a9ab0f24bcedb9"},
{file = "rpds_py-0.30.0-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6bdfdb946967d816e6adf9a3d8201bfad269c67efe6cefd7093ef959683c8de0"},
{file = "rpds_py-0.30.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:c77afbd5f5250bf27bf516c7c4a016813eb2d3e116139aed0096940c5982da94"},
{file = "rpds_py-0.30.0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:61046904275472a76c8c90c9ccee9013d70a6d0f73eecefd38c1ae7c39045a08"},
{file = "rpds_py-0.30.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:4c5f36a861bc4b7da6516dbdf302c55313afa09b81931e8280361a4f6c9a2d27"},
{file = "rpds_py-0.30.0-cp313-cp313t-win32.whl", hash = "sha256:3d4a69de7a3e50ffc214ae16d79d8fbb0922972da0356dcf4d0fdca2878559c6"},
{file = "rpds_py-0.30.0-cp313-cp313t-win_amd64.whl", hash = "sha256:f14fc5df50a716f7ece6a80b6c78bb35ea2ca47c499e422aa4463455dd96d56d"},
{file = "rpds_py-0.30.0-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:68f19c879420aa08f61203801423f6cd5ac5f0ac4ac82a2368a9fcd6a9a075e0"},
{file = "rpds_py-0.30.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:ec7c4490c672c1a0389d319b3a9cfcd098dcdc4783991553c332a15acf7249be"},
{file = "rpds_py-0.30.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f251c812357a3fed308d684a5079ddfb9d933860fc6de89f2b7ab00da481e65f"},
{file = "rpds_py-0.30.0-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ac98b175585ecf4c0348fd7b29c3864bda53b805c773cbf7bfdaffc8070c976f"},
{file = "rpds_py-0.30.0-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3e62880792319dbeb7eb866547f2e35973289e7d5696c6e295476448f5b63c87"},
{file = "rpds_py-0.30.0-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4e7fc54e0900ab35d041b0601431b0a0eb495f0851a0639b6ef90f7741b39a18"},
{file = "rpds_py-0.30.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:47e77dc9822d3ad616c3d5759ea5631a75e5809d5a28707744ef79d7a1bcfcad"},
{file = "rpds_py-0.30.0-cp314-cp314-manylinux_2_31_riscv64.whl", hash = "sha256:b4dc1a6ff022ff85ecafef7979a2c6eb423430e05f1165d6688234e62ba99a07"},
{file = "rpds_py-0.30.0-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:4559c972db3a360808309e06a74628b95eaccbf961c335c8fe0d590cf587456f"},
{file = "rpds_py-0.30.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:0ed177ed9bded28f8deb6ab40c183cd1192aa0de40c12f38be4d59cd33cb5c65"},
{file = "rpds_py-0.30.0-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:ad1fa8db769b76ea911cb4e10f049d80bf518c104f15b3edb2371cc65375c46f"},
{file = "rpds_py-0.30.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:46e83c697b1f1c72b50e5ee5adb4353eef7406fb3f2043d64c33f20ad1c2fc53"},
{file = "rpds_py-0.30.0-cp314-cp314-win32.whl", hash = "sha256:ee454b2a007d57363c2dfd5b6ca4a5d7e2c518938f8ed3b706e37e5d470801ed"},
{file = "rpds_py-0.30.0-cp314-cp314-win_amd64.whl", hash = "sha256:95f0802447ac2d10bcc69f6dc28fe95fdf17940367b21d34e34c737870758950"},
{file = "rpds_py-0.30.0-cp314-cp314-win_arm64.whl", hash = "sha256:613aa4771c99f03346e54c3f038e4cc574ac09a3ddfb0e8878487335e96dead6"},
{file = "rpds_py-0.30.0-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:7e6ecfcb62edfd632e56983964e6884851786443739dbfe3582947e87274f7cb"},
{file = "rpds_py-0.30.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:a1d0bc22a7cdc173fedebb73ef81e07faef93692b8c1ad3733b67e31e1b6e1b8"},
{file = "rpds_py-0.30.0-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0d08f00679177226c4cb8c5265012eea897c8ca3b93f429e546600c971bcbae7"},
{file = "rpds_py-0.30.0-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5965af57d5848192c13534f90f9dd16464f3c37aaf166cc1da1cae1fd5a34898"},
{file = "rpds_py-0.30.0-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9a4e86e34e9ab6b667c27f3211ca48f73dba7cd3d90f8d5b11be56e5dbc3fb4e"},
{file = "rpds_py-0.30.0-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e5d3e6b26f2c785d65cc25ef1e5267ccbe1b069c5c21b8cc724efee290554419"},
{file = "rpds_py-0.30.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:626a7433c34566535b6e56a1b39a7b17ba961e97ce3b80ec62e6f1312c025551"},
{file = "rpds_py-0.30.0-cp314-cp314t-manylinux_2_31_riscv64.whl", hash = "sha256:acd7eb3f4471577b9b5a41baf02a978e8bdeb08b4b355273994f8b87032000a8"},
{file = "rpds_py-0.30.0-cp314-cp314t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:fe5fa731a1fa8a0a56b0977413f8cacac1768dad38d16b3a296712709476fbd5"},
{file = "rpds_py-0.30.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:74a3243a411126362712ee1524dfc90c650a503502f135d54d1b352bd01f2404"},
{file = "rpds_py-0.30.0-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:3e8eeb0544f2eb0d2581774be4c3410356eba189529a6b3e36bbbf9696175856"},
{file = "rpds_py-0.30.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:dbd936cde57abfee19ab3213cf9c26be06d60750e60a8e4dd85d1ab12c8b1f40"},
{file = "rpds_py-0.30.0-cp314-cp314t-win32.whl", hash = "sha256:dc824125c72246d924f7f796b4f63c1e9dc810c7d9e2355864b3c3a73d59ade0"},
{file = "rpds_py-0.30.0-cp314-cp314t-win_amd64.whl", hash = "sha256:27f4b0e92de5bfbc6f86e43959e6edd1425c33b5e69aab0984a72047f2bcf1e3"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:c2262bdba0ad4fc6fb5545660673925c2d2a5d9e2e0fb603aad545427be0fc58"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:ee6af14263f25eedc3bb918a3c04245106a42dfd4f5c2285ea6f997b1fc3f89a"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3adbb8179ce342d235c31ab8ec511e66c73faa27a47e076ccc92421add53e2bb"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:250fa00e9543ac9b97ac258bd37367ff5256666122c2d0f2bc97577c60a1818c"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9854cf4f488b3d57b9aaeb105f06d78e5529d3145b1e4a41750167e8c213c6d3"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:993914b8e560023bc0a8bf742c5f303551992dcb85e247b1e5c7f4a7d145bda5"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:58edca431fb9b29950807e301826586e5bbf24163677732429770a697ffe6738"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-manylinux_2_31_riscv64.whl", hash = "sha256:dea5b552272a944763b34394d04577cf0f9bd013207bc32323b5a89a53cf9c2f"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ba3af48635eb83d03f6c9735dfb21785303e73d22ad03d489e88adae6eab8877"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:dff13836529b921e22f15cb099751209a60009731a68519630a24d61f0b1b30a"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-musllinux_1_2_i686.whl", hash = "sha256:1b151685b23929ab7beec71080a8889d4d6d9fa9a983d213f07121205d48e2c4"},
{file = "rpds_py-0.30.0-pp311-pypy311_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:ac37f9f516c51e5753f27dfdef11a88330f04de2d564be3991384b2f3535d02e"},
{file = "rpds_py-0.30.0.tar.gz", hash = "sha256:dd8ff7cf90014af0c0f787eea34794ebf6415242ee1d6fa91eaba725cc441e84"},
]
[[package]]
@@ -2723,6 +2723,22 @@ files = [
{file = "sortedcontainers-2.4.0.tar.gz", hash = "sha256:25caa5a06cc30b6b83d11423433f65d1f9d76c4c6a0c90e3379eaa43b9bfdb88"},
]
[[package]]
name = "sqlglot"
version = "28.0.0"
description = "An easily customizable SQL parser and transpiler"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "sqlglot-28.0.0-py3-none-any.whl", hash = "sha256:ac1778e7fa4812f4f7e5881b260632fc167b00ca4c1226868891fb15467122e4"},
{file = "sqlglot-28.0.0.tar.gz", hash = "sha256:cc9a651ef4182e61dac58aa955e5fb21845a5865c6a4d7d7b5a7857450285ad4"},
]
[package.extras]
dev = ["duckdb (>=0.6)", "maturin (>=1.4,<2.0)", "mypy", "pandas", "pandas-stubs", "pdoc", "pre-commit", "pyperf", "python-dateutil", "pytz", "ruff (==0.7.2)", "types-python-dateutil", "types-pytz", "typing_extensions"]
rs = ["sqlglotrs (==0.7.3)"]
[[package]]
name = "systemd-python"
version = "235"
@@ -3346,4 +3362,4 @@ url-preview = ["lxml"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10.0,<4.0.0"
content-hash = "4f8d98723236eaf3d13f440dce95ec6cc3c4dc49ba3a0e45bf9cfbb51aca899c"
content-hash = "98b9062f48205a3bcc99b43ae665083d360a15d4a208927fa978df9c36fd5315"


@@ -370,6 +370,9 @@ towncrier = ">=18.6.0rc1"
# Used for checking the Poetry lockfile
tomli = ">=1.2.3"
# Used for checking the schema delta files
sqlglot = ">=28.0.0"
[build-system]
# The upper bounds here are defensive, intended to prevent situations like

rust/src/duration.rs Normal file

@@ -0,0 +1,56 @@
/*
* This file is licensed under the Affero General Public License (AGPL) version 3.
*
* Copyright (C) 2025 Element Creations, Ltd
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* See the GNU Affero General Public License for more details:
* <https://www.gnu.org/licenses/agpl-3.0.html>.
*/
use once_cell::sync::OnceCell;
use pyo3::{
types::{IntoPyDict, PyAnyMethods},
Bound, BoundObject, IntoPyObject, Py, PyAny, PyErr, PyResult, Python,
};
/// A reference to the `synapse.util.duration` module.
static DURATION: OnceCell<Py<PyAny>> = OnceCell::new();
/// Access to the `synapse.util.duration` module.
fn duration_module(py: Python<'_>) -> PyResult<&Bound<'_, PyAny>> {
Ok(DURATION
.get_or_try_init(|| py.import("synapse.util.duration").map(Into::into))?
.bind(py))
}
/// Mirrors the `synapse.util.duration.Duration` Python class.
pub struct SynapseDuration {
microseconds: u64,
}
impl SynapseDuration {
/// For now we only need to create durations from milliseconds.
pub fn from_milliseconds(milliseconds: u64) -> Self {
Self {
microseconds: milliseconds * 1_000,
}
}
}
impl<'py> IntoPyObject<'py> for &SynapseDuration {
type Target = PyAny;
type Output = Bound<'py, Self::Target>;
type Error = PyErr;
fn into_pyobject(self, py: Python<'py>) -> Result<Self::Output, Self::Error> {
let duration_module = duration_module(py)?;
let kwargs = [("microseconds", self.microseconds)].into_py_dict(py)?;
let duration_instance = duration_module.call_method("Duration", (), Some(&kwargs))?;
Ok(duration_instance.into_bound())
}
}
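
For orientation, here is a rough Python sketch of the `synapse.util.duration.Duration` class that `SynapseDuration` mirrors. It is inferred only from usages visible in this changeset (the `microseconds=` kwarg, constructors such as `Duration(seconds=...)`, and the `as_millis()`/`as_secs()` methods), not from the actual source:

```
# Sketch only: field names and rounding behaviour are assumptions.
class Duration:
    def __init__(
        self,
        *,
        microseconds: float = 0,
        milliseconds: float = 0,
        seconds: float = 0,
        minutes: float = 0,
        hours: float = 0,
    ) -> None:
        self._microseconds = int(
            microseconds
            + milliseconds * 1_000
            + seconds * 1_000_000
            + minutes * 60_000_000
            + hours * 3_600_000_000
        )

    def as_millis(self) -> int:
        return self._microseconds // 1_000

    def as_secs(self) -> float:
        return self._microseconds / 1_000_000
```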


@@ -5,6 +5,7 @@ use pyo3::prelude::*;
use pyo3_log::ResetHandle;
pub mod acl;
pub mod duration;
pub mod errors;
pub mod events;
pub mod http;


@@ -35,6 +35,7 @@ use ulid::Ulid;
use self::session::Session;
use crate::{
duration::SynapseDuration,
errors::{NotFoundError, SynapseError},
http::{http_request_from_twisted, http_response_to_twisted, HeaderMapPyExt},
UnwrapInfallible,
@@ -132,6 +133,8 @@ impl RendezvousHandler {
.unwrap_infallible()
.unbind();
let eviction_duration = SynapseDuration::from_milliseconds(eviction_interval);
// Construct a Python object so that we can get a reference to the
// evict method and schedule it to run.
let self_ = Py::new(
@@ -149,7 +152,7 @@ impl RendezvousHandler {
let evict = self_.getattr(py, "_evict")?;
homeserver.call_method0("get_clock")?.call_method(
"looping_call",
(evict, eviction_interval),
(evict, &eviction_duration),
None,
)?;


@@ -9,15 +9,11 @@ from typing import Any
import click
import git
import sqlglot
import sqlglot.expressions
SCHEMA_FILE_REGEX = re.compile(r"^synapse/storage/schema/(.*)/delta/(.*)/(.*)$")
INDEX_CREATION_REGEX = re.compile(
r"CREATE .*INDEX .*ON ([a-z_0-9]+)", flags=re.IGNORECASE
)
INDEX_DELETION_REGEX = re.compile(r"DROP .*INDEX ([a-z_0-9]+)", flags=re.IGNORECASE)
TABLE_CREATION_REGEX = re.compile(
r"CREATE .*TABLE.* ([a-z_0-9]+)\s*\(", flags=re.IGNORECASE
)
# The base branch we want to check against. We use the main development branch
# on the assumption that is what we are developing against.
@@ -141,6 +137,9 @@ def main(force_colors: bool) -> None:
color=force_colors,
)
# Mark this run as not successful, but continue so that we report *all*
# errors.
return_code = 1
else:
click.secho(
f"All deltas are in the correct folder: {current_schema_version}!",
@@ -153,60 +152,90 @@ def main(force_colors: bool) -> None:
# and delta files are also numbered in order.
changed_delta_files.sort()
# Now check that we're not trying to create or drop indices. If we want to
# do that they should be in background updates. The exception is when we
# create indices on tables we've just created.
created_tables = set()
for delta_file in changed_delta_files:
with open(delta_file) as fd:
delta_lines = fd.readlines()
for line in delta_lines:
# Strip SQL comments
line = line.split("--", maxsplit=1)[0]
# Check and track any tables we create
match = TABLE_CREATION_REGEX.search(line)
if match:
table_name = match.group(1)
created_tables.add(table_name)
# Check for dropping indices, these are always banned
match = INDEX_DELETION_REGEX.search(line)
if match:
clause = match.group()
click.secho(
f"Found delta with index deletion: '{clause}' in {delta_file}",
fg="red",
bold=True,
color=force_colors,
)
click.secho(
" ↪ These should be in background updates.",
)
return_code = 1
# Check for index creation, which is only allowed for tables we've
# created.
match = INDEX_CREATION_REGEX.search(line)
if match:
clause = match.group()
table_name = match.group(1)
if table_name not in created_tables:
click.secho(
f"Found delta with index creation for existing table: '{clause}' in {delta_file}",
fg="red",
bold=True,
color=force_colors,
)
click.secho(
" ↪ These should be in background updates (or the table should be created in the same delta).",
)
return_code = 1
success = check_schema_delta(changed_delta_files, force_colors)
if not success:
return_code = 1
click.get_current_context().exit(return_code)
def check_schema_delta(delta_files: list[str], force_colors: bool) -> bool:
"""Check that the given schema delta files do not create or drop indices
inappropriately.
Index creation is only allowed on tables created in the same set of deltas.
Index deletion is never allowed and should be done in background updates.
Returns:
True if all checks succeeded, False if at least one failed.
"""
# The tables created in this delta
created_tables = set[str]()
# The indices created in this delta, each a tuple of (table_name, sql)
created_indices = list[tuple[str, str]]()
# The indices dropped in this delta, just the SQL text
dropped_indices = list[str]()
for delta_file in delta_files:
with open(delta_file) as fd:
delta_contents = fd.read()
# Assume the SQL dialect from the file extension, defaulting to Postgres.
sql_lang = "postgres"
if delta_file.endswith(".sqlite"):
sql_lang = "sqlite"
statements = sqlglot.parse(delta_contents, read=sql_lang)
for statement in statements:
if isinstance(statement, sqlglot.expressions.Create):
if statement.kind == "TABLE":
assert isinstance(statement.this, sqlglot.expressions.Schema)
assert isinstance(statement.this.this, sqlglot.expressions.Table)
table_name = statement.this.this.name
created_tables.add(table_name)
elif statement.kind == "INDEX":
assert isinstance(statement.this, sqlglot.expressions.Index)
table_name = statement.this.args["table"].name
created_indices.append((table_name, statement.sql()))
elif isinstance(statement, sqlglot.expressions.Drop):
if statement.kind == "INDEX":
dropped_indices.append(statement.sql())
success = True
for table_name, clause in created_indices:
if table_name not in created_tables:
click.secho(
f"Found delta with index creation for existing table: '{clause}'",
fg="red",
bold=True,
color=force_colors,
)
click.secho(
" ↪ These should be in background updates (or the table should be created in the same delta).",
)
success = False
for clause in dropped_indices:
click.secho(
f"Found delta with index deletion: '{clause}'",
fg="red",
bold=True,
color=force_colors,
)
click.secho(
" ↪ These should be in background updates.",
)
success = False
return success
if __name__ == "__main__":
main()
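
To illustrate the sqlglot-based checking above, a minimal sketch using only the API calls that appear in the script itself:

```
import sqlglot
import sqlglot.expressions

DELTA = """
CREATE TABLE foo (id BIGINT);
CREATE INDEX foo_idx ON foo (id);
DROP INDEX bar_idx;
"""

for statement in sqlglot.parse(DELTA, read="postgres"):
    if isinstance(statement, sqlglot.expressions.Create) and statement.kind == "INDEX":
        # foo_idx targets a table created in the same delta, so it is allowed.
        print("creates index on table:", statement.this.args["table"].name)
    elif isinstance(statement, sqlglot.expressions.Drop) and statement.kind == "INDEX":
        # Dropping bar_idx would be flagged: it belongs in a background update.
        print("drops index:", statement.sql())
```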


@@ -291,6 +291,12 @@ def _prepare() -> None:
synapse_repo.git.add("-u")
subprocess.run("git diff --cached", shell=True)
print(
"Consider any upcoming platform deprecations that should be mentioned in the changelog. (e.g. upcoming Python, PostgreSQL or SQLite deprecations)"
)
print(
"Platform deprecations should be mentioned at least 1 release prior to being unsupported."
)
if click.confirm("Edit changelog?", default=False):
click.edit(filename="CHANGES.md")


@@ -307,6 +307,10 @@ class AccountDataTypes:
MSC4155_INVITE_PERMISSION_CONFIG: Final = (
"org.matrix.msc4155.invite_permission_config"
)
# MSC4380: Invite blocking
MSC4380_INVITE_PERMISSION_CONFIG: Final = (
"org.matrix.msc4380.invite_permission_config"
)
# Synapse-specific behaviour. See "Client-Server API Extensions" documentation
# in Admin API for more information.
SYNAPSE_ADMIN_CLIENT_CONFIG: Final = "io.element.synapse.admin_client_config"


@@ -137,7 +137,7 @@ class Codes(str, Enum):
PROFILE_TOO_LARGE = "M_PROFILE_TOO_LARGE"
KEY_TOO_LARGE = "M_KEY_TOO_LARGE"
# Part of MSC4155
# Part of MSC4155/MSC4380
INVITE_BLOCKED = "ORG.MATRIX.MSC4155.M_INVITE_BLOCKED"
# Part of MSC4190


@@ -27,6 +27,7 @@ from synapse.config.ratelimiting import RatelimitSettings
from synapse.storage.databases.main import DataStore
from synapse.types import Requester
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.wheel_timer import WheelTimer
if TYPE_CHECKING:
@@ -100,7 +101,7 @@ class Ratelimiter:
# and doesn't affect correctness.
self._timer: WheelTimer[Hashable] = WheelTimer()
self.clock.looping_call(self._prune_message_counts, 15 * 1000)
self.clock.looping_call(self._prune_message_counts, Duration(seconds=15))
def _get_key(self, requester: Requester | None, key: Hashable | None) -> Hashable:
"""Use the requester's MXID as a fallback key if no key is provided."""


@@ -218,13 +218,13 @@ def start_phone_stats_home(hs: "HomeServer") -> None:
# table will decrease
clock.looping_call(
hs.get_datastores().main.generate_user_daily_visits,
Duration(minutes=5).as_millis(),
Duration(minutes=5),
)
# monthly active user limiting functionality
clock.looping_call(
hs.get_datastores().main.reap_monthly_active_users,
Duration(hours=1).as_millis(),
Duration(hours=1),
)
hs.get_datastores().main.reap_monthly_active_users()
@@ -263,14 +263,14 @@ def start_phone_stats_home(hs: "HomeServer") -> None:
if hs.config.server.limit_usage_by_mau or hs.config.server.mau_stats_only:
generate_monthly_active_users()
clock.looping_call(generate_monthly_active_users, 5 * 60 * 1000)
clock.looping_call(generate_monthly_active_users, Duration(minutes=5))
# End of monthly active user settings
if hs.config.metrics.report_stats:
logger.info("Scheduling stats reporting for 3 hour intervals")
clock.looping_call(
phone_stats_home,
PHONE_HOME_INTERVAL.as_millis(),
PHONE_HOME_INTERVAL,
hs,
stats,
)
@@ -278,14 +278,14 @@ def start_phone_stats_home(hs: "HomeServer") -> None:
# We need to defer this init for the cases that we daemonize
# otherwise the process ID we get is that of the non-daemon process
clock.call_later(
0,
Duration(seconds=0),
performance_stats_init,
)
# We wait 5 minutes to send the first set of stats as the server can
# be quite busy the first few minutes
clock.call_later(
INITIAL_DELAY_BEFORE_FIRST_PHONE_HOME.as_secs(),
INITIAL_DELAY_BEFORE_FIRST_PHONE_HOME,
phone_stats_home,
hs,
stats,


@@ -77,6 +77,7 @@ from synapse.logging.context import run_in_background
from synapse.storage.databases.main import DataStore
from synapse.types import DeviceListUpdates, JsonMapping
from synapse.util.clock import Clock, DelayedCallWrapper
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -504,7 +505,7 @@ class _Recoverer:
self.scheduled_recovery: DelayedCallWrapper | None = None
def recover(self) -> None:
delay = 2**self.backoff_counter
delay = Duration(seconds=2**self.backoff_counter)
logger.info("Scheduling retries on %s in %fs", self.service.id, delay)
self.scheduled_recovery = self.clock.call_later(
delay,


@@ -672,7 +672,8 @@ class RootConfig:
action="append",
metavar="CONFIG_FILE",
help="Specify config file. Can be given multiple times and"
" may specify directories containing *.yaml files.",
" may specify directories containing *.yaml files."
" Top-level keys in later files overwrite ones in earlier files.",
)
parser.add_argument(
"--no-secrets-in-config",


@@ -596,3 +596,6 @@ class ExperimentalConfig(Config):
# MSC4306: Thread Subscriptions
# (and MSC4308: Thread Subscriptions extension to Sliding Sync)
self.msc4306_enabled: bool = experimental.get("msc4306_enabled", False)
# MSC4380: Invite blocking
self.msc4380_enabled: bool = experimental.get("msc4380_enabled", False)


@@ -75,6 +75,7 @@ from synapse.types import JsonDict, StrCollection, UserID, get_domain_from_id
from synapse.types.handlers.policy_server import RECOMMENDATION_OK, RECOMMENDATION_SPAM
from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.duration import Duration
from synapse.util.retryutils import NotRetryingDestination
if TYPE_CHECKING:
@@ -132,7 +133,7 @@ class FederationClient(FederationBase):
super().__init__(hs)
self.pdu_destination_tried: dict[str, dict[str, int]] = {}
self._clock.looping_call(self._clear_tried_cache, 60 * 1000)
self._clock.looping_call(self._clear_tried_cache, Duration(minutes=1))
self.state = hs.get_state_handler()
self.transport_layer = hs.get_federation_transport_client()


@@ -89,6 +89,7 @@ from synapse.types import JsonDict, StateMap, UserID, get_domain_from_id
from synapse.util import unwrapFirstError
from synapse.util.async_helpers import Linearizer, concurrently_execute, gather_results
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.duration import Duration
from synapse.util.stringutils import parse_server_name
if TYPE_CHECKING:
@@ -226,7 +227,7 @@ class FederationServer(FederationBase):
)
# We pause a bit so that we don't start handling all rooms at once.
await self._clock.sleep(random.uniform(0, 0.1))
await self._clock.sleep(Duration(seconds=random.uniform(0, 0.1)))
async def on_backfill_request(
self, origin: str, room_id: str, versions: list[str], limit: int
@@ -301,7 +302,9 @@ class FederationServer(FederationBase):
# Start a periodic check for old staged events. This is to handle
# the case where locks time out, e.g. if another process gets killed
# without dropping its locks.
self._clock.looping_call(self._handle_old_staged_events, 60 * 1000)
self._clock.looping_call(
self._handle_old_staged_events, Duration(minutes=1)
)
# keep this as early as possible to make the calculated origin ts as
# accurate as possible.

View File

@@ -53,6 +53,7 @@ from synapse.federation.sender import AbstractFederationSender, FederationSender
from synapse.metrics import SERVER_NAME_LABEL, LaterGauge
from synapse.replication.tcp.streams.federation import FederationStream
from synapse.types import JsonDict, ReadReceipt, RoomStreamToken, StrCollection
from synapse.util.duration import Duration
from synapse.util.metrics import Measure
from .units import Edu
@@ -137,7 +138,7 @@ class FederationRemoteSendQueue(AbstractFederationSender):
assert isinstance(queue, Sized)
register(queue_name, queue=queue)
self.clock.looping_call(self._clear_queue, 30 * 1000)
self.clock.looping_call(self._clear_queue, Duration(seconds=30))
def shutdown(self) -> None:
"""Stops this federation sender instance from sending further transactions."""

View File

@@ -174,6 +174,7 @@ from synapse.types import (
get_domain_from_id,
)
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.metrics import Measure
from synapse.util.retryutils import filter_destinations_by_retry_limiter
@@ -218,12 +219,12 @@ transaction_queue_pending_edus_gauge = LaterGauge(
# Please note that rate limiting still applies, so while the loop is
# executed every X seconds the destinations may not be woken up because
# they are being rate limited following previous attempt failures.
WAKEUP_RETRY_PERIOD_SEC = 60
WAKEUP_RETRY_PERIOD = Duration(minutes=1)
# Time (in s) to wait in between waking up each destination, i.e. one destination
# Time to wait in between waking up each destination, i.e. one destination
# will be woken up every <x> seconds until we have woken every destination
# that has outstanding catch-up.
WAKEUP_INTERVAL_BETWEEN_DESTINATIONS_SEC = 5
WAKEUP_INTERVAL_BETWEEN_DESTINATIONS = Duration(seconds=5)
class AbstractFederationSender(metaclass=abc.ABCMeta):
@@ -379,7 +380,7 @@ class _DestinationWakeupQueue:
queue.attempt_new_transaction()
await self.clock.sleep(current_sleep_seconds)
await self.clock.sleep(Duration(seconds=current_sleep_seconds))
if not self.queue:
break
@@ -468,7 +469,7 @@ class FederationSender(AbstractFederationSender):
# Regularly wake up destinations that have outstanding PDUs to be caught up
self.clock.looping_call_now(
self.hs.run_as_background_process,
WAKEUP_RETRY_PERIOD_SEC * 1000.0,
WAKEUP_RETRY_PERIOD,
"wake_destinations_needing_catchup",
self._wake_destinations_needing_catchup,
)
@@ -1161,4 +1162,4 @@ class FederationSender(AbstractFederationSender):
last_processed,
)
self.wake_destination(destination)
await self.clock.sleep(WAKEUP_INTERVAL_BETWEEN_DESTINATIONS_SEC)
await self.clock.sleep(WAKEUP_INTERVAL_BETWEEN_DESTINATIONS)
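
Taken at face value, the two constants bound how many destinations a single pass can wake before the next pass is due — roughly twelve, ignoring rate limiting and the cost of each wakeup:

```python
WAKEUP_RETRY_PERIOD = Duration(minutes=1)
WAKEUP_INTERVAL_BETWEEN_DESTINATIONS = Duration(seconds=5)

# Destinations woken per pass before the next pass is scheduled:
per_pass = (
    WAKEUP_RETRY_PERIOD.as_secs() / WAKEUP_INTERVAL_BETWEEN_DESTINATIONS.as_secs()
)
print(per_pass)  # 12.0
```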

View File

@@ -28,6 +28,7 @@ from synapse.metrics.background_process_metrics import wrap_as_background_proces
from synapse.types import UserID
from synapse.util import stringutils
from synapse.util.async_helpers import delay_cancellation
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -73,7 +74,7 @@ class AccountValidityHandler:
# Check the renewal emails to send and send them every 30min.
if hs.config.worker.run_background_tasks:
self.clock.looping_call(self._send_renewal_emails, 30 * 60 * 1000)
self.clock.looping_call(self._send_renewal_emails, Duration(minutes=30))
async def is_user_expired(self, user_id: str) -> bool:
"""Checks if a user has expired against third-party modules.

View File

@@ -74,6 +74,7 @@ from synapse.storage.databases.main.registration import (
from synapse.types import JsonDict, Requester, StrCollection, UserID
from synapse.util import stringutils as stringutils
from synapse.util.async_helpers import delay_cancellation, maybe_awaitable
from synapse.util.duration import Duration
from synapse.util.msisdn import phone_number_to_msisdn
from synapse.util.stringutils import base62_encode
from synapse.util.threepids import canonicalise_email
@@ -242,7 +243,7 @@ class AuthHandler:
if hs.config.worker.run_background_tasks:
self._clock.looping_call(
run_as_background_process,
5 * 60 * 1000,
Duration(minutes=5),
"expire_old_sessions",
self.server_name,
self._expire_old_sessions,

View File

@@ -42,6 +42,7 @@ from synapse.types import (
UserID,
create_requester,
)
from synapse.util.duration import Duration
from synapse.util.events import generate_fake_event_id
from synapse.util.metrics import Measure
from synapse.util.sentinel import Sentinel
@@ -92,20 +93,22 @@ class DelayedEventsHandler:
# Kick off again (without blocking) to catch any missed notifications
# that may have fired before the callback was added.
self._clock.call_later(
0,
Duration(seconds=0),
self.notify_new_event,
)
# Delayed events that are already marked as processed on startup might not have been
# sent properly on the last run of the server, so unmark them to send them again.
# Now process any delayed events that are due to be sent.
#
# We set `reprocess_events` to True in case any events had been
# marked as processed, but had not yet actually been sent,
# before the homeserver stopped.
#
# Caveat: this will double-send delayed events that successfully persisted, but failed
# to be removed from the DB table of delayed events.
# TODO: To avoid double-sending, scan the timeline to find which of these events were
# already sent. To do so, must store delay_ids in sent events to retrieve them later.
await self._store.unprocess_delayed_events()
events, next_send_ts = await self._store.process_timeout_delayed_events(
self._get_current_ts()
self._get_current_ts(), reprocess_events=True
)
if next_send_ts:
@@ -423,18 +426,23 @@ class DelayedEventsHandler:
Raises:
NotFoundError: if no matching delayed event could be found.
"""
assert self._is_master
await self._delayed_event_mgmt_ratelimiter.ratelimit(
None, request.getClientAddress().host
)
await make_deferred_yieldable(self._initialized_from_db)
# Note: We don't need to wait on `self._initialized_from_db` here, as the
# events it deals with are already marked as processed.
#
# `restart_delayed_events` will skip over such events entirely.
next_send_ts = await self._store.restart_delayed_event(
delay_id, self._get_current_ts()
)
if self._next_send_ts_changed(next_send_ts):
self._schedule_next_at(next_send_ts)
# Only the main process handles sending delayed events.
if self._is_master:
if self._next_send_ts_changed(next_send_ts):
self._schedule_next_at(next_send_ts)
async def send(self, request: SynapseRequest, delay_id: str) -> None:
"""
@@ -501,17 +509,17 @@ class DelayedEventsHandler:
def _schedule_next_at(self, next_send_ts: Timestamp) -> None:
delay = next_send_ts - self._get_current_ts()
delay_sec = delay / 1000 if delay > 0 else 0
delay_duration = Duration(milliseconds=max(delay, 0))
if self._next_delayed_event_call is None:
self._next_delayed_event_call = self._clock.call_later(
delay_sec,
delay_duration,
self.hs.run_as_background_process,
"_send_on_timeout",
self._send_on_timeout,
)
else:
self._next_delayed_event_call.reset(delay_sec)
self._next_delayed_event_call.reset(delay_duration.as_secs())
async def get_all_for_user(self, requester: Requester) -> list[JsonDict]:
"""Return all pending delayed events requested by the given user."""

View File

@@ -71,6 +71,7 @@ from synapse.util import stringutils
from synapse.util.async_helpers import Linearizer
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.cancellation import cancellable
from synapse.util.duration import Duration
from synapse.util.metrics import measure_func
from synapse.util.retryutils import (
NotRetryingDestination,
@@ -85,7 +86,7 @@ logger = logging.getLogger(__name__)
DELETE_DEVICE_MSGS_TASK_NAME = "delete_device_messages"
MAX_DEVICE_DISPLAY_NAME_LEN = 100
DELETE_STALE_DEVICES_INTERVAL_MS = 24 * 60 * 60 * 1000
DELETE_STALE_DEVICES_INTERVAL = Duration(days=1)
def _check_device_name_length(name: str | None) -> None:
@@ -186,7 +187,7 @@ class DeviceHandler:
):
self.clock.looping_call(
self.hs.run_as_background_process,
DELETE_STALE_DEVICES_INTERVAL_MS,
DELETE_STALE_DEVICES_INTERVAL,
desc="delete_stale_devices",
func=self._delete_stale_devices,
)
@@ -915,7 +916,7 @@ class DeviceHandler:
)
DEVICE_MSGS_DELETE_BATCH_LIMIT = 1000
DEVICE_MSGS_DELETE_SLEEP_MS = 100
DEVICE_MSGS_DELETE_SLEEP = Duration(milliseconds=100)
async def _delete_device_messages(
self,
@@ -941,9 +942,7 @@ class DeviceHandler:
if from_stream_id is None:
return TaskStatus.COMPLETE, None, None
await self.clock.sleep(
DeviceWriterHandler.DEVICE_MSGS_DELETE_SLEEP_MS / 1000.0
)
await self.clock.sleep(DeviceWriterHandler.DEVICE_MSGS_DELETE_SLEEP)
class DeviceWriterHandler(DeviceHandler):
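
Together, the batch limit and the inter-batch sleep cap deletion throughput, ignoring the time the delete query itself takes:

```python
DEVICE_MSGS_DELETE_BATCH_LIMIT = 1000
DEVICE_MSGS_DELETE_SLEEP = Duration(milliseconds=100)

# Upper bound on rows deleted per second by one task:
print(DEVICE_MSGS_DELETE_BATCH_LIMIT / DEVICE_MSGS_DELETE_SLEEP.as_secs())
# 10000.0
```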
@@ -1469,7 +1468,7 @@ class DeviceListUpdater(DeviceListWorkerUpdater):
self._resync_retry_lock = Lock()
self.clock.looping_call(
self.hs.run_as_background_process,
30 * 1000,
Duration(seconds=30),
func=self._maybe_retry_device_resync,
desc="_maybe_retry_device_resync",
)

View File

@@ -46,6 +46,7 @@ from synapse.types import (
)
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.cancellation import cancellable
from synapse.util.duration import Duration
from synapse.util.json import json_decoder
from synapse.util.retryutils import (
NotRetryingDestination,
@@ -1634,7 +1635,7 @@ class E2eKeysHandler:
# matrix.org has about 15M users in the e2e_one_time_keys_json table
# (comprising 20M devices). We want this to take about a week, so we need
# to do about one batch of 100 users every 4 seconds.
await self.clock.sleep(4)
await self.clock.sleep(Duration(seconds=4))
def _check_cross_signing_key(
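
The arithmetic in the comment checks out:

```python
users = 15_000_000
batch_size = 100
seconds_per_batch = 4

days = users / batch_size * seconds_per_batch / 86_400
print(f"{days:.1f} days")  # 6.9 days -- "about a week"
```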

View File

@@ -72,6 +72,7 @@ from synapse.storage.invite_rule import InviteRule
from synapse.types import JsonDict, StrCollection, get_domain_from_id
from synapse.types.state import StateFilter
from synapse.util.async_helpers import Linearizer
from synapse.util.duration import Duration
from synapse.util.retryutils import NotRetryingDestination
from synapse.visibility import filter_events_for_server
@@ -1972,7 +1973,9 @@ class FederationHandler:
logger.warning(
"%s; waiting for %d ms...", e, e.retry_after_ms
)
await self.clock.sleep(e.retry_after_ms / 1000)
await self.clock.sleep(
Duration(milliseconds=e.retry_after_ms)
)
# Success, no need to try the rest of the destinations.
break

View File

@@ -91,6 +91,7 @@ from synapse.types import (
)
from synapse.types.state import StateFilter
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.duration import Duration
from synapse.util.iterutils import batch_iter, partition, sorted_topologically
from synapse.util.retryutils import NotRetryingDestination
from synapse.util.stringutils import shortstr
@@ -1802,7 +1803,7 @@ class FederationEventHandler:
# the reactor. For large rooms let's yield to the reactor
# occasionally to ensure we don't block other work.
if (i + 1) % 1000 == 0:
await self._clock.sleep(0)
await self._clock.sleep(Duration(seconds=0))
# Also persist the new event in batches for similar reasons as above.
for batch in batch_iter(events_and_contexts_to_persist, 1000):

View File

@@ -83,6 +83,7 @@ from synapse.types.state import StateFilter
from synapse.util import log_failure, unwrapFirstError
from synapse.util.async_helpers import Linearizer, gather_results
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.duration import Duration
from synapse.util.json import json_decoder, json_encoder
from synapse.util.metrics import measure_func
from synapse.visibility import get_effective_room_visibility_from_state
@@ -433,14 +434,11 @@ class MessageHandler:
# Figure out how many seconds we need to wait before expiring the event.
now_ms = self.clock.time_msec()
delay = (expiry_ts - now_ms) / 1000
delay = Duration(milliseconds=max(expiry_ts - now_ms, 0))
# callLater doesn't support negative delays, so trim the delay to 0 if we're
# in that case.
if delay < 0:
delay = 0
logger.info("Scheduling expiry for event %s in %.3fs", event_id, delay)
logger.info(
"Scheduling expiry for event %s in %.3fs", event_id, delay.as_secs()
)
self._scheduled_expiry = self.clock.call_later(
delay,
@@ -551,7 +549,7 @@ class EventCreationHandler:
"send_dummy_events_to_fill_extremities",
self._send_dummy_events_to_fill_extremities,
),
5 * 60 * 1000,
Duration(minutes=5),
)
self._message_handler = hs.get_message_handler()
@@ -1012,7 +1010,7 @@ class EventCreationHandler:
if not ignore_shadow_ban and requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
await self.clock.sleep(Duration(seconds=random.randint(1, 10)))
raise ShadowBanError()
room_version = None
@@ -1515,7 +1513,7 @@ class EventCreationHandler:
and requester.shadow_banned
):
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
await self.clock.sleep(Duration(seconds=random.randint(1, 10)))
raise ShadowBanError()
if event.is_state():
@@ -1957,6 +1955,12 @@ class EventCreationHandler:
room_alias_str = event.content.get("alias", None)
directory_handler = self.hs.get_directory_handler()
if room_alias_str and room_alias_str != original_alias:
if not isinstance(room_alias_str, str):
raise SynapseError(
400,
"The alias must be of type string.",
Codes.INVALID_PARAM,
)
await self._validate_canonical_alias(
directory_handler, room_alias_str, event.room_id
)
@@ -1980,6 +1984,12 @@ class EventCreationHandler:
new_alt_aliases = set(alt_aliases) - set(original_alt_aliases)
if new_alt_aliases:
for alias_str in new_alt_aliases:
if not isinstance(alias_str, str):
raise SynapseError(
400,
"Each alt_alias must be of type string.",
Codes.INVALID_PARAM,
)
await self._validate_canonical_alias(
directory_handler, alias_str, event.room_id
)
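
The two new `isinstance` checks reject malformed `m.room.canonical_alias` content with a 400 `M_INVALID_PARAM` before it reaches `_validate_canonical_alias`. Hypothetical payloads that would now fail:

```python
# Rejected: "alias" must be a string.
bad_content = {"alias": 12345}

# Rejected: every "alt_aliases" entry must be a string.
also_bad = {
    "alias": "#room:example.org",
    "alt_aliases": ["#alias:example.org", None],
}
```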

View File

@@ -42,6 +42,7 @@ from synapse.types import (
from synapse.types.handlers import ShutdownRoomParams, ShutdownRoomResponse
from synapse.types.state import StateFilter
from synapse.util.async_helpers import ReadWriteLock
from synapse.util.duration import Duration
from synapse.visibility import filter_events_for_client
if TYPE_CHECKING:
@@ -116,7 +117,7 @@ class PaginationHandler:
self.clock.looping_call(
self.hs.run_as_background_process,
job.interval,
Duration(milliseconds=job.interval),
"purge_history_for_rooms_in_range",
self.purge_history_for_rooms_in_range,
job.shortest_max_lifetime,

View File

@@ -121,6 +121,7 @@ from synapse.types import (
get_domain_from_id,
)
from synapse.util.async_helpers import Linearizer
from synapse.util.duration import Duration
from synapse.util.metrics import Measure
from synapse.util.wheel_timer import WheelTimer
@@ -203,7 +204,7 @@ EXTERNAL_PROCESS_EXPIRY = 5 * 60 * 1000
# Delay before a worker tells the presence handler that a user has stopped
# syncing.
UPDATE_SYNCING_USERS_MS = 10 * 1000
UPDATE_SYNCING_USERS = Duration(seconds=10)
assert LAST_ACTIVE_GRANULARITY < IDLE_TIMER
@@ -528,7 +529,7 @@ class WorkerPresenceHandler(BasePresenceHandler):
self._bump_active_client = ReplicationBumpPresenceActiveTime.make_client(hs)
self._set_state_client = ReplicationPresenceSetState.make_client(hs)
self.clock.looping_call(self.send_stop_syncing, UPDATE_SYNCING_USERS_MS)
self.clock.looping_call(self.send_stop_syncing, UPDATE_SYNCING_USERS)
hs.register_async_shutdown_handler(
phase="before",
@@ -581,7 +582,7 @@ class WorkerPresenceHandler(BasePresenceHandler):
for (user_id, device_id), last_sync_ms in list(
self._user_devices_going_offline.items()
):
if now - last_sync_ms > UPDATE_SYNCING_USERS_MS:
if now - last_sync_ms > UPDATE_SYNCING_USERS.as_millis():
self._user_devices_going_offline.pop((user_id, device_id), None)
self.send_user_sync(user_id, device_id, False, last_sync_ms)
@@ -861,20 +862,20 @@ class PresenceHandler(BasePresenceHandler):
# The initial delay is to allow disconnected clients a chance to
# reconnect before we treat them as offline.
self.clock.call_later(
30,
Duration(seconds=30),
self.clock.looping_call,
self._handle_timeouts,
5000,
Duration(seconds=5),
)
# Presence information is persisted, whether or not it is being tracked
# internally.
if self._presence_enabled:
self.clock.call_later(
60,
Duration(minutes=1),
self.clock.looping_call,
self._persist_unpersisted_changes,
60 * 1000,
Duration(minutes=1),
)
presence_wheel_timer_size_gauge.register_hook(
@@ -2430,7 +2431,7 @@ class PresenceFederationQueue:
_KEEP_ITEMS_IN_QUEUE_FOR_MS = 5 * 60 * 1000
# How often to check if we can expire entries from the queue.
_CLEAR_ITEMS_EVERY_MS = 60 * 1000
_CLEAR_ITEMS_EVERY_MS = Duration(minutes=1)
def __init__(self, hs: "HomeServer", presence_handler: BasePresenceHandler):
self._clock = hs.get_clock()

View File

@@ -34,6 +34,7 @@ from synapse.api.errors import (
from synapse.storage.databases.main.media_repository import LocalMedia, RemoteMedia
from synapse.types import JsonDict, JsonValue, Requester, UserID, create_requester
from synapse.util.caches.descriptors import cached
from synapse.util.duration import Duration
from synapse.util.stringutils import parse_and_validate_mxc_uri
if TYPE_CHECKING:
@@ -583,7 +584,7 @@ class ProfileHandler:
# Do not actually update the room state for shadow-banned users.
if requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
await self.clock.sleep(Duration(seconds=random.randint(1, 10)))
return
room_ids = await self.store.get_rooms_for_user(target_user.to_string())

View File

@@ -92,6 +92,7 @@ from synapse.types.state import StateFilter
from synapse.util import stringutils
from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.duration import Duration
from synapse.util.iterutils import batch_iter
from synapse.util.stringutils import parse_and_validate_server_name
from synapse.visibility import filter_events_for_client
@@ -1179,7 +1180,7 @@ class RoomCreationHandler:
if (invite_list or invite_3pid_list) and requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
await self.clock.sleep(Duration(seconds=random.randint(1, 10)))
# Allow the request to go through, but remove any associated invites.
invite_3pid_list = []

View File

@@ -66,6 +66,7 @@ from synapse.types import (
from synapse.types.state import StateFilter
from synapse.util.async_helpers import Linearizer
from synapse.util.distributor import user_left_room
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -642,7 +643,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
if action == Membership.INVITE and requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
await self.clock.sleep(Duration(seconds=random.randint(1, 10)))
raise ShadowBanError()
key = (room_id,)
@@ -1647,7 +1648,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
if requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
await self.clock.sleep(Duration(seconds=random.randint(1, 10)))
raise ShadowBanError()
# We need to rate limit *before* we send out any 3PID invites, so we
@@ -2190,7 +2191,7 @@ class RoomForgetterHandler(StateDeltasHandler):
# We kick this off to pick up outstanding work from before the last restart.
self._clock.call_later(
0,
Duration(seconds=0),
self.notify_new_event,
)
@@ -2232,7 +2233,7 @@ class RoomForgetterHandler(StateDeltasHandler):
#
# We wait for a short time so that we don't "tight" loop just
# keeping the table up to date.
await self._clock.sleep(0.5)
await self._clock.sleep(Duration(milliseconds=500))
self.pos = self._store.get_room_max_stream_ordering()
await self._store.update_room_forgetter_stream_pos(self.pos)

View File

@@ -761,8 +761,6 @@ class SlidingSyncHandler:
!= Membership.JOIN,
filter_send_to_client=True,
)
# TODO: Filter out `EventTypes.CallInvite` in public rooms,
# see https://github.com/element-hq/synapse/issues/17359
# TODO: Handle timeline gaps (`get_timeline_gaps()`)

View File

@@ -32,6 +32,7 @@ from synapse.api.constants import EventContentFields, EventTypes, Membership
from synapse.metrics import SERVER_NAME_LABEL, event_processing_positions
from synapse.storage.databases.main.state_deltas import StateDelta
from synapse.types import JsonDict
from synapse.util.duration import Duration
from synapse.util.events import get_plain_text_topic_from_event_content
if TYPE_CHECKING:
@@ -72,7 +73,7 @@ class StatsHandler:
# We kick this off so that we don't have to wait for a change before
# we start populating stats
self.clock.call_later(
0,
Duration(seconds=0),
self.notify_new_event,
)

View File

@@ -36,7 +36,6 @@ from synapse.api.constants import (
Direction,
EventContentFields,
EventTypes,
JoinRules,
Membership,
)
from synapse.api.filtering import FilterCollection
@@ -790,22 +789,13 @@ class SyncHandler:
)
)
filtered_recents = await filter_events_for_client(
loaded_recents = await filter_events_for_client(
self._storage_controllers,
sync_config.user.to_string(),
loaded_recents,
always_include_ids=current_state_ids,
)
loaded_recents = []
for event in filtered_recents:
if event.type == EventTypes.CallInvite:
room_info = await self.store.get_room_with_stats(event.room_id)
assert room_info is not None
if room_info.join_rules == JoinRules.PUBLIC:
continue
loaded_recents.append(event)
log_kv({"loaded_recents_after_client_filtering": len(loaded_recents)})
loaded_recents.extend(recents)

View File

@@ -41,6 +41,7 @@ from synapse.types import (
UserID,
)
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.duration import Duration
from synapse.util.metrics import Measure
from synapse.util.retryutils import filter_destinations_by_retry_limiter
from synapse.util.wheel_timer import WheelTimer
@@ -60,15 +61,15 @@ class RoomMember:
# How often we expect remote servers to resend us typing notifications.
FEDERATION_TIMEOUT = 60 * 1000
FEDERATION_TIMEOUT = Duration(minutes=1)
# How often to resend typing across federation.
FEDERATION_PING_INTERVAL = 40 * 1000
FEDERATION_PING_INTERVAL = Duration(seconds=40)
# How long to remember a typing notification happened in a room before
# forgetting about it.
FORGET_TIMEOUT = 10 * 60 * 1000
FORGET_TIMEOUT = Duration(minutes=10)
class FollowerTypingHandler:
@@ -106,7 +107,7 @@ class FollowerTypingHandler:
self._rooms_updated: set[str] = set()
self.clock.looping_call(self._handle_timeouts, 5000)
self.clock.looping_call(self._handle_timeouts, Duration(seconds=5))
self.clock.looping_call(self._prune_old_typing, FORGET_TIMEOUT)
def _reset(self) -> None:
@@ -141,7 +142,10 @@ class FollowerTypingHandler:
# user.
if self.federation and self.is_mine_id(member.user_id):
last_fed_poke = self._member_last_federation_poke.get(member, None)
if not last_fed_poke or last_fed_poke + FEDERATION_PING_INTERVAL <= now:
if (
not last_fed_poke
or last_fed_poke + FEDERATION_PING_INTERVAL.as_millis() <= now
):
self.hs.run_as_background_process(
"typing._push_remote",
self._push_remote,
@@ -165,7 +169,7 @@ class FollowerTypingHandler:
now = self.clock.time_msec()
self.wheel_timer.insert(
now=now, obj=member, then=now + FEDERATION_PING_INTERVAL
now=now, obj=member, then=now + FEDERATION_PING_INTERVAL.as_millis()
)
hosts: StrCollection = (
@@ -315,7 +319,7 @@ class TypingWriterHandler(FollowerTypingHandler):
if requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
await self.clock.sleep(Duration(seconds=random.randint(1, 10)))
raise ShadowBanError()
await self.auth.check_user_in_room(room_id, requester)
@@ -350,7 +354,7 @@ class TypingWriterHandler(FollowerTypingHandler):
if requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
await self.clock.sleep(Duration(seconds=random.randint(1, 10)))
raise ShadowBanError()
await self.auth.check_user_in_room(room_id, requester)
@@ -428,8 +432,10 @@ class TypingWriterHandler(FollowerTypingHandler):
if user.domain in domains:
logger.info("Got typing update from %s: %r", user_id, content)
now = self.clock.time_msec()
self._member_typing_until[member] = now + FEDERATION_TIMEOUT
self.wheel_timer.insert(now=now, obj=member, then=now + FEDERATION_TIMEOUT)
self._member_typing_until[member] = now + FEDERATION_TIMEOUT.as_millis()
self.wheel_timer.insert(
now=now, obj=member, then=now + FEDERATION_TIMEOUT.as_millis()
)
self._push_update_local(member=member, typing=content["typing"])
def _push_update_local(self, member: RoomMember, typing: bool) -> None:
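
`now` here is an integer millisecond timestamp from `clock.time_msec()`, so the `Duration` constants must be converted with `.as_millis()` before the addition. A tiny sketch of the pattern (using the `Duration` sketch above):

```python
FEDERATION_TIMEOUT = Duration(minutes=1)

now = 1_700_000_000_000  # clock.time_msec(): integer milliseconds
typing_until = now + FEDERATION_TIMEOUT.as_millis()
assert typing_until - now == 60_000
```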

View File

@@ -40,6 +40,7 @@ from synapse.storage.databases.main.state_deltas import StateDelta
from synapse.storage.databases.main.user_directory import SearchResult
from synapse.storage.roommember import ProfileInfo
from synapse.types import UserID
from synapse.util.duration import Duration
from synapse.util.metrics import Measure
from synapse.util.retryutils import NotRetryingDestination
from synapse.util.stringutils import non_null_str_or_none
@@ -52,7 +53,7 @@ logger = logging.getLogger(__name__)
# Don't refresh a stale user directory entry, using a Federation /profile request,
# for 60 seconds. This gives time for other state events to arrive (which will
# then be coalesced such that only one /profile request is made).
USER_DIRECTORY_STALE_REFRESH_TIME_MS = 60 * 1000
USER_DIRECTORY_STALE_REFRESH_TIME = Duration(minutes=1)
# Maximum number of remote servers that we will attempt to refresh profiles for
# in one go.
@@ -60,7 +61,7 @@ MAX_SERVERS_TO_REFRESH_PROFILES_FOR_IN_ONE_GO = 5
# As long as we have servers to refresh (without backoff), keep adding more
# every 15 seconds.
INTERVAL_TO_ADD_MORE_SERVERS_TO_REFRESH_PROFILES = 15
INTERVAL_TO_ADD_MORE_SERVERS_TO_REFRESH_PROFILES = Duration(seconds=15)
def calculate_time_of_next_retry(now_ts: int, retry_count: int) -> int:
@@ -137,13 +138,13 @@ class UserDirectoryHandler(StateDeltasHandler):
# We kick this off so that we don't have to wait for a change before
# we start populating the user directory
self.clock.call_later(
0,
Duration(seconds=0),
self.notify_new_event,
)
# Kick off the profile refresh process on startup
self._refresh_remote_profiles_call_later = self.clock.call_later(
10,
Duration(seconds=10),
self.kick_off_remote_profile_refresh_process,
)
@@ -550,7 +551,7 @@ class UserDirectoryHandler(StateDeltasHandler):
now_ts = self.clock.time_msec()
await self.store.set_remote_user_profile_in_user_dir_stale(
user_id,
next_try_at_ms=now_ts + USER_DIRECTORY_STALE_REFRESH_TIME_MS,
next_try_at_ms=now_ts + USER_DIRECTORY_STALE_REFRESH_TIME.as_millis(),
retry_counter=0,
)
# Schedule a wake-up to refresh the user directory for this server.
@@ -558,13 +559,13 @@ class UserDirectoryHandler(StateDeltasHandler):
# other servers ahead of it in the queue to get in the way of updating
# the profile if the server only just sent us an event.
self.clock.call_later(
USER_DIRECTORY_STALE_REFRESH_TIME_MS // 1000 + 1,
USER_DIRECTORY_STALE_REFRESH_TIME + Duration(seconds=1),
self.kick_off_remote_profile_refresh_process_for_remote_server,
UserID.from_string(user_id).domain,
)
# Schedule a wake-up to handle any backoffs that may occur in the future.
self.clock.call_later(
2 * USER_DIRECTORY_STALE_REFRESH_TIME_MS // 1000 + 1,
USER_DIRECTORY_STALE_REFRESH_TIME * 2 + Duration(seconds=1),
self.kick_off_remote_profile_refresh_process,
)
return
@@ -656,7 +657,9 @@ class UserDirectoryHandler(StateDeltasHandler):
if not users:
return
_, _, next_try_at_ts = users[0]
delay = ((next_try_at_ts - self.clock.time_msec()) // 1000) + 2
delay = Duration(
milliseconds=next_try_at_ts - self.clock.time_msec()
) + Duration(seconds=2)
self._refresh_remote_profiles_call_later = self.clock.call_later(
delay,
self.kick_off_remote_profile_refresh_process,

View File

@@ -72,7 +72,7 @@ class WorkerLocksHandler:
# that lock.
self._locks: dict[tuple[str, str], WeakSet[WaitingLock | WaitingMultiLock]] = {}
self._clock.looping_call(self._cleanup_locks, 30_000)
self._clock.looping_call(self._cleanup_locks, Duration(seconds=30))
self._notifier.add_lock_released_callback(self._on_lock_released)
@@ -187,7 +187,7 @@ class WorkerLocksHandler:
lock.release_lock()
self._clock.call_later(
0,
Duration(seconds=0),
_wake_all_locks,
locks,
)

View File

@@ -87,6 +87,7 @@ from synapse.metrics import SERVER_NAME_LABEL
from synapse.types import ISynapseReactor, StrSequence
from synapse.util.async_helpers import timeout_deferred
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.json import json_decoder
if TYPE_CHECKING:
@@ -161,7 +162,9 @@ def _is_ip_blocked(
return False
_EPSILON = 0.00000001
# The delay used by the scheduler to schedule tasks "as soon as possible", while
# still allowing other tasks to run between runs.
_EPSILON = Duration(microseconds=1)
def _make_scheduler(clock: Clock) -> Callable[[Callable[[], object]], IDelayedCall]:
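
The body of `_make_scheduler` isn't shown in this hunk. Given its signature and the `_EPSILON` comment, a plausible sketch — an assumption, not the real implementation — is a scheduler that defers each task by a near-zero delay so the reactor can interleave other work:

```python
from typing import Callable

_EPSILON = Duration(microseconds=1)

def make_scheduler_sketch(clock) -> Callable[[Callable[[], object]], object]:
    """Hypothetical sketch: run each task 'as soon as possible' while
    still bouncing through the reactor between runs."""

    def scheduler(f: Callable[[], object]) -> object:
        return clock.call_later(_EPSILON, f)

    return scheduler
```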

View File

@@ -37,6 +37,7 @@ from synapse.logging.context import make_deferred_yieldable
from synapse.types import ISynapseThreadlessReactor
from synapse.util.caches.ttlcache import TTLCache
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.json import json_decoder
from synapse.util.metrics import Measure
@@ -315,7 +316,7 @@ class WellKnownResolver:
logger.info("Error fetching %s: %s. Retrying", uri_str, e)
# Sleep briefly in the hopes that they come back up
await self._clock.sleep(0.5)
await self._clock.sleep(Duration(milliseconds=500))
def _cache_period_from_headers(

View File

@@ -76,6 +76,7 @@ from synapse.logging.opentracing import active_span, start_active_span, trace_se
from synapse.util.caches import intern_dict
from synapse.util.cancellation import is_function_cancellable
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.iterutils import chunk_seq
from synapse.util.json import json_encoder
@@ -334,7 +335,7 @@ class _AsyncResource(resource.Resource, metaclass=abc.ABCMeta):
callback_return = await self._async_render(request)
except LimitExceededError as e:
if e.pause:
await self._clock.sleep(e.pause)
await self._clock.sleep(Duration(seconds=e.pause))
raise
if callback_return is not None:

View File

@@ -70,6 +70,7 @@ from synapse.media.url_previewer import UrlPreviewer
from synapse.storage.databases.main.media_repository import LocalMedia, RemoteMedia
from synapse.types import UserID
from synapse.util.async_helpers import Linearizer
from synapse.util.duration import Duration
from synapse.util.retryutils import NotRetryingDestination
from synapse.util.stringutils import random_string
@@ -80,10 +81,10 @@ logger = logging.getLogger(__name__)
# How often to run the background job to update the "recently accessed"
# attribute of local and remote media.
UPDATE_RECENTLY_ACCESSED_TS = 60 * 1000 # 1 minute
UPDATE_RECENTLY_ACCESSED_TS = Duration(minutes=1)
# How often to run the background job to check for local and remote media
# that should be purged according to the configured media retention settings.
MEDIA_RETENTION_CHECK_PERIOD_MS = 60 * 60 * 1000 # 1 hour
MEDIA_RETENTION_CHECK_PERIOD = Duration(hours=1)
class MediaRepository:
@@ -166,7 +167,7 @@ class MediaRepository:
# with the duration between runs dictated by the homeserver config.
self.clock.looping_call(
self._start_apply_media_retention_rules,
MEDIA_RETENTION_CHECK_PERIOD_MS,
MEDIA_RETENTION_CHECK_PERIOD,
)
if hs.config.media.url_preview_enabled:
@@ -485,7 +486,7 @@ class MediaRepository:
if now >= wait_until:
break
await self.clock.sleep(0.5)
await self.clock.sleep(Duration(milliseconds=500))
logger.info("Media %s has not yet been uploaded", media_id)
self.respond_not_yet_uploaded(request)

View File

@@ -51,6 +51,7 @@ from synapse.logging.context import defer_to_thread, run_in_background
from synapse.logging.opentracing import start_active_span, trace, trace_with_opname
from synapse.media._base import ThreadedFileSender
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.file_consumer import BackgroundFileConsumer
from ..types import JsonDict
@@ -457,7 +458,7 @@ class ReadableFileWrapper:
callback(chunk)
# We yield to the reactor by sleeping for 0 seconds.
await self.clock.sleep(0)
await self.clock.sleep(Duration(seconds=0))
@implementer(interfaces.IConsumer)
@@ -652,7 +653,7 @@ class MultipartFileConsumer:
self.paused = False
while not self.paused:
producer.resumeProducing()
await self.clock.sleep(0)
await self.clock.sleep(Duration(seconds=0))
class Header:

View File

@@ -47,6 +47,7 @@ from synapse.media.preview_html import decode_body, parse_html_to_open_graph
from synapse.types import JsonDict, UserID
from synapse.util.async_helpers import ObservableDeferred
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.duration import Duration
from synapse.util.json import json_encoder
from synapse.util.stringutils import random_string
@@ -208,7 +209,9 @@ class UrlPreviewer:
)
if self._worker_run_media_background_jobs:
self.clock.looping_call(self._start_expire_url_cache_data, 10 * 1000)
self.clock.looping_call(
self._start_expire_url_cache_data, Duration(seconds=10)
)
async def preview(self, url: str, user: UserID, ts: int) -> bytes:
# the in-memory cache:

View File

@@ -23,6 +23,7 @@ from typing import TYPE_CHECKING
import attr
from synapse.metrics import SERVER_NAME_LABEL
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -70,7 +71,7 @@ class CommonUsageMetricsManager:
)
self._clock.looping_call(
self._hs.run_as_background_process,
5 * 60 * 1000,
Duration(minutes=5),
desc="common_usage_metrics_update_gauges",
func=self._update_gauges,
)

View File

@@ -158,6 +158,7 @@ from synapse.types.state import StateFilter
from synapse.util.async_helpers import maybe_awaitable
from synapse.util.caches.descriptors import CachedFunction, cached as _cached
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.frozenutils import freeze
if TYPE_CHECKING:
@@ -1389,7 +1390,7 @@ class ModuleApi:
if self._hs.config.worker.run_background_tasks or run_on_all_instances:
self._clock.looping_call(
self._hs.run_as_background_process,
msec,
Duration(milliseconds=msec),
desc,
lambda: maybe_awaitable(f(*args, **kwargs)),
)
@@ -1444,8 +1445,7 @@ class ModuleApi:
desc = f.__name__
return self._clock.call_later(
# convert ms to seconds as needed by call_later.
msec * 0.001,
Duration(milliseconds=msec),
self._hs.run_as_background_process,
desc,
lambda: maybe_awaitable(f(*args, **kwargs)),
@@ -1457,7 +1457,7 @@ class ModuleApi:
Added in Synapse v1.49.0.
"""
await self._clock.sleep(seconds)
await self._clock.sleep(Duration(seconds=seconds))
async def send_http_push_notification(
self,

View File

@@ -61,6 +61,7 @@ from synapse.types import (
from synapse.util.async_helpers import (
timeout_deferred,
)
from synapse.util.duration import Duration
from synapse.util.stringutils import shortstr
from synapse.visibility import filter_events_for_client
@@ -235,7 +236,7 @@ class Notifier:
Primarily used from the /events stream.
"""
UNUSED_STREAM_EXPIRY_MS = 10 * 60 * 1000
UNUSED_STREAM_EXPIRY = Duration(minutes=10)
def __init__(self, hs: "HomeServer"):
self.user_to_user_stream: dict[str, _NotifierUserStream] = {}
@@ -269,9 +270,7 @@ class Notifier:
self.state_handler = hs.get_state_handler()
self.clock.looping_call(
self.remove_expired_streams, self.UNUSED_STREAM_EXPIRY_MS
)
self.clock.looping_call(self.remove_expired_streams, self.UNUSED_STREAM_EXPIRY)
# This is not a very cheap test to perform, but it's only executed
# when rendering the metrics page, which is likely once per minute at
@@ -861,7 +860,7 @@ class Notifier:
logged = True
# TODO: be better
await self.clock.sleep(0.5)
await self.clock.sleep(Duration(milliseconds=500))
async def _get_room_ids(
self, user: UserID, explicit_room_id: str | None
@@ -889,7 +888,7 @@ class Notifier:
def remove_expired_streams(self) -> None:
time_now_ms = self.clock.time_msec()
expired_streams = []
expire_before_ts = time_now_ms - self.UNUSED_STREAM_EXPIRY_MS
expire_before_ts = time_now_ms - self.UNUSED_STREAM_EXPIRY.as_millis()
for stream in self.user_to_user_stream.values():
if stream.count_listeners():
continue

View File

@@ -29,6 +29,7 @@ from synapse.push import Pusher, PusherConfig, PusherConfigException, ThrottlePa
from synapse.push.mailer import Mailer
from synapse.push.push_types import EmailReason
from synapse.storage.databases.main.event_push_actions import EmailPushAction
from synapse.util.duration import Duration
from synapse.util.threepids import validate_email
if TYPE_CHECKING:
@@ -229,7 +230,7 @@ class EmailPusher(Pusher):
if soonest_due_at is not None:
delay = self.seconds_until(soonest_due_at)
self.timed_call = self.hs.get_clock().call_later(
delay,
Duration(seconds=delay),
self.on_timer,
)

View File

@@ -40,6 +40,7 @@ from . import push_tools
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@@ -336,7 +337,7 @@ class HttpPusher(Pusher):
else:
logger.info("Push failed: delaying for %ds", self.backoff_delay)
self.timed_call = self.hs.get_clock().call_later(
self.backoff_delay,
Duration(seconds=self.backoff_delay),
self.on_timer,
)
self.backoff_delay = min(
@@ -371,7 +372,7 @@ class HttpPusher(Pusher):
delay_ms = random.randint(1, self.push_jitter_delay_ms)
diff_ms = event.origin_server_ts + delay_ms - self.clock.time_msec()
if diff_ms > 0:
await self.clock.sleep(diff_ms / 1000)
await self.clock.sleep(Duration(milliseconds=diff_ms))
rejected = await self.dispatch_push_event(event, tweaks, badge)
if rejected is False:

View File

@@ -42,6 +42,7 @@ from synapse.metrics import SERVER_NAME_LABEL
from synapse.types import JsonDict
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.cancellation import is_function_cancellable
from synapse.util.duration import Duration
from synapse.util.stringutils import random_string
if TYPE_CHECKING:
@@ -317,7 +318,7 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
# If we timed out we probably don't need to worry about backing
# off too much, but lets just wait a little anyway.
await clock.sleep(1)
await clock.sleep(Duration(seconds=1))
except (ConnectError, DNSLookupError) as e:
if not cls.RETRY_ON_CONNECT_ERROR:
raise
@@ -332,7 +333,7 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
e,
)
await clock.sleep(delay)
await clock.sleep(Duration(seconds=delay))
attempts += 1
except HttpResponseException as e:
# We convert to SynapseError as we know that it was a SynapseError

View File

@@ -55,6 +55,7 @@ from synapse.replication.tcp.streams.partial_state import (
)
from synapse.types import PersistedEventPosition, ReadReceipt, StreamKeyType, UserID
from synapse.util.async_helpers import Linearizer, timeout_deferred
from synapse.util.duration import Duration
from synapse.util.iterutils import batch_iter
from synapse.util.metrics import Measure
@@ -173,7 +174,7 @@ class ReplicationDataHandler:
)
# Yield to reactor so that we don't block.
await self._clock.sleep(0)
await self._clock.sleep(Duration(seconds=0))
elif stream_name == PushersStream.NAME:
for row in rows:
if row.deleted:

View File

@@ -55,6 +55,7 @@ from synapse.replication.tcp.commands import (
parse_command_from_line,
)
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.stringutils import random_string
if TYPE_CHECKING:
@@ -193,7 +194,9 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
self._send_pending_commands()
# Starts sending pings
self._send_ping_loop = self.clock.looping_call(self.send_ping, 5000)
self._send_ping_loop = self.clock.looping_call(
self.send_ping, Duration(seconds=5)
)
# Always send the initial PING so that the other side knows that they
# can time us out.

View File

@@ -53,6 +53,7 @@ from synapse.replication.tcp.protocol import (
tcp_inbound_commands_counter,
tcp_outbound_commands_counter,
)
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.replication.tcp.handler import ReplicationCommandHandler
@@ -317,7 +318,7 @@ class SynapseRedisFactory(RedisFactory):
self.hs = hs # nb must be called this for @wrap_as_background_process
self.server_name = hs.hostname
hs.get_clock().looping_call(self._send_ping, 30 * 1000)
hs.get_clock().looping_call(self._send_ping, Duration(seconds=30))
@wrap_as_background_process("redis_ping")
async def _send_ping(self) -> None:

View File

@@ -34,6 +34,7 @@ from synapse.replication.tcp.commands import PositionCommand
from synapse.replication.tcp.protocol import ServerReplicationStreamProtocol
from synapse.replication.tcp.streams import EventsStream
from synapse.replication.tcp.streams._base import CachesStream, StreamRow, Token
from synapse.util.duration import Duration
from synapse.util.metrics import Measure
if TYPE_CHECKING:
@@ -116,7 +117,7 @@ class ReplicationStreamer:
#
# Note that if the position hasn't advanced then we won't send anything.
if any(EventsStream.NAME == s.NAME for s in self.streams):
self.clock.looping_call(self.on_notifier_poke, 1000)
self.clock.looping_call(self.on_notifier_poke, Duration(seconds=1))
def on_notifier_poke(self) -> None:
"""Checks if there is actually any new data and sends it to the

View File

@@ -58,6 +58,7 @@ from synapse.types.rest.client import (
EmailRequestTokenBody,
MsisdnRequestTokenBody,
)
from synapse.util.duration import Duration
from synapse.util.msisdn import phone_number_to_msisdn
from synapse.util.stringutils import assert_valid_client_secret, random_string
from synapse.util.threepids import check_3pid_allowed, validate_email
@@ -125,7 +126,9 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
# comments for request_token_inhibit_3pid_errors.
# Also wait for some random amount of time between 100ms and 1s to make it
# look like we did something.
await self.hs.get_clock().sleep(random.randint(1, 10) / 10)
await self.hs.get_clock().sleep(
Duration(milliseconds=random.randint(100, 1000))
)
return 200, {"sid": random_string(16)}
raise SynapseError(400, "Email not found", Codes.THREEPID_NOT_FOUND)
@@ -383,7 +386,9 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
# comments for request_token_inhibit_3pid_errors.
# Also wait for some random amount of time between 100ms and 1s to make it
# look like we did something.
await self.hs.get_clock().sleep(random.randint(1, 10) / 10)
await self.hs.get_clock().sleep(
Duration(milliseconds=random.randint(100, 1000))
)
return 200, {"sid": random_string(16)}
raise SynapseError(400, "Email is already in use", Codes.THREEPID_IN_USE)
@@ -449,7 +454,9 @@ class MsisdnThreepidRequestTokenRestServlet(RestServlet):
# comments for request_token_inhibit_3pid_errors.
# Also wait for some random amount of time between 100ms and 1s to make it
# look like we did something.
await self.hs.get_clock().sleep(random.randint(1, 10) / 10)
await self.hs.get_clock().sleep(
Duration(milliseconds=random.randint(100, 1000))
)
return 200, {"sid": random_string(16)}
logger.info("MSISDN %s is already in use by %s", msisdn, existing_user_id)

View File

@@ -156,10 +156,10 @@ class DelayedEventsServlet(RestServlet):
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
# The following can't currently be instantiated on workers.
# Most of the following can't currently be instantiated on workers.
if hs.config.worker.worker_app is None:
UpdateDelayedEventServlet(hs).register(http_server)
CancelDelayedEventServlet(hs).register(http_server)
RestartDelayedEventServlet(hs).register(http_server)
SendDelayedEventServlet(hs).register(http_server)
RestartDelayedEventServlet(hs).register(http_server)
DelayedEventsServlet(hs).register(http_server)

View File

@@ -59,6 +59,7 @@ from synapse.http.site import SynapseRequest
from synapse.metrics import SERVER_NAME_LABEL, threepid_send_requests
from synapse.push.mailer import Mailer
from synapse.types import JsonDict
from synapse.util.duration import Duration
from synapse.util.msisdn import phone_number_to_msisdn
from synapse.util.ratelimitutils import FederationRateLimiter
from synapse.util.stringutils import assert_valid_client_secret, random_string
@@ -150,7 +151,9 @@ class EmailRegisterRequestTokenRestServlet(RestServlet):
# Also wait for some random amount of time between 100ms and 1s to make it
# look like we did something.
await self.already_in_use_mailer.send_already_in_use_mail(email)
await self.hs.get_clock().sleep(random.randint(1, 10) / 10)
await self.hs.get_clock().sleep(
Duration(milliseconds=random.randint(100, 1000))
)
return 200, {"sid": random_string(16)}
raise SynapseError(400, "Email is already in use", Codes.THREEPID_IN_USE)
@@ -219,7 +222,9 @@ class MsisdnRegisterRequestTokenRestServlet(RestServlet):
# comments for request_token_inhibit_3pid_errors.
# Also wait for some random amount of time between 100ms and 1s to make it
# look like we did something.
await self.hs.get_clock().sleep(random.randint(1, 10) / 10)
await self.hs.get_clock().sleep(
Duration(milliseconds=random.randint(100, 1000))
)
return 200, {"sid": random_string(16)}
raise SynapseError(

View File

@@ -57,7 +57,7 @@ class HttpTransactionCache:
] = {}
# Try to clean entries every 30 mins. This means entries will exist
# for at *LEAST* 30 mins, and at *MOST* 60 mins.
self.clock.looping_call(self._cleanup, CLEANUP_PERIOD.as_millis())
self.clock.looping_call(self._cleanup, CLEANUP_PERIOD)
def _get_transaction_key(self, request: IRequest, requester: Requester) -> Hashable:
"""A helper function which returns a transaction key that can be used

View File

@@ -182,6 +182,8 @@ class VersionsRestServlet(RestServlet):
"org.matrix.msc4306": self.config.experimental.msc4306_enabled,
# MSC4169: Backwards-compatible redaction sending using `/send`
"com.beeper.msc4169": self.config.experimental.msc4169_enabled,
# MSC4380: Invite blocking
"org.matrix.msc4380": self.config.experimental.msc4380_enabled,
},
},
)

View File

@@ -54,6 +54,7 @@ from synapse.types import StateMap, StrCollection
from synapse.types.state import StateFilter
from synapse.util.async_helpers import Linearizer
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.duration import Duration
from synapse.util.metrics import Measure, measure_func
from synapse.util.stringutils import shortstr
@@ -663,7 +664,7 @@ class StateResolutionHandler:
_StateResMetrics
)
self.clock.looping_call(self._report_metrics, 120 * 1000)
self.clock.looping_call(self._report_metrics, Duration(minutes=2))
async def resolve_state_groups(
self,

View File

@@ -40,6 +40,7 @@ from synapse.api.room_versions import RoomVersion, StateResolutionVersions
from synapse.events import EventBase, is_creator
from synapse.storage.databases.main.event_federation import StateDifference
from synapse.types import MutableStateMap, StateMap, StrCollection
from synapse.util.duration import Duration
logger = logging.getLogger(__name__)
@@ -48,7 +49,7 @@ class Clock(Protocol):
# This is usually synapse.util.Clock, but it's replaced with a FakeClock in tests.
# We only ever sleep(0) though, so that other async functions can make forward
# progress without waiting for stateres to complete.
async def sleep(self, duration_ms: float) -> None: ...
async def sleep(self, duration: Duration) -> None: ...
class StateResolutionStore(Protocol):
@@ -639,7 +640,7 @@ async def _reverse_topological_power_sort(
# We await occasionally when we're working with large data sets to
# ensure that we don't block the reactor loop for too long.
if idx % _AWAIT_AFTER_ITERATIONS == 0:
await clock.sleep(0)
await clock.sleep(Duration(seconds=0))
event_to_pl = {}
for idx, event_id in enumerate(graph, start=1):
@@ -651,7 +652,7 @@ async def _reverse_topological_power_sort(
# We await occasionally when we're working with large data sets to
# ensure that we don't block the reactor loop for too long.
if idx % _AWAIT_AFTER_ITERATIONS == 0:
await clock.sleep(0)
await clock.sleep(Duration(seconds=0))
def _get_power_order(event_id: str) -> tuple[int, int, str]:
ev = event_map[event_id]
@@ -745,7 +746,7 @@ async def _iterative_auth_checks(
# We await occasionally when we're working with large data sets to
# ensure that we don't block the reactor loop for too long.
if idx % _AWAIT_AFTER_ITERATIONS == 0:
await clock.sleep(0)
await clock.sleep(Duration(seconds=0))
return resolved_state
@@ -796,7 +797,7 @@ async def _mainline_sort(
# We await occasionally when we're working with large data sets to
# ensure that we don't block the reactor loop for too long.
if idx != 0 and idx % _AWAIT_AFTER_ITERATIONS == 0:
await clock.sleep(0)
await clock.sleep(Duration(seconds=0))
idx += 1
@@ -814,7 +815,7 @@ async def _mainline_sort(
# We await occasionally when we're working with large data sets to
# ensure that we don't block the reactor loop for too long.
if idx % _AWAIT_AFTER_ITERATIONS == 0:
await clock.sleep(0)
await clock.sleep(Duration(seconds=0))
event_ids.sort(key=lambda ev_id: order_map[ev_id])
@@ -865,7 +866,7 @@ async def _get_mainline_depth_for_event(
idx += 1
if idx % _AWAIT_AFTER_ITERATIONS == 0:
await clock.sleep(0)
await clock.sleep(Duration(seconds=0))
# Didn't find a power level auth event, so we just return 0
return 0
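
Every `sleep(Duration(seconds=0))` in this file is the same cooperative-yield idiom: a zero-length sleep still round-trips through the reactor, so other queued work gets a turn during long state resolutions. A self-contained sketch of the pattern, with `process` and the iteration count as stand-ins:

```python
_AWAIT_AFTER_ITERATIONS = 100  # assumed batch size, for illustration

def process(event_id: str) -> None:
    ...  # hypothetical per-event work

async def walk_events(clock, event_ids: list[str]) -> None:
    for idx, event_id in enumerate(event_ids, start=1):
        process(event_id)
        # Periodically yield so a large data set doesn't monopolise
        # the reactor loop.
        if idx % _AWAIT_AFTER_ITERATIONS == 0:
            await clock.sleep(Duration(seconds=0))
```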

View File

@@ -40,6 +40,7 @@ from synapse.storage.engines import PostgresEngine
from synapse.storage.types import Connection, Cursor
from synapse.types import JsonDict, StrCollection
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.json import json_encoder
from . import engines
@@ -162,7 +163,7 @@ class _BackgroundUpdateContextManager:
async def __aenter__(self) -> int:
if self._sleep:
await self._clock.sleep(self._sleep_duration_ms / 1000)
await self._clock.sleep(Duration(milliseconds=self._sleep_duration_ms))
return self._update_duration_ms

View File

@@ -32,6 +32,7 @@ from synapse.metrics.background_process_metrics import wrap_as_background_proces
from synapse.storage.database import LoggingTransaction
from synapse.storage.databases import Databases
from synapse.types.storage import _BackgroundUpdates
from synapse.util.duration import Duration
from synapse.util.stringutils import shortstr
if TYPE_CHECKING:
@@ -50,7 +51,7 @@ class PurgeEventsStorageController:
if hs.config.worker.run_background_tasks:
self._delete_state_loop_call = hs.get_clock().looping_call(
self._delete_state_groups_loop, 60 * 1000
self._delete_state_groups_loop, Duration(minutes=1)
)
self.stores.state.db_pool.updates.register_background_update_handler(

View File

@@ -683,7 +683,7 @@ class StateStorageController:
# https://github.com/matrix-org/synapse/issues/13008
return await self.stores.main.get_partial_current_state_deltas(
prev_stream_id, max_stream_id
prev_stream_id, max_stream_id, limit=100
)
@trace

View File

@@ -62,6 +62,7 @@ from synapse.storage.engines import BaseDatabaseEngine, PostgresEngine, Sqlite3E
from synapse.storage.types import Connection, Cursor, SQLQueryParameters
from synapse.types import StrCollection
from synapse.util.async_helpers import delay_cancellation
from synapse.util.duration import Duration
from synapse.util.iterutils import batch_iter
if TYPE_CHECKING:
@@ -631,7 +632,7 @@ class DatabasePool:
# Check ASAP (and then later, every 1s) to see if we have finished
# background updates of tables that aren't safe to update.
self._clock.call_later(
0.0,
Duration(seconds=0),
self.hs.run_as_background_process,
"upsert_safety_check",
self._check_safe_to_upsert,
@@ -679,7 +680,7 @@ class DatabasePool:
# If there's any updates still running, reschedule to run.
if background_update_names:
self._clock.call_later(
15.0,
Duration(seconds=15),
self.hs.run_as_background_process,
"upsert_safety_check",
self._check_safe_to_upsert,
@@ -706,7 +707,7 @@ class DatabasePool:
"Total database time: %.3f%% {%s}", ratio * 100, top_three_counters
)
self._clock.looping_call(loop, 10000)
self._clock.looping_call(loop, Duration(seconds=10))
def new_transaction(
self,

View File

@@ -40,7 +40,12 @@ from synapse.storage.database import (
)
from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
from synapse.storage.databases.main.push_rule import PushRulesWorkerStore
from synapse.storage.invite_rule import InviteRulesConfig
from synapse.storage.invite_rule import (
AllowAllInviteRulesConfig,
InviteRulesConfig,
MSC4155InviteRulesConfig,
MSC4380InviteRulesConfig,
)
from synapse.storage.util.id_generators import MultiWriterIdGenerator
from synapse.types import JsonDict, JsonMapping
from synapse.util.caches.descriptors import cached
@@ -104,6 +109,7 @@ class AccountDataWorkerStore(PushRulesWorkerStore, CacheInvalidationWorkerStore)
)
self._msc4155_enabled = hs.config.experimental.msc4155_enabled
self._msc4380_enabled = hs.config.experimental.msc4380_enabled
def get_max_account_data_stream_id(self) -> int:
"""Get the current max stream ID for account data stream
@@ -562,20 +568,28 @@ class AccountDataWorkerStore(PushRulesWorkerStore, CacheInvalidationWorkerStore)
async def get_invite_config_for_user(self, user_id: str) -> InviteRulesConfig:
"""
Get the invite configuration for the current user.
Get the invite configuration for the given user.
Args:
user_id:
user_id: The user whose invite configuration should be returned.
"""
if self._msc4380_enabled:
data = await self.get_global_account_data_by_type_for_user(
user_id, AccountDataTypes.MSC4380_INVITE_PERMISSION_CONFIG
)
# If the user has an MSC4380-style config setting, prioritise that
# above an MSC4155 one
if data is not None:
return MSC4380InviteRulesConfig.from_account_data(data)
if not self._msc4155_enabled:
# This equates to allowing all invites, as if the setting was off.
return InviteRulesConfig(None)
if self._msc4155_enabled:
data = await self.get_global_account_data_by_type_for_user(
user_id, AccountDataTypes.MSC4155_INVITE_PERMISSION_CONFIG
)
if data is not None:
return MSC4155InviteRulesConfig(data)
data = await self.get_global_account_data_by_type_for_user(
user_id, AccountDataTypes.MSC4155_INVITE_PERMISSION_CONFIG
)
return InviteRulesConfig(data)
return AllowAllInviteRulesConfig()
async def get_admin_client_config_for_user(self, user_id: str) -> AdminClientConfig:
"""

View File

@@ -45,6 +45,7 @@ from synapse.storage.database import (
from synapse.storage.engines import PostgresEngine
from synapse.storage.util.id_generators import MultiWriterIdGenerator
from synapse.util.caches.descriptors import CachedFunction
from synapse.util.duration import Duration
from synapse.util.iterutils import batch_iter
if TYPE_CHECKING:
@@ -71,11 +72,11 @@ GET_E2E_CROSS_SIGNING_SIGNATURES_FOR_DEVICE_CACHE_NAME = (
# How long between cache invalidation table cleanups, once we have caught up
# with the backlog.
REGULAR_CLEANUP_INTERVAL_MS = Config.parse_duration("1h")
REGULAR_CLEANUP_INTERVAL = Duration(hours=1)
# How long between cache invalidation table cleanups, before we have caught
# up with the backlog.
CATCH_UP_CLEANUP_INTERVAL_MS = Config.parse_duration("1m")
CATCH_UP_CLEANUP_INTERVAL = Duration(minutes=1)
# Maximum number of cache invalidation rows to delete at once.
CLEAN_UP_MAX_BATCH_SIZE = 20_000
@@ -139,7 +140,7 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self.database_engine, PostgresEngine
):
self.hs.get_clock().call_later(
CATCH_UP_CLEANUP_INTERVAL_MS / 1000,
CATCH_UP_CLEANUP_INTERVAL,
self._clean_up_cache_invalidation_wrapper,
)
@@ -825,12 +826,12 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
# Vary how long we wait before calling again depending on whether we
# are still sifting through backlog or we have caught up.
if in_backlog:
next_interval = CATCH_UP_CLEANUP_INTERVAL_MS
next_interval = CATCH_UP_CLEANUP_INTERVAL
else:
next_interval = REGULAR_CLEANUP_INTERVAL_MS
next_interval = REGULAR_CLEANUP_INTERVAL
self.hs.get_clock().call_later(
next_interval / 1000,
next_interval,
self._clean_up_cache_invalidation_wrapper,
)

View File

@@ -32,6 +32,7 @@ from synapse.storage.database import (
)
from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
from synapse.storage.databases.main.events_worker import EventsWorkerStore
from synapse.util.duration import Duration
from synapse.util.json import json_encoder
if TYPE_CHECKING:
@@ -54,7 +55,7 @@ class CensorEventsStore(EventsWorkerStore, CacheInvalidationWorkerStore, SQLBase
hs.config.worker.run_background_tasks
and self.hs.config.server.redaction_retention_period is not None
):
hs.get_clock().looping_call(self._censor_redactions, 5 * 60 * 1000)
hs.get_clock().looping_call(self._censor_redactions, Duration(minutes=5))
@wrap_as_background_process("_censor_redactions")
async def _censor_redactions(self) -> None:

View File

@@ -42,6 +42,7 @@ from synapse.storage.databases.main.monthly_active_users import (
 )
 from synapse.types import JsonDict, UserID
 from synapse.util.caches.lrucache import LruCache
+from synapse.util.duration import Duration
 
 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -437,7 +438,7 @@ class ClientIpWorkerStore(ClientIpBackgroundUpdateStore, MonthlyActiveUsersWorke
         )
 
         if hs.config.worker.run_background_tasks and self.user_ips_max_age:
-            self.clock.looping_call(self._prune_old_user_ips, 5 * 1000)
+            self.clock.looping_call(self._prune_old_user_ips, Duration(seconds=5))
 
         if self._update_on_this_worker:
             # This is the designated worker that can write to the client IP
@@ -448,7 +449,7 @@ class ClientIpWorkerStore(ClientIpBackgroundUpdateStore, MonthlyActiveUsersWorke
                 tuple[str, str, str], tuple[str, str | None, int]
             ] = {}
-            self.clock.looping_call(self._update_client_ips_batch, 5 * 1000)
+            self.clock.looping_call(self._update_client_ips_batch, Duration(seconds=5))
 
             hs.register_async_shutdown_handler(
                 phase="before",
                 eventType="shutdown",

View File

@@ -259,7 +259,7 @@ class DelayedEventsStore(SQLBaseStore):
     ]
 
     async def process_timeout_delayed_events(
-        self, current_ts: Timestamp
+        self, current_ts: Timestamp, reprocess_events: bool = False
     ) -> tuple[
         list[DelayedEventDetails],
         Timestamp | None,
@@ -268,6 +268,16 @@ class DelayedEventsStore(SQLBaseStore):
         Marks for processing all delayed events that should have been sent prior to the provided time
         that haven't already been marked as such.
 
         Args:
             current_ts: The current timestamp.
+            reprocess_events: Whether to reprocess already-processed delayed
+                events. If set to True, events which are marked as processed
+                will have their `send_ts` re-checked.
+
+                This is mainly useful for recovering from a server restart,
+                which could have occurred between an event being marked as
+                processed and the event actually being sent.
 
         Returns: The details of all newly-processed delayed events,
             and the send time of the next delayed event to be sent, if any.
         """
@@ -292,7 +302,12 @@ class DelayedEventsStore(SQLBaseStore):
             )
         )
 
         sql_update = "UPDATE delayed_events SET is_processed = TRUE"
-        sql_where = "WHERE send_ts <= ? AND NOT is_processed"
+        sql_where = "WHERE send_ts <= ?"
+        if not reprocess_events:
+            # Skip already-processed events.
+            sql_where += " AND NOT is_processed"
         sql_args = (current_ts,)
         sql_order = "ORDER BY send_ts"
         if isinstance(self.database_engine, PostgresEngine):

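The WHERE-clause assembly in that last hunk is easy to check in isolation. A sketch, assuming `?`-style placeholders as in the diff (the helper name is made up for illustration):

```python
# Sketch of the query assembly above: reprocess_events=True drops the
# "NOT is_processed" guard, so events marked processed just before a
# crash get their send_ts re-checked after restart.
def build_timeout_update(
    current_ts: int, reprocess_events: bool = False
) -> tuple[str, tuple[int, ...]]:
    sql_update = "UPDATE delayed_events SET is_processed = TRUE"
    sql_where = "WHERE send_ts <= ?"
    if not reprocess_events:
        # Skip already-processed events.
        sql_where += " AND NOT is_processed"
    return f"{sql_update} {sql_where}", (current_ts,)


sql, _args = build_timeout_update(1_700_000_000_000)
assert "NOT is_processed" in sql
sql, _args = build_timeout_update(1_700_000_000_000, reprocess_events=True)
assert "NOT is_processed" not in sql
```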
View File

@@ -152,7 +152,7 @@ class DeviceInboxWorkerStore(SQLBaseStore):
         if hs.config.worker.run_background_tasks:
             self.clock.looping_call(
                 run_as_background_process,
-                DEVICE_FEDERATION_INBOX_CLEANUP_INTERVAL.as_millis(),
+                DEVICE_FEDERATION_INBOX_CLEANUP_INTERVAL,
                 "_delete_old_federation_inbox_rows",
                 self.server_name,
                 self._delete_old_federation_inbox_rows,
@@ -1029,7 +1029,7 @@ class DeviceInboxWorkerStore(SQLBaseStore):
                 # We sleep a bit so that we don't hammer the database in a tight
                 # loop first time we run this.
-                await self.clock.sleep(1)
+                await self.clock.sleep(Duration(seconds=1))
 
     async def get_devices_with_messages(
         self, user_id: str, device_ids: StrCollection

View File

@@ -62,6 +62,7 @@ from synapse.types import (
 from synapse.util.caches.descriptors import cached, cachedList
 from synapse.util.caches.stream_change_cache import StreamChangeCache
 from synapse.util.cancellation import cancellable
+from synapse.util.duration import Duration
 from synapse.util.iterutils import batch_iter
 from synapse.util.json import json_decoder, json_encoder
 from synapse.util.stringutils import shortstr
@@ -191,7 +192,7 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
         if hs.config.worker.run_background_tasks:
             self.clock.looping_call(
-                self._prune_old_outbound_device_pokes, 60 * 60 * 1000
+                self._prune_old_outbound_device_pokes, Duration(hours=1)
             )
 
     def process_replication_rows(

View File

@@ -56,6 +56,7 @@ from synapse.types import JsonDict, StrCollection
 from synapse.util.caches.descriptors import cached
 from synapse.util.caches.lrucache import LruCache
 from synapse.util.cancellation import cancellable
+from synapse.util.duration import Duration
 from synapse.util.iterutils import batch_iter
 from synapse.util.json import json_encoder
@@ -155,7 +156,7 @@ class EventFederationWorkerStore(
         if hs.config.worker.run_background_tasks:
             hs.get_clock().looping_call(
-                self._delete_old_forward_extrem_cache, 60 * 60 * 1000
+                self._delete_old_forward_extrem_cache, Duration(hours=1)
             )
 
         # Cache of event ID to list of auth event IDs and their depths.
@@ -171,7 +172,9 @@ class EventFederationWorkerStore(
         # index.
         self.tests_allow_no_chain_cover_index = True
 
-        self.clock.looping_call(self._get_stats_for_federation_staging, 30 * 1000)
+        self.clock.looping_call(
+            self._get_stats_for_federation_staging, Duration(seconds=30)
+        )
 
         if isinstance(self.database_engine, PostgresEngine):
             self.db_pool.updates.register_background_validate_constraint_and_delete_rows(

View File

@@ -105,6 +105,7 @@ from synapse.storage.databases.main.receipts import ReceiptsWorkerStore
 from synapse.storage.databases.main.stream import StreamWorkerStore
 from synapse.types import JsonDict, StrCollection
 from synapse.util.caches.descriptors import cached
+from synapse.util.duration import Duration
 from synapse.util.json import json_encoder
 
 if TYPE_CHECKING:
@@ -270,15 +271,17 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
             self._find_stream_orderings_for_times_txn(cur)
             cur.close()
 
-        self.clock.looping_call(self._find_stream_orderings_for_times, 10 * 60 * 1000)
+        self.clock.looping_call(
+            self._find_stream_orderings_for_times, Duration(minutes=10)
+        )
 
         self._rotate_count = 10000
         self._doing_notif_rotation = False
         if hs.config.worker.run_background_tasks:
-            self.clock.looping_call(self._rotate_notifs, 30 * 1000)
+            self.clock.looping_call(self._rotate_notifs, Duration(seconds=30))
             self.clock.looping_call(
-                self._clear_old_push_actions_staging, 30 * 60 * 1000
+                self._clear_old_push_actions_staging, Duration(minutes=30)
             )
 
         self.db_pool.updates.register_background_index_update(
@@ -1817,7 +1820,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
                 return
 
             # We sleep to ensure that we don't overwhelm the DB.
-            await self.clock.sleep(1.0)
+            await self.clock.sleep(Duration(seconds=1))
 
     async def get_push_actions_for_user(
         self,

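Most hunks in this diff are the same mechanical change: an integer-milliseconds argument to `looping_call`/`call_later`/`sleep` becomes a `Duration`. A hedged sketch of how a clock wrapper could accept both forms during such a migration, reusing the `Duration` sketched above (this shim is illustrative, not Synapse's actual `Clock`):

```python
# Illustrative shim only. The old API mixed units: sleep() took seconds
# while looping_call()/call_later() took milliseconds (hence the
# "/ 1000" removed in the CacheInvalidationWorkerStore hunks above).
# Carrying a Duration makes the unit explicit at every call site.
import asyncio
from collections.abc import Callable


class MigrationClock:
    async def sleep(self, interval: "Duration | float") -> None:
        secs = interval.as_secs() if isinstance(interval, Duration) else interval
        await asyncio.sleep(secs)  # a legacy bare number was already seconds

    def call_later(
        self, interval: "Duration | float", callback: Callable[[], None]
    ) -> asyncio.TimerHandle:
        # A legacy bare number here was milliseconds, hence the division.
        secs = interval.as_secs() if isinstance(interval, Duration) else interval / 1000
        return asyncio.get_running_loop().call_later(secs, callback)
```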
Some files were not shown because too many files have changed in this diff.