Compare commits


22 Commits

Author SHA1 Message Date
Erik Johnston
901f1fece1 Fix Rust code 2025-11-26 16:15:26 +00:00
Erik Johnston
29ea0d773c Newsfile 2025-11-26 16:15:26 +00:00
Erik Johnston
41d2b490ef Port call_later 2025-11-26 16:15:26 +00:00
Erik Johnston
d470dee438 Port sleep 2025-11-26 15:05:32 +00:00
Erik Johnston
f421230115 Port looping_call 2025-11-26 14:22:19 +00:00
Erik Johnston
7af7bc56a8 Port looping_call_now 2025-11-26 14:21:45 +00:00
Erik Johnston
f8228b5158 as_secs() should return a float 2025-11-26 14:21:45 +00:00
Erik Johnston
b74c29f694 Move towards a dedicated Duration class (#19223)
We have various constants to try to avoid mistyping of durations, e.g.
`ONE_HOUR_SECONDS * MILLISECONDS_PER_SECOND`; however, this can get a
little verbose and doesn't help with typing.

Instead, let's move towards a dedicated `Duration` class (basically a
[`timedelta`](https://docs.python.org/3/library/datetime.html#timedelta-objects)
with helper methods).

This PR introduces the new types and replaces all usages of the existing
constants with them. Future PRs may move the clock methods (e.g.
`call_later` and `looping_call`) to use them as well.
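
As a rough sketch of the idea (`Duration` and `as_secs()` appear in the
commits above; the other helper names here are assumed for illustration,
not taken from the actual Synapse API):

```python
from datetime import timedelta


class Duration(timedelta):
    """A timedelta with explicit conversion helpers (illustrative sketch only)."""

    @classmethod
    def hours(cls, hours: float) -> "Duration":
        return cls(hours=hours)

    def as_secs(self) -> float:
        # Per the "as_secs() should return a float" commit above.
        return self.total_seconds()

    def as_millis(self) -> int:
        return round(self.total_seconds() * 1000)


# Instead of ONE_HOUR_SECONDS * MILLISECONDS_PER_SECOND:
ONE_HOUR = Duration.hours(1)
assert ONE_HOUR.as_millis() == 3_600_000
assert ONE_HOUR.as_secs() == 3600.0
```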

Reviewable commit-by-commit.
2025-11-26 10:56:59 +00:00
Andrew Morgan
2741ead569 Stop building wheels for macOS (#19225) 2025-11-26 10:32:39 +00:00
Andrew Morgan
ba65d8c351 Put MSC2666 endpoint behind an experimental flag (#19219) 2025-11-25 18:03:33 +00:00
Devon Hudson
ae98771fea Merge branch 'master' into develop 2025-11-25 09:58:11 -07:00
Andrew Morgan
b7e592a88c Allow ruff to auto-fix trailing spaces in multi-line comments (#19221) 2025-11-25 14:09:48 +00:00
Erik Johnston
db975ea10d Expire sliding sync connections (#19211)
We add some logic to expire sliding sync connections if they get old or
if there is too much pending data to return.

The values of the constants are picked fairly arbitrarily; they are
currently (a sketch of the check follows the list):
1. More than 100 rooms with pending events if the connection hasn't been
used in over an hour
2. The connection hasn't been used for over a week
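
A minimal sketch of that expiry check (the constant and function names
here are assumed for illustration, not taken from the actual
implementation):

```python
from datetime import datetime, timedelta

# Thresholds from the description above; all names are illustrative.
MAX_PENDING_ROOMS = 100
IDLE_BEFORE_PENDING_EXPIRY = timedelta(hours=1)
MAX_IDLE = timedelta(weeks=1)


def connection_expired(now: datetime, last_used: datetime, pending_rooms: int) -> bool:
    """Return True if a sliding sync connection should be expired."""
    idle = now - last_used
    # 1. Too much pending data on a connection that has been idle for over an hour.
    if pending_rooms > MAX_PENDING_ROOMS and idle > IDLE_BEFORE_PENDING_EXPIRY:
        return True
    # 2. The connection hasn't been used for over a week.
    return idle > MAX_IDLE
```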

Reviewable commit-by-commit

---------

Co-authored-by: Eric Eastwood <erice@element.io>
2025-11-25 10:20:47 +00:00
dependabot[bot]
8b79583643 Bump sentry-sdk from 2.44.0 to 2.46.0 (#19218)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from
2.44.0 to 2.46.0.
Release notes, sourced from [sentry-sdk's releases](https://github.com/getsentry/sentry-python/releases):

2.46.0 - Various fixes & improvements

- Preserve metadata on wrapped coroutines (#5105) by @alexander-alderman-webb
- Make imports defensive to avoid `ModuleNotFoundError` in Pydantic AI integration (#5135) by @alexander-alderman-webb
- Fix OpenAI agents integration mistakenly enabling itself (#5132) by @sentrivana
- Add instrumentation to embedding functions for various backends (#5120) by @constantinius
- Improve embeddings support for OpenAI (#5121) by @constantinius
- Enhance input handling for embeddings in LiteLLM integration (#5127) by @constantinius
- Expect exceptions when re-raised (#5125) by @alexander-alderman-webb
- Remove `MagicMock` from mocked `ModelResponse` (#5126) by @alexander-alderman-webb

2.45.0 - Various fixes & improvements

- OTLPIntegration (#4877) by @sl0thentr0py

  Enable the new OTLP integration with the code snippet below, and your
  OpenTelemetry instrumentation will be automatically sent to Sentry's
  OTLP ingestion endpoint.

  ```python
  import sentry_sdk
  from sentry_sdk.integrations.otlp import OTLPIntegration

  sentry_sdk.init(
      dsn="<your-dsn>",
      # Add data like inputs and responses; see
      # https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
      send_default_pii=True,
      integrations=[
          OTLPIntegration(),
      ],
  )
  ```

  Under the hood, this will set up:
  - A `SpanExporter` that will automatically set up the OTLP ingestion endpoint from your DSN
  - A `Propagator` that ensures Distributed Tracing works
  - Trace/Span linking for all other Sentry events such as Errors, Logs, Crons and Metrics

  If you were using the `SentrySpanProcessor` before, we recommend
  migrating over to `OTLPIntegration` since it's a much simpler setup.

- feat(integrations): implement context management for invoke_agent spans (#5089) by @constantinius
- feat(loguru): Capture extra (#5096) by @sentrivana
- feat: Attach `server.address` to metrics (#5113) by @alexander-alderman-webb
- fix: Cast message and detail attributes before appending exception notes (#5114) by @alexander-alderman-webb
- fix(integrations): ensure that GEN_AI_AGENT_NAME is properly set for GEN_AI spans under an invoke_agent span (#5030) by @constantinius
- fix(logs): Update `sentry.origin` (#5112) by @sentrivana
- chore: Deprecate description truncation option for Redis spans (#5073) by @alexander-alderman-webb
- chore: Deprecate `max_spans` LangChain parameter (#5074) by @alexander-alderman-webb
- chore(toxgen): Check availability of pip and add detail to exceptions (#5076) by @alexander-alderman-webb

... (truncated)
Commits:

- d3375bc Update CHANGELOG.md
- 23abfe2 release: 2.46.0
- ca19d63 feat: Preserve metadata on wrapped coroutines (#5105)
- cf165e3 build(deps): bump actions/checkout from 5.0.0 to 6.0.0 (#5136)
- b8d6a57 build(deps): bump actions/create-github-app-token from 2.1.4 to 2.2.0 (#5137)
- c0c28b8 build(deps): bump supercharge/redis-github-action from 1.8.0 to 1.8.1 (#5138)
- fb18c21 fix(pydantic-ai): Make imports defensive to avoid `ModuleNotFoundError` (#5135)
- f945e38 Fix openai-agents import (#5132)
- 8596f89 fix(integrations): enhance input handling for embeddings in LiteLLM integrati...
- 0e6e808 test(openai-agents): Remove `MagicMock` from mocked `ModelResponse` (#5126)
- Additional commits viewable in the [compare view](https://github.com/getsentry/sentry-python/compare/2.44.0...2.46.0)
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=sentry-sdk&package-manager=pip&previous-version=2.44.0&new-version=2.46.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 18:04:07 +00:00
dependabot[bot]
9fb2b1731b Bump actions/checkout from 5.0.0 to 6.0.0 (#19213)
Bumps [actions/checkout](https://github.com/actions/checkout) from 5.0.0
to 6.0.0.
Release notes, sourced from [actions/checkout's releases](https://github.com/actions/checkout/releases):

v6.0.0 - What's Changed

- Update README to include Node.js 24 support details and requirements by @salmanmkc in actions/checkout#2248
- Persist creds to a separate file by @ericsciple in actions/checkout#2286
- v6-beta by @ericsciple in actions/checkout#2298
- update readme/changelog for v6 by @ericsciple in actions/checkout#2311

Full Changelog: https://github.com/actions/checkout/compare/v5.0.0...v6.0.0

v6-beta - What's Changed

Updated persist-credentials to store the credentials under
`$RUNNER_TEMP` instead of directly in the local git config.

This requires a minimum Actions Runner version of
[v2.329.0](https://github.com/actions/runner/releases/tag/v2.329.0)
to access the persisted credentials for [Docker container
action](https://docs.github.com/en/actions/tutorials/use-containerized-services/create-a-docker-container-action)
scenarios.

v5.0.1 - What's Changed

- Port v6 cleanup to v5 by @ericsciple in actions/checkout#2301

Full Changelog: https://github.com/actions/checkout/compare/v5...v5.0.1
Changelog, sourced from [actions/checkout's CHANGELOG.md](https://github.com/actions/checkout/blob/main/CHANGELOG.md):

V6.0.0
- Persist creds to a separate file by @ericsciple in actions/checkout#2286
- Update README to include Node.js 24 support details and requirements by @salmanmkc in actions/checkout#2248

V5.0.1
- Port v6 cleanup to v5 by @ericsciple in actions/checkout#2301

V5.0.0
- Update actions checkout to use node 24 by @salmanmkc in actions/checkout#2226

V4.3.1
- Port v6 cleanup to v4 by @ericsciple in actions/checkout#2305

V4.3.0
- docs: update README.md by @motss in actions/checkout#1971
- Add internal repos for checking out multiple repositories by @mouismail in actions/checkout#1977
- Documentation update - add recommended permissions to Readme by @benwells in actions/checkout#2043
- Adjust positioning of user email note and permissions heading by @joshmgross in actions/checkout#2044
- Update README.md by @nebuk89 in actions/checkout#2194
- Update CODEOWNERS for actions by @TingluoHuang in actions/checkout#2224
- Update package dependencies by @salmanmkc in actions/checkout#2236

v4.2.2
- `url-helper.ts` now leverages well-known environment variables by @jww3 in actions/checkout#1941
- Expand unit test coverage for `isGhes` by @jww3 in actions/checkout#1946

v4.2.1
- Check out other refs/* by commit if provided, fall back to ref by @orhantoy in actions/checkout#1924

v4.2.0
- Add Ref and Commit outputs by @lucacome in actions/checkout#1180
- Dependency updates by @dependabot: actions/checkout#1777, actions/checkout#1872

v4.1.7
- Bump the minor-npm-dependencies group across 1 directory with 4 updates by @dependabot in actions/checkout#1739
- Bump actions/checkout from 3 to 4 by @dependabot in actions/checkout#1697
- Check out other refs/* by commit by @orhantoy in actions/checkout#1774
- Pin actions/checkout's own workflows to a known, good, stable version. by @jww3 in actions/checkout#1776

v4.1.6
- Check platform to set archive extension appropriately by @cory-miller in actions/checkout#1732

v4.1.5
- Update NPM dependencies by @cory-miller in actions/checkout#1703
- Bump github/codeql-action from 2 to 3 by @dependabot in actions/checkout#1694
- Bump actions/setup-node from 1 to 4 by @dependabot in actions/checkout#1696
- Bump actions/upload-artifact from 2 to 4 by @dependabot in actions/checkout#1695

... (truncated)
Commits:

- 1af3b93 update readme/changelog for v6 (#2311)
- 71cf226 v6-beta (#2298)
- 069c695 Persist creds to a separate file (#2286)
- ff7abcd Update README to include Node.js 24 support details and requirements (#2248)
- See full diff in the compare view (08c6903cd8...1af3b93b68)
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=5.0.0&new-version=6.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 16:37:55 +00:00
dependabot[bot]
f3975ce247 Bump actions/setup-go from 6.0.0 to 6.1.0 (#19214)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 6.0.0
to 6.1.0.
Release notes, sourced from [actions/setup-go's releases](https://github.com/actions/setup-go/releases):

v6.1.0 - What's Changed

Enhancements:
- Fall back to downloading from go.dev/dl instead of storage.googleapis.com/golang by @nicholasngai in actions/setup-go#665
- Add support for .tool-versions file and update workflow by @priya-kinthali in actions/setup-go#673
- Add comprehensive breaking changes documentation for v6 by @mahabaleshwars in actions/setup-go#674

Dependency updates:
- Upgrade eslint-config-prettier from 10.0.1 to 10.1.8 and document breaking changes in v6 by @dependabot in actions/setup-go#617
- Upgrade actions/publish-action from 0.3.0 to 0.4.0 by @dependabot in actions/setup-go#641
- Upgrade semver and `@types/semver` by @dependabot in actions/setup-go#652

New Contributors:
- @nicholasngai made their first contribution in actions/setup-go#665
- @priya-kinthali made their first contribution in actions/setup-go#673
- @mahabaleshwars made their first contribution in actions/setup-go#674

Full Changelog: https://github.com/actions/setup-go/compare/v6...v6.1.0
Commits:

- 4dc6199 Bump semver and `@types/semver` (#652)
- f3787be Add comprehensive breaking changes documentation for v6 (#674)
- 3a0c2c8 Bump actions/publish-action from 0.3.0 to 0.4.0 (#641)
- faf5242 Add support for .tool-versions file in setup-go, update workflow (#673)
- 7bc60db Fall back to downloading from go.dev/dl instead of storage.googleapis.com/gol...
- c0137ca Bump eslint-config-prettier from 10.0.1 to 10.1.8 and document breaking chang...
- See full diff in the compare view (4469467582...4dc6199c7b)
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/setup-go&package-manager=github_actions&previous-version=6.0.0&new-version=6.1.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 16:30:41 +00:00
dependabot[bot]
1b78f0318a Bump rpds-py from 0.28.0 to 0.29.0 (#19216)
Bumps [rpds-py](https://github.com/crate-py/rpds) from 0.28.0 to 0.29.0.
Release notes, sourced from [rpds-py's releases](https://github.com/crate-py/rpds/releases):

v0.29.0 - What's Changed

- Bump actions/download-artifact from 5 to 6 by @dependabot[bot] in crate-py/rpds#195
- Bump github/codeql-action from 4.30.9 to 4.31.0 by @dependabot[bot] in crate-py/rpds#194
- Bump actions/upload-artifact from 4 to 5 by @dependabot[bot] in crate-py/rpds#192
- Bump astral-sh/setup-uv from 7.1.1 to 7.1.2 by @dependabot[bot] in crate-py/rpds#193
- Bump github/codeql-action from 4.31.0 to 4.31.2 by @dependabot[bot] in crate-py/rpds#196
- [pre-commit.ci] pre-commit autoupdate by @pre-commit-ci[bot] in crate-py/rpds#199
- Bump softprops/action-gh-release from 2.4.1 to 2.4.2 by @dependabot[bot] in crate-py/rpds#198
- Bump rpds from 1.1.2 to 1.2.0 by @dependabot[bot] in crate-py/rpds#197

Full Changelog: https://github.com/crate-py/rpds/compare/v0.28.0...v0.29.0
Commits:

- 5fb6f35 Prepare for 0.29.0
- d17dbd1 Add rpds's Stack.
- 74707af Follow the rpds API more closely for Queue.
- 41455f3 -> native uv for dependency groups.
- e93532d Use 3.14 by default in nox.
- 020c41f Remove dead hooks.
- 6e08b75 Accept zizmor's cooldown suggestions for dependabot.
- a5d40a9 Merge pull request #197 from crate-py/dependabot/cargo/rpds-1.2.0
- b830be1 Merge pull request #198 from crate-py/dependabot/github_actions/softprops/act...
- e7ac330 Merge pull request #199 from crate-py/pre-commit-ci-update-config
- Additional commits viewable in the [compare view](https://github.com/crate-py/rpds/compare/v0.28.0...v0.29.0)
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=rpds-py&package-manager=pip&previous-version=0.28.0&new-version=0.29.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 16:30:09 +00:00
dependabot[bot]
a5d946bfcb Bump types-bleach from 6.2.0.20250809 to 6.3.0.20251115 (#19217)
Bumps [types-bleach](https://github.com/typeshed-internal/stub_uploader)
from 6.2.0.20250809 to 6.3.0.20251115.
Commits:

- See full diff in the [compare view](https://github.com/typeshed-internal/stub_uploader/commits)
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=types-bleach&package-manager=pip&previous-version=6.2.0.20250809&new-version=6.3.0.20251115)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 16:29:32 +00:00
dependabot[bot]
ea3e08c49c Bump attrs from 25.3.0 to 25.4.0 (#19215)
Bumps [attrs](https://github.com/sponsors/hynek) from 25.3.0 to 25.4.0.
Commits:

- See full diff in the [compare view](https://github.com/sponsors/hynek/commits)
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=attrs&package-manager=pip&previous-version=25.3.0&new-version=25.4.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 15:58:26 +00:00
Eric Eastwood
54c93a1372 Export SYNAPSE_SUPPORTED_COMPLEMENT_TEST_PACKAGES from scripts-dev/complement.sh (#19208)
This is useful because someone downstream can source the
`scripts-dev/complement.sh` script and run the same set of tests as
Synapse:

```bash
# Grab the test packages supported by Synapse.
#
# --fast: Skip rebuilding the docker images,
# --build-only: Will only build Docker images but because we also used `--fast`, it won't do anything.
# `>/dev/null` to redirect stdout to `/dev/null` to get rid of the `echo` logs from the script.
test_packages=$(source ${SYNAPSE_DIR}/scripts-dev/complement.sh --fast --build-only >/dev/null && echo "$SYNAPSE_SUPPORTED_COMPLEMENT_TEST_PACKAGES")
echo $test_packages
```

This stems from wanting to run the same set of Complement tests in
the https://github.com/element-hq/synapse-rust-apps project.
2025-11-21 19:01:43 -06:00
Eric Eastwood
e39fba61a7 Refactor scripts-dev/complement.sh logic to avoid exit (#19209)
This is useful so that the script can be sourced by other scripts
without exiting the calling shell (making it composable).

This is split out from https://github.com/element-hq/synapse/pull/19208
to keep the PRs easy to understand and build up to where we want to go.
2025-11-21 10:51:19 -06:00
Devon Hudson
7a9660367a Capitalize Synapse in CHANGES.md 2025-11-18 18:04:26 -07:00
133 changed files with 1257 additions and 699 deletions

```diff
@@ -31,7 +31,7 @@ jobs:
         uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
       - name: Checkout repository
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Extract version from pyproject.toml
         # Note: explicitly requesting bash will mean bash is invoked with `-eo pipefail`, see
```

```diff
@@ -13,7 +13,7 @@ jobs:
     name: GitHub Pages
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
         with:
           # Fetch all history so that the schema_versions script works.
           fetch-depth: 0
@@ -50,7 +50,7 @@ jobs:
     name: Check links in documentation
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Setup mdbook
         uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08 # v2.0.0
```

```diff
@@ -50,7 +50,7 @@ jobs:
     needs:
       - pre
     steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
         with:
           # Fetch all history so that the schema_versions script works.
           fetch-depth: 0
```

```diff
@@ -18,7 +18,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Install Rust
         uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
```

```diff
@@ -42,7 +42,7 @@ jobs:
     if: needs.check_repo.outputs.should_run_workflow == 'true'
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Install Rust
         uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
         with:
@@ -77,7 +77,7 @@ jobs:
         postgres-version: "14"
     steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Install Rust
         uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
@@ -152,7 +152,7 @@ jobs:
       BLACKLIST: ${{ matrix.workers && 'synapse-blacklist-with-workers' }}
     steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Install Rust
         uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
@@ -202,14 +202,14 @@ jobs:
     steps:
       - name: Check out synapse codebase
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
         with:
           path: synapse
       - name: Prepare Complement's Prerequisites
         run: synapse/.ci/scripts/setup_complement_prerequisites.sh
-      - uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
+      - uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
         with:
           cache-dependency-path: complement/go.sum
           go-version-file: complement/go.mod
@@ -234,7 +234,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - uses: JasonEtco/create-an-issue@1b14a70e4d8dc185e5cc76d3bec9eab20257b2c5 # v2.9.2
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

```diff
@@ -16,7 +16,7 @@ jobs:
     name: "Check locked dependencies have sdists"
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: '3.x'
```

```diff
@@ -33,17 +33,17 @@ jobs:
       packages: write
     steps:
       - name: Checkout specific branch (debug build)
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
         if: github.event_name == 'workflow_dispatch'
         with:
           ref: ${{ inputs.branch }}
       - name: Checkout clean copy of develop (scheduled build)
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
         if: github.event_name == 'schedule'
         with:
           ref: develop
       - name: Checkout clean copy of master (on-push)
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
         if: github.event_name == 'push'
         with:
           ref: master
```

```diff
@@ -27,7 +27,7 @@ jobs:
     name: "Calculate list of debian distros"
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.x"
@@ -55,7 +55,7 @@ jobs:
     steps:
       - name: Checkout
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
         with:
           path: src
@@ -132,7 +132,7 @@ jobs:
         os: "ubuntu-24.04-arm"
     steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
@@ -170,7 +170,7 @@ jobs:
     if: ${{ !startsWith(github.ref, 'refs/pull/') }}
     steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.10"
```

```diff
@@ -14,7 +14,7 @@ jobs:
     name: Ensure Synapse config schema is valid
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.x"
@@ -40,7 +40,7 @@ jobs:
     name: Ensure generated documentation is up-to-date
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
         with:
           python-version: "3.x"
```

View File

@@ -86,7 +86,7 @@ jobs:
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
@@ -106,7 +106,7 @@ jobs:
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: "3.x"
@@ -116,7 +116,7 @@ jobs:
check-lockfile:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: "3.x"
@@ -129,7 +129,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Setup Poetry
uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
@@ -151,7 +151,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
@@ -187,7 +187,7 @@ jobs:
lint-crlf:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Check line endings
run: scripts-dev/check_line_terminators.sh
@@ -195,7 +195,7 @@ jobs:
if: ${{ (github.base_ref == 'develop' || contains(github.base_ref, 'release-')) && github.actor != 'dependabot[bot]' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -213,7 +213,7 @@ jobs:
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
@@ -232,7 +232,7 @@ jobs:
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
@@ -250,7 +250,7 @@ jobs:
steps:
- name: Checkout repository
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
@@ -286,7 +286,7 @@ jobs:
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
@@ -306,7 +306,7 @@ jobs:
needs: changes
if: ${{ needs.changes.outputs.linting_readme == 'true' }}
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: "3.x"
@@ -354,7 +354,7 @@ jobs:
needs: linting-done
runs-on: ubuntu-latest
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: "3.x"
@@ -375,7 +375,7 @@ jobs:
job: ${{ fromJson(needs.calculate-test-jobs.outputs.trial_test_matrix) }}
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- run: sudo apt-get -qq install xmlsec1
- name: Set up PostgreSQL ${{ matrix.job.postgres-version }}
if: ${{ matrix.job.postgres-version }}
@@ -431,7 +431,7 @@ jobs:
- changes
runs-on: ubuntu-22.04
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
@@ -496,7 +496,7 @@ jobs:
extras: ["all"]
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
# Install libs necessary for PyPy to build binary wheels for dependencies
- run: sudo apt-get -qq install xmlsec1 libxml2-dev libxslt-dev
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
@@ -546,7 +546,7 @@ jobs:
job: ${{ fromJson(needs.calculate-test-jobs.outputs.sytest_test_matrix) }}
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Prepare test blacklist
run: cat sytest-blacklist .ci/worker-blacklist > synapse-blacklist-with-workers
@@ -593,7 +593,7 @@ jobs:
--health-retries 5
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- run: sudo apt-get -qq install xmlsec1 postgresql-client
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
@@ -637,7 +637,7 @@ jobs:
--health-retries 5
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Add PostgreSQL apt repository
# We need a version of pg_dump that can handle the version of
# PostgreSQL being tested against. The Ubuntu package repository lags
@@ -692,7 +692,7 @@ jobs:
steps:
- name: Checkout synapse codebase
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
with:
path: synapse
@@ -705,7 +705,7 @@ jobs:
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
-      - uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
+      - uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
cache-dependency-path: complement/go.sum
go-version-file: complement/go.mod
@@ -728,7 +728,7 @@ jobs:
- changes
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
@@ -748,7 +748,7 @@ jobs:
- changes
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master


@@ -22,7 +22,7 @@ jobs:
# This field is case-sensitive.
TARGET_STATUS: Needs info
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
with:
# Only clone the script file we care about, instead of the whole repo.
sparse-checkout: .ci/scripts/triage_labelled_issue.sh


@@ -43,7 +43,7 @@ jobs:
runs-on: ubuntu-latest
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
@@ -70,7 +70,7 @@ jobs:
runs-on: ubuntu-latest
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- run: sudo apt-get -qq install xmlsec1
- name: Install Rust
@@ -117,7 +117,7 @@ jobs:
- ${{ github.workspace }}:/src
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
@@ -175,14 +175,14 @@ jobs:
steps:
- name: Run actions/checkout@v4 for synapse
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
with:
path: synapse
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
-      - uses: actions/setup-go@44694675825211faa026b3c33043df3e48a5fa00 # v6.0.0
+      - uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
cache-dependency-path: complement/go.sum
go-version-file: complement/go.mod
@@ -217,7 +217,7 @@ jobs:
runs-on: ubuntu-latest
steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- uses: JasonEtco/create-an-issue@1b14a70e4d8dc185e5cc76d3bec9eab20257b2c5 # v2.9.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -11,7 +11,7 @@ No significant changes since 1.143.0rc2.
-# synapse 1.143.0rc2 (2025-11-18)
+# Synapse 1.143.0rc2 (2025-11-18)
## Dropping support for PostgreSQL 13

changelog.d/19208.misc (new file)

@@ -0,0 +1 @@
Export the `SYNAPSE_SUPPORTED_COMPLEMENT_TEST_PACKAGES` environment variable from `scripts-dev/complement.sh`.

changelog.d/19209.misc (new file)

@@ -0,0 +1 @@
Refactor `scripts-dev/complement.sh` to avoid `exit`, so that the script can be sourced from other scripts without killing the calling shell (composable).
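
A hypothetical usage sketch of that sourcing pattern (the `--build-only` flag and the variable name appear in this diff; everything else is an assumption):

```sh
#!/usr/bin/env bash
# Source the helper instead of executing it: a failure now `return`s
# rather than exiting this shell, and its exported variables remain
# visible afterwards.
source scripts-dev/complement.sh --build-only
echo "Supported test packages: ${SYNAPSE_SUPPORTED_COMPLEMENT_TEST_PACKAGES}"
```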

changelog.d/19211.misc (new file)

@@ -0,0 +1 @@
Expire sliding sync connections that are too old or have too much pending data.
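
Illustratively, the expiry rule could be sketched as follows (the function name and thresholds are placeholders, not the values Synapse uses):

```python
def should_expire_connection(
    idle_ms: int,
    rooms_with_pending_data: int,
    *,
    # Placeholder thresholds, purely for illustration:
    busy_idle_ms: int = 60 * 60 * 1000,          # lots pending and idle a while
    max_idle_ms: int = 7 * 24 * 60 * 60 * 1000,  # idle too long regardless
    max_pending_rooms: int = 100,
) -> bool:
    # Expire early if the connection is piling up data while unused...
    if rooms_with_pending_data > max_pending_rooms and idle_ms > busy_idle_ms:
        return True
    # ...or unconditionally once it has been idle for long enough.
    return idle_ms > max_idle_ms
```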

changelog.d/19219.misc (new file)

@@ -0,0 +1 @@
Require an experimental feature flag to be enabled in order for the unstable [MSC2666](https://github.com/matrix-org/matrix-spec-proposals/pull/2666) endpoint (`/_matrix/client/unstable/uk.half-shot.msc2666/user/mutual_rooms`) to be available.

changelog.d/19221.misc (new file)

@@ -0,0 +1 @@
Auto-fix trailing spaces in multi-line strings and comments when running the lint script.

changelog.d/19223.misc (new file)

@@ -0,0 +1 @@
Move towards using a dedicated `Duration` type.
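
As a rough sketch of the idea (the real class lives in `synapse.util.duration` and accepts a `milliseconds` keyword, as the Rust change further down shows; the other names here are assumptions):

```python
class Duration:
    """Illustrative stand-in: keeps the time unit in the type, not the name."""

    def __init__(
        self, *, milliseconds: int = 0, seconds: int = 0,
        minutes: int = 0, hours: int = 0,
    ) -> None:
        self._ms = milliseconds + 1000 * (seconds + 60 * (minutes + 60 * hours))

    def as_millis(self) -> int:
        return self._ms

# Call sites can then write Duration(hours=1) instead of 3600 * 1000.
interval = Duration(hours=1)
```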


@@ -0,0 +1 @@
Stop building release wheels for MacOS.

changelog.d/19229.misc (new file)

@@ -0,0 +1 @@
Move towards using a dedicated `Duration` type.


@@ -117,6 +117,17 @@ each upgrade are complete before moving on to the next upgrade, to avoid
stacking them up. You can monitor the currently running background updates with
[the Admin API](usage/administration/admin_api/background_updates.html#status).
+# Upgrading to v1.144.0
+
+## Unstable mutual rooms endpoint is now behind an experimental feature flag
+
+The unstable mutual rooms endpoint from
+[MSC2666](https://github.com/matrix-org/matrix-spec-proposals/pull/2666)
+(`/_matrix/client/unstable/uk.half-shot.msc2666/user/mutual_rooms`) is now
+disabled by default. If you rely on this unstable endpoint, you must now set
+`experimental_features.msc2666_enabled: true` in your configuration to keep
+using it.
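
For example, a minimal homeserver.yaml stanza re-enabling the endpoint (a sketch of the option named above):

```yaml
experimental_features:
  msc2666_enabled: true
```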
# Upgrading to v1.143.0
## Dropping support for PostgreSQL 13

poetry.lock (generated)

@@ -14,24 +14,16 @@ files = [
[[package]]
name = "attrs"
version = "25.3.0"
version = "25.4.0"
description = "Classes Without Boilerplate"
optional = false
python-versions = ">=3.8"
python-versions = ">=3.9"
groups = ["main", "dev"]
files = [
{file = "attrs-25.3.0-py3-none-any.whl", hash = "sha256:427318ce031701fea540783410126f03899a97ffc6f61596ad581ac2e40e3bc3"},
{file = "attrs-25.3.0.tar.gz", hash = "sha256:75d7cefc7fb576747b2c81b4442d4d4a1ce0900973527c011d1030fd3bf4af1b"},
{file = "attrs-25.4.0-py3-none-any.whl", hash = "sha256:adcf7e2a1fb3b36ac48d97835bb6d8ade15b8dcce26aba8bf1d14847b57a3373"},
{file = "attrs-25.4.0.tar.gz", hash = "sha256:16d5969b87f0859ef33a48b35d55ac1be6e42ae49d5e853b597db70c35c57e11"},
]
[package.extras]
benchmark = ["cloudpickle ; platform_python_implementation == \"CPython\"", "hypothesis", "mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pympler", "pytest (>=4.3.0)", "pytest-codspeed", "pytest-mypy-plugins ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pytest-xdist[psutil]"]
cov = ["cloudpickle ; platform_python_implementation == \"CPython\"", "coverage[toml] (>=5.3)", "hypothesis", "mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pytest-xdist[psutil]"]
dev = ["cloudpickle ; platform_python_implementation == \"CPython\"", "hypothesis", "mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pre-commit-uv", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pytest-xdist[psutil]"]
docs = ["cogapp", "furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier"]
tests = ["cloudpickle ; platform_python_implementation == \"CPython\"", "hypothesis", "mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pytest-xdist[psutil]"]
tests-mypy = ["mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pytest-mypy-plugins ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\""]
[[package]]
name = "authlib"
version = "1.6.5"
@@ -2364,127 +2356,127 @@ jupyter = ["ipywidgets (>=7.5.1,<9)"]
[[package]]
name = "rpds-py"
version = "0.28.0"
version = "0.29.0"
description = "Python bindings to Rust's persistent data structures (rpds)"
optional = false
python-versions = ">=3.10"
groups = ["main", "dev"]
files = [
{file = "rpds_py-0.28.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:7b6013db815417eeb56b2d9d7324e64fcd4fa289caeee6e7a78b2e11fc9b438a"},
{file = "rpds_py-0.28.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:1a4c6b05c685c0c03f80dabaeb73e74218c49deea965ca63f76a752807397207"},
{file = "rpds_py-0.28.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f4794c6c3fbe8f9ac87699b131a1f26e7b4abcf6d828da46a3a52648c7930eba"},
{file = "rpds_py-0.28.0-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2e8456b6ee5527112ff2354dd9087b030e3429e43a74f480d4a5ca79d269fd85"},
{file = "rpds_py-0.28.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:beb880a9ca0a117415f241f66d56025c02037f7c4efc6fe59b5b8454f1eaa50d"},
{file = "rpds_py-0.28.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6897bebb118c44b38c9cb62a178e09f1593c949391b9a1a6fe777ccab5934ee7"},
{file = "rpds_py-0.28.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b1b553dd06e875249fd43efd727785efb57a53180e0fde321468222eabbeaafa"},
{file = "rpds_py-0.28.0-cp310-cp310-manylinux_2_31_riscv64.whl", hash = "sha256:f0b2044fdddeea5b05df832e50d2a06fe61023acb44d76978e1b060206a8a476"},
{file = "rpds_py-0.28.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:05cf1e74900e8da73fa08cc76c74a03345e5a3e37691d07cfe2092d7d8e27b04"},
{file = "rpds_py-0.28.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:efd489fec7c311dae25e94fe7eeda4b3d06be71c68f2cf2e8ef990ffcd2cd7e8"},
{file = "rpds_py-0.28.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:ada7754a10faacd4f26067e62de52d6af93b6d9542f0df73c57b9771eb3ba9c4"},
{file = "rpds_py-0.28.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:c2a34fd26588949e1e7977cfcbb17a9a42c948c100cab890c6d8d823f0586457"},
{file = "rpds_py-0.28.0-cp310-cp310-win32.whl", hash = "sha256:f9174471d6920cbc5e82a7822de8dfd4dcea86eb828b04fc8c6519a77b0ee51e"},
{file = "rpds_py-0.28.0-cp310-cp310-win_amd64.whl", hash = "sha256:6e32dd207e2c4f8475257a3540ab8a93eff997abfa0a3fdb287cae0d6cd874b8"},
{file = "rpds_py-0.28.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:03065002fd2e287725d95fbc69688e0c6daf6c6314ba38bdbaa3895418e09296"},
{file = "rpds_py-0.28.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:28ea02215f262b6d078daec0b45344c89e161eab9526b0d898221d96fdda5f27"},
{file = "rpds_py-0.28.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25dbade8fbf30bcc551cb352376c0ad64b067e4fc56f90e22ba70c3ce205988c"},
{file = "rpds_py-0.28.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:3c03002f54cc855860bfdc3442928ffdca9081e73b5b382ed0b9e8efe6e5e205"},
{file = "rpds_py-0.28.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b9699fa7990368b22032baf2b2dce1f634388e4ffc03dfefaaac79f4695edc95"},
{file = "rpds_py-0.28.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b9b06fe1a75e05e0713f06ea0c89ecb6452210fd60e2f1b6ddc1067b990e08d9"},
{file = "rpds_py-0.28.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac9f83e7b326a3f9ec3ef84cda98fb0a74c7159f33e692032233046e7fd15da2"},
{file = "rpds_py-0.28.0-cp311-cp311-manylinux_2_31_riscv64.whl", hash = "sha256:0d3259ea9ad8743a75a43eb7819324cdab393263c91be86e2d1901ee65c314e0"},
{file = "rpds_py-0.28.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:9a7548b345f66f6695943b4ef6afe33ccd3f1b638bd9afd0f730dd255c249c9e"},
{file = "rpds_py-0.28.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:c9a40040aa388b037eb39416710fbcce9443498d2eaab0b9b45ae988b53f5c67"},
{file = "rpds_py-0.28.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:8f60c7ea34e78c199acd0d3cda37a99be2c861dd2b8cf67399784f70c9f8e57d"},
{file = "rpds_py-0.28.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:1571ae4292649100d743b26d5f9c63503bb1fedf538a8f29a98dce2d5ba6b4e6"},
{file = "rpds_py-0.28.0-cp311-cp311-win32.whl", hash = "sha256:5cfa9af45e7c1140af7321fa0bef25b386ee9faa8928c80dc3a5360971a29e8c"},
{file = "rpds_py-0.28.0-cp311-cp311-win_amd64.whl", hash = "sha256:dd8d86b5d29d1b74100982424ba53e56033dc47720a6de9ba0259cf81d7cecaa"},
{file = "rpds_py-0.28.0-cp311-cp311-win_arm64.whl", hash = "sha256:4e27d3a5709cc2b3e013bf93679a849213c79ae0573f9b894b284b55e729e120"},
{file = "rpds_py-0.28.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:6b4f28583a4f247ff60cd7bdda83db8c3f5b05a7a82ff20dd4b078571747708f"},
{file = "rpds_py-0.28.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:d678e91b610c29c4b3d52a2c148b641df2b4676ffe47c59f6388d58b99cdc424"},
{file = "rpds_py-0.28.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e819e0e37a44a78e1383bf1970076e2ccc4dc8c2bbaa2f9bd1dc987e9afff628"},
{file = "rpds_py-0.28.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5ee514e0f0523db5d3fb171f397c54875dbbd69760a414dccf9d4d7ad628b5bd"},
{file = "rpds_py-0.28.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5f3fa06d27fdcee47f07a39e02862da0100cb4982508f5ead53ec533cd5fe55e"},
{file = "rpds_py-0.28.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:46959ef2e64f9e4a41fc89aa20dbca2b85531f9a72c21099a3360f35d10b0d5a"},
{file = "rpds_py-0.28.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8455933b4bcd6e83fde3fefc987a023389c4b13f9a58c8d23e4b3f6d13f78c84"},
{file = "rpds_py-0.28.0-cp312-cp312-manylinux_2_31_riscv64.whl", hash = "sha256:ad50614a02c8c2962feebe6012b52f9802deec4263946cddea37aaf28dd25a66"},
{file = "rpds_py-0.28.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e5deca01b271492553fdb6c7fd974659dce736a15bae5dad7ab8b93555bceb28"},
{file = "rpds_py-0.28.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:735f8495a13159ce6a0d533f01e8674cec0c57038c920495f87dcb20b3ddb48a"},
{file = "rpds_py-0.28.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:961ca621ff10d198bbe6ba4957decca61aa2a0c56695384c1d6b79bf61436df5"},
{file = "rpds_py-0.28.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:2374e16cc9131022e7d9a8f8d65d261d9ba55048c78f3b6e017971a4f5e6353c"},
{file = "rpds_py-0.28.0-cp312-cp312-win32.whl", hash = "sha256:d15431e334fba488b081d47f30f091e5d03c18527c325386091f31718952fe08"},
{file = "rpds_py-0.28.0-cp312-cp312-win_amd64.whl", hash = "sha256:a410542d61fc54710f750d3764380b53bf09e8c4edbf2f9141a82aa774a04f7c"},
{file = "rpds_py-0.28.0-cp312-cp312-win_arm64.whl", hash = "sha256:1f0cfd1c69e2d14f8c892b893997fa9a60d890a0c8a603e88dca4955f26d1edd"},
{file = "rpds_py-0.28.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:e9e184408a0297086f880556b6168fa927d677716f83d3472ea333b42171ee3b"},
{file = "rpds_py-0.28.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:edd267266a9b0448f33dc465a97cfc5d467594b600fe28e7fa2f36450e03053a"},
{file = "rpds_py-0.28.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:85beb8b3f45e4e32f6802fb6cd6b17f615ef6c6a52f265371fb916fae02814aa"},
{file = "rpds_py-0.28.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d2412be8d00a1b895f8ad827cc2116455196e20ed994bb704bf138fe91a42724"},
{file = "rpds_py-0.28.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cf128350d384b777da0e68796afdcebc2e9f63f0e9f242217754e647f6d32491"},
{file = "rpds_py-0.28.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a2036d09b363aa36695d1cc1a97b36865597f4478470b0697b5ee9403f4fe399"},
{file = "rpds_py-0.28.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b8e1e9be4fa6305a16be628959188e4fd5cd6f1b0e724d63c6d8b2a8adf74ea6"},
{file = "rpds_py-0.28.0-cp313-cp313-manylinux_2_31_riscv64.whl", hash = "sha256:0a403460c9dd91a7f23fc3188de6d8977f1d9603a351d5db6cf20aaea95b538d"},
{file = "rpds_py-0.28.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d7366b6553cdc805abcc512b849a519167db8f5e5c3472010cd1228b224265cb"},
{file = "rpds_py-0.28.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:5b43c6a3726efd50f18d8120ec0551241c38785b68952d240c45ea553912ac41"},
{file = "rpds_py-0.28.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:0cb7203c7bc69d7c1585ebb33a2e6074492d2fc21ad28a7b9d40457ac2a51ab7"},
{file = "rpds_py-0.28.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:7a52a5169c664dfb495882adc75c304ae1d50df552fbd68e100fdc719dee4ff9"},
{file = "rpds_py-0.28.0-cp313-cp313-win32.whl", hash = "sha256:2e42456917b6687215b3e606ab46aa6bca040c77af7df9a08a6dcfe8a4d10ca5"},
{file = "rpds_py-0.28.0-cp313-cp313-win_amd64.whl", hash = "sha256:e0a0311caedc8069d68fc2bf4c9019b58a2d5ce3cd7cb656c845f1615b577e1e"},
{file = "rpds_py-0.28.0-cp313-cp313-win_arm64.whl", hash = "sha256:04c1b207ab8b581108801528d59ad80aa83bb170b35b0ddffb29c20e411acdc1"},
{file = "rpds_py-0.28.0-cp313-cp313t-macosx_10_12_x86_64.whl", hash = "sha256:f296ea3054e11fc58ad42e850e8b75c62d9a93a9f981ad04b2e5ae7d2186ff9c"},
{file = "rpds_py-0.28.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:5a7306c19b19005ad98468fcefeb7100b19c79fc23a5f24a12e06d91181193fa"},
{file = "rpds_py-0.28.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e5d9b86aa501fed9862a443c5c3116f6ead8bc9296185f369277c42542bd646b"},
{file = "rpds_py-0.28.0-cp313-cp313t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e5bbc701eff140ba0e872691d573b3d5d30059ea26e5785acba9132d10c8c31d"},
{file = "rpds_py-0.28.0-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9a5690671cd672a45aa8616d7374fdf334a1b9c04a0cac3c854b1136e92374fe"},
{file = "rpds_py-0.28.0-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9f1d92ecea4fa12f978a367c32a5375a1982834649cdb96539dcdc12e609ab1a"},
{file = "rpds_py-0.28.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d252db6b1a78d0a3928b6190156042d54c93660ce4d98290d7b16b5296fb7cc"},
{file = "rpds_py-0.28.0-cp313-cp313t-manylinux_2_31_riscv64.whl", hash = "sha256:d61b355c3275acb825f8777d6c4505f42b5007e357af500939d4a35b19177259"},
{file = "rpds_py-0.28.0-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:acbe5e8b1026c0c580d0321c8aae4b0a1e1676861d48d6e8c6586625055b606a"},
{file = "rpds_py-0.28.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:8aa23b6f0fc59b85b4c7d89ba2965af274346f738e8d9fc2455763602e62fd5f"},
{file = "rpds_py-0.28.0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:7b14b0c680286958817c22d76fcbca4800ddacef6f678f3a7c79a1fe7067fe37"},
{file = "rpds_py-0.28.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:bcf1d210dfee61a6c86551d67ee1031899c0fdbae88b2d44a569995d43797712"},
{file = "rpds_py-0.28.0-cp313-cp313t-win32.whl", hash = "sha256:3aa4dc0fdab4a7029ac63959a3ccf4ed605fee048ba67ce89ca3168da34a1342"},
{file = "rpds_py-0.28.0-cp313-cp313t-win_amd64.whl", hash = "sha256:7b7d9d83c942855e4fdcfa75d4f96f6b9e272d42fffcb72cd4bb2577db2e2907"},
{file = "rpds_py-0.28.0-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:dcdcb890b3ada98a03f9f2bb108489cdc7580176cb73b4f2d789e9a1dac1d472"},
{file = "rpds_py-0.28.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:f274f56a926ba2dc02976ca5b11c32855cbd5925534e57cfe1fda64e04d1add2"},
{file = "rpds_py-0.28.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4fe0438ac4a29a520ea94c8c7f1754cdd8feb1bc490dfda1bfd990072363d527"},
{file = "rpds_py-0.28.0-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:8a358a32dd3ae50e933347889b6af9a1bdf207ba5d1a3f34e1a38cd3540e6733"},
{file = "rpds_py-0.28.0-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e80848a71c78aa328fefaba9c244d588a342c8e03bda518447b624ea64d1ff56"},
{file = "rpds_py-0.28.0-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f586db2e209d54fe177e58e0bc4946bea5fb0102f150b1b2f13de03e1f0976f8"},
{file = "rpds_py-0.28.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5ae8ee156d6b586e4292491e885d41483136ab994e719a13458055bec14cf370"},
{file = "rpds_py-0.28.0-cp314-cp314-manylinux_2_31_riscv64.whl", hash = "sha256:a805e9b3973f7e27f7cab63a6b4f61d90f2e5557cff73b6e97cd5b8540276d3d"},
{file = "rpds_py-0.28.0-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5d3fd16b6dc89c73a4da0b4ac8b12a7ecc75b2864b95c9e5afed8003cb50a728"},
{file = "rpds_py-0.28.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:6796079e5d24fdaba6d49bda28e2c47347e89834678f2bc2c1b4fc1489c0fb01"},
{file = "rpds_py-0.28.0-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:76500820c2af232435cbe215e3324c75b950a027134e044423f59f5b9a1ba515"},
{file = "rpds_py-0.28.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:bbdc5640900a7dbf9dd707fe6388972f5bbd883633eb68b76591044cfe346f7e"},
{file = "rpds_py-0.28.0-cp314-cp314-win32.whl", hash = "sha256:adc8aa88486857d2b35d75f0640b949759f79dc105f50aa2c27816b2e0dd749f"},
{file = "rpds_py-0.28.0-cp314-cp314-win_amd64.whl", hash = "sha256:66e6fa8e075b58946e76a78e69e1a124a21d9a48a5b4766d15ba5b06869d1fa1"},
{file = "rpds_py-0.28.0-cp314-cp314-win_arm64.whl", hash = "sha256:a6fe887c2c5c59413353b7c0caff25d0e566623501ccfff88957fa438a69377d"},
{file = "rpds_py-0.28.0-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:7a69df082db13c7070f7b8b1f155fa9e687f1d6aefb7b0e3f7231653b79a067b"},
{file = "rpds_py-0.28.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:b1cde22f2c30ebb049a9e74c5374994157b9b70a16147d332f89c99c5960737a"},
{file = "rpds_py-0.28.0-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5338742f6ba7a51012ea470bd4dc600a8c713c0c72adaa0977a1b1f4327d6592"},
{file = "rpds_py-0.28.0-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e1460ebde1bcf6d496d80b191d854adedcc619f84ff17dc1c6d550f58c9efbba"},
{file = "rpds_py-0.28.0-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e3eb248f2feba84c692579257a043a7699e28a77d86c77b032c1d9fbb3f0219c"},
{file = "rpds_py-0.28.0-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bd3bbba5def70b16cd1c1d7255666aad3b290fbf8d0fe7f9f91abafb73611a91"},
{file = "rpds_py-0.28.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3114f4db69ac5a1f32e7e4d1cbbe7c8f9cf8217f78e6e002cedf2d54c2a548ed"},
{file = "rpds_py-0.28.0-cp314-cp314t-manylinux_2_31_riscv64.whl", hash = "sha256:4b0cb8a906b1a0196b863d460c0222fb8ad0f34041568da5620f9799b83ccf0b"},
{file = "rpds_py-0.28.0-cp314-cp314t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:cf681ac76a60b667106141e11a92a3330890257e6f559ca995fbb5265160b56e"},
{file = "rpds_py-0.28.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:1e8ee6413cfc677ce8898d9cde18cc3a60fc2ba756b0dec5b71eb6eb21c49fa1"},
{file = "rpds_py-0.28.0-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:b3072b16904d0b5572a15eb9d31c1954e0d3227a585fc1351aa9878729099d6c"},
{file = "rpds_py-0.28.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:b670c30fd87a6aec281c3c9896d3bae4b205fd75d79d06dc87c2503717e46092"},
{file = "rpds_py-0.28.0-cp314-cp314t-win32.whl", hash = "sha256:8014045a15b4d2b3476f0a287fcc93d4f823472d7d1308d47884ecac9e612be3"},
{file = "rpds_py-0.28.0-cp314-cp314t-win_amd64.whl", hash = "sha256:7a4e59c90d9c27c561eb3160323634a9ff50b04e4f7820600a2beb0ac90db578"},
{file = "rpds_py-0.28.0-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:f5e7101145427087e493b9c9b959da68d357c28c562792300dd21a095118ed16"},
{file = "rpds_py-0.28.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:31eb671150b9c62409a888850aaa8e6533635704fe2b78335f9aaf7ff81eec4d"},
{file = "rpds_py-0.28.0-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:48b55c1f64482f7d8bd39942f376bfdf2f6aec637ee8c805b5041e14eeb771db"},
{file = "rpds_py-0.28.0-pp311-pypy311_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:24743a7b372e9a76171f6b69c01aedf927e8ac3e16c474d9fe20d552a8cb45c7"},
{file = "rpds_py-0.28.0-pp311-pypy311_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:389c29045ee8bbb1627ea190b4976a310a295559eaf9f1464a1a6f2bf84dde78"},
{file = "rpds_py-0.28.0-pp311-pypy311_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:23690b5827e643150cf7b49569679ec13fe9a610a15949ed48b85eb7f98f34ec"},
{file = "rpds_py-0.28.0-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f0c9266c26580e7243ad0d72fc3e01d6b33866cfab5084a6da7576bcf1c4f72"},
{file = "rpds_py-0.28.0-pp311-pypy311_pp73-manylinux_2_31_riscv64.whl", hash = "sha256:4c6c4db5d73d179746951486df97fd25e92396be07fc29ee8ff9a8f5afbdfb27"},
{file = "rpds_py-0.28.0-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:a3b695a8fa799dd2cfdb4804b37096c5f6dba1ac7f48a7fbf6d0485bcd060316"},
{file = "rpds_py-0.28.0-pp311-pypy311_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:6aa1bfce3f83baf00d9c5fcdbba93a3ab79958b4c7d7d1f55e7fe68c20e63912"},
{file = "rpds_py-0.28.0-pp311-pypy311_pp73-musllinux_1_2_i686.whl", hash = "sha256:7b0f9dceb221792b3ee6acb5438eb1f02b0cb2c247796a72b016dcc92c6de829"},
{file = "rpds_py-0.28.0-pp311-pypy311_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:5d0145edba8abd3db0ab22b5300c99dc152f5c9021fab861be0f0544dc3cbc5f"},
{file = "rpds_py-0.28.0.tar.gz", hash = "sha256:abd4df20485a0983e2ca334a216249b6186d6e3c1627e106651943dbdb791aea"},
{file = "rpds_py-0.29.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:4ae4b88c6617e1b9e5038ab3fccd7bac0842fdda2b703117b2aa99bc85379113"},
{file = "rpds_py-0.29.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:7d9128ec9d8cecda6f044001fde4fb71ea7c24325336612ef8179091eb9596b9"},
{file = "rpds_py-0.29.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d37812c3da8e06f2bb35b3cf10e4a7b68e776a706c13058997238762b4e07f4f"},
{file = "rpds_py-0.29.0-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:66786c3fb1d8de416a7fa8e1cb1ec6ba0a745b2b0eee42f9b7daa26f1a495545"},
{file = "rpds_py-0.29.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b58f5c77f1af888b5fd1876c9a0d9858f6f88a39c9dd7c073a88e57e577da66d"},
{file = "rpds_py-0.29.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:799156ef1f3529ed82c36eb012b5d7a4cf4b6ef556dd7cc192148991d07206ae"},
{file = "rpds_py-0.29.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:453783477aa4f2d9104c4b59b08c871431647cb7af51b549bbf2d9eb9c827756"},
{file = "rpds_py-0.29.0-cp310-cp310-manylinux_2_31_riscv64.whl", hash = "sha256:24a7231493e3c4a4b30138b50cca089a598e52c34cf60b2f35cebf62f274fdea"},
{file = "rpds_py-0.29.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:7033c1010b1f57bb44d8067e8c25aa6fa2e944dbf46ccc8c92b25043839c3fd2"},
{file = "rpds_py-0.29.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:0248b19405422573621172ab8e3a1f29141362d13d9f72bafa2e28ea0cdca5a2"},
{file = "rpds_py-0.29.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:f9f436aee28d13b9ad2c764fc273e0457e37c2e61529a07b928346b219fcde3b"},
{file = "rpds_py-0.29.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:24a16cb7163933906c62c272de20ea3c228e4542c8c45c1d7dc2b9913e17369a"},
{file = "rpds_py-0.29.0-cp310-cp310-win32.whl", hash = "sha256:1a409b0310a566bfd1be82119891fefbdce615ccc8aa558aff7835c27988cbef"},
{file = "rpds_py-0.29.0-cp310-cp310-win_amd64.whl", hash = "sha256:c5523b0009e7c3c1263471b69d8da1c7d41b3ecb4cb62ef72be206b92040a950"},
{file = "rpds_py-0.29.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:9b9c764a11fd637e0322a488560533112837f5334ffeb48b1be20f6d98a7b437"},
{file = "rpds_py-0.29.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3fd2164d73812026ce970d44c3ebd51e019d2a26a4425a5dcbdfa93a34abc383"},
{file = "rpds_py-0.29.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4a097b7f7f7274164566ae90a221fd725363c0e9d243e2e9ed43d195ccc5495c"},
{file = "rpds_py-0.29.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7cdc0490374e31cedefefaa1520d5fe38e82fde8748cbc926e7284574c714d6b"},
{file = "rpds_py-0.29.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:89ca2e673ddd5bde9b386da9a0aac0cab0e76f40c8f0aaf0d6311b6bbf2aa311"},
{file = "rpds_py-0.29.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a5d9da3ff5af1ca1249b1adb8ef0573b94c76e6ae880ba1852f033bf429d4588"},
{file = "rpds_py-0.29.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8238d1d310283e87376c12f658b61e1ee23a14c0e54c7c0ce953efdbdc72deed"},
{file = "rpds_py-0.29.0-cp311-cp311-manylinux_2_31_riscv64.whl", hash = "sha256:2d6fb2ad1c36f91c4646989811e84b1ea5e0c3cf9690b826b6e32b7965853a63"},
{file = "rpds_py-0.29.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:534dc9df211387547267ccdb42253aa30527482acb38dd9b21c5c115d66a96d2"},
{file = "rpds_py-0.29.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d456e64724a075441e4ed648d7f154dc62e9aabff29bcdf723d0c00e9e1d352f"},
{file = "rpds_py-0.29.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:a738f2da2f565989401bd6fd0b15990a4d1523c6d7fe83f300b7e7d17212feca"},
{file = "rpds_py-0.29.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:a110e14508fd26fd2e472bb541f37c209409876ba601cf57e739e87d8a53cf95"},
{file = "rpds_py-0.29.0-cp311-cp311-win32.whl", hash = "sha256:923248a56dd8d158389a28934f6f69ebf89f218ef96a6b216a9be6861804d3f4"},
{file = "rpds_py-0.29.0-cp311-cp311-win_amd64.whl", hash = "sha256:539eb77eb043afcc45314d1be09ea6d6cafb3addc73e0547c171c6d636957f60"},
{file = "rpds_py-0.29.0-cp311-cp311-win_arm64.whl", hash = "sha256:bdb67151ea81fcf02d8f494703fb728d4d34d24556cbff5f417d74f6f5792e7c"},
{file = "rpds_py-0.29.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:a0891cfd8db43e085c0ab93ab7e9b0c8fee84780d436d3b266b113e51e79f954"},
{file = "rpds_py-0.29.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3897924d3f9a0361472d884051f9a2460358f9a45b1d85a39a158d2f8f1ad71c"},
{file = "rpds_py-0.29.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2a21deb8e0d1571508c6491ce5ea5e25669b1dd4adf1c9d64b6314842f708b5d"},
{file = "rpds_py-0.29.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:9efe71687d6427737a0a2de9ca1c0a216510e6cd08925c44162be23ed7bed2d5"},
{file = "rpds_py-0.29.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:40f65470919dc189c833e86b2c4bd21bd355f98436a2cef9e0a9a92aebc8e57e"},
{file = "rpds_py-0.29.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:def48ff59f181130f1a2cb7c517d16328efac3ec03951cca40c1dc2049747e83"},
{file = "rpds_py-0.29.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ad7bd570be92695d89285a4b373006930715b78d96449f686af422debb4d3949"},
{file = "rpds_py-0.29.0-cp312-cp312-manylinux_2_31_riscv64.whl", hash = "sha256:5a572911cd053137bbff8e3a52d31c5d2dba51d3a67ad902629c70185f3f2181"},
{file = "rpds_py-0.29.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d583d4403bcbf10cffc3ab5cee23d7643fcc960dff85973fd3c2d6c86e8dbb0c"},
{file = "rpds_py-0.29.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:070befbb868f257d24c3bb350dbd6e2f645e83731f31264b19d7231dd5c396c7"},
{file = "rpds_py-0.29.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:fc935f6b20b0c9f919a8ff024739174522abd331978f750a74bb68abd117bd19"},
{file = "rpds_py-0.29.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:8c5a8ecaa44ce2d8d9d20a68a2483a74c07f05d72e94a4dff88906c8807e77b0"},
{file = "rpds_py-0.29.0-cp312-cp312-win32.whl", hash = "sha256:ba5e1aeaf8dd6d8f6caba1f5539cddda87d511331714b7b5fc908b6cfc3636b7"},
{file = "rpds_py-0.29.0-cp312-cp312-win_amd64.whl", hash = "sha256:b5f6134faf54b3cb83375db0f113506f8b7770785be1f95a631e7e2892101977"},
{file = "rpds_py-0.29.0-cp312-cp312-win_arm64.whl", hash = "sha256:b016eddf00dca7944721bf0cd85b6af7f6c4efaf83ee0b37c4133bd39757a8c7"},
{file = "rpds_py-0.29.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:1585648d0760b88292eecab5181f5651111a69d90eff35d6b78aa32998886a61"},
{file = "rpds_py-0.29.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:521807963971a23996ddaf764c682b3e46459b3c58ccd79fefbe16718db43154"},
{file = "rpds_py-0.29.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a8896986efaa243ab713c69e6491a4138410f0fe36f2f4c71e18bd5501e8014"},
{file = "rpds_py-0.29.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:1d24564a700ef41480a984c5ebed62b74e6ce5860429b98b1fede76049e953e6"},
{file = "rpds_py-0.29.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e6596b93c010d386ae46c9fba9bfc9fc5965fa8228edeac51576299182c2e31c"},
{file = "rpds_py-0.29.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5cc58aac218826d054c7da7f95821eba94125d88be673ff44267bb89d12a5866"},
{file = "rpds_py-0.29.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:de73e40ebc04dd5d9556f50180395322193a78ec247e637e741c1b954810f295"},
{file = "rpds_py-0.29.0-cp313-cp313-manylinux_2_31_riscv64.whl", hash = "sha256:295ce5ac7f0cf69a651ea75c8f76d02a31f98e5698e82a50a5f4d4982fbbae3b"},
{file = "rpds_py-0.29.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1ea59b23ea931d494459c8338056fe7d93458c0bf3ecc061cd03916505369d55"},
{file = "rpds_py-0.29.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f49d41559cebd608042fdcf54ba597a4a7555b49ad5c1c0c03e0af82692661cd"},
{file = "rpds_py-0.29.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:05a2bd42768ea988294ca328206efbcc66e220d2d9b7836ee5712c07ad6340ea"},
{file = "rpds_py-0.29.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:33ca7bdfedd83339ca55da3a5e1527ee5870d4b8369456b5777b197756f3ca22"},
{file = "rpds_py-0.29.0-cp313-cp313-win32.whl", hash = "sha256:20c51ae86a0bb9accc9ad4e6cdeec58d5ebb7f1b09dd4466331fc65e1766aae7"},
{file = "rpds_py-0.29.0-cp313-cp313-win_amd64.whl", hash = "sha256:6410e66f02803600edb0b1889541f4b5cc298a5ccda0ad789cc50ef23b54813e"},
{file = "rpds_py-0.29.0-cp313-cp313-win_arm64.whl", hash = "sha256:56838e1cd9174dc23c5691ee29f1d1be9eab357f27efef6bded1328b23e1ced2"},
{file = "rpds_py-0.29.0-cp313-cp313t-macosx_10_12_x86_64.whl", hash = "sha256:37d94eadf764d16b9a04307f2ab1d7af6dc28774bbe0535c9323101e14877b4c"},
{file = "rpds_py-0.29.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:d472cf73efe5726a067dce63eebe8215b14beabea7c12606fd9994267b3cfe2b"},
{file = "rpds_py-0.29.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:72fdfd5ff8992e4636621826371e3ac5f3e3b8323e9d0e48378e9c13c3dac9d0"},
{file = "rpds_py-0.29.0-cp313-cp313t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2549d833abdf8275c901313b9e8ff8fba57e50f6a495035a2a4e30621a2f7cc4"},
{file = "rpds_py-0.29.0-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4448dad428f28a6a767c3e3b80cde3446a22a0efbddaa2360f4bb4dc836d0688"},
{file = "rpds_py-0.29.0-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:115f48170fd4296a33938d8c11f697f5f26e0472e43d28f35624764173a60e4d"},
{file = "rpds_py-0.29.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8e5bb73ffc029820f4348e9b66b3027493ae00bca6629129cd433fd7a76308ee"},
{file = "rpds_py-0.29.0-cp313-cp313t-manylinux_2_31_riscv64.whl", hash = "sha256:b1581fcde18fcdf42ea2403a16a6b646f8eb1e58d7f90a0ce693da441f76942e"},
{file = "rpds_py-0.29.0-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:16e9da2bda9eb17ea318b4c335ec9ac1818e88922cbe03a5743ea0da9ecf74fb"},
{file = "rpds_py-0.29.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:28fd300326dd21198f311534bdb6d7e989dd09b3418b3a91d54a0f384c700967"},
{file = "rpds_py-0.29.0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:2aba991e041d031c7939e1358f583ae405a7bf04804ca806b97a5c0e0af1ea5e"},
{file = "rpds_py-0.29.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:7f437026dbbc3f08c99cc41a5b2570c6e1a1ddbe48ab19a9b814254128d4ea7a"},
{file = "rpds_py-0.29.0-cp313-cp313t-win32.whl", hash = "sha256:6e97846e9800a5d0fe7be4d008f0c93d0feeb2700da7b1f7528dabafb31dfadb"},
{file = "rpds_py-0.29.0-cp313-cp313t-win_amd64.whl", hash = "sha256:f49196aec7c4b406495f60e6f947ad71f317a765f956d74bbd83996b9edc0352"},
{file = "rpds_py-0.29.0-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:394d27e4453d3b4d82bb85665dc1fcf4b0badc30fc84282defed71643b50e1a1"},
{file = "rpds_py-0.29.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:55d827b2ae95425d3be9bc9a5838b6c29d664924f98146557f7715e331d06df8"},
{file = "rpds_py-0.29.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fc31a07ed352e5462d3ee1b22e89285f4ce97d5266f6d1169da1142e78045626"},
{file = "rpds_py-0.29.0-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c4695dd224212f6105db7ea62197144230b808d6b2bba52238906a2762f1d1e7"},
{file = "rpds_py-0.29.0-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fcae1770b401167f8b9e1e3f566562e6966ffa9ce63639916248a9e25fa8a244"},
{file = "rpds_py-0.29.0-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:90f30d15f45048448b8da21c41703b31c61119c06c216a1bf8c245812a0f0c17"},
{file = "rpds_py-0.29.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:44a91e0ab77bdc0004b43261a4b8cd6d6b451e8d443754cfda830002b5745b32"},
{file = "rpds_py-0.29.0-cp314-cp314-manylinux_2_31_riscv64.whl", hash = "sha256:4aa195e5804d32c682e453b34474f411ca108e4291c6a0f824ebdc30a91c973c"},
{file = "rpds_py-0.29.0-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:7971bdb7bf4ee0f7e6f67fa4c7fbc6019d9850cc977d126904392d363f6f8318"},
{file = "rpds_py-0.29.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:8ae33ad9ce580c7a47452c3b3f7d8a9095ef6208e0a0c7e4e2384f9fc5bf8212"},
{file = "rpds_py-0.29.0-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:c661132ab2fb4eeede2ef69670fd60da5235209874d001a98f1542f31f2a8a94"},
{file = "rpds_py-0.29.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:bb78b3a0d31ac1bde132c67015a809948db751cb4e92cdb3f0b242e430b6ed0d"},
{file = "rpds_py-0.29.0-cp314-cp314-win32.whl", hash = "sha256:f475f103488312e9bd4000bc890a95955a07b2d0b6e8884aef4be56132adbbf1"},
{file = "rpds_py-0.29.0-cp314-cp314-win_amd64.whl", hash = "sha256:b9cf2359a4fca87cfb6801fae83a76aedf66ee1254a7a151f1341632acf67f1b"},
{file = "rpds_py-0.29.0-cp314-cp314-win_arm64.whl", hash = "sha256:9ba8028597e824854f0f1733d8b964e914ae3003b22a10c2c664cb6927e0feb9"},
{file = "rpds_py-0.29.0-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:e71136fd0612556b35c575dc2726ae04a1669e6a6c378f2240312cf5d1a2ab10"},
{file = "rpds_py-0.29.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:76fe96632d53f3bf0ea31ede2f53bbe3540cc2736d4aec3b3801b0458499ef3a"},
{file = "rpds_py-0.29.0-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9459a33f077130dbb2c7c3cea72ee9932271fb3126404ba2a2661e4fe9eb7b79"},
{file = "rpds_py-0.29.0-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5c9546cfdd5d45e562cc0444b6dddc191e625c62e866bf567a2c69487c7ad28a"},
{file = "rpds_py-0.29.0-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:12597d11d97b8f7e376c88929a6e17acb980e234547c92992f9f7c058f1a7310"},
{file = "rpds_py-0.29.0-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28de03cf48b8a9e6ec10318f2197b83946ed91e2891f651a109611be4106ac4b"},
{file = "rpds_py-0.29.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fd7951c964069039acc9d67a8ff1f0a7f34845ae180ca542b17dc1456b1f1808"},
{file = "rpds_py-0.29.0-cp314-cp314t-manylinux_2_31_riscv64.whl", hash = "sha256:c07d107b7316088f1ac0177a7661ca0c6670d443f6fe72e836069025e6266761"},
{file = "rpds_py-0.29.0-cp314-cp314t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1de2345af363d25696969befc0c1688a6cb5e8b1d32b515ef84fc245c6cddba3"},
{file = "rpds_py-0.29.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:00e56b12d2199ca96068057e1ae7f9998ab6e99cda82431afafd32f3ec98cca9"},
{file = "rpds_py-0.29.0-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:3919a3bbecee589300ed25000b6944174e07cd20db70552159207b3f4bbb45b8"},
{file = "rpds_py-0.29.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:e7fa2ccc312bbd91e43aa5e0869e46bc03278a3dddb8d58833150a18b0f0283a"},
{file = "rpds_py-0.29.0-cp314-cp314t-win32.whl", hash = "sha256:97c817863ffc397f1e6a6e9d2d89fe5408c0a9922dac0329672fb0f35c867ea5"},
{file = "rpds_py-0.29.0-cp314-cp314t-win_amd64.whl", hash = "sha256:2023473f444752f0f82a58dfcbee040d0a1b3d1b3c2ec40e884bd25db6d117d2"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:acd82a9e39082dc5f4492d15a6b6c8599aa21db5c35aaf7d6889aea16502c07d"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:715b67eac317bf1c7657508170a3e011a1ea6ccb1c9d5f296e20ba14196be6b3"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3b1b87a237cb2dba4db18bcfaaa44ba4cd5936b91121b62292ff21df577fc43"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:1c3c3e8101bb06e337c88eb0c0ede3187131f19d97d43ea0e1c5407ea74c0cbf"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2b8e54d6e61f3ecd3abe032065ce83ea63417a24f437e4a3d73d2f85ce7b7cfe"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3fbd4e9aebf110473a420dea85a238b254cf8a15acb04b22a5a6b5ce8925b760"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:80fdf53d36e6c72819993e35d1ebeeb8e8fc688d0c6c2b391b55e335b3afba5a"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-manylinux_2_31_riscv64.whl", hash = "sha256:ea7173df5d86f625f8dde6d5929629ad811ed8decda3b60ae603903839ac9ac0"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:76054d540061eda273274f3d13a21a4abdde90e13eaefdc205db37c05230efce"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:9f84c549746a5be3bc7415830747a3a0312573afc9f95785eb35228bb17742ec"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-musllinux_1_2_i686.whl", hash = "sha256:0ea962671af5cb9a260489e311fa22b2e97103e3f9f0caaea6f81390af96a9ed"},
{file = "rpds_py-0.29.0-pp311-pypy311_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:f7728653900035fb7b8d06e1e5900545d8088efc9d5d4545782da7df03ec803f"},
{file = "rpds_py-0.29.0.tar.gz", hash = "sha256:fe55fe686908f50154d1dc599232016e50c243b438c3b7432f24e2895b0e5359"},
]
[[package]]
@@ -2551,15 +2543,15 @@ doc = ["Sphinx", "sphinx-rtd-theme"]
[[package]]
name = "sentry-sdk"
version = "2.44.0"
version = "2.46.0"
description = "Python client for Sentry (https://sentry.io)"
optional = true
python-versions = ">=3.6"
groups = ["main"]
markers = "extra == \"all\" or extra == \"sentry\""
files = [
{file = "sentry_sdk-2.44.0-py2.py3-none-any.whl", hash = "sha256:9e36a0372b881e8f92fdbff4564764ce6cec4b7f25424d0a3a8d609c9e4651a7"},
{file = "sentry_sdk-2.44.0.tar.gz", hash = "sha256:5b1fe54dfafa332e900b07dd8f4dfe35753b64e78e7d9b1655a28fd3065e2493"},
{file = "sentry_sdk-2.46.0-py2.py3-none-any.whl", hash = "sha256:4eeeb60198074dff8d066ea153fa6f241fef1668c10900ea53a4200abc8da9b1"},
{file = "sentry_sdk-2.46.0.tar.gz", hash = "sha256:91821a23460725734b7741523021601593f35731808afc0bb2ba46c27b8acd91"},
]
[package.dependencies]
@@ -2598,6 +2590,7 @@ openai = ["openai (>=1.0.0)", "tiktoken (>=0.3.0)"]
openfeature = ["openfeature-sdk (>=0.7.1)"]
opentelemetry = ["opentelemetry-distro (>=0.35b0)"]
opentelemetry-experimental = ["opentelemetry-distro"]
opentelemetry-otlp = ["opentelemetry-distro[otlp] (>=0.35b0)"]
pure-eval = ["asttokens", "executing", "pure_eval"]
pydantic-ai = ["pydantic-ai (>=1.0.0)"]
pymongo = ["pymongo (>=3.1)"]
@@ -2984,14 +2977,14 @@ twisted = "*"
[[package]]
name = "types-bleach"
version = "6.2.0.20250809"
version = "6.3.0.20251115"
description = "Typing stubs for bleach"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "types_bleach-6.2.0.20250809-py3-none-any.whl", hash = "sha256:0b372a75117947d9ac8a31ae733fd0f8d92ec75c4772e7b37093ba3fa5b48fb9"},
{file = "types_bleach-6.2.0.20250809.tar.gz", hash = "sha256:188d7a1119f6c953140b513ed57ba4213755695815472c19d0c22ac09c79b90b"},
{file = "types_bleach-6.3.0.20251115-py3-none-any.whl", hash = "sha256:f81e7cf4ebac3f3d60b66b3fd5236c324e65037d1b28d22c94d5b457f0b98f42"},
{file = "types_bleach-6.3.0.20251115.tar.gz", hash = "sha256:96911b20f169a18524d03b61fa7e98a08c411292f7cdb5dc191057f55dad9ae3"},
]
[package.dependencies]


@@ -269,6 +269,8 @@ extend-safe-fixes = [
"UP007",
# pyupgrade rules compatible with Python >= 3.10
"UP045",
+    # Allow ruff to automatically fix trailing spaces within a multi-line string/comment.
+    "W293"
]
[tool.ruff.lint.isort]
@@ -393,7 +395,8 @@ build-backend = "poetry.core.masonry.api"
# We skip:
# - free-threaded cpython builds: these are not currently supported.
# - i686: We don't support 32-bit platforms.
skip = "cp3??t-* *i686*"
# - *macosx*: we don't support building wheels for MacOS.
skip = "cp3??t-* *i686* *macosx*"
# Enable non-default builds. See the list of available options:
# https://cibuildwheel.pypa.io/en/stable/options#enable
#
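
cibuildwheel treats these skip entries as fnmatch-style globs over build identifiers. A quick illustrative check (the identifiers below are examples, not an exhaustive list):

```python
from fnmatch import fnmatch

SKIP = ["cp3??t-*", "*i686*", "*macosx*"]
for ident in [
    "cp312-manylinux_x86_64",   # kept: regular Linux wheel
    "cp313t-manylinux_x86_64",  # skipped: free-threaded CPython build
    "cp310-manylinux_i686",     # skipped: 32-bit platform
    "cp312-macosx_arm64",       # skipped: macOS wheel
]:
    print(ident, "skipped" if any(fnmatch(ident, p) for p in SKIP) else "built")
```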


@@ -28,7 +28,7 @@ use mime::Mime;
use pyo3::{
exceptions::PyValueError,
pyclass, pymethods,
-    types::{PyAnyMethods, PyModule, PyModuleMethods},
+    types::{IntoPyDict, PyAnyMethods, PyModule, PyModuleMethods},
Bound, IntoPyObject, Py, PyAny, PyResult, Python,
};
use ulid::Ulid;
@@ -132,6 +132,11 @@ impl RendezvousHandler {
.unwrap_infallible()
.unbind();
+        let duration_module = py.import("synapse.util.duration")?;
+        let kwargs = [("milliseconds", eviction_interval)].into_py_dict(py)?;
+        let eviction_duration = duration_module.call_method("Duration", (), Some(&kwargs))?;
// Construct a Python object so that we can get a reference to the
// evict method and schedule it to run.
let self_ = Py::new(
@@ -149,7 +154,7 @@ impl RendezvousHandler {
let evict = self_.getattr(py, "_evict")?;
homeserver.call_method0("get_clock")?.call_method(
"looping_call",
-            (evict, eviction_interval),
+            (evict, eviction_duration),
None,
)?;
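
For reference, a Python-side sketch of the call this Rust code now makes (it assumes only what is visible in the diff: `Duration(milliseconds=...)` from `synapse.util.duration`, and a `looping_call` method on the clock):

```python
from synapse.util.duration import Duration

def schedule_eviction(homeserver, evict, eviction_interval_ms: int) -> None:
    # looping_call now receives a typed Duration rather than a raw
    # millisecond count.
    homeserver.get_clock().looping_call(
        evict, Duration(milliseconds=eviction_interval_ms)
    )
```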


@@ -72,153 +72,151 @@ For help on arguments to 'go test', run 'go help testflag'.
EOF
}
# parse our arguments
skip_docker_build=""
skip_complement_run=""
while [ $# -ge 1 ]; do
# We use a function to wrap the script logic so that we can use `return` to exit early
# if needed. This is particularly useful so that this script can be sourced by other
# scripts without exiting the calling subshell (composable). This allows us to share
# variables like `SYNAPSE_SUPPORTED_COMPLEMENT_TEST_PACKAGES` with other scripts.
#
# Returns an exit code of 0 on success, or 1 on failure.
main() {
# parse our arguments
skip_docker_build=""
skip_complement_run=""
while [ $# -ge 1 ]; do
arg=$1
case "$arg" in
"-h")
usage
exit 1
;;
"-f"|"--fast")
skip_docker_build=1
;;
"--build-only")
skip_complement_run=1
;;
"-e"|"--editable")
use_editable_synapse=1
;;
"--rebuild-editable")
rebuild_editable_synapse=1
;;
*)
# unknown arg: presumably an argument to gotest. break the loop.
break
"-h")
usage
return 1
;;
"-f"|"--fast")
skip_docker_build=1
;;
"--build-only")
skip_complement_run=1
;;
"-e"|"--editable")
use_editable_synapse=1
;;
"--rebuild-editable")
rebuild_editable_synapse=1
;;
*)
# unknown arg: presumably an argument to gotest. break the loop.
break
esac
shift
done
done
# enable buildkit for the docker builds
export DOCKER_BUILDKIT=1
# enable buildkit for the docker builds
export DOCKER_BUILDKIT=1
# Determine whether to use the docker or podman container runtime.
if [ -n "$PODMAN" ]; then
export CONTAINER_RUNTIME=podman
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
export BUILDAH_FORMAT=docker
export COMPLEMENT_HOSTNAME_RUNNING_COMPLEMENT=host.containers.internal
else
export CONTAINER_RUNTIME=docker
fi
# Determine whether to use the docker or podman container runtime.
if [ -n "$PODMAN" ]; then
export CONTAINER_RUNTIME=podman
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
export BUILDAH_FORMAT=docker
export COMPLEMENT_HOSTNAME_RUNNING_COMPLEMENT=host.containers.internal
else
export CONTAINER_RUNTIME=docker
fi
# Change to the repository root
cd "$(dirname $0)/.."
# Change to the repository root
cd "$(dirname $0)/.."
# Check for a user-specified Complement checkout
if [[ -z "$COMPLEMENT_DIR" ]]; then
COMPLEMENT_REF=${COMPLEMENT_REF:-main}
echo "COMPLEMENT_DIR not set. Fetching Complement checkout from ${COMPLEMENT_REF}..."
wget -Nq https://github.com/matrix-org/complement/archive/${COMPLEMENT_REF}.tar.gz
tar -xzf ${COMPLEMENT_REF}.tar.gz
COMPLEMENT_DIR=complement-${COMPLEMENT_REF}
echo "Checkout available at 'complement-${COMPLEMENT_REF}'"
fi
# Check for a user-specified Complement checkout
if [[ -z "$COMPLEMENT_DIR" ]]; then
COMPLEMENT_REF=${COMPLEMENT_REF:-main}
echo "COMPLEMENT_DIR not set. Fetching Complement checkout from ${COMPLEMENT_REF}..."
wget -Nq https://github.com/matrix-org/complement/archive/${COMPLEMENT_REF}.tar.gz
tar -xzf ${COMPLEMENT_REF}.tar.gz
COMPLEMENT_DIR=complement-${COMPLEMENT_REF}
echo "Checkout available at 'complement-${COMPLEMENT_REF}'"
fi
if [ -n "$use_editable_synapse" ]; then
if [ -n "$use_editable_synapse" ]; then
if [[ -e synapse/synapse_rust.abi3.so ]]; then
# In an editable install, back up the host's compiled Rust module to prevent
# inconvenience; the container will overwrite the module with its own copy.
mv -n synapse/synapse_rust.abi3.so synapse/synapse_rust.abi3.so~host
# And restore it on exit:
synapse_pkg=`realpath synapse`
trap "mv -f '$synapse_pkg/synapse_rust.abi3.so~host' '$synapse_pkg/synapse_rust.abi3.so'" EXIT
# In an editable install, back up the host's compiled Rust module to prevent
# inconvenience; the container will overwrite the module with its own copy.
mv -n synapse/synapse_rust.abi3.so synapse/synapse_rust.abi3.so~host
# And restore it on exit:
synapse_pkg=`realpath synapse`
trap "mv -f '$synapse_pkg/synapse_rust.abi3.so~host' '$synapse_pkg/synapse_rust.abi3.so'" EXIT
fi
editable_mount="$(realpath .):/editable-src:z"
if [ -n "$rebuild_editable_synapse" ]; then
unset skip_docker_build
unset skip_docker_build
elif $CONTAINER_RUNTIME inspect complement-synapse-editable &>/dev/null; then
# complement-synapse-editable already exists: see if we can still use it:
# - The Rust module must still be importable; it will fail to import if the Rust source has changed.
# - The Poetry lock file must be the same (otherwise we assume dependencies have changed)
# complement-synapse-editable already exists: see if we can still use it:
# - The Rust module must still be importable; it will fail to import if the Rust source has changed.
# - The Poetry lock file must be the same (otherwise we assume dependencies have changed)
# First set up the module in the right place for an editable installation.
$CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'cp' complement-synapse-editable -- /synapse_rust.abi3.so.bak /editable-src/synapse/synapse_rust.abi3.so
# First set up the module in the right place for an editable installation.
$CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'cp' complement-synapse-editable -- /synapse_rust.abi3.so.bak /editable-src/synapse/synapse_rust.abi3.so
if ($CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'python' complement-synapse-editable -c 'import synapse.synapse_rust' \
&& $CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'diff' complement-synapse-editable --brief /editable-src/poetry.lock /poetry.lock.bak); then
skip_docker_build=1
else
echo "Editable Synapse image is stale. Will rebuild."
unset skip_docker_build
fi
fi
fi
if [ -z "$skip_docker_build" ]; then
if [ -z "$skip_docker_build" ]; then
if [ -n "$use_editable_synapse" ]; then
# Build a special image designed for use in development with editable
# installs.
$CONTAINER_RUNTIME build -t synapse-editable \
-f "docker/editable.Dockerfile" .
$CONTAINER_RUNTIME build -t synapse-workers-editable \
--build-arg FROM=synapse-editable \
-f "docker/Dockerfile-workers" .
$CONTAINER_RUNTIME build -t complement-synapse-editable \
--build-arg FROM=synapse-workers-editable \
-f "docker/complement/Dockerfile" "docker/complement"
# Prepare the Rust module
$CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'cp' complement-synapse-editable -- /synapse_rust.abi3.so.bak /editable-src/synapse/synapse_rust.abi3.so
else
# Build the base Synapse image from the local checkout
echo_if_github "::group::Build Docker image: matrixdotorg/synapse"
$CONTAINER_RUNTIME build -t matrixdotorg/synapse \
--build-arg TEST_ONLY_SKIP_DEP_HASH_VERIFICATION \
--build-arg TEST_ONLY_IGNORE_POETRY_LOCKFILE \
-f "docker/Dockerfile" .
echo_if_github "::endgroup::"
# Build the workers docker image (from the base Synapse image we just built).
echo_if_github "::group::Build Docker image: matrixdotorg/synapse-workers"
$CONTAINER_RUNTIME build -t matrixdotorg/synapse-workers -f "docker/Dockerfile-workers" .
echo_if_github "::endgroup::"
# Build the unified Complement image (from the worker Synapse image we just built).
echo_if_github "::group::Build Docker image: complement/Dockerfile"
$CONTAINER_RUNTIME build -t complement-synapse \
`# This is the tag we end up pushing to the registry (see` \
`# .github/workflows/push_complement_image.yml) so let's just label it now` \
`# so people can reference it by the same name locally.` \
-t ghcr.io/element-hq/synapse/complement-synapse \
-f "docker/complement/Dockerfile" "docker/complement"
echo_if_github "::endgroup::"
fi
echo "Docker images built."
else
echo "Skipping Docker image build as requested."
fi
if [ -n "$skip_complement_run" ]; then
echo "Skipping Complement run as requested."
exit
fi
export COMPLEMENT_BASE_IMAGE=complement-synapse
if [ -n "$use_editable_synapse" ]; then
export COMPLEMENT_BASE_IMAGE=complement-synapse-editable
export COMPLEMENT_HOST_MOUNTS="$editable_mount"
fi
extra_test_args=()
test_packages=(
test_packages=(
./tests/csapi
./tests
./tests/msc3874
@@ -231,71 +229,104 @@ test_packages=(
./tests/msc4140
./tests/msc4155
./tests/msc4306
)
)
# Enable dirty runs, so tests will reuse the same container where possible.
# This significantly speeds up tests, but increases the possibility of test pollution.
export COMPLEMENT_ENABLE_DIRTY_RUNS=1
# Export the list of test packages as a space-separated environment variable, so other
# scripts can use it.
export SYNAPSE_SUPPORTED_COMPLEMENT_TEST_PACKAGES="${test_packages[@]}"
# All environment variables starting with PASS_ will be shared.
# (The prefix is stripped off before reaching the container.)
export COMPLEMENT_SHARE_ENV_PREFIX=PASS_
# It takes longer than 10m to run the whole suite.
extra_test_args+=("-timeout=60m")
if [[ -n "$WORKERS" ]]; then
# Use workers.
export PASS_SYNAPSE_COMPLEMENT_USE_WORKERS=true
# Pass through the workers defined. If none, it will be an empty string
export PASS_SYNAPSE_WORKER_TYPES="$WORKER_TYPES"
# Workers can only use Postgres as a database.
export PASS_SYNAPSE_COMPLEMENT_DATABASE=postgres
# And provide some more configuration to complement.
# It can take quite a while to spin up a worker-mode Synapse for the first
# time (the main problem is that we start 14 python processes for each test,
# and complement likes to do two of them in parallel).
export COMPLEMENT_SPAWN_HS_TIMEOUT_SECS=120
else
export PASS_SYNAPSE_COMPLEMENT_USE_WORKERS=
if [[ -n "$POSTGRES" ]]; then
export PASS_SYNAPSE_COMPLEMENT_DATABASE=postgres
else
export PASS_SYNAPSE_COMPLEMENT_DATABASE=sqlite
export COMPLEMENT_BASE_IMAGE=complement-synapse
if [ -n "$use_editable_synapse" ]; then
export COMPLEMENT_BASE_IMAGE=complement-synapse-editable
export COMPLEMENT_HOST_MOUNTS="$editable_mount"
fi
# Enable dirty runs, so tests will reuse the same container where possible.
# This significantly speeds up tests, but increases the possibility of test pollution.
export COMPLEMENT_ENABLE_DIRTY_RUNS=1
# All environment variables starting with PASS_ will be shared.
# (The prefix is stripped off before reaching the container.)
export COMPLEMENT_SHARE_ENV_PREFIX=PASS_
# * -count=1: Only run tests once, and disable caching for tests.
# * -v: Output test logs, even if those tests pass.
# * -tags=synapse_blacklist: Enable the `synapse_blacklist` build tag, which is
# necessary for `runtime.Synapse` checks/skips to work in the tests
test_args=(
-v
-tags="synapse_blacklist"
-count=1
)
# It takes longer than 10m to run the whole suite.
test_args+=("-timeout=60m")
if [[ -n "$WORKERS" ]]; then
# Use workers.
export PASS_SYNAPSE_COMPLEMENT_USE_WORKERS=true
# Pass through the workers defined. If none, it will be an empty string
export PASS_SYNAPSE_WORKER_TYPES="$WORKER_TYPES"
# Workers can only use Postgres as a database.
export PASS_SYNAPSE_COMPLEMENT_DATABASE=postgres
# And provide some more configuration to complement.
# It can take quite a while to spin up a worker-mode Synapse for the first
# time (the main problem is that we start 14 python processes for each test,
# and complement likes to do two of them in parallel).
export COMPLEMENT_SPAWN_HS_TIMEOUT_SECS=120
else
export PASS_SYNAPSE_COMPLEMENT_USE_WORKERS=
if [[ -n "$POSTGRES" ]]; then
export PASS_SYNAPSE_COMPLEMENT_DATABASE=postgres
else
export PASS_SYNAPSE_COMPLEMENT_DATABASE=sqlite
fi
fi
if [[ -n "$ASYNCIO_REACTOR" ]]; then
# Enable the Twisted asyncio reactor
export PASS_SYNAPSE_COMPLEMENT_USE_ASYNCIO_REACTOR=true
fi
if [[ -n "$UNIX_SOCKETS" ]]; then
# Enable full on Unix socket mode for Synapse, Redis and Postgresql
export PASS_SYNAPSE_USE_UNIX_SOCKET=1
fi
if [[ -n "$SYNAPSE_TEST_LOG_LEVEL" ]]; then
# Set the log level to what is desired
export PASS_SYNAPSE_LOG_LEVEL="$SYNAPSE_TEST_LOG_LEVEL"
# Allow logging sensitive things (currently SQL queries & parameters).
# (This won't have any effect if we're not logging at DEBUG level overall.)
# Since this is just a test suite, this is fine and won't reveal anyone's
# personal information
export PASS_SYNAPSE_LOG_SENSITIVE=1
fi
# Log a few more useful things for a developer attempting to debug something
# particularly tricky.
export PASS_SYNAPSE_LOG_TESTING=1
if [ -n "$skip_complement_run" ]; then
echo "Skipping Complement run as requested."
return 0
fi
# Run the tests!
echo "Running Complement with ${test_args[@]} $@ ${test_packages[@]}"
cd "$COMPLEMENT_DIR"
go test "${test_args[@]}" "$@" "${test_packages[@]}"
}
main "$@"
# For any non-zero exit code (indicating some sort of error happened), we want to exit
# with that code.
exit_code=$?
if [ $exit_code -ne 0 ]; then
exit $exit_code
fi
if [[ -n "$ASYNCIO_REACTOR" ]]; then
# Enable the Twisted asyncio reactor
export PASS_SYNAPSE_COMPLEMENT_USE_ASYNCIO_REACTOR=true
fi
if [[ -n "$UNIX_SOCKETS" ]]; then
# Enable full on Unix socket mode for Synapse, Redis and Postgresql
export PASS_SYNAPSE_USE_UNIX_SOCKET=1
fi
if [[ -n "$SYNAPSE_TEST_LOG_LEVEL" ]]; then
# Set the log level to what is desired
export PASS_SYNAPSE_LOG_LEVEL="$SYNAPSE_TEST_LOG_LEVEL"
# Allow logging sensitive things (currently SQL queries & parameters).
# (This won't have any effect if we're not logging at DEBUG level overall.)
# Since this is just a test suite, this is fine and won't reveal anyone's
# personal information
export PASS_SYNAPSE_LOG_SENSITIVE=1
fi
# Log a few more useful things for a developer attempting to debug something
# particularly tricky.
export PASS_SYNAPSE_LOG_TESTING=1
# Run the tests!
echo "Images built; running complement with ${extra_test_args[@]} $@ ${test_packages[@]}"
cd "$COMPLEMENT_DIR"
go test -v -tags "synapse_blacklist" -count=1 "${extra_test_args[@]}" "$@" "${test_packages[@]}"

View File

@@ -27,6 +27,7 @@ from synapse.config.ratelimiting import RatelimitSettings
from synapse.storage.databases.main import DataStore
from synapse.types import Requester
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.wheel_timer import WheelTimer
if TYPE_CHECKING:
@@ -100,7 +101,7 @@ class Ratelimiter:
# and doesn't affect correctness.
self._timer: WheelTimer[Hashable] = WheelTimer()
self.clock.looping_call(self._prune_message_counts, 15 * 1000)
self.clock.looping_call(self._prune_message_counts, Duration(seconds=15))
def _get_key(self, requester: Requester | None, key: Hashable | None) -> Hashable:
"""Use the requester's MXID as a fallback key if no key is provided."""

View File

@@ -30,24 +30,20 @@ from twisted.internet import defer
from synapse.metrics import SERVER_NAME_LABEL
from synapse.types import JsonDict
from synapse.util.constants import (
MILLISECONDS_PER_SECOND,
ONE_HOUR_SECONDS,
ONE_MINUTE_SECONDS,
)
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger("synapse.app.homeserver")
INITIAL_DELAY_BEFORE_FIRST_PHONE_HOME_SECONDS = 5 * ONE_MINUTE_SECONDS
INITIAL_DELAY_BEFORE_FIRST_PHONE_HOME = Duration(minutes=5)
"""
We wait 5 minutes to send the first set of stats as the server can be quite busy the
first few minutes
"""
PHONE_HOME_INTERVAL_SECONDS = 3 * ONE_HOUR_SECONDS
PHONE_HOME_INTERVAL = Duration(hours=3)
"""
Phone home stats are sent every 3 hours
"""
@@ -222,13 +218,13 @@ def start_phone_stats_home(hs: "HomeServer") -> None:
# table will decrease
clock.looping_call(
hs.get_datastores().main.generate_user_daily_visits,
5 * ONE_MINUTE_SECONDS * MILLISECONDS_PER_SECOND,
Duration(minutes=5),
)
# monthly active user limiting functionality
clock.looping_call(
hs.get_datastores().main.reap_monthly_active_users,
ONE_HOUR_SECONDS * MILLISECONDS_PER_SECOND,
Duration(hours=1),
)
hs.get_datastores().main.reap_monthly_active_users()
@@ -267,14 +263,14 @@ def start_phone_stats_home(hs: "HomeServer") -> None:
if hs.config.server.limit_usage_by_mau or hs.config.server.mau_stats_only:
generate_monthly_active_users()
clock.looping_call(generate_monthly_active_users, 5 * 60 * 1000)
clock.looping_call(generate_monthly_active_users, Duration(minutes=5))
# End of monthly active user settings
if hs.config.metrics.report_stats:
logger.info("Scheduling stats reporting for 3 hour intervals")
clock.looping_call(
phone_stats_home,
PHONE_HOME_INTERVAL_SECONDS * MILLISECONDS_PER_SECOND,
PHONE_HOME_INTERVAL,
hs,
stats,
)
@@ -282,14 +278,14 @@ def start_phone_stats_home(hs: "HomeServer") -> None:
# We need to defer this init for the cases that we daemonize
# otherwise the process ID we get is that of the non-daemon process
clock.call_later(
0,
Duration(seconds=0),
performance_stats_init,
)
# We wait 5 minutes to send the first set of stats as the server can
# be quite busy the first few minutes
clock.call_later(
INITIAL_DELAY_BEFORE_FIRST_PHONE_HOME_SECONDS,
INITIAL_DELAY_BEFORE_FIRST_PHONE_HOME,
phone_stats_home,
hs,
stats,
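As a quick sanity check on the phone-home conversions above, using the Duration sketch earlier and assuming the retired constants had their face values:

# Assumed face values of the retired constants.
ONE_MINUTE_SECONDS = 60
ONE_HOUR_SECONDS = 60 * 60
MILLISECONDS_PER_SECOND = 1000

# Old-style millisecond arithmetic and the new typed intervals agree.
assert 5 * ONE_MINUTE_SECONDS * MILLISECONDS_PER_SECOND == Duration(minutes=5).as_millis()
assert 3 * ONE_HOUR_SECONDS * MILLISECONDS_PER_SECOND == Duration(hours=3).as_millis()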

View File

@@ -77,6 +77,7 @@ from synapse.logging.context import run_in_background
from synapse.storage.databases.main import DataStore
from synapse.types import DeviceListUpdates, JsonMapping
from synapse.util.clock import Clock, DelayedCallWrapper
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -504,7 +505,7 @@ class _Recoverer:
self.scheduled_recovery: DelayedCallWrapper | None = None
def recover(self) -> None:
delay = 2**self.backoff_counter
delay = Duration(seconds=2**self.backoff_counter)
logger.info("Scheduling retries on %s in %fs", self.service.id, delay)
self.scheduled_recovery = self.clock.call_later(
delay,
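The recovery delay above grows exponentially with the backoff counter. A small illustration, using the Duration sketch earlier and assuming the counter simply increments between attempts:

# Assumes backoff_counter counts 1, 2, 3, ... across retries.
for backoff_counter in range(1, 6):
    delay = Duration(seconds=2**backoff_counter)
    print(backoff_counter, delay.as_secs())  # 2.0, 4.0, 8.0, 16.0, 32.0 seconds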

View File

@@ -438,6 +438,9 @@ class ExperimentalConfig(Config):
# previously calculated push actions.
self.msc2654_enabled: bool = experimental.get("msc2654_enabled", False)
# MSC2666: Query mutual rooms between two users.
self.msc2666_enabled: bool = experimental.get("msc2666_enabled", False)
# MSC2815 (allow room moderators to view redacted event content)
self.msc2815_enabled: bool = experimental.get("msc2815_enabled", False)

View File

@@ -75,6 +75,7 @@ from synapse.types import JsonDict, StrCollection, UserID, get_domain_from_id
from synapse.types.handlers.policy_server import RECOMMENDATION_OK, RECOMMENDATION_SPAM
from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.duration import Duration
from synapse.util.retryutils import NotRetryingDestination
if TYPE_CHECKING:
@@ -132,7 +133,7 @@ class FederationClient(FederationBase):
super().__init__(hs)
self.pdu_destination_tried: dict[str, dict[str, int]] = {}
self._clock.looping_call(self._clear_tried_cache, 60 * 1000)
self._clock.looping_call(self._clear_tried_cache, Duration(minutes=1))
self.state = hs.get_state_handler()
self.transport_layer = hs.get_federation_transport_client()

View File

@@ -89,6 +89,7 @@ from synapse.types import JsonDict, StateMap, UserID, get_domain_from_id
from synapse.util import unwrapFirstError
from synapse.util.async_helpers import Linearizer, concurrently_execute, gather_results
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.duration import Duration
from synapse.util.stringutils import parse_server_name
if TYPE_CHECKING:
@@ -226,7 +227,7 @@ class FederationServer(FederationBase):
)
# We pause a bit so that we don't start handling all rooms at once.
await self._clock.sleep(random.uniform(0, 0.1))
await self._clock.sleep(Duration(seconds=random.uniform(0, 0.1)))
async def on_backfill_request(
self, origin: str, room_id: str, versions: list[str], limit: int
@@ -301,7 +302,9 @@ class FederationServer(FederationBase):
# Start a periodic check for old staged events. This is to handle
# the case where locks time out, e.g. if another process gets killed
# without dropping its locks.
self._clock.looping_call(self._handle_old_staged_events, 60 * 1000)
self._clock.looping_call(
self._handle_old_staged_events, Duration(minutes=1)
)
# keep this as early as possible to make the calculated origin ts as
# accurate as possible.

View File

@@ -53,6 +53,7 @@ from synapse.federation.sender import AbstractFederationSender, FederationSender
from synapse.metrics import SERVER_NAME_LABEL, LaterGauge
from synapse.replication.tcp.streams.federation import FederationStream
from synapse.types import JsonDict, ReadReceipt, RoomStreamToken, StrCollection
from synapse.util.duration import Duration
from synapse.util.metrics import Measure
from .units import Edu
@@ -137,7 +138,7 @@ class FederationRemoteSendQueue(AbstractFederationSender):
assert isinstance(queue, Sized)
register(queue_name, queue=queue)
self.clock.looping_call(self._clear_queue, 30 * 1000)
self.clock.looping_call(self._clear_queue, Duration(seconds=30))
def shutdown(self) -> None:
"""Stops this federation sender instance from sending further transactions."""

View File

@@ -174,6 +174,7 @@ from synapse.types import (
get_domain_from_id,
)
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.metrics import Measure
from synapse.util.retryutils import filter_destinations_by_retry_limiter
@@ -218,12 +219,12 @@ transaction_queue_pending_edus_gauge = LaterGauge(
# Please note that rate limiting still applies, so while the loop is
# executed every X seconds the destinations may not be woken up because
# they are being rate limited following previous attempt failures.
WAKEUP_RETRY_PERIOD_SEC = 60
WAKEUP_RETRY_PERIOD = Duration(minutes=1)
# Time (in s) to wait in between waking up each destination, i.e. one destination
# Time to wait in between waking up each destination, i.e. one destination
# will be woken up every <x> seconds until we have woken every destination
# that has outstanding catch-up.
WAKEUP_INTERVAL_BETWEEN_DESTINATIONS_SEC = 5
WAKEUP_INTERVAL_BETWEEN_DESTINATIONS = Duration(seconds=5)
class AbstractFederationSender(metaclass=abc.ABCMeta):
@@ -379,7 +380,7 @@ class _DestinationWakeupQueue:
queue.attempt_new_transaction()
await self.clock.sleep(current_sleep_seconds)
await self.clock.sleep(Duration(seconds=current_sleep_seconds))
if not self.queue:
break
@@ -468,7 +469,7 @@ class FederationSender(AbstractFederationSender):
# Regularly wake up destinations that have outstanding PDUs to be caught up
self.clock.looping_call_now(
self.hs.run_as_background_process,
WAKEUP_RETRY_PERIOD_SEC * 1000.0,
WAKEUP_RETRY_PERIOD,
"wake_destinations_needing_catchup",
self._wake_destinations_needing_catchup,
)
@@ -1161,4 +1162,4 @@ class FederationSender(AbstractFederationSender):
last_processed,
)
self.wake_destination(destination)
await self.clock.sleep(WAKEUP_INTERVAL_BETWEEN_DESTINATIONS_SEC)
await self.clock.sleep(WAKEUP_INTERVAL_BETWEEN_DESTINATIONS)

View File

@@ -28,6 +28,7 @@ from synapse.metrics.background_process_metrics import wrap_as_background_proces
from synapse.types import UserID
from synapse.util import stringutils
from synapse.util.async_helpers import delay_cancellation
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -73,7 +74,7 @@ class AccountValidityHandler:
# Check the renewal emails to send and send them every 30min.
if hs.config.worker.run_background_tasks:
self.clock.looping_call(self._send_renewal_emails, 30 * 60 * 1000)
self.clock.looping_call(self._send_renewal_emails, Duration(minutes=30))
async def is_user_expired(self, user_id: str) -> bool:
"""Checks if a user has expired against third-party modules.

View File

@@ -74,6 +74,7 @@ from synapse.storage.databases.main.registration import (
from synapse.types import JsonDict, Requester, StrCollection, UserID
from synapse.util import stringutils as stringutils
from synapse.util.async_helpers import delay_cancellation, maybe_awaitable
from synapse.util.duration import Duration
from synapse.util.msisdn import phone_number_to_msisdn
from synapse.util.stringutils import base62_encode
from synapse.util.threepids import canonicalise_email
@@ -242,7 +243,7 @@ class AuthHandler:
if hs.config.worker.run_background_tasks:
self._clock.looping_call(
run_as_background_process,
5 * 60 * 1000,
Duration(minutes=5),
"expire_old_sessions",
self.server_name,
self._expire_old_sessions,

View File

@@ -42,6 +42,7 @@ from synapse.types import (
UserID,
create_requester,
)
from synapse.util.duration import Duration
from synapse.util.events import generate_fake_event_id
from synapse.util.metrics import Measure
from synapse.util.sentinel import Sentinel
@@ -92,7 +93,7 @@ class DelayedEventsHandler:
# Kick off again (without blocking) to catch any missed notifications
# that may have fired before the callback was added.
self._clock.call_later(
0,
Duration(seconds=0),
self.notify_new_event,
)
@@ -501,17 +502,17 @@ class DelayedEventsHandler:
def _schedule_next_at(self, next_send_ts: Timestamp) -> None:
delay = next_send_ts - self._get_current_ts()
delay_sec = delay / 1000 if delay > 0 else 0
delay_duration = Duration(milliseconds=max(delay, 0))
if self._next_delayed_event_call is None:
self._next_delayed_event_call = self._clock.call_later(
delay_sec,
delay_duration,
self.hs.run_as_background_process,
"_send_on_timeout",
self._send_on_timeout,
)
else:
self._next_delayed_event_call.reset(delay_sec)
self._next_delayed_event_call.reset(delay_duration.as_secs())
async def get_all_for_user(self, requester: Requester) -> list[JsonDict]:
"""Return all pending delayed events requested by the given user."""

View File

@@ -71,6 +71,7 @@ from synapse.util import stringutils
from synapse.util.async_helpers import Linearizer
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.cancellation import cancellable
from synapse.util.duration import Duration
from synapse.util.metrics import measure_func
from synapse.util.retryutils import (
NotRetryingDestination,
@@ -85,7 +86,7 @@ logger = logging.getLogger(__name__)
DELETE_DEVICE_MSGS_TASK_NAME = "delete_device_messages"
MAX_DEVICE_DISPLAY_NAME_LEN = 100
DELETE_STALE_DEVICES_INTERVAL_MS = 24 * 60 * 60 * 1000
DELETE_STALE_DEVICES_INTERVAL = Duration(days=1)
def _check_device_name_length(name: str | None) -> None:
@@ -186,7 +187,7 @@ class DeviceHandler:
):
self.clock.looping_call(
self.hs.run_as_background_process,
DELETE_STALE_DEVICES_INTERVAL_MS,
DELETE_STALE_DEVICES_INTERVAL,
desc="delete_stale_devices",
func=self._delete_stale_devices,
)
@@ -915,7 +916,7 @@ class DeviceHandler:
)
DEVICE_MSGS_DELETE_BATCH_LIMIT = 1000
DEVICE_MSGS_DELETE_SLEEP_MS = 100
DEVICE_MSGS_DELETE_SLEEP = Duration(milliseconds=100)
async def _delete_device_messages(
self,
@@ -941,9 +942,7 @@ class DeviceHandler:
if from_stream_id is None:
return TaskStatus.COMPLETE, None, None
await self.clock.sleep(
DeviceWriterHandler.DEVICE_MSGS_DELETE_SLEEP_MS / 1000.0
)
await self.clock.sleep(DeviceWriterHandler.DEVICE_MSGS_DELETE_SLEEP)
class DeviceWriterHandler(DeviceHandler):
@@ -1469,7 +1468,7 @@ class DeviceListUpdater(DeviceListWorkerUpdater):
self._resync_retry_lock = Lock()
self.clock.looping_call(
self.hs.run_as_background_process,
30 * 1000,
Duration(seconds=30),
func=self._maybe_retry_device_resync,
desc="_maybe_retry_device_resync",
)

View File

@@ -46,6 +46,7 @@ from synapse.types import (
)
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.cancellation import cancellable
from synapse.util.duration import Duration
from synapse.util.json import json_decoder
from synapse.util.retryutils import (
NotRetryingDestination,
@@ -1634,7 +1635,7 @@ class E2eKeysHandler:
# matrix.org has about 15M users in the e2e_one_time_keys_json table
# (comprising 20M devices). We want this to take about a week, so we need
# to do about one batch of 100 users every 4 seconds.
await self.clock.sleep(4)
await self.clock.sleep(Duration(seconds=4))
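The week-long estimate in the comment above checks out:

# Checking the comment's arithmetic: 15M users in batches of 100,
# one batch every 4 seconds.
total_seconds = 15_000_000 / 100 * 4
print(total_seconds / 86_400)  # ~6.9 days, i.e. about a week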
def _check_cross_signing_key(

View File

@@ -72,6 +72,7 @@ from synapse.storage.invite_rule import InviteRule
from synapse.types import JsonDict, StrCollection, get_domain_from_id
from synapse.types.state import StateFilter
from synapse.util.async_helpers import Linearizer
from synapse.util.duration import Duration
from synapse.util.retryutils import NotRetryingDestination
from synapse.visibility import filter_events_for_server
@@ -1972,7 +1973,9 @@ class FederationHandler:
logger.warning(
"%s; waiting for %d ms...", e, e.retry_after_ms
)
await self.clock.sleep(e.retry_after_ms / 1000)
await self.clock.sleep(
Duration(milliseconds=e.retry_after_ms)
)
# Success, no need to try the rest of the destinations.
break

View File

@@ -91,6 +91,7 @@ from synapse.types import (
)
from synapse.types.state import StateFilter
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.duration import Duration
from synapse.util.iterutils import batch_iter, partition, sorted_topologically
from synapse.util.retryutils import NotRetryingDestination
from synapse.util.stringutils import shortstr
@@ -1802,7 +1803,7 @@ class FederationEventHandler:
# the reactor. For large rooms let's yield to the reactor
# occasionally to ensure we don't block other work.
if (i + 1) % 1000 == 0:
await self._clock.sleep(0)
await self._clock.sleep(Duration(seconds=0))
# Also persist the new event in batches for similar reasons as above.
for batch in batch_iter(events_and_contexts_to_persist, 1000):

View File

@@ -83,6 +83,7 @@ from synapse.types.state import StateFilter
from synapse.util import log_failure, unwrapFirstError
from synapse.util.async_helpers import Linearizer, gather_results
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.duration import Duration
from synapse.util.json import json_decoder, json_encoder
from synapse.util.metrics import measure_func
from synapse.visibility import get_effective_room_visibility_from_state
@@ -433,14 +434,11 @@ class MessageHandler:
# Figure out how many seconds we need to wait before expiring the event.
now_ms = self.clock.time_msec()
delay = (expiry_ts - now_ms) / 1000
delay = Duration(milliseconds=max(expiry_ts - now_ms, 0))
# callLater doesn't support negative delays, so trim the delay to 0 if we're
# in that case.
if delay < 0:
delay = 0
logger.info("Scheduling expiry for event %s in %.3fs", event_id, delay)
logger.info(
"Scheduling expiry for event %s in %.3fs", event_id, delay.as_secs()
)
self._scheduled_expiry = self.clock.call_later(
delay,
@@ -551,7 +549,7 @@ class EventCreationHandler:
"send_dummy_events_to_fill_extremities",
self._send_dummy_events_to_fill_extremities,
),
5 * 60 * 1000,
Duration(minutes=5),
)
self._message_handler = hs.get_message_handler()
@@ -1012,7 +1010,7 @@ class EventCreationHandler:
if not ignore_shadow_ban and requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
await self.clock.sleep(Duration(seconds=random.randint(1, 10)))
raise ShadowBanError()
room_version = None
@@ -1515,7 +1513,7 @@ class EventCreationHandler:
and requester.shadow_banned
):
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
await self.clock.sleep(Duration(seconds=random.randint(1, 10)))
raise ShadowBanError()
if event.is_state():

View File

@@ -42,6 +42,7 @@ from synapse.types import (
from synapse.types.handlers import ShutdownRoomParams, ShutdownRoomResponse
from synapse.types.state import StateFilter
from synapse.util.async_helpers import ReadWriteLock
from synapse.util.duration import Duration
from synapse.visibility import filter_events_for_client
if TYPE_CHECKING:
@@ -116,7 +117,7 @@ class PaginationHandler:
self.clock.looping_call(
self.hs.run_as_background_process,
job.interval,
Duration(milliseconds=job.interval),
"purge_history_for_rooms_in_range",
self.purge_history_for_rooms_in_range,
job.shortest_max_lifetime,

View File

@@ -121,6 +121,7 @@ from synapse.types import (
get_domain_from_id,
)
from synapse.util.async_helpers import Linearizer
from synapse.util.duration import Duration
from synapse.util.metrics import Measure
from synapse.util.wheel_timer import WheelTimer
@@ -203,7 +204,7 @@ EXTERNAL_PROCESS_EXPIRY = 5 * 60 * 1000
# Delay before a worker tells the presence handler that a user has stopped
# syncing.
UPDATE_SYNCING_USERS_MS = 10 * 1000
UPDATE_SYNCING_USERS = Duration(seconds=10)
assert LAST_ACTIVE_GRANULARITY < IDLE_TIMER
@@ -528,7 +529,7 @@ class WorkerPresenceHandler(BasePresenceHandler):
self._bump_active_client = ReplicationBumpPresenceActiveTime.make_client(hs)
self._set_state_client = ReplicationPresenceSetState.make_client(hs)
self.clock.looping_call(self.send_stop_syncing, UPDATE_SYNCING_USERS_MS)
self.clock.looping_call(self.send_stop_syncing, UPDATE_SYNCING_USERS)
hs.register_async_shutdown_handler(
phase="before",
@@ -581,7 +582,7 @@ class WorkerPresenceHandler(BasePresenceHandler):
for (user_id, device_id), last_sync_ms in list(
self._user_devices_going_offline.items()
):
if now - last_sync_ms > UPDATE_SYNCING_USERS_MS:
if now - last_sync_ms > UPDATE_SYNCING_USERS.as_millis():
self._user_devices_going_offline.pop((user_id, device_id), None)
self.send_user_sync(user_id, device_id, False, last_sync_ms)
@@ -861,20 +862,20 @@ class PresenceHandler(BasePresenceHandler):
# The initial delay is to allow disconnected clients a chance to
# reconnect before we treat them as offline.
self.clock.call_later(
30,
Duration(seconds=30),
self.clock.looping_call,
self._handle_timeouts,
5000,
Duration(seconds=5),
)
# Presence information is persisted, whether or not it is being tracked
# internally.
if self._presence_enabled:
self.clock.call_later(
60,
Duration(seconds=60),
self.clock.looping_call,
self._persist_unpersisted_changes,
60 * 1000,
Duration(minutes=1),
)
presence_wheel_timer_size_gauge.register_hook(
@@ -2430,7 +2431,7 @@ class PresenceFederationQueue:
_KEEP_ITEMS_IN_QUEUE_FOR_MS = 5 * 60 * 1000
# How often to check if we can expire entries from the queue.
_CLEAR_ITEMS_EVERY_MS = 60 * 1000
_CLEAR_ITEMS_EVERY_MS = Duration(minutes=1)
def __init__(self, hs: "HomeServer", presence_handler: BasePresenceHandler):
self._clock = hs.get_clock()

View File

@@ -34,6 +34,7 @@ from synapse.api.errors import (
from synapse.storage.databases.main.media_repository import LocalMedia, RemoteMedia
from synapse.types import JsonDict, JsonValue, Requester, UserID, create_requester
from synapse.util.caches.descriptors import cached
from synapse.util.duration import Duration
from synapse.util.stringutils import parse_and_validate_mxc_uri
if TYPE_CHECKING:
@@ -583,7 +584,7 @@ class ProfileHandler:
# Do not actually update the room state for shadow-banned users.
if requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
await self.clock.sleep(Duration(seconds=random.randint(1, 10)))
return
room_ids = await self.store.get_rooms_for_user(target_user.to_string())

View File

@@ -92,6 +92,7 @@ from synapse.types.state import StateFilter
from synapse.util import stringutils
from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.duration import Duration
from synapse.util.iterutils import batch_iter
from synapse.util.stringutils import parse_and_validate_server_name
from synapse.visibility import filter_events_for_client
@@ -1179,7 +1180,7 @@ class RoomCreationHandler:
if (invite_list or invite_3pid_list) and requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
await self.clock.sleep(Duration(seconds=random.randint(1, 10)))
# Allow the request to go through, but remove any associated invites.
invite_3pid_list = []

View File

@@ -66,6 +66,7 @@ from synapse.types import (
from synapse.types.state import StateFilter
from synapse.util.async_helpers import Linearizer
from synapse.util.distributor import user_left_room
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -642,7 +643,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
if action == Membership.INVITE and requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
await self.clock.sleep(Duration(seconds=random.randint(1, 10)))
raise ShadowBanError()
key = (room_id,)
@@ -1647,7 +1648,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
if requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
await self.clock.sleep(Duration(seconds=random.randint(1, 10)))
raise ShadowBanError()
# We need to rate limit *before* we send out any 3PID invites, so we
@@ -2190,7 +2191,7 @@ class RoomForgetterHandler(StateDeltasHandler):
# We kick this off to pick up outstanding work from before the last restart.
self._clock.call_later(
0,
Duration(seconds=0),
self.notify_new_event,
)
@@ -2232,7 +2233,7 @@ class RoomForgetterHandler(StateDeltasHandler):
#
# We wait for a short time so that we don't "tight" loop just
# keeping the table up to date.
await self._clock.sleep(0.5)
await self._clock.sleep(Duration(milliseconds=500))
self.pos = self._store.get_room_max_stream_ordering()
await self._store.update_room_forgetter_stream_pos(self.pos)

View File

@@ -34,10 +34,12 @@ from synapse.api.constants import (
EventTypes,
Membership,
)
from synapse.api.errors import SlidingSyncUnknownPosition
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.events import StrippedStateEvent
from synapse.events.utils import parse_stripped_state_event
from synapse.logging.opentracing import start_active_span, trace
from synapse.storage.databases.main.sliding_sync import UPDATE_INTERVAL_LAST_USED_TS
from synapse.storage.databases.main.state import (
ROOM_UNKNOWN_SENTINEL,
Sentinel as StateSentinel,
@@ -68,6 +70,7 @@ from synapse.types.handlers.sliding_sync import (
)
from synapse.types.state import StateFilter
from synapse.util import MutableOverlayMapping
from synapse.util.duration import Duration
from synapse.util.sentinel import Sentinel
if TYPE_CHECKING:
@@ -77,6 +80,27 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
# Minimum time since the last sync before we consider expiring the connection
# due to too many rooms to send. This stops us from getting into tight loops
# with clients that request lots of data at once.
#
# c.f. `NUM_ROOMS_THRESHOLD`. These values are somewhat arbitrarily picked.
MINIMUM_NOT_USED_AGE_EXPIRY = Duration(hours=1)
# How many rooms with updates we allow before we consider the connection expired
# due to too many rooms to send.
#
# c.f. `MINIMUM_NOT_USED_AGE_EXPIRY`. These values are somewhat arbitrarily
# picked.
NUM_ROOMS_THRESHOLD = 100
# Sanity check that our minimum age is sensible compared to the update interval,
# i.e. if `MINIMUM_NOT_USED_AGE_EXPIRY` is too small then we might expire the
# connection even if it is actively being used (and we're just not updating the
# DB frequently enough). We arbitrarily double the update interval to give some
# wiggle room.
assert 2 * UPDATE_INTERVAL_LAST_USED_TS < MINIMUM_NOT_USED_AGE_EXPIRY
# Helper definition for the types that we might return. We do this to avoid
# copying data between types (which can be expensive for many rooms).
RoomsForUserType = RoomsForUserStateReset | RoomsForUser | RoomsForUserSlidingSync
@@ -176,6 +200,7 @@ class SlidingSyncRoomLists:
self.storage_controllers = hs.get_storage_controllers()
self.rooms_to_exclude_globally = hs.config.server.rooms_to_exclude_from_sync
self.is_mine_id = hs.is_mine_id
self._clock = hs.get_clock()
async def compute_interested_rooms(
self,
@@ -857,11 +882,41 @@ class SlidingSyncRoomLists:
# We only need to check for new events since any state changes
# will also come down as new events.
rooms_that_have_updates = (
self.store.get_rooms_that_might_have_updates(
rooms_that_have_updates = await (
self.store.get_rooms_that_have_updates_since_sliding_sync_table(
relevant_room_map.keys(), from_token.room_key
)
)
# Check if we have lots of updates to send; if so, it's
# better for us to tell the client to do a full resync
# instead (to try and avoid long SSS response times when
# there is new data).
#
# Due to the construction of the SSS API, the client is in
# charge of setting the range of rooms to request updates
# for. Generally, it will start with a small range and then
# expand (and occasionally it may contract the range again
# if it's been offline for a while). If we know there are a
# lot of updates, it's better to reset the connection and
# wait for the client to start again (with a much smaller
# range) than to try and send down a large number of updates
# (which can take a long time).
#
# We only do this if the last sync was over
# `MINIMUM_NOT_USED_AGE_EXPIRY` ago to ensure we don't get
# into tight loops with clients that keep requesting large
# sliding sync windows.
if len(rooms_that_have_updates) > NUM_ROOMS_THRESHOLD:
last_sync_ts = previous_connection_state.last_used_ts
if (
last_sync_ts is not None
and (self._clock.time_msec() - last_sync_ts)
> MINIMUM_NOT_USED_AGE_EXPIRY.as_millis()
):
raise SlidingSyncUnknownPosition()
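# Worked example of the expiry rule above, using the constants from this
# file (NUM_ROOMS_THRESHOLD = 100, MINIMUM_NOT_USED_AGE_EXPIRY = 1 hour):
# a connection last used two hours ago with 150 rooms pending updates
# crosses both thresholds, so the client is told to start over via
# SlidingSyncUnknownPosition:
#
#   len(rooms_that_have_updates) = 150 > 100
#   now - last_used_ts = 7_200_000 ms > MINIMUM_NOT_USED_AGE_EXPIRY.as_millis() = 3_600_000 ms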
rooms_should_send.update(rooms_that_have_updates)
relevant_rooms_to_send_map = {
room_id: room_sync_config

View File

@@ -75,7 +75,7 @@ class SlidingSyncConnectionStore:
"""
# If this is our first request, there is no previous connection state to fetch out of the database
if from_token is None or from_token.connection_position == 0:
return PerConnectionState()
return PerConnectionState(last_used_ts=None)
conn_id = sync_config.conn_id or ""

View File

@@ -32,6 +32,7 @@ from synapse.api.constants import EventContentFields, EventTypes, Membership
from synapse.metrics import SERVER_NAME_LABEL, event_processing_positions
from synapse.storage.databases.main.state_deltas import StateDelta
from synapse.types import JsonDict
from synapse.util.duration import Duration
from synapse.util.events import get_plain_text_topic_from_event_content
if TYPE_CHECKING:
@@ -72,7 +73,7 @@ class StatsHandler:
# We kick this off so that we don't have to wait for a change before
# we start populating stats
self.clock.call_later(
0,
Duration(seconds=0),
self.notify_new_event,
)

View File

@@ -41,6 +41,7 @@ from synapse.types import (
UserID,
)
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.duration import Duration
from synapse.util.metrics import Measure
from synapse.util.retryutils import filter_destinations_by_retry_limiter
from synapse.util.wheel_timer import WheelTimer
@@ -60,15 +61,15 @@ class RoomMember:
# How often we expect remote servers to resend us presence.
FEDERATION_TIMEOUT = 60 * 1000
FEDERATION_TIMEOUT = Duration(seconds=60)
# How often to resend typing across federation.
FEDERATION_PING_INTERVAL = 40 * 1000
FEDERATION_PING_INTERVAL = Duration(seconds=40)
# How long to remember a typing notification happened in a room before
# forgetting about it.
FORGET_TIMEOUT = 10 * 60 * 1000
FORGET_TIMEOUT = Duration(minutes=10)
class FollowerTypingHandler:
@@ -106,7 +107,7 @@ class FollowerTypingHandler:
self._rooms_updated: set[str] = set()
self.clock.looping_call(self._handle_timeouts, 5000)
self.clock.looping_call(self._handle_timeouts, Duration(seconds=5))
self.clock.looping_call(self._prune_old_typing, FORGET_TIMEOUT)
def _reset(self) -> None:
@@ -141,7 +142,10 @@ class FollowerTypingHandler:
# user.
if self.federation and self.is_mine_id(member.user_id):
last_fed_poke = self._member_last_federation_poke.get(member, None)
if not last_fed_poke or last_fed_poke + FEDERATION_PING_INTERVAL <= now:
if (
not last_fed_poke
or last_fed_poke + FEDERATION_PING_INTERVAL.as_millis() <= now
):
self.hs.run_as_background_process(
"typing._push_remote",
self._push_remote,
@@ -165,7 +169,7 @@ class FollowerTypingHandler:
now = self.clock.time_msec()
self.wheel_timer.insert(
now=now, obj=member, then=now + FEDERATION_PING_INTERVAL
now=now, obj=member, then=now + FEDERATION_PING_INTERVAL.as_millis()
)
hosts: StrCollection = (
@@ -315,7 +319,7 @@ class TypingWriterHandler(FollowerTypingHandler):
if requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
await self.clock.sleep(Duration(seconds=random.randint(1, 10)))
raise ShadowBanError()
await self.auth.check_user_in_room(room_id, requester)
@@ -350,7 +354,7 @@ class TypingWriterHandler(FollowerTypingHandler):
if requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
await self.clock.sleep(Duration(seconds=random.randint(1, 10)))
raise ShadowBanError()
await self.auth.check_user_in_room(room_id, requester)
@@ -428,8 +432,10 @@ class TypingWriterHandler(FollowerTypingHandler):
if user.domain in domains:
logger.info("Got typing update from %s: %r", user_id, content)
now = self.clock.time_msec()
self._member_typing_until[member] = now + FEDERATION_TIMEOUT
self.wheel_timer.insert(now=now, obj=member, then=now + FEDERATION_TIMEOUT)
self._member_typing_until[member] = now + FEDERATION_TIMEOUT.as_millis()
self.wheel_timer.insert(
now=now, obj=member, then=now + FEDERATION_TIMEOUT.as_millis()
)
self._push_update_local(member=member, typing=content["typing"])
def _push_update_local(self, member: RoomMember, typing: bool) -> None:
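Timestamps here are still integer milliseconds (clock.time_msec(), the wheel timer), so Duration values are converted at the boundary with as_millis(). A small illustration, using the Duration sketch earlier:

# FEDERATION_PING_INTERVAL as defined in this file.
FEDERATION_PING_INTERVAL = Duration(seconds=40)

now = 1_700_000_000_000  # a time_msec()-style millisecond timestamp
then = now + FEDERATION_PING_INTERVAL.as_millis()
assert then - now == 40_000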

View File

@@ -40,6 +40,7 @@ from synapse.storage.databases.main.state_deltas import StateDelta
from synapse.storage.databases.main.user_directory import SearchResult
from synapse.storage.roommember import ProfileInfo
from synapse.types import UserID
from synapse.util.duration import Duration
from synapse.util.metrics import Measure
from synapse.util.retryutils import NotRetryingDestination
from synapse.util.stringutils import non_null_str_or_none
@@ -52,7 +53,7 @@ logger = logging.getLogger(__name__)
# Don't refresh a stale user directory entry, using a Federation /profile request,
# for 60 seconds. This gives time for other state events to arrive (which will
# then be coalesced such that only one /profile request is made).
USER_DIRECTORY_STALE_REFRESH_TIME_MS = 60 * 1000
USER_DIRECTORY_STALE_REFRESH_TIME = Duration(seconds=60)
# Maximum number of remote servers that we will attempt to refresh profiles for
# in one go.
@@ -60,7 +61,7 @@ MAX_SERVERS_TO_REFRESH_PROFILES_FOR_IN_ONE_GO = 5
# As long as we have servers to refresh (without backoff), keep adding more
# every 15 seconds.
INTERVAL_TO_ADD_MORE_SERVERS_TO_REFRESH_PROFILES = 15
INTERVAL_TO_ADD_MORE_SERVERS_TO_REFRESH_PROFILES = Duration(seconds=15)
def calculate_time_of_next_retry(now_ts: int, retry_count: int) -> int:
@@ -137,13 +138,13 @@ class UserDirectoryHandler(StateDeltasHandler):
# We kick this off so that we don't have to wait for a change before
# we start populating the user directory
self.clock.call_later(
0,
Duration(seconds=0),
self.notify_new_event,
)
# Kick off the profile refresh process on startup
self._refresh_remote_profiles_call_later = self.clock.call_later(
10,
Duration(seconds=10),
self.kick_off_remote_profile_refresh_process,
)
@@ -550,7 +551,7 @@ class UserDirectoryHandler(StateDeltasHandler):
now_ts = self.clock.time_msec()
await self.store.set_remote_user_profile_in_user_dir_stale(
user_id,
next_try_at_ms=now_ts + USER_DIRECTORY_STALE_REFRESH_TIME_MS,
next_try_at_ms=now_ts + USER_DIRECTORY_STALE_REFRESH_TIME.as_millis(),
retry_counter=0,
)
# Schedule a wake-up to refresh the user directory for this server.
@@ -558,13 +559,13 @@ class UserDirectoryHandler(StateDeltasHandler):
# other servers ahead of it in the queue to get in the way of updating
# the profile if the server only just sent us an event.
self.clock.call_later(
USER_DIRECTORY_STALE_REFRESH_TIME_MS // 1000 + 1,
USER_DIRECTORY_STALE_REFRESH_TIME + Duration(seconds=1),
self.kick_off_remote_profile_refresh_process_for_remote_server,
UserID.from_string(user_id).domain,
)
# Schedule a wake-up to handle any backoffs that may occur in the future.
self.clock.call_later(
2 * USER_DIRECTORY_STALE_REFRESH_TIME_MS // 1000 + 1,
USER_DIRECTORY_STALE_REFRESH_TIME * 2 + Duration(seconds=1),
self.kick_off_remote_profile_refresh_process,
)
return
@@ -656,7 +657,9 @@ class UserDirectoryHandler(StateDeltasHandler):
if not users:
return
_, _, next_try_at_ts = users[0]
delay = ((next_try_at_ts - self.clock.time_msec()) // 1000) + 2
delay = Duration(
milliseconds=next_try_at_ts - self.clock.time_msec()
) + Duration(seconds=2)
self._refresh_remote_profiles_call_later = self.clock.call_later(
delay,
self.kick_off_remote_profile_refresh_process,

View File

@@ -39,7 +39,7 @@ from synapse.metrics.background_process_metrics import wrap_as_background_proces
from synapse.storage.databases.main.lock import Lock, LockStore
from synapse.util.async_helpers import timeout_deferred
from synapse.util.clock import Clock
from synapse.util.constants import ONE_MINUTE_SECONDS
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.logging.opentracing import opentracing
@@ -72,7 +72,7 @@ class WorkerLocksHandler:
# that lock.
self._locks: dict[tuple[str, str], WeakSet[WaitingLock | WaitingMultiLock]] = {}
self._clock.looping_call(self._cleanup_locks, 30_000)
self._clock.looping_call(self._cleanup_locks, Duration(seconds=30))
self._notifier.add_lock_released_callback(self._on_lock_released)
@@ -187,7 +187,7 @@ class WorkerLocksHandler:
lock.release_lock()
self._clock.call_later(
0,
Duration(seconds=0),
_wake_all_locks,
locks,
)
@@ -276,7 +276,7 @@ class WaitingLock:
def _get_next_retry_interval(self) -> float:
next = self._retry_interval
self._retry_interval = max(5, next * 2)
if self._retry_interval > 10 * ONE_MINUTE_SECONDS: # >7 iterations
if self._retry_interval > Duration(minutes=10).as_secs(): # >7 iterations
logger.warning(
"Lock timeout is getting excessive: %ss. There may be a deadlock.",
self._retry_interval,
@@ -363,7 +363,7 @@ class WaitingMultiLock:
def _get_next_retry_interval(self) -> float:
next = self._retry_interval
self._retry_interval = max(5, next * 2)
if self._retry_interval > 10 * ONE_MINUTE_SECONDS: # >7 iterations
if self._retry_interval > Duration(minutes=10).as_secs(): # >7 iterations
logger.warning(
"Lock timeout is getting excessive: %ss. There may be a deadlock.",
self._retry_interval,
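The "> 7 iterations" note above can be checked directly with the Duration sketch earlier: doubling from the 5-second floor, the interval first exceeds Duration(minutes=10).as_secs() (600.0) on the eighth value:

interval = 5.0  # the max(5, ...) floor
doublings = 0
while interval <= Duration(minutes=10).as_secs():  # 600.0 seconds
    interval = interval * 2
    doublings += 1
print(doublings, interval)  # 7 doublings -> 640.0 seconds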

View File

@@ -87,6 +87,7 @@ from synapse.metrics import SERVER_NAME_LABEL
from synapse.types import ISynapseReactor, StrSequence
from synapse.util.async_helpers import timeout_deferred
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.json import json_decoder
if TYPE_CHECKING:
@@ -172,7 +173,7 @@ def _make_scheduler(clock: Clock) -> Callable[[Callable[[], object]], IDelayedCa
def _scheduler(x: Callable[[], object]) -> IDelayedCall:
return clock.call_later(
_EPSILON,
Duration(seconds=_EPSILON),
x,
)

View File

@@ -37,6 +37,7 @@ from synapse.logging.context import make_deferred_yieldable
from synapse.types import ISynapseThreadlessReactor
from synapse.util.caches.ttlcache import TTLCache
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.json import json_decoder
from synapse.util.metrics import Measure
@@ -315,7 +316,7 @@ class WellKnownResolver:
logger.info("Error fetching %s: %s. Retrying", uri_str, e)
# Sleep briefly in the hopes that they come back up
await self._clock.sleep(0.5)
await self._clock.sleep(Duration(milliseconds=500))
def _cache_period_from_headers(

View File

@@ -76,6 +76,7 @@ from synapse.logging.opentracing import active_span, start_active_span, trace_se
from synapse.util.caches import intern_dict
from synapse.util.cancellation import is_function_cancellable
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.iterutils import chunk_seq
from synapse.util.json import json_encoder
@@ -334,7 +335,7 @@ class _AsyncResource(resource.Resource, metaclass=abc.ABCMeta):
callback_return = await self._async_render(request)
except LimitExceededError as e:
if e.pause:
await self._clock.sleep(e.pause)
await self._clock.sleep(Duration(seconds=e.pause))
raise
if callback_return is not None:

View File

@@ -70,6 +70,7 @@ from synapse.media.url_previewer import UrlPreviewer
from synapse.storage.databases.main.media_repository import LocalMedia, RemoteMedia
from synapse.types import UserID
from synapse.util.async_helpers import Linearizer
from synapse.util.duration import Duration
from synapse.util.retryutils import NotRetryingDestination
from synapse.util.stringutils import random_string
@@ -80,10 +81,10 @@ logger = logging.getLogger(__name__)
# How often to run the background job to update the "recently accessed"
# attribute of local and remote media.
UPDATE_RECENTLY_ACCESSED_TS = 60 * 1000 # 1 minute
UPDATE_RECENTLY_ACCESSED_TS = Duration(minutes=1)
# How often to run the background job to check for local and remote media
# that should be purged according to the configured media retention settings.
MEDIA_RETENTION_CHECK_PERIOD_MS = 60 * 60 * 1000 # 1 hour
MEDIA_RETENTION_CHECK_PERIOD = Duration(hours=1)
class MediaRepository:
@@ -166,7 +167,7 @@ class MediaRepository:
# with the duration between runs dictated by the homeserver config.
self.clock.looping_call(
self._start_apply_media_retention_rules,
MEDIA_RETENTION_CHECK_PERIOD_MS,
MEDIA_RETENTION_CHECK_PERIOD,
)
if hs.config.media.url_preview_enabled:
@@ -485,7 +486,7 @@ class MediaRepository:
if now >= wait_until:
break
await self.clock.sleep(0.5)
await self.clock.sleep(Duration(milliseconds=500))
logger.info("Media %s has not yet been uploaded", media_id)
self.respond_not_yet_uploaded(request)

View File

@@ -51,6 +51,7 @@ from synapse.logging.context import defer_to_thread, run_in_background
from synapse.logging.opentracing import start_active_span, trace, trace_with_opname
from synapse.media._base import ThreadedFileSender
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.file_consumer import BackgroundFileConsumer
from ..types import JsonDict
@@ -457,7 +458,7 @@ class ReadableFileWrapper:
callback(chunk)
# We yield to the reactor by sleeping for 0 seconds.
await self.clock.sleep(0)
await self.clock.sleep(Duration(seconds=0))
@implementer(interfaces.IConsumer)
@@ -652,7 +653,7 @@ class MultipartFileConsumer:
self.paused = False
while not self.paused:
producer.resumeProducing()
await self.clock.sleep(0)
await self.clock.sleep(Duration(seconds=0))
class Header:

View File

@@ -47,6 +47,7 @@ from synapse.media.preview_html import decode_body, parse_html_to_open_graph
from synapse.types import JsonDict, UserID
from synapse.util.async_helpers import ObservableDeferred
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.duration import Duration
from synapse.util.json import json_encoder
from synapse.util.stringutils import random_string
@@ -208,7 +209,9 @@ class UrlPreviewer:
)
if self._worker_run_media_background_jobs:
self.clock.looping_call(self._start_expire_url_cache_data, 10 * 1000)
self.clock.looping_call(
self._start_expire_url_cache_data, Duration(seconds=10)
)
async def preview(self, url: str, user: UserID, ts: int) -> bytes:
# the in-memory cache:

View File

@@ -23,6 +23,7 @@ from typing import TYPE_CHECKING
import attr
from synapse.metrics import SERVER_NAME_LABEL
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -70,7 +71,7 @@ class CommonUsageMetricsManager:
)
self._clock.looping_call(
self._hs.run_as_background_process,
5 * 60 * 1000,
Duration(minutes=5),
desc="common_usage_metrics_update_gauges",
func=self._update_gauges,
)

View File

@@ -158,6 +158,7 @@ from synapse.types.state import StateFilter
from synapse.util.async_helpers import maybe_awaitable
from synapse.util.caches.descriptors import CachedFunction, cached as _cached
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.frozenutils import freeze
if TYPE_CHECKING:
@@ -1389,7 +1390,7 @@ class ModuleApi:
if self._hs.config.worker.run_background_tasks or run_on_all_instances:
self._clock.looping_call(
self._hs.run_as_background_process,
msec,
Duration(milliseconds=msec),
desc,
lambda: maybe_awaitable(f(*args, **kwargs)),
)
@@ -1444,8 +1445,7 @@ class ModuleApi:
desc = f.__name__
return self._clock.call_later(
# convert ms to seconds as needed by call_later.
msec * 0.001,
Duration(milliseconds=msec),
self._hs.run_as_background_process,
desc,
lambda: maybe_awaitable(f(*args, **kwargs)),
@@ -1457,7 +1457,7 @@ class ModuleApi:
Added in Synapse v1.49.0.
"""
await self._clock.sleep(seconds)
await self._clock.sleep(Duration(seconds=seconds))
async def send_http_push_notification(
self,

View File

@@ -61,6 +61,7 @@ from synapse.types import (
from synapse.util.async_helpers import (
timeout_deferred,
)
from synapse.util.duration import Duration
from synapse.util.stringutils import shortstr
from synapse.visibility import filter_events_for_client
@@ -235,7 +236,7 @@ class Notifier:
Primarily used from the /events stream.
"""
UNUSED_STREAM_EXPIRY_MS = 10 * 60 * 1000
UNUSED_STREAM_EXPIRY = Duration(minutes=10)
def __init__(self, hs: "HomeServer"):
self.user_to_user_stream: dict[str, _NotifierUserStream] = {}
@@ -269,9 +270,7 @@ class Notifier:
self.state_handler = hs.get_state_handler()
self.clock.looping_call(
self.remove_expired_streams, self.UNUSED_STREAM_EXPIRY_MS
)
self.clock.looping_call(self.remove_expired_streams, self.UNUSED_STREAM_EXPIRY)
# This is not a very cheap test to perform, but it's only executed
# when rendering the metrics page, which is likely once per minute at
@@ -861,7 +860,7 @@ class Notifier:
logged = True
# TODO: be better
await self.clock.sleep(0.5)
await self.clock.sleep(Duration(milliseconds=500))
async def _get_room_ids(
self, user: UserID, explicit_room_id: str | None
@@ -889,7 +888,7 @@ class Notifier:
def remove_expired_streams(self) -> None:
time_now_ms = self.clock.time_msec()
expired_streams = []
expire_before_ts = time_now_ms - self.UNUSED_STREAM_EXPIRY_MS
expire_before_ts = time_now_ms - self.UNUSED_STREAM_EXPIRY.as_millis()
for stream in self.user_to_user_stream.values():
if stream.count_listeners():
continue

View File

@@ -29,6 +29,7 @@ from synapse.push import Pusher, PusherConfig, PusherConfigException, ThrottlePa
from synapse.push.mailer import Mailer
from synapse.push.push_types import EmailReason
from synapse.storage.databases.main.event_push_actions import EmailPushAction
from synapse.util.duration import Duration
from synapse.util.threepids import validate_email
if TYPE_CHECKING:
@@ -229,7 +230,7 @@ class EmailPusher(Pusher):
if soonest_due_at is not None:
delay = self.seconds_until(soonest_due_at)
self.timed_call = self.hs.get_clock().call_later(
delay,
Duration(seconds=delay),
self.on_timer,
)

View File

@@ -40,6 +40,7 @@ from . import push_tools
if TYPE_CHECKING:
from synapse.server import HomeServer
from synapse.util.duration import Duration
logger = logging.getLogger(__name__)
@@ -336,7 +337,7 @@ class HttpPusher(Pusher):
else:
logger.info("Push failed: delaying for %ds", self.backoff_delay)
self.timed_call = self.hs.get_clock().call_later(
self.backoff_delay,
Duration(seconds=self.backoff_delay),
self.on_timer,
)
self.backoff_delay = min(
@@ -371,7 +372,7 @@ class HttpPusher(Pusher):
delay_ms = random.randint(1, self.push_jitter_delay_ms)
diff_ms = event.origin_server_ts + delay_ms - self.clock.time_msec()
if diff_ms > 0:
await self.clock.sleep(diff_ms / 1000)
await self.clock.sleep(Duration(milliseconds=diff_ms))
rejected = await self.dispatch_push_event(event, tweaks, badge)
if rejected is False:

View File

@@ -42,6 +42,7 @@ from synapse.metrics import SERVER_NAME_LABEL
from synapse.types import JsonDict
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.cancellation import is_function_cancellable
from synapse.util.duration import Duration
from synapse.util.stringutils import random_string
if TYPE_CHECKING:
@@ -317,7 +318,7 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
# If we timed out we probably don't need to worry about backing
# off too much, but lets just wait a little anyway.
await clock.sleep(1)
await clock.sleep(Duration(seconds=1))
except (ConnectError, DNSLookupError) as e:
if not cls.RETRY_ON_CONNECT_ERROR:
raise
@@ -332,7 +333,7 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
e,
)
await clock.sleep(delay)
await clock.sleep(Duration(seconds=delay))
attempts += 1
except HttpResponseException as e:
# We convert to SynapseError as we know that it was a SynapseError

View File

@@ -55,6 +55,7 @@ from synapse.replication.tcp.streams.partial_state import (
)
from synapse.types import PersistedEventPosition, ReadReceipt, StreamKeyType, UserID
from synapse.util.async_helpers import Linearizer, timeout_deferred
from synapse.util.duration import Duration
from synapse.util.iterutils import batch_iter
from synapse.util.metrics import Measure
@@ -173,7 +174,7 @@ class ReplicationDataHandler:
)
# Yield to reactor so that we don't block.
await self._clock.sleep(0)
await self._clock.sleep(Duration(seconds=0))
elif stream_name == PushersStream.NAME:
for row in rows:
if row.deleted:

View File

@@ -55,6 +55,7 @@ from synapse.replication.tcp.commands import (
parse_command_from_line,
)
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.stringutils import random_string
if TYPE_CHECKING:
@@ -193,7 +194,9 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
self._send_pending_commands()
# Starts sending pings
self._send_ping_loop = self.clock.looping_call(self.send_ping, 5000)
self._send_ping_loop = self.clock.looping_call(
self.send_ping, Duration(seconds=5)
)
# Always send the initial PING so that the other side knows that they
# can time us out.
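The ported `looping_call` itself is not shown in this diff; a hedged sketch of the signature these call sites assume, built on Twisted's `LoopingCall` (the `_reactor` attribute is hypothetical):

from twisted.internet import task

def looping_call(self, f, interval, *args, **kwargs):
    """Call f repeatedly every `interval` (a Duration), not firing immediately."""
    call = task.LoopingCall(f, *args, **kwargs)
    call.clock = self._reactor  # hypothetical: the clock's reactor
    call.start(interval.as_secs(), now=False)
    return call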

View File

@@ -53,6 +53,7 @@ from synapse.replication.tcp.protocol import (
tcp_inbound_commands_counter,
tcp_outbound_commands_counter,
)
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.replication.tcp.handler import ReplicationCommandHandler
@@ -317,7 +318,7 @@ class SynapseRedisFactory(RedisFactory):
self.hs = hs # nb must be called this for @wrap_as_background_process
self.server_name = hs.hostname
hs.get_clock().looping_call(self._send_ping, 30 * 1000)
hs.get_clock().looping_call(self._send_ping, Duration(seconds=30))
@wrap_as_background_process("redis_ping")
async def _send_ping(self) -> None:

View File

@@ -34,6 +34,7 @@ from synapse.replication.tcp.commands import PositionCommand
from synapse.replication.tcp.protocol import ServerReplicationStreamProtocol
from synapse.replication.tcp.streams import EventsStream
from synapse.replication.tcp.streams._base import CachesStream, StreamRow, Token
from synapse.util.duration import Duration
from synapse.util.metrics import Measure
if TYPE_CHECKING:
@@ -116,7 +117,7 @@ class ReplicationStreamer:
#
# Note that if the position hasn't advanced then we won't send anything.
if any(EventsStream.NAME == s.NAME for s in self.streams):
self.clock.looping_call(self.on_notifier_poke, 1000)
self.clock.looping_call(self.on_notifier_poke, Duration(seconds=1))
def on_notifier_poke(self) -> None:
"""Checks if there is actually any new data and sends it to the

View File

@@ -58,6 +58,7 @@ from synapse.types.rest.client import (
EmailRequestTokenBody,
MsisdnRequestTokenBody,
)
from synapse.util.duration import Duration
from synapse.util.msisdn import phone_number_to_msisdn
from synapse.util.stringutils import assert_valid_client_secret, random_string
from synapse.util.threepids import check_3pid_allowed, validate_email
@@ -125,7 +126,9 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
# comments for request_token_inhibit_3pid_errors.
# Also wait for some random amount of time between 100ms and 1s to make it
# look like we did something.
await self.hs.get_clock().sleep(random.randint(1, 10) / 10)
await self.hs.get_clock().sleep(
Duration(milliseconds=random.randint(100, 1000))
)
return 200, {"sid": random_string(16)}
raise SynapseError(400, "Email not found", Codes.THREEPID_NOT_FOUND)
@@ -383,7 +386,9 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
# comments for request_token_inhibit_3pid_errors.
# Also wait for some random amount of time between 100ms and 1s to make it
# look like we did something.
await self.hs.get_clock().sleep(random.randint(1, 10) / 10)
await self.hs.get_clock().sleep(
Duration(milliseconds=random.randint(100, 1000))
)
return 200, {"sid": random_string(16)}
raise SynapseError(400, "Email is already in use", Codes.THREEPID_IN_USE)
@@ -449,7 +454,9 @@ class MsisdnThreepidRequestTokenRestServlet(RestServlet):
# comments for request_token_inhibit_3pid_errors.
# Also wait for some random amount of time between 100ms and 1s to make it
# look like we did something.
await self.hs.get_clock().sleep(random.randint(1, 10) / 10)
await self.hs.get_clock().sleep(
Duration(milliseconds=random.randint(100, 1000))
)
return 200, {"sid": random_string(16)}
logger.info("MSISDN %s is already in use by %s", msisdn, existing_user_id)

View File

@@ -90,4 +90,5 @@ class UserMutualRoomsServlet(RestServlet):
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
UserMutualRoomsServlet(hs).register(http_server)
if hs.config.experimental.msc2666_enabled:
UserMutualRoomsServlet(hs).register(http_server)

View File

@@ -59,6 +59,7 @@ from synapse.http.site import SynapseRequest
from synapse.metrics import SERVER_NAME_LABEL, threepid_send_requests
from synapse.push.mailer import Mailer
from synapse.types import JsonDict
from synapse.util.duration import Duration
from synapse.util.msisdn import phone_number_to_msisdn
from synapse.util.ratelimitutils import FederationRateLimiter
from synapse.util.stringutils import assert_valid_client_secret, random_string
@@ -150,7 +151,9 @@ class EmailRegisterRequestTokenRestServlet(RestServlet):
# Also wait for some random amount of time between 100ms and 1s to make it
# look like we did something.
await self.already_in_use_mailer.send_already_in_use_mail(email)
await self.hs.get_clock().sleep(random.randint(1, 10) / 10)
await self.hs.get_clock().sleep(
Duration(milliseconds=random.randint(100, 1000))
)
return 200, {"sid": random_string(16)}
raise SynapseError(400, "Email is already in use", Codes.THREEPID_IN_USE)
@@ -219,7 +222,9 @@ class MsisdnRegisterRequestTokenRestServlet(RestServlet):
# comments for request_token_inhibit_3pid_errors.
# Also wait for some random amount of time between 100ms and 1s to make it
# look like we did something.
await self.hs.get_clock().sleep(random.randint(1, 10) / 10)
await self.hs.get_clock().sleep(
Duration(milliseconds=random.randint(100, 1000))
)
return 200, {"sid": random_string(16)}
raise SynapseError(

View File

@@ -34,13 +34,14 @@ from twisted.web.iweb import IRequest
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.types import JsonDict, Requester
from synapse.util.async_helpers import ObservableDeferred
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
CLEANUP_PERIOD_MS = 1000 * 60 * 30 # 30 mins
CLEANUP_PERIOD = Duration(minutes=30)
P = ParamSpec("P")
@@ -56,7 +57,7 @@ class HttpTransactionCache:
] = {}
# Try to clean entries every 30 mins. This means entries will exist
# for at *LEAST* 30 mins, and at *MOST* 60 mins.
self.clock.looping_call(self._cleanup, CLEANUP_PERIOD_MS)
self.clock.looping_call(self._cleanup, CLEANUP_PERIOD)
def _get_transaction_key(self, request: IRequest, requester: Requester) -> Hashable:
"""A helper function which returns a transaction key that can be used
@@ -145,5 +146,5 @@ class HttpTransactionCache:
now = self.clock.time_msec()
for key in list(self.transactions):
ts = self.transactions[key][1]
if now > (ts + CLEANUP_PERIOD_MS): # after cleanup period
if now > (ts + CLEANUP_PERIOD.as_millis()): # after cleanup period
del self.transactions[key]
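The "at *LEAST* 30 mins, at *MOST* 60 mins" comment follows from sweeping every CLEANUP_PERIOD: an entry written just after a sweep is under 30 minutes old at the next sweep (kept) and is removed at the one after. A worked check against the sketch above, using hypothetical timestamps:

CLEANUP_PERIOD = Duration(minutes=30)  # as in the hunk above
now = 100 * 60 * 1000  # hypothetical current time, in ms
ts = 65 * 60 * 1000    # hypothetical entry written 35 minutes earlier
assert now > ts + CLEANUP_PERIOD.as_millis()  # expired on this sweep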

View File

@@ -124,7 +124,7 @@ class VersionsRestServlet(RestServlet):
# Implements additional endpoints as described in MSC2432
"org.matrix.msc2432": True,
# Implements additional endpoints as described in MSC2666
"uk.half-shot.msc2666.query_mutual_rooms": True,
"uk.half-shot.msc2666.query_mutual_rooms": self.config.experimental.msc2666_enabled,
# Whether new rooms will be set to encrypted or not (based on presets).
"io.element.e2ee_forced.public": self.e2ee_forced_public,
"io.element.e2ee_forced.private": self.e2ee_forced_private,

View File

@@ -54,6 +54,7 @@ from synapse.types import StateMap, StrCollection
from synapse.types.state import StateFilter
from synapse.util.async_helpers import Linearizer
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.duration import Duration
from synapse.util.metrics import Measure, measure_func
from synapse.util.stringutils import shortstr
@@ -663,7 +664,7 @@ class StateResolutionHandler:
_StateResMetrics
)
self.clock.looping_call(self._report_metrics, 120 * 1000)
self.clock.looping_call(self._report_metrics, Duration(minutes=2))
async def resolve_state_groups(
self,

View File

@@ -40,6 +40,7 @@ from synapse.api.room_versions import RoomVersion, StateResolutionVersions
from synapse.events import EventBase, is_creator
from synapse.storage.databases.main.event_federation import StateDifference
from synapse.types import MutableStateMap, StateMap, StrCollection
from synapse.util.duration import Duration
logger = logging.getLogger(__name__)
@@ -48,7 +49,7 @@ class Clock(Protocol):
# This is usually synapse.util.Clock, but it's replaced with a FakeClock in tests.
# We only ever sleep(0) though, so that other async functions can make forward
# progress without waiting for stateres to complete.
async def sleep(self, duration_ms: float) -> None: ...
async def sleep(self, duration: Duration) -> None: ...
class StateResolutionStore(Protocol):
@@ -639,7 +640,7 @@ async def _reverse_topological_power_sort(
# We await occasionally when we're working with large data sets to
# ensure that we don't block the reactor loop for too long.
if idx % _AWAIT_AFTER_ITERATIONS == 0:
await clock.sleep(0)
await clock.sleep(Duration(seconds=0))
event_to_pl = {}
for idx, event_id in enumerate(graph, start=1):
@@ -651,7 +652,7 @@ async def _reverse_topological_power_sort(
# We await occasionally when we're working with large data sets to
# ensure that we don't block the reactor loop for too long.
if idx % _AWAIT_AFTER_ITERATIONS == 0:
await clock.sleep(0)
await clock.sleep(Duration(seconds=0))
def _get_power_order(event_id: str) -> tuple[int, int, str]:
ev = event_map[event_id]
@@ -745,7 +746,7 @@ async def _iterative_auth_checks(
# We await occasionally when we're working with large data sets to
# ensure that we don't block the reactor loop for too long.
if idx % _AWAIT_AFTER_ITERATIONS == 0:
await clock.sleep(0)
await clock.sleep(Duration(seconds=0))
return resolved_state
@@ -796,7 +797,7 @@ async def _mainline_sort(
# We await occasionally when we're working with large data sets to
# ensure that we don't block the reactor loop for too long.
if idx != 0 and idx % _AWAIT_AFTER_ITERATIONS == 0:
await clock.sleep(0)
await clock.sleep(Duration(seconds=0))
idx += 1
@@ -814,7 +815,7 @@ async def _mainline_sort(
# We await occasionally when we're working with large data sets to
# ensure that we don't block the reactor loop for too long.
if idx % _AWAIT_AFTER_ITERATIONS == 0:
await clock.sleep(0)
await clock.sleep(Duration(seconds=0))
event_ids.sort(key=lambda ev_id: order_map[ev_id])
@@ -865,7 +866,7 @@ async def _get_mainline_depth_for_event(
idx += 1
if idx % _AWAIT_AFTER_ITERATIONS == 0:
await clock.sleep(0)
await clock.sleep(Duration(seconds=0))
# Didn't find a power level auth event, so we just return 0
return 0

View File

@@ -40,6 +40,7 @@ from synapse.storage.engines import PostgresEngine
from synapse.storage.types import Connection, Cursor
from synapse.types import JsonDict, StrCollection
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.json import json_encoder
from . import engines
@@ -162,7 +163,7 @@ class _BackgroundUpdateContextManager:
async def __aenter__(self) -> int:
if self._sleep:
await self._clock.sleep(self._sleep_duration_ms / 1000)
await self._clock.sleep(Duration(milliseconds=self._sleep_duration_ms))
return self._update_duration_ms

View File

@@ -32,6 +32,7 @@ from synapse.metrics.background_process_metrics import wrap_as_background_proces
from synapse.storage.database import LoggingTransaction
from synapse.storage.databases import Databases
from synapse.types.storage import _BackgroundUpdates
from synapse.util.duration import Duration
from synapse.util.stringutils import shortstr
if TYPE_CHECKING:
@@ -50,7 +51,7 @@ class PurgeEventsStorageController:
if hs.config.worker.run_background_tasks:
self._delete_state_loop_call = hs.get_clock().looping_call(
self._delete_state_groups_loop, 60 * 1000
self._delete_state_groups_loop, Duration(minutes=1)
)
self.stores.state.db_pool.updates.register_background_update_handler(

View File

@@ -62,6 +62,7 @@ from synapse.storage.engines import BaseDatabaseEngine, PostgresEngine, Sqlite3E
from synapse.storage.types import Connection, Cursor, SQLQueryParameters
from synapse.types import StrCollection
from synapse.util.async_helpers import delay_cancellation
from synapse.util.duration import Duration
from synapse.util.iterutils import batch_iter
if TYPE_CHECKING:
@@ -631,7 +632,7 @@ class DatabasePool:
# Check ASAP (and then later, every 1s) to see if we have finished
# background updates of tables that aren't safe to update.
self._clock.call_later(
0.0,
Duration(seconds=0),
self.hs.run_as_background_process,
"upsert_safety_check",
self._check_safe_to_upsert,
@@ -679,7 +680,7 @@ class DatabasePool:
# If there's any updates still running, reschedule to run.
if background_update_names:
self._clock.call_later(
15.0,
Duration(seconds=15),
self.hs.run_as_background_process,
"upsert_safety_check",
self._check_safe_to_upsert,
@@ -706,7 +707,7 @@ class DatabasePool:
"Total database time: %.3f%% {%s}", ratio * 100, top_three_counters
)
self._clock.looping_call(loop, 10000)
self._clock.looping_call(loop, Duration(seconds=10))
def new_transaction(
self,

View File

@@ -45,6 +45,7 @@ from synapse.storage.database import (
from synapse.storage.engines import PostgresEngine
from synapse.storage.util.id_generators import MultiWriterIdGenerator
from synapse.util.caches.descriptors import CachedFunction
from synapse.util.duration import Duration
from synapse.util.iterutils import batch_iter
if TYPE_CHECKING:
@@ -71,11 +72,11 @@ GET_E2E_CROSS_SIGNING_SIGNATURES_FOR_DEVICE_CACHE_NAME = (
# How long between cache invalidation table cleanups, once we have caught up
# with the backlog.
REGULAR_CLEANUP_INTERVAL_MS = Config.parse_duration("1h")
REGULAR_CLEANUP_INTERVAL = Duration(hours=1)
# How long between cache invalidation table cleanups, before we have caught
# up with the backlog.
CATCH_UP_CLEANUP_INTERVAL_MS = Config.parse_duration("1m")
CATCH_UP_CLEANUP_INTERVAL = Duration(minutes=1)
# Maximum number of cache invalidation rows to delete at once.
CLEAN_UP_MAX_BATCH_SIZE = 20_000
@@ -139,7 +140,7 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self.database_engine, PostgresEngine
):
self.hs.get_clock().call_later(
CATCH_UP_CLEANUP_INTERVAL_MS / 1000,
CATCH_UP_CLEANUP_INTERVAL,
self._clean_up_cache_invalidation_wrapper,
)
@@ -825,12 +826,12 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
# Vary how long we wait before calling again depending on whether we
# are still sifting through backlog or we have caught up.
if in_backlog:
next_interval = CATCH_UP_CLEANUP_INTERVAL_MS
next_interval = CATCH_UP_CLEANUP_INTERVAL
else:
next_interval = REGULAR_CLEANUP_INTERVAL_MS
next_interval = REGULAR_CLEANUP_INTERVAL
self.hs.get_clock().call_later(
next_interval / 1000,
next_interval,
self._clean_up_cache_invalidation_wrapper,
)
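Like `looping_call`, the ported `call_later` is not part of this diff; a minimal sketch of what these call sites assume, delegating to Twisted's `reactor.callLater` (the `_reactor` attribute is hypothetical):

def call_later(self, delay, callback, *args, **kwargs):
    """Schedule callback to run once after `delay` (a Duration)."""
    return self._reactor.callLater(delay.as_secs(), callback, *args, **kwargs)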

View File

@@ -32,6 +32,7 @@ from synapse.storage.database import (
)
from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
from synapse.storage.databases.main.events_worker import EventsWorkerStore
from synapse.util.duration import Duration
from synapse.util.json import json_encoder
if TYPE_CHECKING:
@@ -54,7 +55,7 @@ class CensorEventsStore(EventsWorkerStore, CacheInvalidationWorkerStore, SQLBase
hs.config.worker.run_background_tasks
and self.hs.config.server.redaction_retention_period is not None
):
hs.get_clock().looping_call(self._censor_redactions, 5 * 60 * 1000)
hs.get_clock().looping_call(self._censor_redactions, Duration(minutes=5))
@wrap_as_background_process("_censor_redactions")
async def _censor_redactions(self) -> None:

View File

@@ -42,6 +42,7 @@ from synapse.storage.databases.main.monthly_active_users import (
)
from synapse.types import JsonDict, UserID
from synapse.util.caches.lrucache import LruCache
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -437,7 +438,7 @@ class ClientIpWorkerStore(ClientIpBackgroundUpdateStore, MonthlyActiveUsersWorke
)
if hs.config.worker.run_background_tasks and self.user_ips_max_age:
self.clock.looping_call(self._prune_old_user_ips, 5 * 1000)
self.clock.looping_call(self._prune_old_user_ips, Duration(seconds=5))
if self._update_on_this_worker:
# This is the designated worker that can write to the client IP
@@ -448,7 +449,7 @@ class ClientIpWorkerStore(ClientIpBackgroundUpdateStore, MonthlyActiveUsersWorke
tuple[str, str, str], tuple[str, str | None, int]
] = {}
self.clock.looping_call(self._update_client_ips_batch, 5 * 1000)
self.clock.looping_call(self._update_client_ips_batch, Duration(seconds=5))
hs.register_async_shutdown_handler(
phase="before",
eventType="shutdown",

View File

@@ -48,9 +48,9 @@ from synapse.storage.database import (
)
from synapse.storage.util.id_generators import MultiWriterIdGenerator
from synapse.types import JsonDict, StrCollection
from synapse.util import Duration
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.duration import Duration
from synapse.util.iterutils import batch_iter
from synapse.util.json import json_encoder
from synapse.util.stringutils import parse_and_validate_server_name
@@ -62,10 +62,10 @@ logger = logging.getLogger(__name__)
# How long to keep messages in the device federation inbox before deleting them.
DEVICE_FEDERATION_INBOX_CLEANUP_DELAY_MS = 7 * Duration.DAY_MS
DEVICE_FEDERATION_INBOX_CLEANUP_DELAY = Duration(days=7)
# How often to run the task to clean up old device_federation_inbox rows.
DEVICE_FEDERATION_INBOX_CLEANUP_INTERVAL_MS = 5 * Duration.MINUTE_MS
DEVICE_FEDERATION_INBOX_CLEANUP_INTERVAL = Duration(minutes=5)
# Update name for the device federation inbox received timestamp index.
DEVICE_FEDERATION_INBOX_RECEIVED_INDEX_UPDATE = (
@@ -152,7 +152,7 @@ class DeviceInboxWorkerStore(SQLBaseStore):
if hs.config.worker.run_background_tasks:
self.clock.looping_call(
run_as_background_process,
DEVICE_FEDERATION_INBOX_CLEANUP_INTERVAL_MS,
DEVICE_FEDERATION_INBOX_CLEANUP_INTERVAL,
"_delete_old_federation_inbox_rows",
self.server_name,
self._delete_old_federation_inbox_rows,
@@ -996,9 +996,10 @@ class DeviceInboxWorkerStore(SQLBaseStore):
def _delete_old_federation_inbox_rows_txn(txn: LoggingTransaction) -> bool:
# We delete at most 100 rows that are older than
# DEVICE_FEDERATION_INBOX_CLEANUP_DELAY_MS
# DEVICE_FEDERATION_INBOX_CLEANUP_DELAY
delete_before_ts = (
self.clock.time_msec() - DEVICE_FEDERATION_INBOX_CLEANUP_DELAY_MS
self.clock.time_msec()
- DEVICE_FEDERATION_INBOX_CLEANUP_DELAY.as_millis()
)
sql = """
WITH to_delete AS (
@@ -1028,7 +1029,7 @@ class DeviceInboxWorkerStore(SQLBaseStore):
# We sleep a bit so that we don't hammer the database in a tight
# loop first time we run this.
await self.clock.sleep(1)
await self.clock.sleep(Duration(seconds=1))
async def get_devices_with_messages(
self, user_id: str, device_ids: StrCollection

View File

@@ -62,6 +62,7 @@ from synapse.types import (
from synapse.util.caches.descriptors import cached, cachedList
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.cancellation import cancellable
from synapse.util.duration import Duration
from synapse.util.iterutils import batch_iter
from synapse.util.json import json_decoder, json_encoder
from synapse.util.stringutils import shortstr
@@ -191,7 +192,7 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
if hs.config.worker.run_background_tasks:
self.clock.looping_call(
self._prune_old_outbound_device_pokes, 60 * 60 * 1000
self._prune_old_outbound_device_pokes, Duration(hours=1)
)
def process_replication_rows(

View File

@@ -56,6 +56,7 @@ from synapse.types import JsonDict, StrCollection
from synapse.util.caches.descriptors import cached
from synapse.util.caches.lrucache import LruCache
from synapse.util.cancellation import cancellable
from synapse.util.duration import Duration
from synapse.util.iterutils import batch_iter
from synapse.util.json import json_encoder
@@ -155,7 +156,7 @@ class EventFederationWorkerStore(
if hs.config.worker.run_background_tasks:
hs.get_clock().looping_call(
self._delete_old_forward_extrem_cache, 60 * 60 * 1000
self._delete_old_forward_extrem_cache, Duration(hours=1)
)
# Cache of event ID to list of auth event IDs and their depths.
@@ -171,7 +172,9 @@ class EventFederationWorkerStore(
# index.
self.tests_allow_no_chain_cover_index = True
self.clock.looping_call(self._get_stats_for_federation_staging, 30 * 1000)
self.clock.looping_call(
self._get_stats_for_federation_staging, Duration(seconds=30)
)
if isinstance(self.database_engine, PostgresEngine):
self.db_pool.updates.register_background_validate_constraint_and_delete_rows(

View File

@@ -105,6 +105,7 @@ from synapse.storage.databases.main.receipts import ReceiptsWorkerStore
from synapse.storage.databases.main.stream import StreamWorkerStore
from synapse.types import JsonDict, StrCollection
from synapse.util.caches.descriptors import cached
from synapse.util.duration import Duration
from synapse.util.json import json_encoder
if TYPE_CHECKING:
@@ -270,15 +271,17 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
self._find_stream_orderings_for_times_txn(cur)
cur.close()
self.clock.looping_call(self._find_stream_orderings_for_times, 10 * 60 * 1000)
self.clock.looping_call(
self._find_stream_orderings_for_times, Duration(minutes=10)
)
self._rotate_count = 10000
self._doing_notif_rotation = False
if hs.config.worker.run_background_tasks:
self.clock.looping_call(self._rotate_notifs, 30 * 1000)
self.clock.looping_call(self._rotate_notifs, Duration(seconds=30))
self.clock.looping_call(
self._clear_old_push_actions_staging, 30 * 60 * 1000
self._clear_old_push_actions_staging, Duration(minutes=30)
)
self.db_pool.updates.register_background_index_update(
@@ -1817,7 +1820,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
return
# We sleep to ensure that we don't overwhelm the DB.
await self.clock.sleep(1.0)
await self.clock.sleep(Duration(seconds=1))
async def get_push_actions_for_user(
self,

View File

@@ -92,6 +92,7 @@ from synapse.util.caches.descriptors import cached, cachedList
from synapse.util.caches.lrucache import AsyncLruCache
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.cancellation import cancellable
from synapse.util.duration import Duration
from synapse.util.iterutils import batch_iter
from synapse.util.metrics import Measure
@@ -278,7 +279,7 @@ class EventsWorkerStore(SQLBaseStore):
# We periodically clean out old transaction ID mappings
self.clock.looping_call(
self._cleanup_old_transaction_ids,
5 * 60 * 1000,
Duration(minutes=5),
)
self._get_event_cache: AsyncLruCache[tuple[str], EventCacheEntry] = (

View File

@@ -38,6 +38,7 @@ from synapse.storage.database import (
)
from synapse.types import ISynapseReactor
from synapse.util.clock import Clock
from synapse.util.duration import Duration
from synapse.util.stringutils import random_string
if TYPE_CHECKING:
@@ -49,11 +50,13 @@ logger = logging.getLogger(__name__)
# How often to renew an acquired lock by updating the `last_renewed_ts` time in
# the lock table.
_RENEWAL_INTERVAL_MS = 30 * 1000
_RENEWAL_INTERVAL = Duration(seconds=30)
# How long before an acquired lock times out.
_LOCK_TIMEOUT_MS = 2 * 60 * 1000
_LOCK_REAP_INTERVAL = Duration(milliseconds=_LOCK_TIMEOUT_MS / 10.0)
class LockStore(SQLBaseStore):
"""Provides a best effort distributed lock between worker instances.
@@ -106,9 +109,7 @@ class LockStore(SQLBaseStore):
self._acquiring_locks: set[tuple[str, str]] = set()
self.clock.looping_call(
self._reap_stale_read_write_locks, _LOCK_TIMEOUT_MS / 10.0
)
self.clock.looping_call(self._reap_stale_read_write_locks, _LOCK_REAP_INTERVAL)
@wrap_as_background_process("LockStore._on_shutdown")
async def _on_shutdown(self) -> None:
@@ -410,7 +411,7 @@ class Lock:
def _setup_looping_call(self) -> None:
self._looping_call = self._clock.looping_call(
self._renew,
_RENEWAL_INTERVAL_MS,
_RENEWAL_INTERVAL,
self._server_name,
self._store,
self._hs,
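Deriving the reap interval from the timeout (rather than hard-coding it) means the reaper automatically tracks any change to the timeout: renew every 30 s, expire after 2 min, reap at a tenth of the timeout. Checking the arithmetic with the sketch above:

_LOCK_TIMEOUT_MS = 2 * 60 * 1000  # as in the hunk above
assert Duration(milliseconds=_LOCK_TIMEOUT_MS / 10.0).as_secs() == 12.0
# Renewal must comfortably outpace expiry for a healthy lock holder.
assert Duration(seconds=30).as_secs() < Duration(milliseconds=_LOCK_TIMEOUT_MS).as_secs()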

View File

@@ -34,6 +34,7 @@ from synapse.storage.database import (
from synapse.storage.databases.main.event_push_actions import (
EventPushActionsWorkerStore,
)
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -78,7 +79,7 @@ class ServerMetricsStore(EventPushActionsWorkerStore, SQLBaseStore):
# Read the extrems every 60 minutes
if hs.config.worker.run_background_tasks:
self.clock.looping_call(self._read_forward_extremities, 60 * 60 * 1000)
self.clock.looping_call(self._read_forward_extremities, Duration(hours=1))
# Used in _generate_user_daily_visits to keep track of progress
self._last_user_visit_update = self._get_start_of_day()

View File

@@ -49,12 +49,13 @@ from synapse.storage.util.id_generators import IdGenerator
from synapse.storage.util.sequence import build_sequence_generator
from synapse.types import JsonDict, StrCollection, UserID, UserInfo
from synapse.util.caches.descriptors import cached
from synapse.util.duration import Duration
from synapse.util.iterutils import batch_iter
if TYPE_CHECKING:
from synapse.server import HomeServer
THIRTY_MINUTES_IN_MS = 30 * 60 * 1000
THIRTY_MINUTES = Duration(minutes=30)
logger = logging.getLogger(__name__)
@@ -213,7 +214,7 @@ class RegistrationWorkerStore(StatsStore, CacheInvalidationWorkerStore):
if hs.config.worker.run_background_tasks:
self.clock.call_later(
0.0,
Duration(seconds=0),
self._set_expiration_date_when_missing,
)
@@ -227,7 +228,7 @@ class RegistrationWorkerStore(StatsStore, CacheInvalidationWorkerStore):
# Create a background job for culling expired 3PID validity tokens
if hs.config.worker.run_background_tasks:
self.clock.looping_call(
self.cull_expired_threepid_validation_tokens, THIRTY_MINUTES_IN_MS
self.cull_expired_threepid_validation_tokens, THIRTY_MINUTES
)
async def register_user(
@@ -2738,9 +2739,7 @@ class RegistrationStore(RegistrationBackgroundUpdateStore):
# Create a background job for removing expired login tokens
if hs.config.worker.run_background_tasks:
self.clock.looping_call(
self._delete_expired_login_tokens, THIRTY_MINUTES_IN_MS
)
self.clock.looping_call(self._delete_expired_login_tokens, THIRTY_MINUTES)
async def add_access_token_to_user(
self,

View File

@@ -63,6 +63,7 @@ from synapse.types import (
get_domain_from_id,
)
from synapse.util.caches.descriptors import _CacheContext, cached, cachedList
from synapse.util.duration import Duration
from synapse.util.iterutils import batch_iter
from synapse.util.metrics import Measure
@@ -110,10 +111,10 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
self._known_servers_count = 1
self.hs.get_clock().looping_call(
self._count_known_servers,
60 * 1000,
Duration(minutes=1),
)
self.hs.get_clock().call_later(
1,
Duration(seconds=1),
self._count_known_servers,
)
federation_known_servers_gauge.register_hook(

View File

@@ -30,6 +30,7 @@ from synapse.storage.database import (
LoggingTransaction,
)
from synapse.types import JsonDict
from synapse.util.duration import Duration
from synapse.util.json import json_encoder
if TYPE_CHECKING:
@@ -55,7 +56,7 @@ class SessionStore(SQLBaseStore):
# Create a background job for culling expired sessions.
if hs.config.worker.run_background_tasks:
self.clock.looping_call(self._delete_expired_sessions, 30 * 60 * 1000)
self.clock.looping_call(self._delete_expired_sessions, Duration(minutes=30))
async def create_session(
self, session_type: str, value: JsonDict, expiry_ms: int

View File

@@ -20,6 +20,7 @@ import attr
from synapse.api.errors import SlidingSyncUnknownPosition
from synapse.logging.opentracing import log_kv
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.storage._base import SQLBaseStore, db_to_json
from synapse.storage.database import (
DatabasePool,
@@ -36,6 +37,7 @@ from synapse.types.handlers.sliding_sync import (
RoomSyncConfig,
)
from synapse.util.caches.descriptors import cached
from synapse.util.duration import Duration
from synapse.util.json import json_encoder
if TYPE_CHECKING:
@@ -45,6 +47,21 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
# How often to update the `last_used_ts` column on
# `sliding_sync_connection_positions` when the client uses a connection
# position. We don't want to update it on every use to avoid excessive
# writes, but we want it to be reasonably up-to-date to help with
# cleaning up old connection positions.
UPDATE_INTERVAL_LAST_USED_TS = Duration(minutes=5)
# How long a connection can go unused before we consider it expired and
# delete it.
CONNECTION_EXPIRY = Duration(days=7)
# How often we run the background process to delete old sliding sync connections.
CONNECTION_EXPIRY_FREQUENCY = Duration(hours=1)
class SlidingSyncStore(SQLBaseStore):
def __init__(
self,
@@ -76,6 +93,12 @@ class SlidingSyncStore(SQLBaseStore):
replaces_index="sliding_sync_membership_snapshots_user_id",
)
if self.hs.config.worker.run_background_tasks:
self.clock.looping_call(
self.delete_old_sliding_sync_connections,
CONNECTION_EXPIRY_FREQUENCY,
)
async def get_latest_bump_stamp_for_room(
self,
room_id: str,
@@ -202,6 +225,7 @@ class SlidingSyncStore(SQLBaseStore):
"effective_device_id": device_id,
"conn_id": conn_id,
"created_ts": self.clock.time_msec(),
"last_used_ts": self.clock.time_msec(),
},
returning=("connection_key",),
)
@@ -384,7 +408,7 @@ class SlidingSyncStore(SQLBaseStore):
# The `previous_connection_position` is a user-supplied value, so we
# need to make sure that the one they supplied is actually theirs.
sql = """
SELECT connection_key
SELECT connection_key, last_used_ts
FROM sliding_sync_connection_positions
INNER JOIN sliding_sync_connections USING (connection_key)
WHERE
@@ -396,7 +420,23 @@ class SlidingSyncStore(SQLBaseStore):
if row is None:
raise SlidingSyncUnknownPosition()
(connection_key,) = row
(connection_key, last_used_ts) = row
# Update the `last_used_ts` if it's due to be updated. We don't update
# every time to avoid excessive writes.
now = self.clock.time_msec()
if (
last_used_ts is None
or now - last_used_ts > UPDATE_INTERVAL_LAST_USED_TS.as_millis()
):
self.db_pool.simple_update_txn(
txn,
table="sliding_sync_connections",
keyvalues={
"connection_key": connection_key,
},
updatevalues={"last_used_ts": now},
)
# Now that we have seen the client has received and used the connection
# position, we can delete all the other connection positions.
@@ -480,12 +520,30 @@ class SlidingSyncStore(SQLBaseStore):
logger.warning("Unrecognized sliding sync stream in DB %r", stream)
return PerConnectionStateDB(
last_used_ts=last_used_ts,
rooms=RoomStatusMap(rooms),
receipts=RoomStatusMap(receipts),
account_data=RoomStatusMap(account_data),
room_configs=room_configs,
)
@wrap_as_background_process("delete_old_sliding_sync_connections")
async def delete_old_sliding_sync_connections(self) -> None:
"""Delete sliding sync connections that have not been used for a long time."""
cutoff_ts = self.clock.time_msec() - CONNECTION_EXPIRY.as_millis()
def delete_old_sliding_sync_connections_txn(txn: LoggingTransaction) -> None:
sql = """
DELETE FROM sliding_sync_connections
WHERE last_used_ts IS NOT NULL AND last_used_ts < ?
"""
txn.execute(sql, (cutoff_ts,))
await self.db_pool.runInteraction(
"delete_old_sliding_sync_connections",
delete_old_sliding_sync_connections_txn,
)
@attr.s(auto_attribs=True, frozen=True)
class PerConnectionStateDB:
@@ -498,6 +556,8 @@ class PerConnectionStateDB:
When persisting this *only* contains updates to the state.
"""
last_used_ts: int | None
rooms: "RoomStatusMap[str]"
receipts: "RoomStatusMap[str]"
account_data: "RoomStatusMap[str]"
@@ -553,6 +613,7 @@ class PerConnectionStateDB:
)
return PerConnectionStateDB(
last_used_ts=per_connection_state.last_used_ts,
rooms=RoomStatusMap(rooms),
receipts=RoomStatusMap(receipts),
account_data=RoomStatusMap(account_data),
@@ -596,6 +657,7 @@ class PerConnectionStateDB:
}
return PerConnectionState(
last_used_ts=self.last_used_ts,
rooms=RoomStatusMap(rooms),
receipts=RoomStatusMap(receipts),
account_data=RoomStatusMap(account_data),
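The `last_used_ts` write-throttling above distills to a single predicate; a sketch assuming the Duration sketch earlier in this diff:

UPDATE_INTERVAL_LAST_USED_TS = Duration(minutes=5)  # as in the hunk above

def should_touch(now_ms: int, last_used_ts: "int | None") -> bool:
    # Write only when the stored timestamp is missing or stale, so a busy
    # connection causes at most one UPDATE per interval.
    return (
        last_used_ts is None
        or now_ms - last_used_ts > UPDATE_INTERVAL_LAST_USED_TS.as_millis()
    )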

View File

@@ -740,7 +740,14 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
from_key: RoomStreamToken,
) -> StrCollection:
"""Return the rooms that probably have had updates since the given
token (changes that are > `from_key`)."""
token (changes that are > `from_key`).
May return false positives, but must not return false negatives.
If `have_finished_sliding_sync_background_jobs` is False, then we return
all the room IDs, as we can't be sure that the sliding sync table is
fully populated.
"""
# If the stream change cache is valid for the stream token, we can just
# use the result of that.
if from_key.stream >= self._events_stream_cache.get_earliest_known_position():
@@ -748,6 +755,11 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
room_ids, from_key.stream
)
if not self.have_finished_sliding_sync_background_jobs():
# If the table hasn't been populated yet, we have to assume all rooms
# have updates.
return room_ids
def get_rooms_that_have_updates_since_sliding_sync_table_txn(
txn: LoggingTransaction,
) -> StrCollection:

View File

@@ -37,6 +37,7 @@ from synapse.storage.database import (
from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
from synapse.types import JsonDict, StrCollection
from synapse.util.caches.descriptors import cached, cachedList
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -81,7 +82,7 @@ class TransactionWorkerStore(CacheInvalidationWorkerStore):
super().__init__(database, db_conn, hs)
if hs.config.worker.run_background_tasks:
self.clock.looping_call(self._cleanup_transactions, 30 * 60 * 1000)
self.clock.looping_call(self._cleanup_transactions, Duration(minutes=30))
@wrap_as_background_process("cleanup_transactions")
async def _cleanup_transactions(self) -> None:

View File

@@ -0,0 +1,27 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2025 Element Creations, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
-- Add a timestamp for when the sliding sync connection position was last used,
-- only updated at a coarse granularity.
--
-- This should be NOT NULL, but we need to consider existing rows. In future we
-- may want to either backfill this or delete all rows with a NULL value (and
-- then make it NOT NULL).
ALTER TABLE sliding_sync_connections ADD COLUMN last_used_ts BIGINT;
-- Note: We don't add an index on this column to allow HOT updates on PostgreSQL
-- to reduce the cost of the updates to the column. c.f.
-- https://www.postgresql.org/docs/current/storage-hot.html
--
-- We do query this column directly to find expired connections, but we expect
-- that to be an infrequent operation and a sequential scan should be fine.

View File

@@ -850,12 +850,16 @@ class PerConnectionState:
since the last time you made a sync request.
Attributes:
last_used_ts: The time this connection was last used, in milliseconds.
This is only accurate to `UPDATE_INTERVAL_LAST_USED_TS`.
rooms: The status of each room for the events stream.
receipts: The status of each room for the receipts stream.
room_configs: Map from room_id to the `RoomSyncConfig` of all
rooms that we have previously sent down.
"""
last_used_ts: int | None = None
rooms: RoomStatusMap[RoomStreamToken] = attr.Factory(RoomStatusMap)
receipts: RoomStatusMap[MultiWriterStreamToken] = attr.Factory(RoomStatusMap)
account_data: RoomStatusMap[int] = attr.Factory(RoomStatusMap)
@@ -867,6 +871,7 @@ class PerConnectionState:
room_configs = cast(MutableMapping[str, RoomSyncConfig], self.room_configs)
return MutablePerConnectionState(
last_used_ts=self.last_used_ts,
rooms=self.rooms.get_mutable(),
receipts=self.receipts.get_mutable(),
account_data=self.account_data.get_mutable(),
@@ -875,6 +880,7 @@ class PerConnectionState:
def copy(self) -> "PerConnectionState":
return PerConnectionState(
last_used_ts=self.last_used_ts,
rooms=self.rooms.copy(),
receipts=self.receipts.copy(),
account_data=self.account_data.copy(),
@@ -889,6 +895,8 @@ class PerConnectionState:
class MutablePerConnectionState(PerConnectionState):
"""A mutable version of `PerConnectionState`"""
last_used_ts: int | None
rooms: MutableRoomStatusMap[RoomStreamToken]
receipts: MutableRoomStatusMap[MultiWriterStreamToken]
account_data: MutableRoomStatusMap[int]

View File

@@ -41,15 +41,6 @@ if typing.TYPE_CHECKING:
logger = logging.getLogger(__name__)
class Duration:
"""Helper class that holds constants for common time durations in
milliseconds."""
MINUTE_MS = 60 * 1000
HOUR_MS = 60 * MINUTE_MS
DAY_MS = 24 * HOUR_MS
def unwrapFirstError(failure: Failure) -> Failure:
# Deprecated: you probably just want to catch defer.FirstError and reraise
# the subFailure's value, which will do a better job of preserving stacktraces.

View File

@@ -58,6 +58,7 @@ from synapse.logging.context import (
run_in_background,
)
from synapse.util.clock import Clock
from synapse.util.duration import Duration
logger = logging.getLogger(__name__)
@@ -640,7 +641,7 @@ class Linearizer:
# This needs to happen while we hold the lock. We could put it on the
# exit path, but that would slow down the uncontended case.
try:
await self._clock.sleep(0)
await self._clock.sleep(Duration(seconds=0))
except CancelledError:
self._release_lock(key, entry)
raise
@@ -818,7 +819,9 @@ def timeout_deferred(
# We don't track these calls since they are short.
delayed_call = clock.call_later(
timeout, time_it_out, call_later_cancel_on_shutdown=cancel_on_shutdown
Duration(seconds=timeout),
time_it_out,
call_later_cancel_on_shutdown=cancel_on_shutdown,
)
def convert_cancelled(value: Failure) -> Failure:

View File

@@ -27,7 +27,7 @@ from typing import (
from twisted.internet import defer
from synapse.util.async_helpers import DeferredEvent
from synapse.util.constants import MILLISECONDS_PER_SECOND
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -67,7 +67,7 @@ class BackgroundQueue(Generic[T]):
self._hs = hs
self._name = name
self._callback = callback
self._timeout_ms = timeout_ms
self._timeout_ms = Duration(milliseconds=timeout_ms)
# The queue of items to process.
self._queue: collections.deque[T] = collections.deque()
@@ -125,7 +125,7 @@ class BackgroundQueue(Generic[T]):
# just loop round, clear the event, recheck the queue, and then
# wait here again.
new_data = await self._wakeup_event.wait(
timeout_seconds=self._timeout_ms / MILLISECONDS_PER_SECOND
timeout_seconds=self._timeout_ms.as_secs()
)
if not new_data:
# Timed out waiting for new data, so exit the loop
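This timeout relies on `as_secs()` returning a float, so sub-second values are preserved rather than truncated to whole seconds. Against the sketch above:

assert Duration(milliseconds=250).as_secs() == 0.25
assert Duration(seconds=1).as_millis() == 1000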

Some files were not shown because too many files have changed in this diff.